Silva rerum
https://www.thegeekdiary.com/how-to-disable-the-default-apache-welcome-page-in-centos-rhel-7/
cd /etc/httpd/conf.d
mv welcome.conf welcome.conf.arch
systemctl restart httpd
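A quick check that the welcome page is really gone (assuming httpd listens on the default port 80):
curl -I http://localhost/   # expect 403/404 instead of the Apache test page once welcome.conf is disabled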
CREATE ROLE <user> LOGIN PASSWORD '<password>';
CREATE DATABASE <database> OWNER <user> ENCODING 'UTF8';
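A quick sanity check of the new role and database (host is an assumption, placeholders as above):
psql -h localhost -U <user> -d <database> -c 'SELECT current_user, current_database();'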
vi /etc/profile
export LANG=en_US.UTF-8
export LANGUAGE=en_US.UTF-8
export LC_COLLATE=C
export LC_CTYPE=en_US.UTF-8
timedatectl set-timezone Europe/Warsaw
https://computingforgeeks.com/how-to-install-postgresql-13-on-centos-7/
yum -y install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
yum install postgresql13
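The package above installs the client only; if the server is needed as well (as in the linked guide), the usual follow-up looks roughly like this:
yum install postgresql13-server
/usr/pgsql-13/bin/postgresql-13-setup initdb
systemctl enable --now postgresql-13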
https://www.postgresql.org/docs/12/backup-dump.html#BACKUP-DUMP-RESTORE
pg_dump -h broth1 -U <user> <dbname> > dump.backup
psql --set ON_ERROR_STOP=on <dbname> < dump.backup
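According to the linked documentation, the target database must exist before the restore; it can be created empty from template0:
createdb -T template0 <dbname>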
- netstat -tulpn (listening ports)
- netstat -anp
- ss -tunlp4
- nc -zv db2a3 50000
SELECT name, current_setting(name) FROM pg_settings WHERE name = 'max_connections';
select pg_terminate_backend(pid) from pg_stat_activity where state = 'idle' and query_start < current_timestamp - interval '5 minutes';
In this example, the project "rook-ceph" is stuck in the Terminating state.
Go to Project->rook-ceph->YAML
Go to the "Message" section at the end; it gives the reason for the slow dying: the project cannot disappear because of dependencies.
message: >-
Some content in the namespace has finalizers remaining:
cephblockpool.ceph.rook.io in 1 resource instances,
cephfilesystem.ceph.rook.io in 1 resource instances
The next step is to find the dependent objects; they are not listed in the rook-ceph project scope.
Go to Explore->Find->Ceph (partial name); a number of resources are reported.
CephBlockPool -> Instances -> replicapool -> YAML
namespace: rook-ceph
finalizers:
- cephblockpool.ceph.rook.io
spec:
It looks like replicapool cannot be terminated because the finalizer hook is blocked for some unspecified reason. So the only solution is simply to remove cephblockpool.ceph.rook.io and let replicapool off the hook.
namespace: rook-ceph
finalizers: []
spec:
The same rescue mission for CephFilesystem.
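The same fix can be applied from the CLI; a sketch using oc patch, assuming the resource names above (the CephFilesystem name is a placeholder):
oc -n rook-ceph patch cephblockpool replicapool --type=merge -p '{"metadata":{"finalizers":[]}}'
oc -n rook-ceph patch cephfilesystem <name> --type=merge -p '{"metadata":{"finalizers":[]}}'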
https://docs.github.com/en/packages/guides/configuring-apache-maven-for-use-with-github-packages
Important: use lower case in the artifact name
vi ~/.m2/settings.xml
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
http://maven.apache.org/xsd/settings-1.0.0.xsd">
<activeProfiles>
<activeProfile>github</activeProfile>
</activeProfiles>
<profiles>
<profile>
<id>github</id>
<repositories>
<repository>
<id>central</id>
<url>https://repo1.maven.org/maven2</url>
<releases><enabled>true</enabled></releases>
<snapshots><enabled>true</enabled></snapshots>
</repository>
<repository>
<id>github</id>
<name>GitHub stanislawbartkowski Apache Maven Packages</name>
<url>https://maven.pkg.github.com/stanislawbartkowski/RestService/</url>
<releases><enabled>true</enabled></releases>
<snapshots><enabled>true</enabled></snapshots>
</repository>
</repositories>
</profile>
</profiles>
<servers>
<server>
<id>github</id>
<username>stanislawbartkowski</username>
<password>token</password>
</server>
</servers>
</settings>
pom.xml
<distributionManagement>
<repository>
<id>github</id>
<name>GitHub stanislawbartkowski Apache Maven Packages</name>
<url>https://maven.pkg.github.com/stanislawbartkowski/RestService</url>
</repository>
</distributionManagement>
mvn deploy
firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --permanent --add-service=mountd
firewall-cmd --permanent --add-port 50000/tcp
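Permanent rules take effect only after a reload; verify the result afterwards:
firewall-cmd --reload
firewall-cmd --list-all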
Command | Description |
---|---|
openssl s_client -connect hdm2.sb.com:9091 | Secure connection, check certificate |
keytool -printcert -v -file DigiCertGlobalRootCA.crt | Display details about certificate |
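A related recipe (host, port and alias are taken from the examples above or assumed): capture the server certificate and import it into a Java truststore.
openssl s_client -connect hdm2.sb.com:9091 </dev/null 2>/dev/null | openssl x509 -outform PEM > server.crt
keytool -importcert -alias hdm2 -file server.crt -keystore truststore.jks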
https://sysadminxpert.com/how-to-mount-s3-bucket-on-linux-instance/
yum install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel
cd /usr/src
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure
make
make install
File with secrets.
vi /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs
Define the mount point in /etc/fstab.
vi /etc/fstab
s3fs#(bucket name) /mnt/s3 fuse _netdev,rw,nosuid,nodev,allow_other,nonempty,noauto 0 0
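For reference, /etc/passwd-s3fs holds the key pair in ACCESS_KEY_ID:SECRET_ACCESS_KEY format, and the entry can be tested by hand (keys and mount point are examples):
echo 'ACCESS_KEY_ID:SECRET_ACCESS_KEY' > /etc/passwd-s3fs
mkdir -p /mnt/s3
mount /mnt/s3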
Important: it is case-sensitive. currentSchema is correct, but if currentschema is used, no error is reported; the property is simply ignored at execution time!
jdbc:db2://jobbery-inf:50000/$DBNAME:currentSchema=$SCHEMA;
cat /run/user/1001/containers/auth.json
The default timeout is 1 minute. To increase it to 10 minutes, log on to the infrastructure node and change the HAProxy settings.
vi /etc/haproxy/haproxy.cfg
defaults
mode http
...
timeout client 10m
timeout server 10m
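Before reloading, the edited configuration can be validated:
haproxy -c -f /etc/haproxy/haproxy.cfg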
systemctl reload haproxy
https://hub.docker.com/r/jupyter/pyspark-notebook/tags/?page=1&ordering=last_updated
podman pull jupyter/pyspark-notebook
podman run -d -p 8888:8888 --name jupyter jupyter/pyspark-notebook
To access the notebook, open this file in a browser:
file:///home/jovyan/.local/share/jupyter/runtime/nbserver-7-open.html
Or copy and paste one of these URLs:
http://11aae84621e4:8888/?token=d4ceaa3ae4c7e0fe4f3ed44da85f5218e58e56ed659455fb
or http://127.0.0.1:8888/?token=d4ceaa3ae4c7e0fe4f3ed44da85f5218e58e56ed659455fb
I 10:16:08.341 NotebookApp 302 GET /?token=d4ceaa3ae4c7e0fe4f3ed44da85f5218e58e56ed659455fb (10.0.2.100) 1.720000ms
http://127.0.0.1:8888/?token=d4ceaa3ae4c7e0fe4f3ed44da85f5218e58e56ed659455fb
podman start jupyter
podman stop jupyter
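If the token URL is lost, it can usually be recovered from the container log:
podman logs jupyter 2>&1 | grep token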
https://github.com/jupyterlab/jupyterlab-git
pip install --upgrade jupyterlab jupyterlab-git
docker run --name postgres -e POSTGRES_PASSWORD=secret --restart=always -p 5432:5432 -d postgres
Create a permanent directory/storage.
mkdir pgdata
podman run -d --name postgres -e POSTGRES_PASSWORD=secret -v $PWD/pgdata:/var/lib/postgresql/data:Z -p 5432:5432 postgres
Open port.
firewall-cmd --permanent --add-port=5432/tcp
systemctl reload firewalld
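A quick connectivity check, e.g. from inside the container:
podman exec -it postgres psql -U postgres -c 'SELECT version();'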
The first terminal:
dd if=/dev/zero of=testfile bs=1G count=10 oflag=direct,dsync
The second terminal:
iostat -xyz 1
Columns
- %util: should be close to 100%
- wkB/s: good is at least 800 MB/s
- w_await: write latency, good is below 20 ms
Machine descr | util% | wkB/s | w_await (latency) |
---|---|---|---|
Good | 100% | > 800 MB/s | < 20 ms |
(notebook, P50, SSD) | 95-100% | 707630 (700 MB), 803840 (800 MB), 979970.50 (970 MB) | 228 ms, 242 ms, 236 ms |
(notebook, HDD) | 95-100% | 120090 (120 MB), 123760 (123 MB), 121985 (121 MB) | 458 ms, 453 ms |
(notebook, USB disk) | 100% | 4559 (45 MB), 30830 (20 MB), 42951 (42 MB) | 509 ms, 573 ms, 615 ms |
(notebook, SSD, thinkde) | 95-100% | 390975 (390 MB), 401805 (401 MB), 394446 (394 MB) | 133.05 ms, 135.62 ms, 128.24 ms |
(ZLinux, VM) | 82% | 751460 (751 MB), 774187 (774 MB) | 235.77 ms, 240.12 ms, 237 ms |
OpenShift, ephemeral | 44-51% | 870430.30 (870 MB), 887725.30 (887 MB), 1178368.70 (1178 MB) | 42.08 ms, 39.78 ms, 37.47 ms |
OpenShift, rook-ceph-block + | 99.70%, 94.20%, 69.70%, 89.10% | 188380.00 (188 MB), 184448.00 (184 MB), 168584.00 (168 MB) | 933.08 ms, 597.15 ms, 722.00 ms |
OpenShift, rook-cephfs + | 90.24%, 88.86%, 89.10% | 44639.00 (44 MB), 210544.00 (210 MB), 43259.80 (43 MB) | 8.09 ms, 14.03 ms, 9.55 ms |
OpenShift, nfs + | 2.10%, 1.24%, 1.94% | 177.00 (0.177 MB), 210.30 (0.210 MB), 213.60 (0.213 MB) | 1.81 ms, 0.93 ms, 0.53 ms |
Z Linux (mainframe) | 54%, 20%, 15% | 2097148.00 (2 GB), 1376253.00 (1.3 GB), 1816576.00 (1.8 GB) | 31.56 ms, 34.44 ms, 32.67 ms |
Z Linux (mainframe), disk attached | 36%, 8%, 35% | 1998850.00 (1.9 GB), 98301.50 (98 MB), 1938884.00 (1.9 GB) | 4.94 ms, 12.39 ms, 14.31 ms |
The Hadoop cluster (Cloudera) is installed on a private network (10.x.x.x). The NameNode (and WebHDFS) is listening on this private network and the client application cannot access it. One way to solve this problem is the Knox Gateway, and it is the recommended method because it gives more control over who can access the Hadoop services and how.
Important: Access to all data hosts in the cluster is required, because WebHDFS redirects clients to the DataNode port (default port 9864)
The quick method is to use HAProxy Load Balancer on the edge node.
Install HAProxy
yum install haproxy
systemctl enable haproxy
systemctl start haproxy
Collect the necessary information
- NameNode private IP address: 10.11.17.12
- WebHDFS port number: 9870
- Public address of edge HAProxy node: 9.30.181.152
Verify that WebHDFS is responding from the edge node.
nc -zv 10.11.17.12 9870
curl -i -k --negotiate -u : -X GET http://10.11.17.12:9870/webhdfs/v1/tmp/?op=LISTSTATUS
...
{"FileStatuses":{"FileStatus":[
{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16387,"group":"supergroup","length":0,"modificationTime":1623492327183,"owner":"hdfs","pathSuffix":".cloudera_health_monitoring_canary_files","permission":"0","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16560,"group":"supergroup","length":0,"modificationTime":1622918018491,"owner":"yarn","pathSuffix":"hadoop-yarn","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":3,"fileId":16451,"group":"supergroup","length":0,"modificationTime":1623341949547,"owner":"hive","pathSuffix":"hive","permission":"773","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16539,"group":"hadoop","length":0,"modificationTime":1623343994607,"owner":"mapred","pathSuffix":"logs","permission":"1777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
]}}
Reconfigure HAProxy.
vi /etc/haproxy/haproxy.cfg
..........
#-----------------------------------------------
# access to webhdfs on private network
#-----------------------------------------------
frontend webhdfs-tcp
bind 9.30.181.152:9870
default_backend webhdfs-tcp
mode tcp
option tcplog
backend webhdfs-tcp
balance source
mode tcp
server inimical1.fyre.ibm.com 10.11.17.12:9870 check
systemctl reload haproxy
Test again using a public address.
nc -zv 9.30.181.152 9870
curl -i -k --negotiate -u : -X GET http://9.30.181.152:9870/webhdfs/v1/tmp/?op=LISTSTATUS
https://github.com/minio/mc
https://docs.min.io/docs/minio-client-complete-guide.html
Install the MinIO client (mc) on RedHat/CentOS
wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
mc --help
Prepare access information.
Information | Sample value |
---|---|
S3 like endpoint | sbstoraga.obj.fyre.ibm.com |
Access Key | KEY24mABCwio |
Secret | 67wAGqFfQS5axyi |
Create alias
mc alias set mymc http://sbstoraga.obj.fyre.ibm.com KEY24mABCwio 67wAGqFfQS5axyi
List aliases
mc alias list
Create directory
mc mb mymc/hello
List directories
mc ls mymc
[2021-06-17 15:18:28 CEST] 0B hello/
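A typical follow-up: copy a file into the new bucket and list it (the file name is an example):
mc cp ./test.txt mymc/hello/
mc ls mymc/hello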
ssh ignores default_ccache_name in /etc/krb5.conf and sets KRB5CCNAME to PERSISTENT KEYRING. A workaround is to unset KRB5CCNAME.
vi /etc/profile
unset KRB5CCNAME
ktpass /princ dsxhi@FYRE.NET /pass secret /ptype KRB5_NT_PRINCIPAL /out dsxhi.keytab
Create Quarkus project
mvn io.quarkus.platform:quarkus-maven-plugin:2.2.2.Final:create -DprojectGroupId=com.redhat.training -DprojectArtifactId=multiplier -DplatformGroupId=com.redhat.quarkus -DplatformVersion=2.2.2.Final -DclassName="com.redhat.training.MultiplierResource" -Dpath="/multiplier" -Dextensions="rest-client"
Add OpenShift extension
mvn quarkus:add-extension -Dextension=openshift
Deploy to OpenShift (ignore cert validation, skip tests)
mvn package -Dquarkus.kubernetes.deploy=true -Dquarkus.s2i.base-jvm-image=registry.access.redhat.com/ubi8/openjdk-11 -Dquarkus.openshift.expose=true -Dquarkus.kubernetes-client.trust-certs=true -Dmaven.test.skip=true
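Check the result (the resource names follow the artifactId, here assumed to be multiplier):
oc get pods
oc get route multiplier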
qemu-img info kvmmachine.img
qemu-img resize kvmmachine.img +10G
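Resizing the image only grows the virtual disk; the partition and filesystem inside the guest still have to be extended, for example (device names and ext4 are assumptions):
growpart /dev/vda 1
resize2fs /dev/vda1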
https://www.cyberciti.biz/faq/how-to-fix-x11-forwarding-request-failed-on-channel-0/
sudo vi /etc/ssh/sshd_config
X11Forwarding yes
X11UseLocalhost no
sudo yum install xauth
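Restart sshd to pick up the new settings and test the forwarding (user@host is a placeholder):
sudo systemctl restart sshd
ssh -X user@host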
https://computingforgeeks.com/install-and-configure-nfs-server-on-centos-rhel/
Install software
sudo yum -y install nfs-utils
systemctl enable --now nfs-server rpcbind
Test services
systemctl status nfs-server
systemctl status rpcbind
Define exported file systems
vi /etc/exports
/mnt/usb *(rw,no_root_squash)
/mnt/usb1 *(rw,no_root_squash)
Export file systems
exportfs -a
Test locally
showmount -e localhost
Export list for localhost:
/mnt/usb1 *
/mnt/usb *
Open ports
firewall-cmd --add-service=nfs --permanent
firewall-cmd --add-service={nfs3,mountd,rpc-bind} --permanent
firewall-cmd --reload
Test on the remote client machine
showmount -e nfsserverhost
Export list for nfsserverhost:
/mnt/usb1 *
/mnt/usb *
Define mount points on the client machine.
sudo vi /etc/fstab
nfsserverhost:/mnt/usb /mnt/usb nfs rw,sync,hard,intr,noauto 0 0
nfsserverhost:/mnt/usb1 /mnt/usb1 nfs rw,sync,hard,intr,noauto 0 0
Mount and enjoy
sudo mount /mnt/usb
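Verify the mount:
df -h /mnt/usb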
curl -v telnet://api.kist.cp.fyre.ibm.com:5432
curl -v telnet://api.kist.cp.fyre.ibm.com:22
https://towardsdatascience.com/heres-how-to-run-sql-in-jupyter-notebooks-f26eb90f3259
In a terminal, install psycopg2 if it is not installed already.
pip install psycopg2-binary
In the notebook, install and load the SQL extension.
!pip install ipython-sql
%load_ext sql
%sql postgresql://queryuser:secret@api.kist.cp.fyre.ibm.com/querydb
%%sql
select * from test
mkdir /home/mywiki/jspwiki
find jspwiki -type d -exec chmod 777 {} \;
find jspwiki -type f -exec chmod 666 {} \;
podman run -d -p 18080:8080 --env="UMASK=000" --env="jspwiki_baseURL=http://localhost/" --restart always --name jspwiki --volume="/home/mywiki/jspwiki:/var/jspwiki/pages:Z" metskem/docker-jspwiki
As root:
firewall-cmd --permanent --add-port=18080/tcp
systemctl reload firewalld
Create a service
vi /etc/systemd/system/jspwiki-container.service
[Unit]
Description=My Wiki
[Service]
User=mywiki
Restart=always
ExecStart=/usr/bin/podman start -a jspwiki
ExecStop=/usr/bin/podman stop -t 2 jspwiki
[Install]
WantedBy=multi-user.target
systemctl enable jspwiki-container
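If systemd does not see the new unit yet, reload the unit definitions and start the service:
systemctl daemon-reload
systemctl start jspwiki-container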
Create new user
create user perf identified by secret;
GRANT ALL PRIVILEGES TO perf;
-Djava.util.logging.config.file=src/main/resources/logging.properties
handlers= java.util.logging.ConsoleHandler
.level= INFO
java.util.logging.ConsoleHandler.level = INFO
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
com.logging.level = WARNING
java.util.logging.SimpleFormatter.format=%1$s %4$s: %5$s %6$s
# "%1$tc %2$s%n%4$s: %5$s%6$s%n"
Problem:
Error: writing blob: adding layer with blob "sha256:5d20c808ce198565ff70b3ed23a991dd49afac45dece63474b27ce6ed036adc6": Error processing tar file(exit status 1): potentially insufficient UIDs or GIDs available in user namespace (requested 0:42 for /etc/shadow): Check /etc/subuid and /etc/subgid: lchown /etc/shadow: invalid argument
Add manually:
vi /etc/subuid
vi /etc/subgid
podcast:100000:65536
repo:165536:65536
kafka:231072:65536
Important: after that, kill all podman processes launched by this user. Do the same after an unsuccessful run.
ps -aef | grep podman
kill -9 pid
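As an alternative to killing the processes by hand, podman can apply the new mappings itself (worth trying first):
podman system migrate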
SELECT FILE_NAME,BYTES,AUTOEXTENSIBLE,MAXBYTES FROM DBA_DATA_FILES WHERE TABLESPACE_NAME = 'USERS'
ALTER TABLESPACE users ADD DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users02.dbf' SIZE 31G AUTOEXTEND ON
ALTER TABLESPACE users ADD DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users03.dbf' SIZE 31G AUTOEXTEND ON
ALTER TABLESPACE users ADD DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users04.dbf' SIZE 20G AUTOEXTEND ON