
Silva rerum


Remove the welcome page from httpd

https://www.thegeekdiary.com/how-to-disable-the-default-apache-welcome-page-in-centos-rhel-7/

cd /etc/httpd/conf.d
mv welcome.conf welcome.conf.arch
systemctl restart httpd
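
A quick check (assuming httpd runs locally): the server root should now return a generic response (typically 403) instead of the welcome page.

curl -s -o /dev/null -w "%{http_code}\n" http://localhost/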

PostgreSQL, create role and database

CREATE ROLE <user> LOGIN PASSWORD '<password>';
CREATE DATABASE <database> OWNER <user> ENCODING 'UTF8';
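
A quick connection test with the new role (same placeholders as above):

psql -h localhost -U <user> -d <database> -c 'SELECT current_user;'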

Set locale regardless of ssh client locale

locale-gen en_US.UTF-8

vi /etc/profile

export LANG=en_US.UTF-8
export LANGUAGE=en_US.UTF-8
export LC_COLLATE=C
export LC_CTYPE=en_US.UTF-8
export LC_ALL=en_US.UTF-8

Timezone CE

timedatectl set-timezone Europe/Warsaw

PostgreSQL-13

https://computingforgeeks.com/how-to-install-postgresql-13-on-centos-7/

yum -y install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
yum install postgresql13
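
Note that the postgresql13 package contains only the client. For a server, the usual additional steps (package and service names as shipped by the PGDG repository):

yum install postgresql13-server
/usr/pgsql-13/bin/postgresql-13-setup initdb
systemctl enable --now postgresql-13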

PostgreSQL backup, restore

https://www.postgresql.org/docs/12/backup-dump.html#BACKUP-DUMP-RESTORE

pg_dump -h broth1 -U <user> <dbname> > dump.backup
psql --set ON_ERROR_STOP=on <dbname> < dump.backup
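
An alternative sketch using the custom dump format, which allows selective and parallel restore with pg_restore:

pg_dump -h broth1 -U <user> -Fc <dbname> > dump.custom
pg_restore -d <dbname> dump.custom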

Ports

  • netstat -tulpn (listening ports)
  • netstat -anp
  • ss -tunlp4
  • nc -zv db2a3 50000

PostgreSQL ERROR: no more connections allowed

SELECT name, current_setting(name) FROM pg_settings WHERE name = 'max_connections';

select pg_terminate_backend(pid) from pg_stat_activity where state = 'idle' and query_start < current_timestamp - interval '5 minutes';
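
If idle sessions are not the culprit, max_connections can be raised. A sketch, run as a superuser; the new value is only an example and the service name assumes the PostgreSQL 13 setup above. A restart is required for the change to take effect.

psql -c "ALTER SYSTEM SET max_connections = 200;"
systemctl restart postgresql-13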

OpenShift, a project is "terminating" but still alive

In this example, the project "rook-ceph" became a zombie.

Go to Project -> rook-ceph -> YAML.
Go to the "Message" section at the end; it gives the reason for the slow dying: the project cannot disappear because of dependencies.

    message: >-
        Some content in the namespace has finalizers remaining:
        cephblockpool.ceph.rook.io in 1 resource instances,
        cephfilesystem.ceph.rook.io in 1 resource instances

The next step is to find the dependent objects; they are not listed in the rook-ceph project scope.
Go to Explore -> Find -> Ceph (partial name) -> a number of resources are reported.
CephBlockPool -> Instances -> replicapool -> YAML

  namespace: rook-ceph
  finalizers:
    - cephblockpool.ceph.rook.io
spec:

It looks like replicapool cannot be terminated because the finalizer hook is blocked for some unspecified reason. So the only solution is simply to remove cephblockpool.ceph.rook.io and let replicapool off the hook.

  namespace: rook-ceph
  finalizers: []
spec:

The same rescue mission for CephFilesystem.
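
The finalizers can also be cleared from the command line. A minimal sketch, assuming an oc session with rights to edit the Ceph custom resources; the CephFilesystem instance name is a placeholder:

oc patch cephblockpool replicapool -n rook-ceph --type=merge -p '{"metadata":{"finalizers":[]}}'
oc patch cephfilesystem <name> -n rook-ceph --type=merge -p '{"metadata":{"finalizers":[]}}'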

GitHub Maven

https://docs.github.com/en/packages/guides/configuring-apache-maven-for-use-with-github-packages

Important: use lower case in the artifact name

vi ~/.m2/settings.xml

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                      http://maven.apache.org/xsd/settings-1.0.0.xsd">

  <activeProfiles>
    <activeProfile>github</activeProfile>
  </activeProfiles>

  <profiles>
    <profile>
      <id>github</id>
      <repositories>
        <repository>
          <id>github</id>
          <name>GitHub stanislawbartkowski Apache Maven Packages</name>
          <url>https://maven.pkg.github.com/stanislawbartkowski/RestService/</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>true</enabled></snapshots>
        </repository>
      </repositories>
    </profile>
  </profiles>

  <servers>
    <server>
      <id>github</id>
      <username>stanislawbartkowski</username>
      <password>token</password>
    </server>
  </servers>
</settings>

pom.xml

   <distributionManagement>
        <repository>
            <id>github</id>
            <name>GitHub stanislawbartkowski Apache Maven Packages</name>
            <url>https://maven.pkg.github.com/stanislawbartkowski/RestService</url>
        </repository>
    </distributionManagement>

mvn deploy

firewall

NFS

firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --permanent --add-service=mountd

DB2

firewall-cmd --permanent --add-port 50000/tcp
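
Permanent rules (including the NFS ones above) are not applied to the running firewall until a reload:

firewall-cmd --reload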

OpenSSL

| Command | Description |
| --- | --- |
| openssl s_client -connect hdm2.sb.com:9091 | Secure connection, check certificate |
| keytool -printcert -v -file DigiCertGlobalRootCA.crt | Display details about a certificate |
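
A handy combination of the two (a sketch): fetch the server certificate with s_client and let openssl x509 print its subject and validity dates.

openssl s_client -connect hdm2.sb.com:9091 </dev/null 2>/dev/null | openssl x509 -noout -subject -dates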

S3

https://sysadminxpert.com/how-to-mount-s3-bucket-on-linux-instance/

yum install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel

cd /usr/src
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure
make
make install


File with secrets.

vi /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs
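
The expected content is a single credentials line (placeholder values; it can optionally be prefixed with the bucket name):

ACCESS_KEY_ID:SECRET_ACCESS_KEY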

Mount point.

vi /etc/fstab

s3fs#(bucket name)  /mnt/s3 fuse _netdev,rw,nosuid,nodev,allow_other,nonempty,noauto  0 0
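
Because of noauto the bucket is not mounted at boot; test the entry by hand:

mount /mnt/s3
df -h /mnt/s3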

Set current schema in DB2 JDBC URL

Important: it is case-sensitive. currentSchema is correct, but if currentschema is used, no error is reported; the property is simply ignored at execution time!

jdbc:db2://jobbery-inf:50000/$DBNAME:currentSchema=$SCHEMA;

podman, auth.json

cat /run/user/1001/containers/auth.json

Increase time-out in oc rsh command

The default is 1 minute. To increase it to 10 minutes, log on to the infrastructure node and change the HAProxy settings.

vi /etc/haproxy/haproxy.cfg

defaults
        mode                    http
...
        timeout client          10m
        timeout server          10m

systemctl reload haproxy

podman, Jupyter notebook

https://hub.docker.com/r/jupyter/pyspark-notebook/tags/?page=1&ordering=last_updated

podman pull jupyter/pyspark-notebook

podman run -d -p 8888:8888 --name jupyter jupyter/pyspark-notebook

podman run -d -p 8888:8888 --name jupyter -e JUPYTER_ENABLE_LAB=yes -v /home/sbartkowski/work/jupyter:/home/jovyan/:Z jupyter/all-spark-notebook

    To access the notebook, open this file in a browser:
        file:///home/jovyan/.local/share/jupyter/runtime/nbserver-7-open.html
    Or copy and paste one of these URLs:
        http://11aae84621e4:8888/?token=d4ceaa3ae4c7e0fe4f3ed44da85f5218e58e56ed659455fb
     or http://127.0.0.1:8888/?token=d4ceaa3ae4c7e0fe4f3ed44da85f5218e58e56ed659455fb

I 10:16:08.341 NotebookApp 302 GET /?token=d4ceaa3ae4c7e0fe4f3ed44da85f5218e58e56ed659455fb (10.0.2.100) 1.720000ms

http://127.0.0.1:8888/?token=d4ceaa3ae4c7e0fe4f3ed44da85f5218e58e56ed659455fb

podman start jupyter

podman stop jupyter

Test PySpark: https://www.sicara.ai/blog/2017-05-02-get-started-pyspark-jupyter-notebook-3-minutes


Access to SparkUI

podman run -d -p 4040:4040 -p 4041:4041 -p 4042:4042 -p 8888:8888 --name jupyter -e JUPYTER_ENABLE_LAB=yes -v /home/sbartkowski/work/jupyter:/home/jovyan/:Z jupyter/all-spark-notebook


http://localhost:4040

Jupyter github

https://github.com/jupyterlab/jupyterlab-git

pip install --upgrade jupyterlab jupyterlab-git

Docker PostgreSQL

docker run --name postgres -e POSTGRES_PASSWORD=secret --restart=always -p 5432:5432 -d postgres

Podman PostgreSQL

Create a permanent directory/storage.

mkdir pgdata
podman run -d --name postgres -e POSTGRES_PASSWORD=secret -v $PWD/pgdata:/var/lib/postgresql/data:Z -p 5432:5432 postgres

Open port.

firewall-cmd --permanent --add-port=5432/tcp
systemctl reload firewalld

Disk performance

The first terminal:

dd if=/dev/zero of=testfile bs=1G count=10 oflag=direct,dsync

The second terminal:

iostat -xyz 1

Columns

  • %util: should approach 100%
  • wkB/s: good - at least 800 MB/s
  • w_await: write latency - below 20 ms

| Machine | %util | wkB/s | w_await (latency) |
| --- | --- | --- | --- |
| Good | 100% | > 800 MB/s | < 20 ms |
| notebook, P50, SSD | 95-100% | 707630 (700 MB), 803840 (800 MB), 979970.50 (970 MB) | 228 ms, 242 ms, 236 ms |
| notebook, HDD | 95-100% | 120090 (120 MB), 123760 (123 MB), 121985 (121 MB) | 458 ms, 453 ms |
| notebook, USB disk | 100% | 4559 (45 MB), 30830 (20 MB), 42951 (42 MB) | 509 ms, 573 ms, 615 ms |
| notebook, SSD, thinkde | 95-100% | 390975 (390 MB), 401805 (401 MB), 394446 (394 MB) | 133.05 ms, 135.62 ms, 128.24 ms |
| ZLinux, VM | 82% | 751460 (751 MB), 774187 (774 MB) | 235.77 ms, 240.12 ms, 237 ms |
| OpenShift, ephemeral | 44-51% | 870430.30 (870 MB), 887725.30 (887 MB), 1178368.70 (1178 MB) | 42.08 ms, 39.78 ms, 37.47 ms |
| OpenShift, rook-ceph-block | 99.70%, 94.20%, 69.70%, 89.10% | 188380.00 (188 MB), 184448.00 (184 MB), 168584.00 (168 MB) | 933.08 ms, 597.15 ms, 722.00 ms |
| OpenShift, rook-cephfs | 90.24%, 88.86%, 89.10% | 44639.00 (44 MB), 210544.00 (210 MB), 43259.80 (43 MB) | 8.09 ms, 14.03 ms, 9.55 ms |
| OpenShift, nfs | 2.10%, 1.24%, 1.94% | 177.00 (0.177 MB), 210.30 (0.210 MB), 213.60 (0.213 MB) | 1.81 ms, 0.93 ms, 0.53 ms |
| Z Linux (mainframe) | 54%, 20%, 15% | 2097148.00 (2 GB), 1376253.00 (1.3 GB), 1816576.00 (1.8 GB) | 31.56 ms, 34.44 ms, 32.67 ms |
| Z Linux (mainframe), disk attached | 36%, 8%, 35% | 1998850.00 (1.9 GB), 98301.50 (98 MB), 1938884.00 (1.9 GB) | 4.94 ms, 12.39 ms, 14.31 ms |

HAProxy to give access to WebHDFS listening on a private network

A Hadoop cluster (Cloudera) is installed on a private network (10.x.x.x). The NameNode (and WebHDFS) listens on this private network, so client applications cannot reach it. One way to solve this problem is the Knox Gateway, and it is the recommended method because it gives more control over who can access Hadoop services and how.

Important: access to all data hosts in the cluster is required, because WebHDFS redirects clients to the DataNode port (default 9864).

A quick method is to use the HAProxy load balancer on the edge node.

Install HAProxy

yum install haproxy
systemctl enable haproxy
systemctl start haproxy

Collect the necessary information

  • NameNode private IP address: 10.11.17.12
  • WebHDFS port number: 9870
  • Public address of edge HAProxy node: 9.30.181.152

Verify that WebHDFS is responding from the edge node.

nc -zv 10.11.17.12 9870
curl -i -k --negotiate -u : -X GET http://10.11.17.12:9870/webhdfs/v1/tmp/?op=LISTSTATUS

...
{"FileStatuses":{"FileStatus":[
{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16387,"group":"supergroup","length":0,"modificationTime":1623492327183,"owner":"hdfs","pathSuffix":".cloudera_health_monitoring_canary_files","permission":"0","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16560,"group":"supergroup","length":0,"modificationTime":1622918018491,"owner":"yarn","pathSuffix":"hadoop-yarn","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":3,"fileId":16451,"group":"supergroup","length":0,"modificationTime":1623341949547,"owner":"hive","pathSuffix":"hive","permission":"773","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16539,"group":"hadoop","length":0,"modificationTime":1623343994607,"owner":"mapred","pathSuffix":"logs","permission":"1777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
]}}

Reconfigure HAProxy.

vi /etc/haproxy/haproxy.cfg

..........
#-----------------------------------------------
# access to webhdfs on private network
#-----------------------------------------------

frontend webhdfs-tcp
        bind 9.30.181.152:9870
        default_backend webhdfs-tcp
        mode tcp
        option tcplog

backend webhdfs-tcp
        balance source
        mode tcp
        server inimical1.fyre.ibm.com 10.11.17.12:9870 check

systemctl reload haproxy

Test again using a public address.

nc -zv 9.30.181.152 9870
curl -i -k --negotiate -u : -X GET http://9.30.181.152:9870/webhdfs/v1/tmp/?op=LISTSTATUS

Minio client

https://github.com/minio/mc
https://docs.min.io/docs/minio-client-complete-guide.html

Install the Minio client on RedHat/CentOS

wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
mc --help

Prepare access information.

| Information | Sample value |
| --- | --- |
| S3-like endpoint | sbstoraga.obj.fyre.ibm.com |
| Access Key | KEY24mABCwio |
| Secret | 67wAGqFfQS5axyi |

Create alias

mc alias set mymc http://sbstoraga.obj.fyre.ibm.com KEY24mABCwio 67wAGqFfQS5axyi

List aliases

mc alias list

Create a bucket

mc mb mymc/hello

List buckets

mc ls mymc

[2021-06-17 15:18:28 CEST]     0B hello/
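
Copy a file into the bucket and list it (a usage sketch; the file name is a placeholder):

mc cp ./localfile mymc/hello/
mc ls mymc/hello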

ssh, KERBEROS, KRB5CCNAME

ssh ignores default_ccache_name in /etc/krb5.conf and sets KRB5CCNAME to PERSISTENT KEYRING. A workaround is to unset KRB5CCNAME.

vi /etc/profile

unset KRB5CCNAME

Kerberos ticket on AD

ktpass /princ dsxhi@FYRE.NET /pass secret /ptype KRB5_NT_PRINCIPAL /out dsxhi.keytab
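
The keytab can be verified on a Linux client, assuming the principal from the command above:

kinit -kt dsxhi.keytab dsxhi@FYRE.NET
klist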

Quarkus

Create Quarkus project

mvn io.quarkus.platform:quarkus-maven-plugin:2.2.2.Final:create -DprojectGroupId=com.redhat.training -DprojectArtifactId=multiplier -DplatformGroupId=com.redhat.quarkus -DplatformVersion=2.2.2.Final -DclassName="com.redhat.training.MultiplierResource" -Dpath="/multiplier" -Dextensions="rest-client"

Add OpenShift extension

mvn quarkus:add-extension -Dextension=openshift

Deploy to OpenShift (ignore cert validation, skip tests)

mvn package -Dquarkus.kubernetes.deploy=true -Dquarkus.s2i.base-jvm-image=registry.access.redhat.com/ubi8/openjdk-11 -Dquarkus.openshift.expose=true -Dquarkus.kubernetes-client.trust-certs=true -Dmaven.test.skip=true

KVM resize

qemu-img info kvmmachine.img
qemu-img resize kvmmachine.img +10G
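
Resizing the image does not grow the guest file system. Inside the guest something like the following is still needed; a sketch assuming an XFS root on /dev/vda1, with growpart from the cloud-utils-growpart package:

growpart /dev/vda 1
xfs_growfs /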

X11 forwarding request failed on channel 0

https://www.cyberciti.biz/faq/how-to-fix-x11-forwarding-request-failed-on-channel-0/

sudo vi /etc/ssh/sshd_config

X11Forwarding yes
X11UseLocalhost no

sudo yum install xauth

Enable NFS

https://computingforgeeks.com/install-and-configure-nfs-server-on-centos-rhel/

Install software

sudo yum -y install nfs-utils
systemctl enable --now nfs-server rpcbind

Test services

systemctl status nfs-server
systemctl status rpcbind

Define exported file systems

vi /etc/exports

/mnt/usb           *(rw,no_root_squash)
/mnt/usb1           *(rw,no_root_squash)

Export file systems

exportfs -a

Test locally

showmount -e localhost

Export list for localhost:
/mnt/usb1 *
/mnt/usb  *

Open ports

firewall-cmd --add-service=nfs --permanent
firewall-cmd --add-service={nfs3,mountd,rpc-bind} --permanent
firewall-cmd --reload

Test on the remote client machine

showmount -e nfsserverhost

Export list for localhost:
/mnt/usb1 *
/mnt/usb  *

Define mount points on the client machine.

sudo vi /etc/fstab

nfsserverhost:/mnt/usb /mnt/usb nfs rw,sync,hard,intr,noauto        0       0
nfsserverhost:/mnt/usb1 /mnt/usb1 nfs rw,sync,hard,intr,noauto      0       0

Mount and enjoy

sudo mount /mnt/usb

Curl as telnet

curl -v telnet://api.kist.cp.fyre.ibm.com:5432
curl -v telnet://api.kist.cp.fyre.ibm.com:22

Jupyter SQL

https://towardsdatascience.com/heres-how-to-run-sql-in-jupyter-notebooks-f26eb90f3259

In a terminal, install psycopg2 if it is not already installed.

pip install psycopg2-binary


!pip install ipython-sql
%load_ext sql
%sql postgresql://queryuser:secret@api.kist.cp.fyre.ibm.com/querydb

%%sql

select * from test

JSPWiki

mkdir /home/mywiki/jspwiki
find jspwiki -type d -exec chmod 777 {} \;
find jspwiki -type f -exec chmod 666 {} \;


podman run -d -p 18080:8080 --env="UMASK=000" --env="jspwiki_baseURL=http://localhost/" --restart always --name jspwiki --volume="/home/mywiki/jspwiki:/var/jspwiki/pages:Z" metskem/docker-jspwiki

As root:

firewall-cmd --permanent --add-port=18080/tcp
systemctl reload firewalld

Create a service

vi /etc/systemd/system/jspwiki-container.service

[Unit]
Description=My Wiki

[Service]
User=mywiki
Restart=always
ExecStart=/usr/bin/podman start -a jspwiki
ExecStop=/usr/bin/podman stop -t 2 jspwiki

[Install]
WantedBy=multi-user.target

systemctl enable jspwiki-container
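
Reload systemd and start the service:

systemctl daemon-reload
systemctl start jspwiki-container
systemctl status jspwiki-container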

Oracle

Create new user

create user perf identified by secret;
GRANT ALL PRIVILEGES TO perf;

Java, standard logging

-Djava.util.logging.config.file=src/main/resources/logging.properties

handlers= java.util.logging.ConsoleHandler
.level= INFO
java.util.logging.ConsoleHandler.level = INFO
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
com.logging.level = WARNING
#java.util.logging.SimpleFormatter.format=%1$s %4$s: %5$s %6$s
java.util.logging.SimpleFormatter.format=[%1$tF %1$tT] [%4$-7s] %5$s %6$s %n

# "%1$tc %2$s%n%4$s: %5$s%6$s%n"

/etc/subuid, /etc/subgid

Problem:

Error: writing blob: adding layer with blob "sha256:5d20c808ce198565ff70b3ed23a991dd49afac45dece63474b27ce6ed036adc6": Error processing tar file(exit status 1): potentially insufficient UIDs or GIDs available in user namespace (requested 0:42 for /etc/shadow): Check /etc/subuid and /etc/subgid: lchown /etc/shadow: invalid argument

Add manually:

vi /etc/subuid
vi /etc/subgid

podcast:100000:65536
repo:165536:65536
kafka:231072:65536

Important: after that, kill all podman processes launched by this user; do the same after an unsuccessful run.

ps -aef | grep podman
kill -9 pid
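
After fixing the mappings, podman can also be told to re-read them (this resets the user namespace):

podman system migrate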

Oracle, add volumes

SELECT FILE_NAME,BYTES,AUTOEXTENSIBLE,MAXBYTES FROM DBA_DATA_FILES WHERE TABLESPACE_NAME = 'USERS'

ALTER TABLESPACE users ADD DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users02.dbf' SIZE 31G AUTOEXTEND ON
ALTER TABLESPACE users ADD DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users03.dbf' SIZE 31G AUTOEXTEND ON
ALTER TABLESPACE users ADD DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users04.dbf' SIZE 20G AUTOEXTEND ON

Password generator

tr -dc A-Za-z0-9_ < /dev/urandom | head -c 16 | xargs

Podman to Docker

https://www.reddit.com/r/podman/comments/r6ybkw/aws_sam_and_podman/

systemctl --user enable --now podman.socket
systemctl --user start podman.socket
systemctl --user status podman.socket


podman create --name="docker-cli" docker:dind
podman cp docker-cli:/usr/local/bin/docker ./docker
podman rm docker-cli
mv docker /usr/local/bin


export DOCKER_HOST=unix:///run/user/$UID/podman/podman.sock

docker images

AWS SAM, pipenv

pip3 install --user pipenv

Create work directory

mkdir -p work/sam
cd work/sam
pipenv install awscli aws-sam-cli

Switch to environment

pipenv shell

(sam) sbartkowski:sam$ 

sam --version
SAM CLI, version 1.37.0

AWS, local environment for developing

Install docker

Assuming CentOS 8.

It does not seem to work with podman.

https://docs.docker.com/engine/install/centos/

dnf install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
dnf install -y docker-ce docker-ce-cli containerd.io
systemctl start docker
systemctl enable docker

Test

docker run hello-world

Install python3

dnf install python3


python3 --version

Python 3.6.8

pip3 --version

pip 9.0.3 from /usr/lib/python3.6/site-packages (python 3.6)

Install Node.js 16

Install Node.js 16, not the default Node.js 10.

https://techviewleo.com/install-node-js-on-centos-linux/

curl -fsSL https://rpm.nodesource.com/setup_16.x | sudo bash -

dnf info nodejs

Available Packages
Name         : nodejs
Epoch        : 2
Version      : 16.14.0
Release      : 1nodesource
Architecture : x86_64

dnf install nodejs

node --version

v16.14.0

Install AWS command-line tools

As root here, but installing as a local user is recommended.

pip3 install awscli
pip3 install aws-sam-cli

aws --version

aws-cli/1.22.54 Python/3.6.8 Linux/4.18.0-348.2.1.el8_5.x86_64 botocore/1.23.54

sam --version

SAM CLI, version 1.37.0

Configure access to AWS

You need an Access Key ID and a Secret Access Key.

aws configure

AWS Access Key ID [****************IZMQ]: 
AWS Secret Access Key [****************mROK]: 
Default region name [None]: eu-west-1
Default output format [None]: table

Test

aws ec2 describe-regions

---------------------------------------------------------------------------------
|                                DescribeRegions                                |
+-------------------------------------------------------------------------------+
||                                   Regions                                   ||
|+-----------------------------------+-----------------------+-----------------+|
||             Endpoint              |      OptInStatus      |   RegionName    ||
|+-----------------------------------+-----------------------+-----------------+|
||  ec2.eu-north-1.amazonaws.com     |  opt-in-not-required  |  eu-north-1     ||
||  ec2.ap-south-1.amazonaws.com     |  opt-in-not-required  |  ap-south-1     ||
||  ec2.eu-west-3.amazonaws.com      |  opt-in-not-required  |  eu-west-3      ||
||  ec2.eu-west-2.amazonaws.com      |  opt-in-not-required  |  eu-west-2      ||
||  ec2.eu-west-1.amazonaws.com      |  opt-in-not-required  |  eu-west-1      ||
||  ec2.ap-northeast-3.amazonaws.com |  opt-in-not-required  |  ap-northeast-3 ||
||  ec2.ap-northeast-2.amazonaws.com |  opt-in-not-required  |  ap-northeast-2 ||
||  ec2.ap-northeast-1.amazonaws.com |  opt-in-not-required  |  ap-northeast-1 ||
||  ec2.sa-east-1.amazonaws.com      |  opt-in-not-required  |  sa-east-1      ||
||  ec2.ca-central-1.amazonaws.com   |  opt-in-not-required  |  ca-central-1   ||
||  ec2.ap-southeast-1.amazonaws.com |  opt-in-not-required  |  ap-southeast-1 ||
||  ec2.ap-southeast-2.amazonaws.com |  opt-in-not-required  |  ap-southeast-2 ||
||  ec2.eu-central-1.amazonaws.com   |  opt-in-not-required  |  eu-central-1   ||
||  ec2.us-east-1.amazonaws.com      |  opt-in-not-required  |  us-east-1      ||
||  ec2.us-east-2.amazonaws.com      |  opt-in-not-required  |  us-east-2      ||
||  ec2.us-west-1.amazonaws.com      |  opt-in-not-required  |  us-west-1      ||
||  ec2.us-west-2.amazonaws.com      |  opt-in-not-required  |  us-west-2      ||
|+-----------------------------------+-----------------------+-----------------+|

aws s3 ls

2022-01-24 01:17:14 stanbbucket

npm username

~/.npmrc
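
A minimal sketch of the relevant entries (the token value is a placeholder):

registry=https://registry.npmjs.org/
//registry.npmjs.org/:_authToken=<token>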
