Wednesday, December 25, 2019

Docker Commands

  • docker attach - Attaches your local input/output/error streams to a running container.
  • docker commit - Creates a new image from the current changed state of a container.
  • docker exec - Runs a command in a running container.
  • docker history - Displays the history of an image.
  • docker info - Shows system-wide information.
  • docker inspect - Shows low-level information about Docker containers and images.
  • docker login - Logs in to a local registry or Docker Hub.
  • docker pull - Pulls an image or a repository from a local registry or Docker Hub.
  • docker ps - Lists containers and their properties.
  • docker restart - Stops and then starts a container.
  • docker rm - Removes containers.
  • docker rmi - Removes images.
  • docker run - Runs a command in a new, isolated container.
  • docker search - Searches Docker Hub for images.
  • docker start - Starts stopped containers.
  • docker stop - Stops running containers.
  • docker version - Shows Docker version information.
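A minimal sketch of a typical session using several of these commands (the image and the container name web1 are only illustrative placeholders):
docker pull nginx:latest               # pull an image from Docker Hub
docker run -d --name web1 nginx:latest # start a container in the background
docker ps                              # list running containers
docker exec -it web1 sh                # open a shell inside the container
docker stop web1 && docker rm web1     # stop, then remove, the container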

Build

Build an image from the Dockerfile in the current directory and tag the image
docker build -t myimage:1.0 .
List all images that are stored locally with the Docker Engine
docker image ls
Delete an image from the local image store
docker image rm alpine:3.4
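For reference, a minimal Dockerfile that the build command above could consume (the contents are only a hypothetical illustration, not part of the original cheat sheet):
# Dockerfile (hypothetical example)
FROM alpine:3.9
RUN apk add --no-cache curl
CMD ["sh"]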

Share

Pull an image from a registry
docker pull myimage:1.0
Retag a local image with a new image name and tag
docker tag myimage:1.0 myrepo/myimage:2.0
Push an image to a registry
docker push myrepo/myimage:2.0
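Pushing normally requires authenticating to the registry first; a hedged example (registry.example.com is a placeholder for a private registry address):
docker login                      # log in to Docker Hub interactively
docker login registry.example.com # or log in to a private registry
docker push myrepo/myimage:2.0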

Run

Run a container from the Alpine version 3.9 image, name the running container “web” and expose port 5000 externally, mapped to port 80 inside the container
docker container run --name web -p 5000:80 alpine:3.9
Stop a running container through SIGTERM
docker container stop web
Stop a running container through SIGKILL
docker container kill web
List the networks
docker network ls
List the running containers (add --all to include stopped containers)
docker container ls
Delete all running and stopped containers
docker container rm -f $(docker ps -aq)
Print the last 100 lines of a container’s logs
docker container logs --tail 100 web
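A quick way to check a port mapping like the one above; this sketch assumes an image that actually listens on port 80 (nginx:alpine is used here instead of bare alpine, and web2 is a placeholder name):
docker container run -d --name web2 -p 5000:80 nginx:alpine  # something must listen on port 80
curl -I http://localhost:5000                                # the mapped port should answer with HTTP headers
docker container port web2                                   # print the container's port mappings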

Docker Management

All commands below are called as options to the base
docker command. Run docker <command> --help
for more information on a particular command.

app* Docker Application
assemble* Framework-aware builds (Docker Enterprise)
builder Manage builds
cluster Manage Docker clusters (Docker Enterprise)
config Manage Docker configs
context Manage contexts
engine Manage the Docker Engine
image Manage images
network Manage networks
node Manage Swarm nodes
plugin Manage plugins
registry* Manage Docker registries
secret Manage Docker secrets
service Manage services
stack Manage Docker stacks
swarm Manage swarm
system Manage Docker
template* Quickly scaffold services (Docker Enterprise)
trust Manage trust on Docker images
volume Manage volumes
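A few hedged examples of these management commands in use (output will differ per host):
docker system df              # show disk usage of images, containers, and volumes
docker volume ls              # list volumes
docker network inspect bridge # show details of the default bridge network
docker image prune            # remove dangling images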

Wednesday, December 18, 2019

GlusterFS Cheat Sheet

Brick –> The basic unit of storage: a directory on a server in the trusted storage pool.
Volume –> A logical collection of bricks.
Cluster –> A group of linked computers working together as a single system.
Distributed File System –> A filesystem in which data is spread across multiple storage nodes and can be accessed by clients over a network.
Client –> A machine that mounts the volume.
Server –> A machine that hosts the actual file system in which the data is stored.
Replicate –> Keeping multiple copies of data to achieve high redundancy.
FUSE –> A loadable kernel module that lets non-privileged users create their own file systems without editing kernel code.
glusterd –> The daemon that runs on all servers in the trusted storage pool.
RAID –> Redundant Array of Inexpensive Disks, a technology that provides increased storage reliability through redundancy.
TCP ports 111, 24007, and 24008 on all Gluster servers
TCP ports 24009 through (24009 + number of bricks across all volumes) on all Gluster servers
Example: for 5 bricks, open TCP ports 24009 to 24014
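A hedged sketch of opening these ports with iptables (the 24009:24014 range assumes the 5-brick example above; rules added this way are not persisted across reboots):
sudo iptables -I INPUT -p tcp -m multiport --dports 111,24007,24008 -j ACCEPT
sudo iptables -I INPUT -p tcp --dport 24009:24014 -j ACCEPT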

 glusterfs -V -> Check the version of installed glusterfs
 gluster -> Gluster Console Manager in interactive mode

 sudo vi /etc/hosts -> Modify the /etc/hosts file if DNS is not available
 192.168.13.16  gluster1.storage.local  gluster1
 192.168.13.17  gluster2.storage.local  gluster2
 192.168.13.20  client.storage.local    client

 gluster peer status -> Verify the status of the trusted storage pool
 gluster peer probe gluster2-server -> Add a server to the trusted storage pool
 gluster peer detach gluster2-server -> Remove a server from the storage pool
 gluster pool list -> List the storage pool.


 mkdir -p /data/gluster/gvol0 -> Create a brick (directory) called “gvol0” in the mounted file system on both nodes
 gluster volume create gvol0 replica 2 gluster1.storage.local:/data/gluster/gvol0 gluster2.storage.local:/data/gluster/gvol0 -> Create the volume named “gvol0” with two replicas
 gluster volume start gvol0 -> Start volume
 gluster volume info -> Show the volume information
 gluster volume info gvol0 -> Show the volume information of volume gvol0
 gluster volume start test-volume -> Start volume

 mkfs.ext4 /dev/sdb1 -> Format partition
 mkdir -p /data/gluster -> Create directory called /data/gluster
 mount /dev/sdb1 /data/gluster -> Mount the disk on a directory called /data/gluster

 mount -t glusterfs gluster1-server:/test-volume /mnt/glusterfs -> Mount a Gluster volume on all Gluster servers
 cat /proc/mounts | grep glusterfs -> Verify that the Gluster volume is mounted

 #/etc/fstab
 storage.example.lan:/test-volume       /mnt  glusterfs   defaults,_netdev  0  0
 gluster1-server:/test-volume /mnt/glusterfs glusterfs defaults,_netdev 0 0 -> Edit the /etc/fstab file on all Gluster servers
 echo "/dev/sdb1 /data/gluster ext4 defaults 0 0" | sudo tee --append /etc/fstab ->Add an entry to /etc/fstab


 sudo iptables -I INPUT -p all -s <ip-address> -j ACCEPT -> Configure the firewall to allow all connections within a cluster

 Red Hat-Based Systems
 chkconfig glusterd on -> Start the glusterd daemon every time the system boots

 Debian-Based Systems
 sudo service glusterfs-server start ->Start the glusterfs-server service on all gluster nodes

 Clients
 dmesg | grep -i fuse -> Verify FUSE module is installed
 mkdir -p /mnt/glusterfs -> Create a directory to mount the GlusterFS filesystem
 mount -t glusterfs gluster1.storage.local:/gvol0 /mnt/glusterfs -> Mount the GlusterFS filesystem to /mnt/glusterfs
 df -hP /mnt/glusterfs -> Verify the mounted GlusterFS filesystem
 gluster1.storage.local:/gvol0 /mnt/glusterfs glusterfs  defaults,_netdev 0 0 -> Add to /etc/fstab to mount automatically at boot

Benchmarking & Testing

Servers
mount -t glusterfs gluster1.storage.local:/gvol0 /mnt -> Mount GlusterFS volume on the same storage node
/mnt directory ->  Data inside the /mnt directory of both nodes will always be the same (replication).
ls -l /mnt/ ->  Verify the created files
poweroff -> Shut down a gluster node to test HA from the client

Clients
touch /mnt/glusterfs/file1 -> Create some files on the mounted filesystem
ls -l /mnt/glusterfs/ ->  Verify the created files
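A hedged throughput check from the client using dd (the file name and size are arbitrary):
dd if=/dev/zero of=/mnt/glusterfs/ddtest bs=1M count=100 -> Write 100 MB to the mounted volume and report throughput
rm /mnt/glusterfs/ddtest -> Remove the test file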


Tuning
gluster volume set gvol0 network.ping-timeout "5" -> Set the network ping timeout to 5 seconds (default 42) on all gluster nodes
gluster volume get gvol0 network.ping-timeout -> Verify the network ping timeout
network.ping-timeout (default 42 seconds) -> The time the client waits to check whether the server is responsive. A ping timeout means a network disconnect between client and server: all resources held by the server on behalf of the client are cleaned up. On reconnection, those resources must be re-acquired, and locks must be re-acquired and the lock tables updated, before the client can resume its operations. This reconnect is a very expensive operation and should be avoided.
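To review other tunables, a volume's option values can be listed and reset; a hedged example (available option names vary by GlusterFS version):
gluster volume get gvol0 all -> List all options and their current values for gvol0
gluster volume reset gvol0 network.ping-timeout -> Reset an option back to its default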

RDMA
The glusterd process listens on both TCP and RDMA if an RDMA device is found. The port used for RDMA is 24008.
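A hedged example of creating and mounting a volume that also uses the RDMA transport (assumes RDMA-capable hardware and the Gluster RDMA packages are installed):
gluster volume create gvol0 replica 2 transport tcp,rdma gluster1.storage.local:/data/gluster/gvol0 gluster2.storage.local:/data/gluster/gvol0
mount -t glusterfs -o transport=rdma gluster1.storage.local:/gvol0 /mnt/glusterfs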

Troubleshooting
sudo glusterd --debug
sudo netstat -ntlp | grep gluster
netstat -tlpn | grep 24007
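When the daemon and ports look fine, the log files are the next place to check; a hedged example (the paths below are the usual defaults and may differ by distribution):
sudo tail -f /var/log/glusterfs/glusterd.log -> Follow the management daemon log
sudo less /var/log/glusterfs/glustershd.log -> Inspect the self-heal daemon log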

#https://docs.gluster.org/en/v3/Administrator%20Guide/Setting%20Up%20Clients/
#https://docs.gluster.org/en/v3/Install-Guide/Install/
sudo apt-get install -y software-properties-common
sudo add-apt-repository ppa:gluster/glusterfs-3.13 -y
sudo apt-get update
sudo apt-get install glusterfs-server=3.13.2-1build1
sudo service glusterfs-server start
sudo service glusterd status
sudo service glusterd restart

 
  #Gluster 3.10 (Stable)
  #https://www.gluster.org/install/
sudo systemctl disable ufw
sudo systemctl stop ufw
sudo systemctl status ufw
hostnamectl set-hostname gluster2
sudo vi /etc/hosts
ping -c2 gluster1
ping -c2 gluster2
sudo apt-get install -y software-properties-common
sudo add-apt-repository ppa:gluster/glusterfs-3.10
sudo apt-get update -y
sudo apt-get install glusterfs-server -y
glusterfs --version
gluster peer probe gluster1
sudo systemctl start glusterd
sudo systemctl enable glusterd
sudo gluster volume create gvol0 replica 2 gluster1.example.lan:/data/gluster/gvol0 gluster2.example.lan:/data/gluster/gvol0
sudo gluster volume start gvol0
sudo gluster volume info gvol0
sudo gluster volume set gvol0 network.ping-timeout 3
# glusterfs client
sudo apt-get install -y glusterfs-client
mkdir -p /mnt/glusterfs
mount -t glusterfs gluster1.example.lan:/gvol0 /mnt/glusterfs
echo 'gluster1.example.lan:/gvol0 /mnt/glusterfs glusterfs defaults,_netdev 0 0' >> /etc/fstab