Docker
October 23, 2016

Platform

  • Docker Engine and Docker CLI
  • Docker Compose (formerly “Fig”)
  • Docker Machine
  • Docker Swarm Mode
  • Kitematic
  • Docker Cloud (formerly “Tutum”)
  • Docker Datacenter

Log into a Docker container as the root user

You can log into a running Docker container as the root user (UID 0), instead of the image's default user, by using the -u option.

docker exec -u 0 -it mycontainer bash
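
To confirm that you really are root inside the container (mycontainer is the placeholder name from the command above, and this assumes the image provides the id utility):

docker exec -u 0 mycontainer id   # should report uid=0(root)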

Getting logs

docker logs [container-name]
docker logs -f --tail 10 container_name   # follow, starting from the last 10 lines
docker logs --since=2m <container_id>     # logs from the last 2 minutes
docker logs --since=1h <container_id>     # logs from the last hour

Creating and connecting to a network

$ docker network create [network-name]
$ docker network inspect [network-name]
$ docker network connect [network-name] [container-name]
$ docker network inspect [network-name]

Running a container on a network

docker run --net [network-name] -d -p 8282:8080 --name [container-name] aripd/app-name
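
Containers attached to the same user-defined network can reach each other by container name through Docker's embedded DNS. A quick check, assuming a second container is attached to the same network and the image ships ping:

docker exec -it [container-name] ping -c 1 [other-container-name]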

Docker Compose

Create a docker-compose.yml file and run

$ docker-compose up
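
A few commonly used variations:

$ docker-compose up -d      # start in the background (detached)
$ docker-compose logs -f    # follow logs of all services
$ docker-compose ps         # list containers of this Compose project
$ docker-compose down       # stop and remove containers and networks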

Below is an example for Graylog:

version: '2'
services:
  # MongoDB: https://hub.docker.com/_/mongo/
  mongodb:
    image: mongo:3
  # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/5.6/docker.html
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.2
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      # Disable X-Pack security: https://www.elastic.co/guide/en/elasticsearch/reference/5.6/security-settings.html#general-security-settings
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
  # Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    image: graylog/graylog:2.4.3-1
    environment:
      # CHANGE ME!
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_WEB_ENDPOINT_URI=http://127.0.0.1:9000/api
    links:
      - mongodb:mongo
      - elasticsearch
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 514:514
      # Syslog UDP
      - 514:514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp
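
To bring the Graylog stack up in the background and open the web interface (the root password hash above corresponds to “admin”, so the default login is admin/admin):

$ docker-compose up -d
$ open http://127.0.0.1:9000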

Below is another example, for MySQL with phpMyAdmin:

version: '3.8'

services:
    mysql-server:
        image: mysql:8.0.23
        volumes:
            - ./data/db:/var/lib/mysql
            - ./conf/init-scripts/mysql:/docker-entrypoint-initdb.d
        restart: always
        environment:
            MYSQL_ROOT_PASSWORD: secret
            MYSQL_DATABASE: dbname
            MYSQL_USER: username
            MYSQL_PASSWORD: secret
        ports:
            - "3306:3306"
        command: ['--character-set-server=utf8mb4', '--collation-server=utf8mb4_bin']

    phpmyadmin:
        image: phpmyadmin/phpmyadmin:5.1.0
        restart: always
        environment:
          PMA_HOST: mysql-server
          PMA_USER: root
          PMA_PASSWORD: secret
        ports:
          - "8081:80"

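Once this stack is up, phpMyAdmin is reachable on port 8081, and you can also connect to the database directly inside the mysql-server container with the credentials defined above:

$ docker-compose up -d
$ open http://localhost:8081
$ docker-compose exec mysql-server mysql -uusername -psecret dbname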

Startup and shutdown order

Compose starts and stops services in dependency order. With the long form of depends_on you can additionally require that a dependency is healthy (condition: service_healthy) or merely started (condition: service_started) before the dependent service starts, as in the example below.

docker-compose.yml
services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy
        restart: true
      redis:
        condition: service_started
  redis:
    image: redis
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      retries: 5
      start_period: 30s
      timeout: 10s
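
A minimal way to verify the ordering, assuming the file above is saved as docker-compose.yml and a recent Docker Compose (the docker compose plugin) is used, since the long depends_on form with restart: true needs it; ${POSTGRES_USER} and ${POSTGRES_DB} are substituted from the shell environment or an .env file:

$ docker compose up -d
$ docker compose ps   # db is reported healthy once pg_isready succeeds; web starts only after that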

Docker Swarm Mode

Create machines

For a development environment, first create virtual machines using docker-machine. Append “&” to a command to send the creation process to the background.

You can also use Vagrant instead of docker-machine, and you can create machines with drivers such as amazonec2, azure, digitalocean, and google instead of virtualbox.

To create three machines in VirtualBox, each with 2 CPUs and roughly 3 GB of RAM (3096 MB), run

$ docker-machine create -d=virtualbox --virtualbox-cpu-count=2 --virtualbox-memory=3096 vbox-01
$ docker-machine create -d=virtualbox --virtualbox-cpu-count=2 --virtualbox-memory=3096 vbox-02 &
$ docker-machine create -d=virtualbox --virtualbox-cpu-count=2 --virtualbox-memory=3096 vbox-03 &

To list machines, run

docker-machine ls

Init and join the swarm

To connect to the first virtual machine, which will be the manager, run

$ eval $(docker-machine env vbox-01)

After connecting to vbox-01, to initialize the swarm and make the cluster ready, run

docker swarm init

which gives the error message below:

"Error response from daemon: could not choose an IP address to advertise since this system has multiple addresses on different interfaces (10.0.2.15 on eth0 and 192.168.99.100 on eth1) - specify one with --advertise-addr"

Since the VirtualBox machine has two Ethernet interfaces, you should specify the public IP with the --advertise-addr parameter.

docker swarm init --advertise-addr=192.168.99.100

You should get the output below:

Swarm initialized: current node (chfpci27hkjr58m4wbfhe72wz) is now a manager.

The current node is now a manager. To add a worker to this swarm, run the following command on the worker nodes:

$ docker swarm join --token SWMTKN-1-0ka8ag69aylzi4esvlgwwjv9s2t2wa3er1y4fza1yzzh3mqaat-euq613kyvhdri9mggxmm3b6wu 192.168.99.100:2377

To add a manager to this swarm, run docker swarm join-token manager and follow the instructions.

To add a worker to this swarm, run docker swarm join-token worker and follow the instructions.

To connect to the second virtual machine, which will be a worker, run

$ eval $(docker-machine env vbox-02)

To add vbox-02 as a worker, run

$ docker swarm join --token SWMTKN-1-0ka8ag69aylzi4esvlgwwjv9s2t2wa3er1y4fza1yzzh3mqaat-euq613kyvhdri9mggxmm3b6wu 192.168.99.100:2377

You can complete the same steps for vbox-03:

$ eval $(docker-machine env vbox-03)
$ docker swarm join --token SWMTKN-1-0ka8ag69aylzi4esvlgwwjv9s2t2wa3er1y4fza1yzzh3mqaat-euq613kyvhdri9mggxmm3b6wu 192.168.99.100:2377

To list nodes, run the docker node ls command on a manager node, which is vbox-01 in this example.

$ eval $(docker-machine env vbox-01)
$ docker node ls

Our manager and worker nodes are now ready, so we can deploy services to the swarm.

Deploy a service to the swarm

$ docker service create --name=web --publish=9000:80 nginx:latest

Since there are multiple nodes, to see on which node the service's task is running, run

$ docker service ps web

To check that the service is reachable, open it through each node's IP:

$ open http://192.168.99.100:9000
$ open http://192.168.99.101:9000
$ open http://192.168.99.102:9000

We created a single service whose one task runs on only one of the three nodes. However, we can still access the service through all three nodes, thanks to the routing mesh.

The routing mesh enables each node in the swarm to accept connections on published ports for any service running in the swarm, even if there’s no task running on the node. The routing mesh routes all incoming requests to published ports on available nodes to an active container.

Ingress Routing Mesh
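
The routing mesh is implemented by the ingress overlay network, which every node joins when it enters the swarm. You can inspect it from any node:

$ docker network ls --filter driver=overlay
$ docker network inspect ingress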

Scaling

To watch the service and check the number of replicas, which is 1/1 in our example, run

$ watch -d docker service ps web

To scale our service from 1 replica to 2, run

$ docker service update --replicas=2 web
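
Equivalently, you can use docker service scale:

$ docker service scale web=2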

To list containers and get a container ID (run this on the node where the task is scheduled), run

$ docker container ls

To kill that container, run

$ docker container kill <containerid>

Once the container has been killed, the Docker Swarm scheduler will create a new one, and the current state will return to running.

Node promote/demote

To watch and list the nodes, run

$ watch -d docker node ls

You can change the role of a node from worker to manager (promote) or from manager to worker (demote):

$ docker node promote vbox-02
$ docker node demote vbox-02

There are two manager statuses: Leader and Reachable. You can add more than one manager node, but only one is the Leader; the others are Reachable.

The decision about how many manager nodes to implement is a trade-off between performance and fault-tolerance. Adding manager nodes to a swarm makes the swarm more fault-tolerant. However, additional manager nodes reduce write performance because more nodes must acknowledge proposals to update the swarm state. This means more network round-trip traffic.

Swarm manager nodes use the Raft Consensus Algorithm to manage the swarm state. Raft requires a majority of managers, also called the quorum, to agree on proposed updates to the swarm, such as node additions or removals.

While it is possible to scale a swarm down to a single manager node, it is impossible to demote the last manager node. Scaling down to a single manager is an unsafe operation and is not recommended. An odd number of managers is recommended, because the next even number does not make the quorum easier to keep.
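
Concretely, the quorum is a strict majority of the managers, so a swarm with N managers tolerates the loss of at most (N-1)/2 of them:

Managers   Quorum (majority)   Manager failures tolerated
1          1                   0
3          2                   1
5          3                   2
7          4                   3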

Apply rolling updates and rollback to a service

Deploy an older redis

$ docker service create \
  --replicas 3 \
  --name redis \
  --update-delay 10s \
  redis:3.0.6

Inspect the redis service:

$ docker service inspect --pretty redis

Update the container image for redis. The swarm manager applies the update to nodes according to the UpdateConfig policy:

$ docker service update --image redis:3.0.7 redis
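
docker service update also accepts flags such as --update-parallelism and --update-failure-action to control how many tasks are updated at once and what happens if an updated task fails to start, for example:

$ docker service update \
  --update-parallelism 2 \
  --update-failure-action pause \
  --image redis:3.0.7 \
  redis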

Run docker service ps <SERVICE-ID> to watch the rolling update:

$ docker service ps redis

To roll back to the previous specification, run

$ docker service update --rollback redis

Deploy a stack to a swarm using docker stack / docker-compose

When running Docker Engine in swarm mode, you can use docker stack deploy to deploy a complete application stack to the swarm. The deploy command accepts a stack description in the form of a Compose file.

The docker stack deploy command supports any Compose file of version “3.0” or above.

  1. Create the stack with docker stack deploy:
$ docker stack deploy --compose-file docker-compose.yml stackdemo

The last argument is a name for the stack. Each network, volume and service name is prefixed with the stack name.
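
For example, after deploying stackdemo you can list the stacks and their tasks; the deployed services show up with names like stackdemo_<service>:

$ docker stack ls
$ docker stack ps stackdemo
$ docker service ls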

  2. Check that it’s running with docker stack services stackdemo:
$ docker stack services stackdemo

Once it’s running, you should see 1/1 under REPLICAS for both services. This might take some time if you have a multi-node swarm, as images need to be pulled.

  3. Bring the stack down with docker stack rm:
$ docker stack rm stackdemo

  4. If you started a local registry for testing, bring it down with docker service rm:
$ docker service rm registry

  5. If you’re just testing things out on a local machine and want to bring your Docker Engine out of swarm mode, use docker swarm leave:
$ docker swarm leave --force