- Docker Engine and Docker CLI
- Docker Compose (formerly “Fig”)
- Docker Machine
- Docker Swarm Mode
- Kitematic
- Docker Cloud (formerly “Tutum”)
- Docker Datacenter
Installation
For Mac, using the Homebrew package manager:
$ brew cask install docker docker-toolbox
Using the Docker install script (not secure, since it pipes a remote script into your shell):
$ curl -sSL https://get.docker.com | bash
To use Docker as a non-root user, add the user to the docker group:
$ sudo usermod -aG docker <user>
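Note that the group membership only takes effect for new login sessions; log out and back in, or run newgrp docker, before using Docker as that user.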
Most common commands
#search images
$ docker search elasticsearch
#list images
$ docker images
#list containers
$ docker ps
$ docker ps -a | grep 60afe4036d97
#list all running containers
$ docker ps -a -f status=running
#remove container
$ docker rm <containerid>
#remove multiple container
$ docker rm 305297d7a235 ff0a5c3750b9
$ docker rm $(docker ps -a -q -f status=exited)
#remove image
$ docker rmi <imageid>
#remove all images
$ docker rmi $(docker images -q)
#remove all images using force removal (--force, -f)
$ docker rmi -f $(docker images -q)
#start container
$ docker start <containerName>
$ docker start -ai <containerName>
#stop container
$ docker stop <containerName>
#create machine
$ docker-machine create --driver virtualbox default
#rename container
$ docker rename CONTAINER NEW_NAME
#get container
$ docker pull glassfish
#run container
$ docker run glassfish
#get and run container
$ docker run -it glassfish bash
$ docker run -it glassfish sh
#login to Docker Hub
$ docker login
#execute bash
$ docker exec -it glassfish /bin/bash
#kill container
$ docker kill glassfish
Dockerfile example
FROM aripd/java
LABEL maintainer="dev@aripd.com" description="Glassfish v5 release image"
ENV GLASSFISH_ARCHIVE glassfish5
ENV DOMAIN_NAME domain1
ENV INSTALL_DIR /opt
RUN useradd -b /opt -m -s /bin/sh -d ${INSTALL_DIR} serveradmin && echo serveradmin:serveradmin | chpasswd
RUN apt-get update \
 && apt-get -y install curl unzip
RUN curl -o ${INSTALL_DIR}/${GLASSFISH_ARCHIVE}.zip -L http://mirrors.xmission.com/eclipse/glassfish/glassfish-5.1.0.zip \
&& unzip ${INSTALL_DIR}/${GLASSFISH_ARCHIVE}.zip -d ${INSTALL_DIR} \
&& rm ${INSTALL_DIR}/${GLASSFISH_ARCHIVE}.zip \
&& chown -R serveradmin:serveradmin /opt \
&& chmod -R a+rw /opt
ENV GLASSFISH_HOME ${INSTALL_DIR}/${GLASSFISH_ARCHIVE}/glassfish
ENV DEPLOYMENT_DIR ${GLASSFISH_HOME}/domains/${DOMAIN_NAME}/autodeploy
WORKDIR ${GLASSFISH_HOME}/bin
ENTRYPOINT ./asadmin start-domain --verbose ${DOMAIN_NAME}
USER serveradmin
EXPOSE 4848 8009 8080 8181
Build the aripd/glassfish image after creating the Dockerfile:
$ docker build -t aripd/glassfish .
Push the aripd/glassfish image to Docker Hub:
$ docker push aripd/glassfish
Run the aripd/glassfish image as a container named glassfish, publishing ports 8080 and 4848:
$ docker run -d -p 8080:8080 -p 4848:4848 --name glassfish aripd/glassfish
Stop the glassfish container:
$ docker stop glassfish
Remove the glassfish container:
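A stopped container can then be removed by the name it was given above:
$ docker rm glassfish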
Updating and committing an image
$ docker run -t -i aripd/ecommerce /bin/bash
root@0b2616b0e5a8:/# asadmin change-admin-password
root@0b2616b0e5a8:/# asadmin enable-secure-admin
... etc.
$ docker commit -m "Made some changes" -a "aripddev" 0b2616b0e5a8 aripd/ecommerce:v2
$ docker images
$ docker run -t -i aripd/ecommerce:v2 /bin/bash
$ docker history aripd/ecommerce:v2
Commit and push Docker images to Docker Hub:
docker ps
docker commit c3f279d17e0a aripd/testimage:version3
docker images
docker login
docker push aripd/testimage
Log into a container as the root user
You can log into a running container as the root user (UID 0) instead of the image's default user by passing the -u option.
docker exec -u 0 -it mycontainer bash
getting logs
docker logs [container-name]
docker logs -f --tail 10 container_name  # starting from the last 10 lines
docker logs --since=2m <container_id>    # since the last 2 minutes
docker logs --since=1h <container_id>    # since the last 1 hour
creating and connecting to a network
$ docker network create [network-name]
$ docker network inspect [network-name]
$ docker network connect [network-name] [container-name]
$ docker network inspect [network-name]
Running a container on the network
docker run --net [network-name] -d -p 8282:8080 --name [container-name] aripd/app-name
isolated network
Dockerfile
FROM glassfish:latest
LABEL maintainer="dev@aripd.com"
RUN apt-get update
RUN curl -L -o glassfish/domains/domain1/lib/mysql-connector-java-8.0.21.jar http://central.maven.org/maven2/mysql/mysql-connector-java/8.0.21/mysql-connector-java-8.0.21.jar
COPY domain.xml glassfish/domains/domain1/config/domain.xml
COPY admin-keyfile glassfish/domains/domain1/config/admin-keyfile
COPY target/ecommerce-3.0-SNAPSHOT.war glassfish/domains/domain1/autodeploy/ecommerce-3.0-SNAPSHOT.war
EXPOSE 8080 4848 8181
The JDBC URL is jdbc:mysql://mydatabase:3306/Database_Name, since the mydatabase container name resolves as a hostname on the user-defined network.
$ docker network create my-network
$ docker run -d --net=my-network --name=mydatabase -e MYSQL_ROOT_PASSWORD='supersecret' mysql
$ unset GEM_PATH
$ mvn clean install && docker build -t aripd/ecommerce .
$ docker run -d --net=my-network --name=myapp aripd/ecommerce
Setting Up Database Servers
Redis
start a redis instance
$ docker run --name some-redis -p 6379:6379 -d redis
start with persistent storage
$ docker run --name some-redis -p 6379:6379 -d redis redis-server --appendonly yes
connecting via redis-cli
$ docker run -it --network some-network --rm redis redis-cli -h some-redis
RedisInsight is a GUI tool to access the server.
$ brew cask install redisinsight
MongoDB
$ docker run --name mongo -p 27017:27017 -e MONGO_INITDB_ROOT_USERNAME=mongoadmin -e MONGO_INITDB_ROOT_PASSWORD=secret mongo:tag
MongoDB Compass is the GUI tool to access the server.
$ brew cask install mongodb-compass
MySQL
Download and start a MySQL container:
$ docker run --name mysqldb -p 3306:3306 -e MYSQL_USER=dbuser -e MYSQL_PASSWORD=dbpass -e MYSQL_DATABASE=dbname -e MYSQL_ROOT_PASSWORD=secret mysql:tag
Install MySQL Shell and connect to the database (see the MySQL Shell documentation for more details):
$ brew cask install mysql-shell
$ mysqlsh dbuser@localhost:3306
Create a container on the host network with --net=host and share the host's MySQL data directory with the container:
$ docker run \
    --name=mysql-host \
    --net=host \
    -e MYSQL_ROOT_PASSWORD=mypassword \
    -v /usr/local/var/mysql:/var/lib/mysql \
    -d mysql:5.6
$ docker exec -it mysql-host /bin/bash
PostgreSQL
Download and run PostgreSQL:
$ docker run --name postgresqldb -p 5432:5432 -e POSTGRES_PASSWORD=secret -d postgres
Install and connect using psql:
$ psql -h localhost -U postgres
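If the psql client is not installed on macOS, one option (assuming Homebrew, in keeping with the other sections) is the keg-only libpq formula:
$ brew install libpq
$ brew link --force libpq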
Microsoft SQL Server
Download and start a Microsoft SQL Server container:
$ docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=SqlServer2017' -p 1433:1433 -d microsoft/mssql-server-linux:2017-latest
Install and connect using sqlcmd (see Microsoft's documentation for more details):
$ brew tap microsoft/mssql-release https://github.com/Microsoft/homebrew-mssql-release
$ brew install --no-sandbox mssql-tools
Connect to SQL Server:
$ sqlcmd -S localhost,1433 -U SA -P SqlServer2017
Docker Compose
Create a docker-compose.yml file and run it with Docker Compose. Below is an example for Graylog:
version: '2'
services:
  # MongoDB: https://hub.docker.com/_/mongo/
  mongodb:
    image: mongo:3
  # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/5.6/docker.html
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.2
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      # Disable X-Pack security: https://www.elastic.co/guide/en/elasticsearch/reference/5.6/security-settings.html#general-security-settings
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
  # Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    image: graylog/graylog:2.4.3-1
    environment:
      # CHANGE ME!
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_WEB_ENDPOINT_URI=http://127.0.0.1:9000/api
    links:
      - mongodb:mongo
      - elasticsearch
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 514:514
      # Syslog UDP
      - 514:514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp
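To bring the stack up in the background (a standard Compose invocation, not specific to this example):
$ docker-compose up -d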
Docker Swarm Mode
Create machines
For a development environment, first create virtual machines using docker-machine. Add an "&" to send each process to the background. You can also use vagrant instead of docker-machine, and you can create machines on amazonec2, azure, digitalocean, google, etc. instead of virtualbox.
To create 3 machines, each with 2 CPUs and about 3 GB of RAM, in virtualbox, run
$ docker-machine create -d=virtualbox --virtualbox-cpu-count=2 --virtualbox-memory=3096 vbox-01
$ docker-machine create -d=virtualbox --virtualbox-cpu-count=2 --virtualbox-memory=3096 vbox-02 &
$ docker-machine create -d=virtualbox --virtualbox-cpu-count=2 --virtualbox-memory=3096 vbox-03 &
To list the machines and check their state, run:
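$ docker-machine ls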
Init and join the swarm
To connect to the first virtual server, which will be the manager, run
$ eval $(docker-machine env vbox-01)
After connecting to vbox-01, start Docker Swarm and make the cluster ready by running docker swarm init, which gives the error message below:
"Error response from daemon: could not choose an IP address to advertise since this system has multiple addresses on different interfaces (10.0.2.15 on eth0 and 192.168.99.100 on eth1) - specify one with --advertise-addr"
Since the VirtualBox machine has two Ethernet interfaces, you should specify the public IP with the --advertise-addr parameter:
docker swarm init --advertise-addr=192.168.99.100
You should get the output below:
Swarm initialized: current node (chfpci27hkjr58m4wbfhe72wz) is now a manager.
The current node is now a manager. To add a worker to this swarm, run the following command on the worker nodes:
$ docker swarm join --token SWMTKN-1-0ka8ag69aylzi4esvlgwwjv9s2t2wa3er1y4fza1yzzh3mqaat-euq613kyvhdri9mggxmm3b6wu 192.168.99.100:2377
To add a manager to this swarm, run docker swarm join-token manager and follow the instructions.
To add a worker to this swarm, run docker swarm join-token worker and follow the instructions.
To connect to the second virtual server, which will be a worker, run
$ eval $(docker-machine env vbox-02)
To add vbox-02 as a worker, run
$ docker swarm join --token SWMTKN-1-0ka8ag69aylzi4esvlgwwjv9s2t2wa3er1y4fza1yzzh3mqaat-euq613kyvhdri9mggxmm3b6wu 192.168.99.100:2377
You can complete the same steps for vbox-03
$ eval $(docker-machine env vbox-03)
$ docker swarm join --token SWMTKN-1-0ka8ag69aylzi4esvlgwwjv9s2t2wa3er1y4fza1yzzh3mqaat-euq613kyvhdri9mggxmm3b6wu 192.168.99.100:2377
To list nodes, run the docker node ls command on a manager node, which is vbox-01 in this example.
$ eval $(docker-machine env vbox-01)
$ docker node ls
So now our manager and worker nodes are ready for deployment.
Deploy a service to the swarm
$ docker service create --name=web --publish=9000:80 nginx:latest
Since there are multiple nodes, to see on which node the service task is running, run
$ docker service ps web
To check that the service is reachable from every node:
$ open http://192.168.99.100:9000
$ open http://192.168.99.101:9000
$ open http://192.168.99.102:9000
We created only one service task on one node of three, yet we can still access the service through all three nodes. This is thanks to the routing mesh.
The routing mesh enables each node in the swarm to accept connections on published ports for any service running in the swarm, even if there is no task running on that node, and routes all incoming requests on published ports to an active container.
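A quick way to confirm that the port is published through the routing mesh (ingress publish mode) is to inspect the service endpoint; the exact field layout may vary between Docker Engine versions:
$ docker service inspect --format '{{json .Endpoint.Ports}}' web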

Scaling
To watch and check the number of replicas, which is 1/1 in our example:
$ watch -d docker service ps web
To scale our service from 1 to 2 replicas:
$ docker service update --replicas=2 web
To list the containers on the current node and get the container ID:
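$ docker container ls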
To kill the container:
$ docker container kill <containerid>
Once the container has been killed, the Docker Swarm scheduler will create a new one and the current state will return to running.
To watch and list the nodes:
$ watch -d docker node ls
You can change the type of a node from Manager to Worker or vice versa.
$ docker node promote vbox-02
$ docker node demote vbox-02
There are two types of Manager Status: one is Leader and the other is Reachable. You can add more than one manager node, but only one is the Leader; the others are Reachable.
The decision about how many manager nodes to implement is a trade-off between performance and fault-tolerance. Adding manager nodes to a swarm makes the swarm more fault-tolerant. However, additional manager nodes reduce write performance because more nodes must acknowledge proposals to update the swarm state. This means more network round-trip traffic.
Swarm manager nodes use the Raft Consensus Algorithm to manage the swarm state. Raft requires a majority of managers, also called the quorum, to agree on proposed updates to the swarm, such as node additions or removals.
While it is possible to scale a swarm down to a single manager node, it is impossible to demote the last manager node. Scaling down to a single manager is an unsafe operation and is not recommended. An odd number of managers is recommended, because the next even number does not make the quorum easier to keep.
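For example, a swarm with 3 managers has a quorum of 2 and tolerates the loss of 1 manager; a swarm with 4 managers has a quorum of 3 and still tolerates only 1 failure, so the extra manager adds overhead without improving fault-tolerance.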
Apply rolling updates and rollback to a service
Deploy an older redis
$ docker service create \
    --replicas 3 \
    --name redis \
    --update-delay 10s \
    redis:3.0.6
Inspect the redis service:
$ docker service inspect --pretty redis
Update the container image for redis. The swarm manager applies the update to nodes according to the UpdateConfig policy:
$ docker service update --image redis:3.0.7 redis
Run docker service ps <SERVICE-ID> to watch the rolling update:
$ docker service ps redis
To roll back to the previous specification:
$ docker service update --rollback redis
Deploy a stack to a swarm using docker stack / docker-compose
When running Docker Engine in swarm mode, you can use docker stack deploy to deploy a complete application stack to the swarm. The deploy command accepts a stack description in the form of a Compose file.
The docker stack deploy command supports any Compose file of version "3.0" or above.
- Create the stack with docker stack deploy:
$ docker stack deploy --compose-file docker-compose.yml stackdemo
The last argument is a name for the stack. Each network, volume and service name is prefixed with the stack name.
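For example, a service defined as web in the Compose file (web is just an illustrative name) would appear as stackdemo_web in the output of docker service ls.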
- Check that it's running with docker stack services stackdemo:
$ docker stack services stackdemo
Once it's running, you should see 1/1 under REPLICAS for both services. This might take some time if you have a multi-node swarm, as images need to be pulled.
- Bring the stack down with docker stack rm:
$ docker stack rm stackdemo
- Bring the registry down with docker service rm (this applies if you created a local registry service for the stack):
$ docker service rm registry
- If you're just testing things out on a local machine and want to bring your Docker Engine out of swarm mode, use docker swarm leave:
$ docker swarm leave --force