Oct 23rd, 2016
On macOS, using the Homebrew package manager
$ brew cask install docker docker-toolbox
Using the Docker convenience script (not secure)
$ curl -sSL https://get.docker.com | bash
to use docker as a non-root user, add the user to the docker group
$ sudo usermod -aG docker <user>
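After logging out and back in, you can verify the group change took effect by checking the user's group list. A minimal sketch (the group string is passed in as an argument so the check is easy to test; on a real system you would feed it the output of `id -nG <user>`):

```shell
# Check a groups string for membership in the "docker" group.
in_docker_group() {
  echo "$1" | grep -qw docker && echo yes || echo no
}

in_docker_group "adm sudo docker"   # membership present
in_docker_group "adm sudo"          # membership absent
```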
#search images
$ docker search elasticsearch
#list images
$ docker images
#list containers
$ docker ps
$ docker ps -a | grep 60afe4036d97
#list all running containers
$ docker ps -a -f status=running
#remove container
$ docker rm <containerid>
#remove multiple containers
$ docker rm 305297d7a235 ff0a5c3750b9
#remove all exited containers
$ docker rm $(docker ps -a -q -f status=exited)
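The `$(...)` in the command above is ordinary command substitution: the inner `docker ps -a -q -f status=exited` expands to a list of IDs that becomes the arguments of `docker rm`. A toy stand-in (using the hypothetical IDs from above) shows the expansion:

```shell
# Toy stand-in for `docker ps -a -q -f status=exited`
list_exited() { printf '%s\n' 305297d7a235 ff0a5c3750b9; }

# The substitution splits on whitespace, so both IDs become arguments
echo docker rm $(list_exited)
```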
#remove image
$ docker rmi <imageid>
#remove all images
$ docker rmi $(docker images -q)
#remove all images using force removal (--force, -f)
$ docker rmi -f $(docker images -q)
#start container
$ docker start <containerName>
$ docker start -ai <containerName>
#stop container
$ docker stop <containerName>
#create machine
$ docker-machine create --driver virtualbox default
#rename container
$ docker rename CONTAINER NEW_NAME
#pull image
$ docker pull glassfish
#run container
$ docker run glassfish
#pull (if needed) and run a container with an interactive shell
$ docker run -it glassfish bash
$ docker run -it glassfish sh
#login to Docker Hub
$ docker login
#execute bash
$ docker exec -it glassfish /bin/bash
#kill container
$ docker kill glassfish
FROM aripd/java
LABEL maintainer="dev@aripd.com" description="Glassfish v5 release image"
ENV GLASSFISH_ARCHIVE glassfish5
ENV DOMAIN_NAME domain1
ENV INSTALL_DIR /opt
RUN useradd -b /opt -m -s /bin/sh -d ${INSTALL_DIR} serveradmin && echo serveradmin:serveradmin | chpasswd
RUN apt-get update \
 && apt-get -y install curl unzip
RUN curl -o ${INSTALL_DIR}/${GLASSFISH_ARCHIVE}.zip -L http://mirrors.xmission.com/eclipse/glassfish/glassfish-5.1.0.zip \
&& unzip ${INSTALL_DIR}/${GLASSFISH_ARCHIVE}.zip -d ${INSTALL_DIR} \
&& rm ${INSTALL_DIR}/${GLASSFISH_ARCHIVE}.zip \
&& chown -R serveradmin:serveradmin /opt \
&& chmod -R a+rw /opt
ENV GLASSFISH_HOME ${INSTALL_DIR}/${GLASSFISH_ARCHIVE}/glassfish
ENV DEPLOYMENT_DIR ${GLASSFISH_HOME}/domains/${DOMAIN_NAME}/autodeploy
WORKDIR ${GLASSFISH_HOME}/bin
ENTRYPOINT ./asadmin start-domain --verbose ${DOMAIN_NAME}
USER serveradmin
EXPOSE 4848 8009 8080 8181
build the aripd/glassfish image after creating the Dockerfile
$ docker build -t aripd/glassfish .
push the aripd/glassfish image to Docker Hub
$ docker push aripd/glassfish
run the aripd/glassfish image as a container named glassfish, publishing ports 8080 and 4848
$ docker run -d -p 8080:8080 -p 4848:4848 --name glassfish aripd/glassfish
stop the glassfish container
$ docker stop glassfish
remove the glassfish container
$ docker rm glassfish
$ docker run -t -i aripd/ecommerce /bin/bash
root@0b2616b0e5a8:/# asadmin change-admin-password
root@0b2616b0e5a8:/# asadmin enable-secure-admin
... etc.
$ docker commit -m "Made some changes" -a "aripddev" 0b2616b0e5a8 aripd/ecommerce:v2
$ docker images
$ docker run -t -i aripd/ecommerce:v2 /bin/bash
$ docker history aripd/ecommerce:v2
$ docker ps
$ docker commit c3f279d17e0a aripd/testimage:version3
$ docker images
$ docker login
$ docker push aripd/testimage
You can log into the container as the root user (UID 0), instead of the image's default user, by using the -u option.
$ docker exec -u 0 -it mycontainer bash
$ docker logs [container-name]
$ docker logs -f --tail 10 container_name # starting from the last 10 lines
$ docker logs --since=2m <container_id> # last 2 minutes
$ docker logs --since=1h <container_id> # last 1 hour
$ docker network create [network-name]
$ docker network inspect [network-name]
$ docker network connect [network-name] [container-name]
$ docker network inspect [network-name]
Running a container on the network
$ docker run --net [network-name] -d -p 8282:8080 --name [container-name] aripd/app-name
Dockerfile
FROM glassfish:latest
LABEL maintainer="dev@aripd.com"
RUN apt-get update
RUN curl -L -o glassfish/domains/domain1/lib/mysql-connector-java-8.0.21.jar http://central.maven.org/maven2/mysql/mysql-connector-java/8.0.21/mysql-connector-java-8.0.21.jar
COPY domain.xml glassfish/domains/domain1/config/domain.xml
COPY admin-keyfile glassfish/domains/domain1/config/admin-keyfile
COPY target/ecommerce-3.0-SNAPSHOT.war glassfish/domains/domain1/autodeploy/ecommerce-3.0-SNAPSHOT.war
EXPOSE 8080 4848 8181
The JDBC URL is jdbc:mysql://mydatabase:3306/Database_Name (the host part is the database container's name on the shared network).
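The URL follows the standard jdbc:mysql://<host>:<port>/<database> pattern, where the host resolves via Docker's embedded DNS to the container name on a user-defined network. A tiny helper sketch (the function name is illustrative):

```shell
# Build a MySQL JDBC URL; the host is the database container's name.
jdbc_url() {
  host="$1"; db="$2"
  echo "jdbc:mysql://${host}:3306/${db}"
}

jdbc_url mydatabase Database_Name
```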
$ docker network create my-network
$ docker run -d --net=my-network --name=mydatabase -e MYSQL_ROOT_PASSWORD='supersecret' mysql
$ unset GEM_PATH
$ mvn clean install && docker build -t aripd/ecommerce .
$ docker run -d --net=my-network --name=myapp aripd/ecommerce
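The three commands above (create the network, run mysql, run the app) could alternatively be expressed as a single Compose file. A sketch, assuming the same names as above; Compose creates its own network on which services resolve each other by service name:

```yaml
version: '2'
services:
  mydatabase:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=supersecret
  myapp:
    image: aripd/ecommerce
    depends_on:
      - mydatabase
```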
start a redis instance
$ docker run --name some-redis -p 6379:6379 -d redis
start with persistent storage
$ docker run --name some-redis -p 6379:6379 -d redis redis-server --appendonly yes
connecting via redis-cli
$ docker run -it --network some-network --rm redis redis-cli -h some-redis
RedisInsight is a GUI tool to access the server.
$ brew cask install redisinsight
$ docker run --name mongo -p 27017:27017 -e MONGO_INITDB_ROOT_USERNAME=mongoadmin -e MONGO_INITDB_ROOT_PASSWORD=secret mongo:tag
MongoDB Compass is the GUI tool to access the server.
$ brew cask install mongodb-compass
Download and start MySQL container:
$ docker run --name mysqldb -p 3306:3306 -e MYSQL_USER=dbuser -e MYSQL_PASSWORD=dbpass -e MYSQL_DATABASE=dbname -e MYSQL_ROOT_PASSWORD=secret mysql:tag
Install MySQL Shell and connect to the database:
$ brew cask install mysql-shell
$ mysqlsh dbuser@localhost:3306
Create a container on the host network with --net=host and share the host's MySQL data directory with the container.
$ docker run \
--name=mysql-host \
--net=host \
-e MYSQL_ROOT_PASSWORD=mypassword \
-v /usr/local/var/mysql:/var/lib/mysql \
-d mysql:5.6
$ docker exec -it mysql-host /bin/bash
Download and run PostgreSQL:
$ docker run --name postgresqldb -p 5432:5432 -e POSTGRES_PASSWORD=secret -d postgres
Install and connect using PSQL:
$ psql -h localhost -U postgres
Download and start Microsoft SQL Server container:
$ docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=SqlServer2017' -p 1433:1433 -d microsoft/mssql-server-linux:2017-latest
Install and connect using sqlcmd:
$ brew tap microsoft/mssql-release https://github.com/Microsoft/homebrew-mssql-release
$ brew install --no-sandbox mssql-tools
Connect to SQL Server:
$ sqlcmd -S localhost,1433 -U SA -P SqlServer2017
Create a docker-compose.yml file and run
$ docker-compose up
Below is an example for Graylog.
version: '2'
services:
  # MongoDB: https://hub.docker.com/_/mongo/
  mongodb:
    image: mongo:3
  # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/5.6/docker.html
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.2
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      # Disable X-Pack security: https://www.elastic.co/guide/en/elasticsearch/reference/5.6/security-settings.html#general-security-settings
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
  # Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    image: graylog/graylog:2.4.3-1
    environment:
      # CHANGE ME!
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_WEB_ENDPOINT_URI=http://127.0.0.1:9000/api
    links:
      - mongodb:mongo
      - elasticsearch
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 514:514
      # Syslog UDP
      - 514:514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp
For the development environment, first create virtual machines using docker-machine. Add an "&" sign to send the process to the background. You can also use vagrant instead of docker-machine, and you can create machines in amazonec2, azure, digitalocean, google etc. instead of virtualbox.
To create 3 machines with 2 CPUs and 4G RAM in virtualbox, run
$ docker-machine create -d=virtualbox --virtualbox-cpu-count=2 --virtualbox-memory=4096 vbox-01
$ docker-machine create -d=virtualbox --virtualbox-cpu-count=2 --virtualbox-memory=4096 vbox-02 &
$ docker-machine create -d=virtualbox --virtualbox-cpu-count=2 --virtualbox-memory=4096 vbox-03 &
To list machines, run
$ docker-machine ls
To connect to the first virtual server, which will be the manager, run
$ eval $(docker-machine env vbox-01)
After connecting to vbox-01, to start Docker Swarm and make the cluster ready, run
$ docker swarm init
which gives the below error message:
"Error response from daemon: could not choose an IP address to advertise since this system has multiple addresses on different interfaces (10.0.2.15 on eth0 and 192.168.99.100 on eth1) - specify one with --advertise-addr"
Since the vbox machine has two ethernet interfaces, you should specify the public IP with the --advertise-addr parameter.
$ docker swarm init --advertise-addr=192.168.99.100
You should get below output:
Swarm initialized: current node (chfpci27hkjr58m4wbfhe72wz) is now a manager.
The current node is now a manager. To add a worker to this swarm, run the following command on the worker nodes:
$ docker swarm join --token SWMTKN-1-0ka8ag69aylzi4esvlgwwjv9s2t2wa3er1y4fza1yzzh3mqaat-euq613kyvhdri9mggxmm3b6wu 192.168.99.100:2377
To add a manager to this swarm, run docker swarm join-token manager and follow the instructions.
To add a worker to this swarm, run docker swarm join-token worker and follow the instructions.
To connect to the second virtual server, which will be a worker, run
$ eval $(docker-machine env vbox-02)
To add vbox-02 as a worker, run
$ docker swarm join --token SWMTKN-1-0ka8ag69aylzi4esvlgwwjv9s2t2wa3er1y4fza1yzzh3mqaat-euq613kyvhdri9mggxmm3b6wu 192.168.99.100:2377
You can complete the same steps for vbox-03:
$ eval $(docker-machine env vbox-03)
$ docker swarm join --token SWMTKN-1-0ka8ag69aylzi4esvlgwwjv9s2t2wa3er1y4fza1yzzh3mqaat-euq613kyvhdri9mggxmm3b6wu 192.168.99.100:2377
To list nodes, run the docker node ls command on a manager node, which is vbox-01 in this example.
$ eval $(docker-machine env vbox-01)
$ docker node ls
So now, our manager and worker nodes are ready. Deploy a service:
$ docker service create --name=web --publish=9000:80 nginx:latest
Since there are multiple nodes, to see the service, run
$ docker service ps web
to check if service is running
$ open http://192.168.99.100:9000
$ open http://192.168.99.101:9000
$ open http://192.168.99.102:9000
We created only one service on one node of three. However, we still have access to this service through all three nodes, thanks to the routing mesh.
The routing mesh enables each node in the swarm to accept connections on published ports for any service running in the swarm, even if there is no task running on that node. The routing mesh routes all incoming requests to published ports on available nodes to an active container.
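Conceptually, the mesh picks an active task for each incoming request regardless of which node received it. A toy round-robin sketch of that behavior (the real mesh uses IPVS load balancing; the task IPs here are made up):

```shell
# Toy model of the routing mesh: every request entering at any node is
# forwarded to one of the active task IPs, round-robin.
route() {
  n=$1; shift               # number of requests to simulate
  tasks="$*"                # remaining args are the task IPs
  i=0
  while [ "$i" -lt "$n" ]; do
    set -- $tasks           # reset positional params to the task list
    shift $(( i % $# ))     # rotate by the request counter
    echo "request $(( i + 1 )) -> task $1"
    i=$(( i + 1 ))
  done
}

route 4 10.0.0.2 10.0.0.3
```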
to watch and check the number of replicas, which is 1/1 in our example
$ watch -d docker service ps web
to scale our service from 1 to 2 replicas
$ docker service update --replicas=2 web
to list containers and get the container id
$ docker container ls
to kill the container
$ docker container kill <containerid>
Once the container has been killed, the Docker Swarm scheduler will create a new one, and the current state will return to running.
to watch and list nodes
$ watch -d docker node ls
You can change the type of a node from Manager to Worker or vice versa.
$ docker node promote vbox-02
$ docker node demote vbox-02
There are two types of Manager Status. One is Leader and the other is Reachable. You can add more than one manager node, but only one is the Leader, and the others are Reachable.
The decision about how many manager nodes to implement is a trade-off between performance
and fault-tolerance
. Adding manager nodes to a swarm makes the swarm more fault-tolerant. However, additional manager nodes reduce write performance because more nodes must acknowledge proposals to update the swarm state. This means more network round-trip traffic.
Swarm manager nodes use the Raft Consensus Algorithm
to manage the swarm state. Raft
requires a majority of managers, also called the quorum
, to agree on proposed updates to the swarm, such as node additions or removals.
While it is possible to scale a swarm down to a single manager node, it is impossible to demote the last manager node. Scaling down to a single manager is an unsafe operation and is not recommended. An odd number of managers is recommended, because adding one more manager (an even total) does not improve fault tolerance.
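The quorum arithmetic behind this recommendation can be sketched directly: for N managers the quorum is floor(N/2)+1, and the swarm tolerates the loss of floor((N-1)/2) managers, which is why going from 3 to 4 managers buys nothing:

```shell
# Raft quorum math for a swarm with N manager nodes.
quorum()    { echo $(( $1 / 2 + 1 )); }    # managers needed to agree
tolerated() { echo $(( ($1 - 1) / 2 )); }  # manager failures survivable

for n in 1 3 4 5; do
  echo "N=$n quorum=$(quorum $n) tolerates=$(tolerated $n)"
done
```

Note that N=3 and N=4 both tolerate only one manager failure, while N=5 tolerates two.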
Deploy an older redis
$ docker service create \
--replicas 3 \
--name redis \
--update-delay 10s \
redis:3.0.6
Inspect the redis service:
$ docker service inspect --pretty redis
Update the container image for redis. The swarm manager applies the update to nodes according to the UpdateConfig policy:
$ docker service update --image redis:3.0.7 redis
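With the default --update-parallelism of 1, tasks are replaced one at a time with --update-delay between batches; the number of batches for N replicas at parallelism P is ceil(N/P), so the minimum rollout time is (batches - 1) times the delay. A small sketch of that arithmetic (the function name is illustrative):

```shell
# ceil(N / P) update batches for N replicas at parallelism P.
batches() { echo $(( ($1 + $2 - 1) / $2 )); }

batches 3 1    # our 3-replica redis at the default parallelism
batches 10 2
```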
Run docker service ps <SERVICE-ID> to watch the rolling update:
$ docker service ps redis
To roll back to the previous specification:
$ docker service update --rollback redis
docker-stack / docker-compose
When running Docker Engine in swarm mode, you can use docker stack deploy to deploy a complete application stack to the swarm. The deploy command accepts a stack description in the form of a Compose file.
The docker stack deploy command supports any Compose file of version "3.0" or above.
$ docker stack deploy --compose-file docker-compose.yml stackdemo
The last argument is a name for the stack. Each network, volume and service name is prefixed with the stack name.
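For example, if the Compose file defined services named web and redis (hypothetical names), the resources the stack produces would be named like this:

```shell
# Stack resources are named <stack>_<resource>.
STACK=stackdemo
for svc in web redis; do
  echo "${STACK}_${svc}"
done
```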
$ docker stack services stackdemo
Once it’s running, you should see 1/1
under REPLICAS
for both services. This might take some time if you have a multi-node swarm, as images need to be pulled.
$ docker stack rm stackdemo
$ docker service rm registry
$ docker swarm leave --force