Continuous Integration, Continuous Delivery, Continuous Deployment
July 28, 2020

What is Continuous Integration, Continuous Delivery, Continuous Deployment?

CI-CD Pipeline

Continuous Integration

Developers practicing continuous integration merge their changes back to the main branch as often as possible. The developer’s changes are validated by creating a build and running automated tests against the build. By doing so, you avoid the integration hell that usually happens when people wait for release day to merge their changes into the release branch.

Continuous integration puts a great emphasis on testing automation to check that the application is not broken whenever new commits are integrated into the main branch.
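
As a concrete illustration, here is a minimal Bitbucket Pipelines sketch that builds and tests a Maven project on every push; the image tag and build command are assumptions for illustration, not part of the examples later in this post:

# bitbucket-pipelines.yml -- minimal CI sketch
image: maven:3.6.3-jdk-11

pipelines:
  default:                      # runs on every push to any branch
    - step:
        name: Build and Test
        caches:
          - maven
        script:
          - mvn -B clean verify # the pipeline fails if any test fails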

Continuous Delivery

Continuous delivery is an extension of continuous integration to make sure that you can release new changes to your customers quickly and in a sustainable way. This means that on top of having automated your testing, you have also automated your release process, and you can deploy your application at any point in time by clicking a button.

In theory, with continuous delivery, you can decide to release daily, weekly, fortnightly, or whatever suits your business requirements. However, if you truly want to get the benefits of continuous delivery, you should deploy to production as early as possible to make sure that you release small batches that are easy to troubleshoot in case of a problem.
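
In Bitbucket Pipelines, that "button" is a step with trigger: manual, the same mechanism used in the full example at the end of this post. A minimal sketch, where the release script is hypothetical:

# bitbucket-pipelines.yml -- continuous delivery sketch
pipelines:
  branches:
    main:
      - step:
          name: Build and Test
          script:
            - mvn -B clean verify
      - step:
          name: Release to Production
          deployment: Production
          trigger: manual         # a person clicks Run to release
          script:
            - ./release.sh        # hypothetical release script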

Continuous Deployment

Continuous deployment goes one step further than continuous delivery. With this practice, every change that passes all stages of your production pipeline is released to your customers. There is no human intervention, and only a failed test will prevent a new change from being deployed to production.

Continuous deployment is an excellent way to accelerate the feedback loop with your customers and take pressure off the team as there isn’t a Release Day anymore. Developers can focus on building software, and they see their work go live minutes after they’ve finished working on it.
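
In pipeline terms, the only difference from the continuous delivery sketch above is that the release step has no manual trigger, so it runs automatically whenever the preceding tests pass (the release script is again hypothetical):

# bitbucket-pipelines.yml -- continuous deployment sketch
pipelines:
  branches:
    main:
      - step:
          name: Build and Test
          script:
            - mvn -B clean verify
      - step:
          name: Deploy to Production
          deployment: Production  # no "trigger: manual" -- every green build ships
          script:
            - ./release.sh        # hypothetical release script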

Docker multi-stage builds

# Dockerfile
# BUILD STAGE
FROM maven:3.6.3-jdk-11 AS build
COPY src /usr/src/app/src
COPY pom.xml /usr/src/app
RUN mvn -f /usr/src/app/pom.xml clean package

# DEPLOYMENT STAGE
FROM payara/server-full
COPY ./postgresql-42.2.14.jar ${PAYARA_DIR}/glassfish/domains/production/lib
COPY ./domain.xml ${PAYARA_DIR}/glassfish/domains/production/config
COPY --from=build /usr/src/app/target/appname-1.0-SNAPSHOT.war ${DEPLOY_DIR}
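
Building this image runs both stages, but only the final Payara stage ends up in the tagged image; the Maven stage is discarded once its .war has been copied out. To try it locally (the image name is an assumption; 8080 and 4848 are the stock Payara HTTP and admin ports):

# Build: Docker executes the Maven stage, then assembles the final
# image from the Payara stage only.
docker build -t appname:latest .

# Run locally; Payara serves HTTP on 8080 and the admin console on 4848.
docker run --rm -p 8080:8080 -p 4848:4848 appname:latest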

Using Bitbucket Pipelines to create a Docker image

  • Automation Tool: Bitbucket
  • Container Platform: Docker
  • Container Registry: AWS ECR

Our first build step handles building the image and pushing it to AWS ECR. Next, we set up the trigger so that whenever changes are pushed or merged into the main branch, the build image step runs.

# bitbucket-pipelines.yml
image:
  name: atlassian/pipelines-awscli

definitions:
  buildImage: &buildImage
    name: Build and Push Docker Image
    caches:
      - docker
    services:
      - docker
    script:
      - export DOCKER_URI=$DOCKER_IMAGE:build-$BITBUCKET_BUILD_NUMBER
      # Login to docker registry on AWS
      - eval $(aws ecr get-login --no-include-email)
      # Build image
      - docker build -f Dockerfile -t $DOCKER_URI .
      # Push image to private registry
      - docker push $DOCKER_URI
pipelines:
  branches:
    main:
      - step: *buildImage
Alternatively, here is a self-contained two-step pipeline that first builds the Maven artifact and then pushes the image to Docker Hub instead of AWS ECR:

# bitbucket-pipelines.yml
image: atlassian/default-image:2

pipelines:
  branches:
    main:
      - step:
          name: Create Artifact
          caches:
            - maven
          script: # Modify the commands below to build your repository.
            - mvn -B package # -B batch mode makes Maven less verbose
          artifacts: # defining the artifacts to be passed to each future step.
            - target/**

      - step:
          name: Push docker image to the registry
          deployment: Production
          services:
            - docker
          script: # Modify the commands below to push your image to the registry.
            # Set $DOCKER_HUB_USERNAME and $DOCKER_HUB_PASSWORD as environment variables in repository settings
            - export IMAGE_NAME=your-Dockerhub-account/your-docker-image-name:$BITBUCKET_COMMIT

            # build the Docker image (this will use the Dockerfile in the root of the repo)
            - docker build -t $IMAGE_NAME .
            # authenticate with the Docker Hub registry
            - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
            # push the new Docker image to the Docker registry
            - docker push $IMAGE_NAME

You can double-check that your pipeline file is valid by pasting it into the validator here: https://bitbucket-pipelines.atlassian.io/validator

At this point, you may be wondering: where are the variables DOCKER_IMAGE and BITBUCKET_BUILD_NUMBER defined?

Bitbucket Pipelines provides a set of default variables. Those variables start with BITBUCKET_, which makes it easy to differentiate them from user-defined variables. DOCKER_IMAGE, on the other hand, needs to be defined within Bitbucket along with three other variables. This can be done by:

  1. Going to your repository in Bitbucket
  2. Clicking on Repository settings in the left sidebar
  3. Clicking on Repository variables under the Pipelines heading

The four user-defined variables:

  • DOCKER_IMAGE: AWS ECR URI for your image (e.g. 111111111111.dkr.ecr.ap-southeast-2.amazonaws.com/sct)
  • AWS_ACCESS_KEY_ID: AWS access key associated with an IAM user or role (e.g. AKIAIOSFODNN7EXAMPLE)
  • AWS_SECRET_ACCESS_KEY: Secret key associated with the access key (e.g. wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY)
  • AWS_DEFAULT_REGION: AWS Region to send the request to (e.g. ap-southeast-2)

Now that you have completed all that, you are ready to add additional build steps for a streamlined build-and-deploy workflow.

Once the pipeline has finished, you should be able to see a new tag for your image in Docker Hub.

The three possible environment types are Test, Staging, and Production. Test can be promoted to Staging, and Staging to Production. It is also possible to set up multiple environments of the same type from the Deployments settings screen. This could be useful, for example, to deploy to different geographical regions separately, as in the sketch below.
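
For instance, two production-type environments created on the Deployments settings screen could be deployed as separate manual steps. A sketch, where the environment names and deploy script are assumptions:

# bitbucket-pipelines.yml -- multiple production environments sketch
pipelines:
  branches:
    main:
      - step:
          name: Build
          script:
            - mvn -B package
      - step:
          name: Deploy to Production (Sydney)
          deployment: production-ap-southeast-2  # hypothetical environment name
          trigger: manual
          script:
            - ./deploy.sh ap-southeast-2         # hypothetical deploy script
      - step:
          name: Deploy to Production (Oregon)
          deployment: production-us-west-2       # hypothetical environment name
          trigger: manual
          script:
            - ./deploy.sh us-west-2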

Another example

Process

  • Run composer install within the application directory
  • Build a Docker image and push it to AWS ECR, based on the config in environment/containers/production/eb_single_php_container_app
  • Create a Beanstalk version, upload it to S3, and register it with Beanstalk using the configs in environment/artifact
  • Allow you to deploy to the Staging environment, then to Production

# bitbucket-pipelines.yml
image:
  name: atlassian/pipelines-awscli

definitions:
  composerInstall: &composerInstall
    name: Install Composer
    image: composer
    caches:
      - composer
    script:
      - composer install --no-ansi --no-dev --no-interaction --no-progress --no-scripts --optimize-autoloader
    artifacts:
      - 'application/vendor/**'
  buildImage: &buildImage
    name: Build and Push Docker Image
    caches:
      - docker
    services:
      - docker
    script:
      - export DOCKER_URI=$DOCKER_IMAGE:prod-$BITBUCKET_BUILD_NUMBER
      # Login to docker registry on AWS
      - eval $(aws ecr get-login --no-include-email)
      # Build image
      - docker build -f environment/containers/production/eb_single_php_container_app/Dockerfile -t $DOCKER_URI .
      # Push image to private registry
      - docker push $DOCKER_URI
  sendToAws: &sendToAws
    name: Create EB Version
    script:
      # Set environment variables
      - export DOCKER_URI=$DOCKER_IMAGE:prod-$BITBUCKET_BUILD_NUMBER
      - export FILE_NAME=build-$BITBUCKET_BUILD_NUMBER.zip
      - export GIT_DESCRIPTION=`echo $(git log -1 --pretty=%B) | cut -c -199`
      - export VERSION_NAME=${BITBUCKET_BUILD_NUMBER}_${BITBUCKET_TAG:=$BITBUCKET_BRANCH}
      - cd environment/artifact
      # Update options in AWS docker file
      - sed -i "s|<IMAGE_URL>|$DOCKER_URI|g" Dockerrun.aws.json
      # Compress deployment artifact
      - zip $FILE_NAME -r * .[^.]*
      # Copy AWS docker file to S3
      - aws s3 cp "$FILE_NAME" s3://$BUILD_BUCKET/$BUILD_PATH/"$FILE_NAME"
      # Create EB Application Version
      - aws elasticbeanstalk create-application-version --application-name "$APP_NAME" --version-label "$VERSION_NAME" --description "$GIT_DESCRIPTION" --source-bundle S3Bucket=$BUILD_BUCKET,S3Key="$BUILD_PATH/$FILE_NAME"
  deployStaging: &deployStaging
    name: Deploy to Staging
    deployment: staging
    trigger: manual
    script:
      # Set environment variable
      - export VERSION_NAME=${BITBUCKET_BUILD_NUMBER}_${BITBUCKET_TAG:=$BITBUCKET_BRANCH}
      # Deploy api
      - aws elasticbeanstalk update-environment --application-name "$APP_NAME" --environment-name "$APP_ENV_STAGING" --version-label "$VERSION_NAME"
  deployProduction: &deployProduction
    name: Deploy to Production
    deployment: production
    trigger: manual
    script:
      # Set environment variable
      - export VERSION_NAME=${BITBUCKET_BUILD_NUMBER}_${BITBUCKET_TAG:=$BITBUCKET_BRANCH}
      # Deploy api
      - aws elasticbeanstalk update-environment --application-name "$APP_NAME" --environment-name "$APP_ENV_PRODUCTION" --version-label "$VERSION_NAME"
pipelines:
  branches:
    release-*:
      - step: *composerInstall
      - step: *buildImage
      - step: *sendToAws
      - step: *deployStaging
      - step: *deployProduction
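
The Create EB Version step rewrites a Dockerrun.aws.json kept under environment/artifact. That file is not shown in this post; for a single-container Beanstalk application it might look like the sketch below, containing the <IMAGE_URL> placeholder the sed command substitutes (the container port is an assumption):

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "<IMAGE_URL>",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": 80
    }
  ]
}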

Required Environment Variables

The following environment variables need to be added in Bitbucket for the pipeline to run correctly.

The AWS access key used here requires write access to the S3 bucket that stores Beanstalk versions and read/write access to AWS ECR; a sketch of such an IAM policy follows the variable list below.

  • AWS_ACCESS_KEY_ID: AWS access key associated with an IAM user or role (e.g. AKIAIOSFODNN7EXAMPLE)
  • AWS_SECRET_ACCESS_KEY: Secret key associated with the access key (e.g. wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY)
  • AWS_DEFAULT_REGION: AWS Region to send the request to (e.g. ap-southeast-2)
  • DOCKER_IMAGE: AWS ECR URI for your image (e.g. 111111111111.dkr.ecr.ap-southeast-2.amazonaws.com/sct)
  • BUILD_BUCKET: S3 bucket name where builds will be stored (e.g. ct-builds-configs20190605042428563200000001)
  • BUILD_PATH: Path in the S3 bucket to store builds (e.g. builds/sct)
  • APP_NAME: AWS Beanstalk application name to register the version with (e.g. EB Single Container App)
  • APP_ENV_STAGING: AWS Beanstalk environment name for Staging (e.g. eb-single-container-app-staging)
  • APP_ENV_PRODUCTION: AWS Beanstalk environment name for Production (e.g. eb-single-container-app-production)
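
A minimal IAM policy sketch covering only the AWS calls these pipeline steps make directly, using the example bucket and path from the list above. In practice, elasticbeanstalk:UpdateEnvironment also needs permissions on the resources Beanstalk manages on your behalf (CloudFormation, EC2, Auto Scaling, S3), so treat this as a starting point rather than a complete policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EcrPushPull",
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage"
      ],
      "Resource": "*"
    },
    {
      "Sid": "UploadBuildArtifacts",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::ct-builds-configs20190605042428563200000001/builds/sct/*"
    },
    {
      "Sid": "RegisterAndDeployVersions",
      "Effect": "Allow",
      "Action": [
        "elasticbeanstalk:CreateApplicationVersion",
        "elasticbeanstalk:UpdateEnvironment"
      ],
      "Resource": "*"
    }
  ]
}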