Deploying a docker container to AWS Part 2
Check out Part 1 if you haven’t already, as this post assumes you’ve got a docker container running in AWS. In addition, make sure you have the AWS CLI up and running. Using the AWS CLI, we’ll accomplish the following:
- Build Stage: Build, tag, and push our docker image into our ECR repository
- Deploy Stage (1/2): Update our task definition with our newly tagged docker image
- Deploy Stage (2/2): Update our service to use the new task definition revision
The next step will be automating the build and deploy stages, using GitLab’s CI/CD as an example, but as long as you understand the underlying commands, you can apply this to any pipeline. Throughout this post, I’ll be using a SlackBot we host on AWS as the working example. To avoid confusion, here are the names of the various ECS components referred to:
Component | Dev. Name | Prod. Name |
---|---|---|
Cluster | slackbot-dev | slackbot-prod |
Service | feedback-bot-service | feedback-bot-service |
Task Def. | feedback-bot-dev | feedback-bot-prod |
Task Def. JSON | task-def-dev.json | task-def-prod.json |
Docker Image | slackbot/feedback-bot:dev | slackbot/feedback-bot:prod |
Pushing a docker image to an ECR repository
We already did this in Part 1, but as a recap you’ll need to log in to ECR:
```bash
$(aws ecr get-login --no-include-email --region us-east-2)
```
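A quick note: newer versions of the AWS CLI (v2) dropped `get-login` in favor of `get-login-password`, so if the command above isn’t recognized, the equivalent login looks roughly like this (using the same placeholder registry domain as the commands below):

```bash
# AWS CLI v2 equivalent of the legacy get-login command
aws ecr get-login-password --region us-east-2 | \
  docker login --username AWS --password-stdin aws-domain-here
```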
And then push your image to your repository:
```bash
docker build -t slackbot/feedback-bot:dev .
docker tag slackbot/feedback-bot:dev aws-domain-here/slackbot/feedback-bot:dev
docker push aws-domain-here/slackbot/feedback-bot:dev
```
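If you want to double-check that the tag actually made it into ECR before moving on, a quick sanity check with `describe-images` will confirm it (the repository name matches the table above):

```bash
# Optional sanity check: confirm the dev tag now exists in the repository
aws ecr describe-images \
  --repository-name slackbot/feedback-bot \
  --image-ids imageTag=dev \
  --region us-east-2
```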
Updating your task definition
When you push a docker image up to your repository, its corresponding task definition isn’t aware of the new image. For this reason, we need to create a new revision of the task definition. Using the ecs command, we can create a new revision like this:
```bash
aws ecs register-task-definition \
  --family feedback-bot-dev \
  --requires-compatibilities FARGATE \
  --region us-east-2 \
  --cli-input-json file://aws/task-def-dev.json
```
The family argument is just referring to the name of the task definition. Since we set up our task to use Fargate, we’ll specify it as required. Be sure to replace the region with your own. Now to the task definition file:
{ "family": "feedback-bot-dev", "memory": "512", "cpu": "256", "networkMode": "awsvpc", "executionRoleArn": "arn:aws:iam::936832894876:role/ecsTaskExecutionRole", "containerDefinitions": [ { "portMappings": [ { "hostPort": 3000, "protocol": "tcp", "containerPort": 3000 } ], "environment": [ { "name": "NODE_ENV", "value": "development" }, { "name": "PORT", "value": "3000" } ], "image": "936832894876.dkr.ecr.us-east-2.amazonaws.com/slackbot/feedback-bot:dev", "essential": true, "name": "feedback-bot-dev" } ] }
This is a partial representation of a task definition as a JSON file. In the AWS console, you can navigate to your task definition to view its full JSON, which will contain quite a bit more than the file depicted above. Taking a closer look, you’ll see some of the parameters we set up in the ECS wizard. If you need to determine the value of any property, refer to your task definition’s JSON in AWS. It’s also a pretty common use case to provide a secret, such as an API key, to the docker container as an environment variable. Unfortunately, you can’t do this with a command-line argument; it has to be specified in the JSON file. You can, however, use AWS Secrets Manager to store the secret. After creating one, you reference it with a secrets property adjacent to the environment property:
"environment": [], "secrets": [ { "valueFrom": "arn:aws:secretsmanager:us-east-2:123456789:secret:dev/feedback-bot/slack-secret", "name": "SLACK_SIGNING_SECRET" } ]
Running the task registration command should produce a new revision of your task definition. Now we just need to tell our service to use it with the update-service command below. This command is more straightforward, but you’ll notice that no revision number is specified with the task definition. If no revision is specified, the latest one is used, which is exactly what we want.
```bash
aws ecs update-service \
  --cluster slackbot-dev \
  --service feedback-bot-service \
  --task-definition feedback-bot-dev \
  --region us-east-2
```
Once this command succeeds, your service will stop the older task as soon as the new one has spun up successfully.
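If you want to watch the rollout rather than take it on faith, something like the following will block until the deployment settles and then report which revision the service is running:

```bash
# Wait for the deployment to finish, then confirm the active task definition revision
aws ecs wait services-stable \
  --cluster slackbot-dev \
  --services feedback-bot-service \
  --region us-east-2

aws ecs describe-services \
  --cluster slackbot-dev \
  --services feedback-bot-service \
  --region us-east-2 \
  --query "services[0].taskDefinition"
```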
Bringing everything together in GitLab
Before we begin, I’ve set up some helpful variables in GitLab’s CI/CD settings that we’ll use throughout the pipeline:
Name | Value |
---|---|
AWS_ACCESS_KEY_ID | Key goes here |
AWS_SECRET_ACCESS_KEY | Secret goes here |
AWS_REGION | e.g. us-east-2 |
You can use your account’s access key and secret, but I’d recommend creating a user in IAM with the AmazonEC2ContainerRegistryFullAccess and AmazonEC2ContainerServiceFullAccess policies.
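If you’d rather do that from the CLI, a rough sketch looks like this (the user name feedback-bot-ci is just an example):

```bash
# Create a dedicated CI user, attach the two managed policies, and generate an access key
aws iam create-user --user-name feedback-bot-ci
aws iam attach-user-policy --user-name feedback-bot-ci \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess
aws iam attach-user-policy --user-name feedback-bot-ci \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerServiceFullAccess
aws iam create-access-key --user-name feedback-bot-ci  # copy the key and secret into GitLab
```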
For the pipeline, every commit to the master or develop branch should build its corresponding docker image and deploy it. In the pipeline we can separate this into two jobs: the “build” job builds the docker image and pushes it to ECR, while the “deploy” job takes that image and deploys it to Fargate (these could easily be put into a single job if preferred). Let’s begin with the setup in our GitLab CI file.
```yaml
# .gitlab-ci.yml
stages:
  - build
  - deploy

variables:
  BASE_REPOSITORY_URL: 936832894876.dkr.ecr.us-east-2.amazonaws.com/slackbot/feedback-bot

.aws_setup:
  image: docker:latest
  services:
    - docker:dind # This allows us to use docker commands
  before_script:
    # Install the AWS CLI and log in to your ECR repository
    - apk add --no-cache curl jq python py-pip
    - pip install awscli
    - $(aws ecr get-login --no-include-email --region $AWS_REGION)
  tags:
    - docker
```
This defines the two stages I mentioned earlier, and it defines our ECR repository URL. The AWS setup portion uses the docker:latest image and the docker:dind service so that we can run docker commands. The before_script attribute just authenticates us so that we can push to ECR. Next, let’s define our build job.
```yaml
# BUILD
.build_job:
  extends: .aws_setup
  stage: build
  script:
    # Base implementation: build and tag, and then push to ECR
    - docker build -t $REPOSITORY_URL .
    - docker push $REPOSITORY_URL

build_dev:
  extends: .build_job
  variables:
    REPOSITORY_URL: ${BASE_REPOSITORY_URL}:dev
  environment:
    name: development
  only:
    - develop # Only commits to develop will trigger this job

build_prod:
  extends: .build_job
  variables:
    REPOSITORY_URL: ${BASE_REPOSITORY_URL}:prod
  environment:
    name: production
  only:
    - master # Only commits to master will trigger this job
```
The build job utilizes the AWS setup we defined earlier. All we need to do is define a job for each environment and push our new image to ECR. Moving on to the deploy stage.
```yaml
# DEPLOY
.deploy_job:
  extends: .aws_setup
  stage: deploy
  script:
    - aws ecs register-task-definition --family $TASK_DEF_NAME --requires-compatibilities FARGATE --cli-input-json $TASK_DEF_FILE --region $AWS_REGION
    - aws ecs update-service --cluster $ECS_CLUSTER --service feedback-bot-service --task-definition $TASK_DEF_NAME --region $AWS_REGION

deploy_dev:
  extends: .deploy_job
  variables:
    ECS_CLUSTER: slackbot-dev
    TASK_DEF_NAME: feedback-bot-dev
    TASK_DEF_FILE: file://aws/task-def-dev.json
  environment:
    name: development
  only:
    - develop

deploy_prod:
  extends: .deploy_job
  variables:
    ECS_CLUSTER: slackbot-prod
    TASK_DEF_NAME: feedback-bot-prod
    TASK_DEF_FILE: file://aws/task-def-prod.json
  environment:
    name: production
  when: manual # Require manual intervention for deploying to PROD
  allow_failure: false
  only:
    - master
```
Here we find the same commands we used earlier to deploy our image. Each deployment job defines its own set of variables appropriate to its environment. When it comes to the production environment, though, it’s probably not a great idea to automate the entire process. To enforce some human intervention we can set the when attribute to manual, which makes GitLab wait for someone to click a button before the job runs. Voilà, we’re done!
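One last note: because every deploy is just a new task definition revision, rolling back a bad production deploy is as simple as pointing the service at a known-good revision (the revision number below is illustrative):

```bash
# Roll back by deploying an earlier task definition revision
aws ecs update-service \
  --cluster slackbot-prod \
  --service feedback-bot-service \
  --task-definition feedback-bot-prod:3 \
  --region us-east-2
```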
Although it can feel like a pain to set up, automation pays for itself very quickly. I hope this saves you some time like it has for me. Here’s the final CI file for reference. Happy automating!