Jun 2, 2020

Deploying a docker container to AWS Part 1

AWS, with its ever-growing collection of services, seems to always have an answer to a problem. When it comes to hosting an application with docker, AWS has its Elastic Container Service (ECS). ECS has you pretty much covered when it comes to docker. You can host your image in the Elastic Container Registry (ECR) and deploy a container on a cluster with relative ease.

This post assumes you already have docker set up and an AWS account, but these are the only prerequisites. Part 1 will go over the initial setup of ECS by hosting a docker container, and part 2 will go over automating the process with the AWS CLI and GitLab’s CI/CD.

Here are the AWS services we’ll be using:

  • Identity and Access Management (IAM) for creating a role so that we can deploy our docker image
  • Elastic Container Registry (ECR) for storing docker images
  • Elastic Container Service (ECS) for hosting our docker containers

Creating an IAM Role

Before we can get into the docker side of things, we need to create a new IAM role. This role will be used to pull docker images from ECR for deployment. Head over to IAM in the console and find roles under “Access Management.” Choose “Create a Role.” The ECS service will use this role, so select “Elastic Container Service.” Our use case is “Elastic Container Service Task.” After clicking “Next,” search for the “AmazonECSTaskExecutionRolePolicy” and apply it. Go to “Review” and name it whatever you wish (e.g. “EcsTaskExecutionRole”). Then finish creating your role.
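For reference, the same role can be created from the command line. This is a sketch of what the console does behind the scenes; the role name matches the example above, and the trust policy lets ECS tasks assume the role (the `aws iam` commands are shown commented out since they require configured credentials).

```shell
# Trust policy allowing ECS tasks to assume the role
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# With credentials configured, the role could be created with:
# aws iam create-role --role-name EcsTaskExecutionRole \
#   --assume-role-policy-document file://trust-policy.json
# aws iam attach-role-policy --role-name EcsTaskExecutionRole \
#   --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
```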

Setting up AWS CLI

You’ll need to install AWS’s CLI to get started. If you’re using SAML to log in, check out saml2aws. Have your user’s access key ID and secret access key ready and run aws configure. Once you’ve entered your information, try running aws ecs list-tasks to ensure everything is working.
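The setup and smoke test described above look like this (the region is just an example; use whichever region you plan to deploy to):

```shell
# One-time credential setup: prompts for your access key ID,
# secret access key, default region, and output format
aws configure

# Smoke test: should return a (possibly empty) list of task ARNs
# rather than an authentication error
aws ecs list-tasks --region us-east-2
```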

Setting up ECR

To create a repository for your docker image, head over to the AWS console. Find the ECR service and create a repository. Give your repository a name and, optionally, a namespace. Select your newly created repository and click the handy “View push commands” button. This will give you the commands needed to push your docker image to your repository.


# Authenticate docker with ECR (AWS CLI v1 syntax; CLI v2 replaces this
# with `aws ecr get-login-password | docker login ...`)
$(aws ecr get-login --no-include-email --region us-east-2)
# Build, tag, and push the image to your repository
docker build -t my-namespace/my-project:dev .
docker tag my-namespace/my-project:dev aws-domain-here/my-namespace/my-project:dev
docker push aws-domain-here/my-namespace/my-project:dev

Setting up ECS

Now that your image is available, let’s try deploying a container. In ECS, a task definition defines how to run one or more docker containers. A task definition specifies what docker images to use and other factors such as CPU and memory allocation. An ECS service allows you to run a specified number of task definition instances in a cluster. Finally, a cluster is a logical grouping of services and tasks. Our first step is to create a task definition.

Creating a task definition

Navigate to “ECS” and then “Clusters.” Then, click on the “Get Started” button towards the top.

AWS provides a very handy wizard for your first task definition. First we’ll create a task definition. There are some pre-defined task definitions, but choose “Custom” in order to use your own application. Here’s what you’ll need to configure for the custom container definition:

Container Property     Value
Container name         e.g. my-project-dev
Image                  full URL to your ECR image, including the tag
Port mappings          the application port your container exposes
Environment Variables  any env vars needed for your docker container
Log configuration      Off (unchecked), found under advanced configuration

Next, in the task definition section, click “Edit” to add the role you created above. This wizard could have created the role automatically, but I wanted to show how the role is made and why it is needed. The default task definition uses Fargate rather than a new EC2 instance (see the differences here). Fargate allows us to host our task in a cluster without creating a new EC2 instance, which is perfect for our small project and helps simplify the process.
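To make the wizard’s output concrete, here is a sketch of what an equivalent task definition might look like as JSON. The family name, image URL, account ID, port, and environment variable are all placeholders standing in for the values from the table above; the execution role matches the one created earlier.

```shell
cat > task-def.json <<'EOF'
{
  "family": "my-project-dev",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/EcsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "my-project-dev",
      "image": "aws-domain-here/my-namespace/my-project:dev",
      "portMappings": [{ "containerPort": 8080 }],
      "environment": [{ "name": "APP_ENV", "value": "dev" }]
    }
  ]
}
EOF

# With credentials configured, this could be registered with:
# aws ecs register-task-definition --cli-input-json file://task-def.json
```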

Creating a cluster and service

Next, you’ll be prompted to define your service. Nothing needs to be done here. You’ll notice you can choose a load balancer type. In normal use cases, you should front your service with a load balancer, but that is beyond our scope; for now, having no load balancer will do fine. You can also run a task without a service, but a service is recommended as it provides the means to easily scale your tasks as needed.
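For reference, the service the wizard creates roughly corresponds to a CLI call like the following. The cluster name, service name, subnet, and security group IDs here are all placeholders, not values from this walkthrough:

```shell
# Rough CLI equivalent of the wizard's service: one Fargate task,
# no load balancer (requires AWS credentials; IDs are placeholders)
aws ecs create-service \
  --cluster my-cluster \
  --service-name my-project-dev-service \
  --task-definition my-project-dev \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0abc],securityGroups=[sg-0abc],assignPublicIp=ENABLED}'
```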

The final step is to name your cluster. After that, you can proceed and let AWS do the heavy lifting. You can click “View Service” once it’s done to see your service. If all went well, you should see the status of your task transition from “PENDING” to “RUNNING”. Since we didn’t make our task private, you can click on it to reveal its public IP. In my case, I was able to check my public-ip:exposed-port/healthcheck route to ensure my application was running.
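The same status check can be done from the CLI instead of the console. The cluster name is a placeholder, and the task ARN comes from the first command’s output:

```shell
# List running tasks in the cluster, then check one task's status;
# expect it to move from PENDING to RUNNING
aws ecs list-tasks --cluster my-cluster
aws ecs describe-tasks --cluster my-cluster --tasks <task-arn> \
  --query 'tasks[0].lastStatus'
```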

Going through this process gives an understanding of how a docker container can be hosted in AWS. If you’re like me though, I bet you’d prefer to automate this. In part 2, I’ll go over updating our task definition and services with the AWS CLI, and then replicate those steps in a GitLab pipeline.

About the Author

Nick Ellis
