Deploying a Dockerfile to AWS Fargate (Building Docker images on AWS)

I have the end goal of deploying a Docker container on AWS Fargate. As it happens, my Dockerfile has no local dependencies and my upload connection is very slow, so I want to build the image in the cloud. What would be the easiest way to build it on AWS? Creating an EC2 Linux instance, installing Docker and the AWS CLI on it, building the image, and then uploading it to AWS ECR, if that's possible?

The easiest way is to use AWS CodeBuild - it will do everything for you, including pushing the image to AWS ECR.
Basic instructions are in the AWS documentation.
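Roughly, these are the commands a CodeBuild project would run (normally placed in a buildspec.yml); a minimal sketch, where the account ID, region, and repository name are placeholders, not values from the question:

# pre_build: authenticate the Docker client against ECR
$(aws ecr get-login --no-include-email --region us-east-1)
# build: build the image from the Dockerfile in the source bundle and tag it for ECR
docker build -t my-app:latest .
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
# post_build: push the tagged image to the ECR repository
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest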

Related

Deploy Django Code with Docker to AWS EC2 instance without ECS

Here is my setup:
1. My Django code is hosted on GitHub.
2. I have Docker set up on my EC2 instance.
3. My deployment process is manual: on every single code change I have to run git pull, docker build, and docker run. I am using a docker service account to perform this step.
How do I automate step 3 with AWS CodeDeploy or something similar?
Every example I am seeing on the internet involves ECS or Fargate, which I am not ready to use yet.
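For concreteness, the manual step I want to automate looks roughly like this as a script (the repo path, image tag, container name, and ports below are placeholders, not values from my actual setup):

#!/bin/sh
set -e
cd /srv/myapp                            # placeholder repo path on the EC2 instance
git pull origin master                   # fetch the latest code
docker build -t myapp:latest .           # rebuild the image
docker rm -f myapp 2>/dev/null || true   # remove the old container if one is running
docker run -d --name myapp -p 80:8000 myapp:latest   # start the new container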
Check this out on how to use Docker images from a private registry (e.g. Docker Hub) for your build environment: How to Use Docker Images from a Private Registry for Your Build Environment

Deploy Docker-Compose YML to AWS ECS

One of the projects has shared its docker-compose.yml file. It contains various services, and each service forms a container. I can easily deploy this on EC2 and get going. However, I want to use AWS ECS only.
How can I deploy that YML file on AWS ECS?
AWS ECS is a little different from a normal Docker environment, where you directly start the container.
In ECS you need to create a task definition with the Docker image and then create a service to run that task.
So you cannot directly apply your docker-compose.yml file to ECS.
Here's how you can do this manually: https://aws.amazon.com/getting-started/hands-on/deploy-docker-containers/
You can always automate this using Terraform, the AWS CLI, etc.
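A minimal sketch of that manual flow with the AWS CLI (the cluster, service, task, and file names are placeholders; each compose service would become a container definition inside the task definition JSON):

# create a cluster to run the containers on
aws ecs create-cluster --cluster-name my-cluster
# register a task definition whose container definitions mirror the compose services
aws ecs register-task-definition --cli-input-json file://task-definition.json
# run the task as a long-lived service on the cluster
aws ecs create-service --cluster my-cluster --service-name my-service \
    --task-definition my-task --desired-count 1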

Single Docker image push into AWS elastic container registry (ECR) from VSTS build/release definition

We have a Python Docker image which needs to be built and published (CI/CD) into the AWS container registry.
At the moment AWS does not support running Docker tasks from Docker Hub private repositories, therefore we have to use ECR instead of Docker Hub.
Our CI/CD pipeline uses docker build and push tasks. Docker authentication is done via a Service Endpoint in the VSTS project.
There are a few steps we should follow to set up a VSTS service endpoint for ECR. This requires executing an AWS CLI command (locally or in the cloud) to get a username and password for the Docker client to log in with; it looks like:
aws ecr get-login --no-include-email
The above command outputs a docker login command with a username (AWS) and a password (token).
The issue with this approach is that the access token only lasts for 12 hours. Therefore the CI/CD task requires updating the Service Endpoint every 12 hours, otherwise builds fail with an unauthorized token exception.
The other option we have is to run some shell commands that execute the aws ecr get-login command and run the docker build/push commands in the same context. This option requires installing the AWS CLI on the build agent (we are using the public Linux agent).
In addition, the shell-command approach involves awkward task configuration with environment variables; otherwise we would be exposing the AWS application ID and secret in the build steps.
Could you please advise if you have solved a VSTS CI/CD pipeline using Docker with AWS ECR?
Thanks, Mahi
After a lot of research and trial and error, I found an answer to my own question.
AWS provides an extension to VSTS with build tasks and service endpoints. You need to configure an AWS service endpoint using an account number, application ID, and secret. Then, in your build/release definition:
Build the Docker image using the out-of-the-box Docker build task, or a shell/bash command (for example: docker build -t your:tag . ).
Then add another build step to push the image into the AWS registry; for this you can use the AWS extension task (Amazon Elastic Container Registry Push Image). The Amazon Elastic Container Registry Push Image build task will generate a token and log the Docker client in every time you run this build definition. You don't have to worry about updating the username/token every 12 hours; the AWS extension build task will do that for you.
You are looking for this: Amazon ECR Docker Credential Helper (see the AWS documentation).
This is where the Amazon ECR Docker Credential Helper makes it easy for developers to use ECR without the need to use docker login or write logic to refresh tokens, and provides transparent access to ECR repositories.
The Credential Helper helps developers in a continuous development environment to automate the authentication process to ECR repositories without having to regenerate tokens every 12 hours. In addition, the Credential Helper also provides token caching under the hood so you don't have to worry about getting throttled or writing additional logic.
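Wiring the helper up is a one-time configuration on the build agent; a minimal sketch, assuming a Linux agent with the helper binary docker-credential-ecr-login already installed on the PATH (the registry URL is a placeholder):

# tell Docker to delegate all registry credentials to the ECR helper
echo '{ "credsStore": "ecr-login" }' > ~/.docker/config.json
# pushes/pulls against ECR now authenticate transparently, with no
# docker login step and no 12-hour token refresh
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest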

What triggers Elastic Beanstalk to pull in an updated Docker image

I have an Elastic Beanstalk application running and configured to serve a Docker container ("generic Docker" configuration), linked to a private image on Docker Hub.
How can I prompt the Elastic Beanstalk application to download the latest version of the Docker Hub image after pushing up a new version with docker push?
Do I need to "restart the app server", "rebuild the environment", something else, or is it supposed to pull it in automatically? I'm not seeing this addressed in the docs.
** EDIT **
To be clear, eb deploy does NOT pull in an updated Docker image, but it does push up the files from your application directory to your EC2 instances.
So, at the end of the day, I'm probably not going to use docker push for deployments, but just to keep the image up to date for the cases where you actually need to make environment configuration changes (not code changes), or when bringing on a new developer, who can use docker pull.
Currently eb deploy my-environment-name is working great for Docker-based Elastic Beanstalk deployments.
You just need to run eb deploy from the command line. Here is a nice tutorial: http://victorlin.me/posts/2014/11/26/running-docker-with-aws-elastic-beanstalk.
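For completeness, a minimal sketch of that deployment step (the environment name is a placeholder; as the questioner's EDIT notes, this uploads your application directory rather than re-pulling the Docker Hub image):

# run from the application directory (the one holding the Dockerfile
# or Dockerrun.aws.json that Elastic Beanstalk builds/deploys from)
eb deploy my-environment-name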

How to configure Amazon container service without docker hub integration

I am trying to set up a new Spring Boot + Docker (microservices) based project. The deployment is targeted at AWS. Every service has a Dockerfile associated with it. I am thinking of using the Amazon container service for deployment, but as far as I can see it only pulls images from Docker Hub. I don't want ECS to pull from Docker Hub; rather, it should build the images from the Dockerfiles and then take over deploying those containers. Is that possible? If yes, how?
This is not possible yet with the Amazon EC2 Container Service (ECS) alone - while ECS meanwhile supports private registries (see also the introductory blog post), it doesn't yet offer an image build service (as usual, AWS is expected to add such notable additional features over time, see e.g. the Feature Request: ECS container dream service for more on this).
However, it can already be achieved with AWS Elastic Beanstalk's built in initial support for Single Container Docker Configurations:
Docker uses a Dockerfile to create a Docker image that contains your source bundle. [...] Dockerfile is a plain text file that contains instructions that Elastic Beanstalk uses to build a customized Docker image on each Amazon EC2 instance in your Elastic Beanstalk environment. Create a Dockerfile when you do not already have an existing image hosted in a repository. [emphasis mine]
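In practice that can look like the following; a minimal sketch using the EB CLI, where the application name, environment name, and region are placeholders:

# initialize the app directory (which contains the Dockerfile) for the Docker platform
eb init my-app --platform docker --region us-east-1
# create the environment; Beanstalk builds the image from the Dockerfile
# on each EC2 instance in the environment
eb create my-env
# rebuild and redeploy after subsequent code changes
eb deploy my-env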
In an ironic twist, Elastic Beanstalk has now added Multicontainer Docker Environments based on ECS, but this highly desired more versatile Docker deployment option doesn't offer the ability to build images in turn:
Building custom images during deployment with a Dockerfile is not supported by the multicontainer Docker platform on Elastic Beanstalk. Build your images and deploy them to an online repository before creating an Elastic Beanstalk environment. [emphasis mine]
As mentioned above, I would expect this to be added to ECS in a not too distant future due to AWS' well known agility (see e.g. the most recent ECS updates), but they usually don't commit to roadmap details, so it is hard to estimate how long we need to wait on this one.
Meanwhile Amazon has introduced the EC2 Container Registry: https://aws.amazon.com/ecr/
It is a private Docker repository for those who do not like Docker Hub, and it is nicely integrated with the ECS service.
However, it does not build your Docker images, so it does not solve the entire problem.
I use a Bamboo server for building images (the source is in Git repositories in Bitbucket). Bamboo pushes the images to Amazon's container registry.
I am hoping Bitbucket Pipelines will make the process smoother, with less configuration of build servers. From the videos I have seen, all your build configuration sits right in your repository. It is still in a closed beta, so I guess we will have to wait a bit more to see what it ends up being.