I am setting up a CI/CD pipeline for my microservices. Currently I use Travis CI to pull the code from GitHub on check-in, build the Docker image, and push it to Docker Hub. I tried Docker Cloud (previously known as Tutum), which provides an automatic deployment feature to AWS EC2 instances, but the deployment sometimes recreates the container and the service endpoint URL changes, which is not desirable.
I am exploring Amazon's ECS and its tasks, but I cannot find any reference for how to set up continuous deployment to ECS when a new image is pushed to Docker Hub.
Does anybody have experience with this setup?
With ECS, you would basically have your CI detect a change to Docker Hub and update your task definition/service.
For this I use the wonderful ecs-deploy script from here:
https://github.com/silinternational/ecs-deploy
After my container has been built and pushed to Docker Hub, it's simply a matter of:
ecs-deploy -k $AWS_KEY -s $AWS_SECRET -r $AWS_REGION -c $CLUSTER_NAME -n $SERVICE_NAME -i $DOCKER_IMAGE_NAME
and that does it.
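For reference, a minimal sketch of how this wires into a CI deploy step (Travis CI here, matching the question; the download path and all variables are assumptions on my part):

# hedged sketch of a CI deploy step; $AWS_KEY etc. are the same placeholders as above
docker build -t $DOCKER_IMAGE_NAME .
docker push $DOCKER_IMAGE_NAME
# fetch the ecs-deploy script (path assumed from the repo layout)
curl -Lo ecs-deploy https://raw.githubusercontent.com/silinternational/ecs-deploy/master/ecs-deploy
chmod +x ecs-deploy
./ecs-deploy -k $AWS_KEY -s $AWS_SECRET -r $AWS_REGION -c $CLUSTER_NAME -n $SERVICE_NAME -i $DOCKER_IMAGE_NAME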
Related question:
How can I install the AWS CLI from WITHIN the ECS task?
DESCRIPTION:
I'm using a Docker container to run the Logstash application (part of the Elastic family).
The Docker image name is "docker.elastic.co/logstash/logstash:7.10.2".
This Logstash application needs to write to S3, so it needs the AWS CLI installed.
If aws is not installed, it crashes.
# STEP 1 #
To avoid crashing when I used this application as a plain Docker container, I delayed the Logstash start until after the container was up.
I did this by adding a "sleep" command to an external docker-entrypoint file, before it starts Logstash.
This is how it looks in the docker-entrypoint file:
sleep 120
if [[ -z $1 ]] || [[ ${1:0:1} == '-' ]] ; then
    exec logstash "$@"
else
    exec "$@"
fi
# EOF
# STEP 2 #
Run the container with the "--entrypoint" flag so it uses my entrypoint file:
docker run \
-d \
--name my_logstash \
-v /home/centos/DevOps/psifas_logstash_docker-entrypoint:/usr/local/bin/psifas_logstash_docker-entrypoint \
-v /home/centos/DevOps/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
-v /home/centos/DevOps/logstash.yml:/usr/share/logstash/config/logstash.yml \
--entrypoint /usr/local/bin/psifas_logstash_docker-entrypoint \
docker.elastic.co/logstash/logstash:7.10.2
# STEP 3 #
Install and configure the AWS CLI from the server hosting the container:
docker exec -it -u root <DOCKER_CONTAINER_ID> yum install awscli -y
docker exec -it <DOCKER_CONTAINER_ID> aws configure set aws_access_key_id <MY_aws_access_key_id>
docker exec -it <DOCKER_CONTAINER_ID> aws configure set aws_secret_access_key <MY_aws_secret_access_key>
docker exec -it <DOCKER_CONTAINER_ID> aws configure set region <MY_region>
This worked for me.
Now I want to "translate" this flow into an AWS ECS task.
In ECS I will use parameters instead of running the above 3 "aws configure" commands.
MY QUESTION
How can I do my 3rd step, installing the AWS CLI, from WITHIN the ECS task? (Meaning not running it on the EC2 server hosting the ECS cluster.)
When I was working on the plain Docker setup I also considered these options for using the AWS CLI:
Find an official Elastic Docker image containing both Logstash and the AWS CLI. <-- I did not find one.
Create such an image myself and use it. <-- I'd prefer not to, because I want to avoid the maintenance of building new custom images when needed (e.g. when a new version of the Logstash image is available).
Eventually I chose the 3 steps above, but I'm open to suggestions.
Also, my tests showed that running 2 containers within the same ECS task:
logstash
awscli (image "amazon/aws-cli")
with the logstash container using the aws-cli container, does not work.
THANKS A LOT IN ADVANCE :-)
Your option #2, creating the image yourself, is really the best way to do this. Anything else is going to be a "hack". Also, you shouldn't be running aws configure for an image running in ECS; you should assign an IAM role to the task, and the AWS CLI will pick that up and use it.
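For illustration, a hedged sketch of assigning a task role when registering the task definition (the family, role name, account ID, and file are all hypothetical):

# the containers in this task inherit the permissions of the task role
aws ecs register-task-definition \
  --family logstash-task \
  --task-role-arn arn:aws:iam::123456789012:role/logstash-task-role \
  --container-definitions file://containers.json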
Mark B, your answer helped me to solve this. Thanks!
Writing the solution here in case it helps somebody else.
There is no need to install the AWS CLI in the logstash Docker container running inside the ECS task.
Inside the logstash container (from image "docker.elastic.co/logstash/logstash:7.10.2") there is an AWS SDK that connects to S3.
The only thing required is to give the ECS task execution role access to S3.
(I attached the AmazonS3FullAccess policy.)
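For completeness, a hedged sketch of attaching that policy via the CLI (the role name is whatever role your task uses; mine below is a placeholder):

# grant the task's role full S3 access via the managed policy
aws iam attach-role-policy \
  --role-name my-ecs-task-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess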
Final goal: To deploy a ready-made cryptocurrency exchange on AWS.
I have set up a ready-made server by 0xProject by running the following command on my local machine:
npx @0x/launch-kit-wizard && docker-compose up
This command creates a docker-compose.yml file which has multiple container definitions and starts the exchange on http://localhost:3001/
I need to deploy this to AWS, for which I'm following this YouTube tutorial.
I have created a registry user with appropriate permissions
An EC2 instance is created
ECR repository is created
AWS CLI is configured
As per the AWS instructions, I'm retrieving an authentication token and authenticating the Docker client to the registry:
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin <docker-id-given-by-AWS>.dkr.ecr.us-east-2.amazonaws.com
I'm trying to build the docker image:
docker build -t testdockerregistry .
Now, since in this case we have a docker-compose.yml instead of a Dockerfile, when I try to build the image it throws the following error:
unable to prepare context: unable to evaluate symlinks in Dockerfile path: CreateFile C:\Users\hp\Desktop\xxx\Dockerfile: The system cannot find the file specified.
I tried building the image from docker-compose itself as per this guide, which fails with the following messages:
postgres uses an image, skipping
frontend uses an image, skipping
mesh uses an image, skipping
backend uses an image, skipping
nginx uses an image, skipping
Can anyone please help me with this?
You can use the ecs-cli compose command from the Amazon ECS CLI.
This command translates the docker-compose file you create into an ECS task definition.
If you're interested in finding out more about the CLI, take a read of the AWS documentation here.
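A minimal sketch of that flow (the cluster name, region, and launch type are placeholders; assumes the ECS CLI is installed and credentials are configured):

# point the ECS CLI at your cluster, then bring the compose file up as a task
ecs-cli configure --cluster my-cluster --region us-east-2 --default-launch-type EC2 --config-name my-config
ecs-cli compose --file docker-compose.yml up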
Another approach, instead of using the AWS ECS CLI directly, is to use the new docker/compose-cli:
This CLI tool makes it easy to run Docker containers and Docker Compose applications in the cloud using either Amazon Elastic Container Service (ECS) or Microsoft Azure Container Instances (ACI), using the Docker commands you already know.
See "Docker Announces Open Source Compose for AWS ECS & Microsoft ACI " from Aditya Kulkarni.
It references "Docker Open Sources Compose for Amazon ECS and Microsoft ACI" from Chris Crone, Engineer #docker:
While implementing these integrations, we wanted to make sure that existing CLI commands were not impacted.
We also wanted an architecture that would make it easy to add new backends and provide SDKs in popular languages. We achieved this with the architecture described in the post.
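As a rough sketch, the workflow with that tool looks like this (assuming a Docker version that ships the ECS integration; the context name is a placeholder):

# create an ECS-backed context, switch to it, and deploy the compose file
docker context create ecs myecscontext
docker context use myecscontext
docker compose up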
I have an EC2 instance for testing. I deployed it using OpsWorks, and now I'm making a new job on Jenkins to deploy automatically. What I want to do is:
someone pushes to a branch
the Jenkins server builds a Docker image
Jenkins pushes the image to ECR
the EC2 instance pulls the ECR image, builds the Docker container, and runs it
I have made a job that uses ECR and deploys to ECS Fargate, but I have never used ECR to deploy to a pre-existing EC2 instance. I wonder whether it is possible.
Prerequisite:
On your EC2 instance you first have to install Docker.
There are many ways you can do it.
Once Jenkins builds & pushes the Docker image to ECR, you can add a further step to the Jenkins build: Jenkins SSHes into the EC2 instance and pulls and runs the Docker image.
Alternatively, Jenkins can trigger a shell script file on the EC2 instance. That sh file holds all the logic to pull the latest image, stop the existing container, and so on (see the sketch after this list).
You can also do it from Jenkins via an Ansible script.
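A hedged sketch of what such a script on the EC2 instance might look like (registry, image, container name, and ports are placeholders):

#!/bin/bash
# deploy.sh (hypothetical): pull the latest image from ECR and restart the container
REGISTRY=123456789012.dkr.ecr.us-east-2.amazonaws.com
IMAGE=$REGISTRY/my-app:latest
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin $REGISTRY
docker pull $IMAGE
# replace the running container with the new image
docker stop my-app 2>/dev/null || true
docker rm my-app 2>/dev/null || true
docker run -d --name my-app -p 80:8080 $IMAGE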
The use case: a developer makes some code changes and the following things happen automatically:
the build runs, an application artifact is created, a Docker image is generated with the artifact, the image is pushed to the Docker registry, and the AWS ECS tasks and ECS services are updated.
I want to know what other ways there are to achieve this automated update of AWS ECS services. So far I have implemented the AWS ECS update from a Jenkins build using:
1. Post-build AWS CLI scripts run from Jenkins to update ECS.
2. A post-build action or pipeline step that invokes an AWS Lambda function. I created a Lambda function in Java to implement that.
Please let me know the other ways we can achieve the above. Thanks.
I'm continuously deploying Docker containers from CircleCI to AWS ECS.
The outline of the deployment flow is as follows:
Build and tag a new Docker image
Login to AWS ECR and push the image
Update task definitions and services of ECS with ecs-deploy
ecs-deploy is a useful script that updates Docker images in ECS.
https://github.com/silinternational/ecs-deploy
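As a rough illustration, the deploy job boils down to commands along these lines (image names and variables are placeholders; $CIRCLE_SHA1 is CircleCI's built-in commit variable):

# build and tag the image with the commit SHA
docker build -t my-app:$CIRCLE_SHA1 .
# log in to ECR and push
aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REGISTRY
docker tag my-app:$CIRCLE_SHA1 $ECR_REGISTRY/my-app:$CIRCLE_SHA1
docker push $ECR_REGISTRY/my-app:$CIRCLE_SHA1
# roll the ECS service onto the new image
ecs-deploy -r $AWS_REGION -c $CLUSTER_NAME -n $SERVICE_NAME -i $ECR_REGISTRY/my-app:$CIRCLE_SHA1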
You could use a shell script that calls AWS CLI commands to create CloudFormation stacks, or directly call the create commands in the AWS CLI for the ECR repository, task definition, Events rule, and target (for scheduling).
Then you just call this script in your terminal with ./setup.sh and it executes all your commands at once.
aws ecr create-repository \
--repository-name tasks-${TASK_NAME}-${TASK_ENV} \
;
Or, if you want to set up your resources via CloudFormation templates, you can launch them using this command, as long as the template exists at file://name.yml:
aws cloudformation create-stack \
--stack-name stack-name \
--capabilities CAPABILITY_IAM \
--template-body file://name.yml \
--parameters
ParameterKey=ParamName,ParameterValue=${PARAM_NAME} \
;
Take a look at Codefresh: https://docs.codefresh.io/docs/amazon-ecs
You can build your pipeline:
Build Step
Push to Registry
Deploy to ECS
It's that easy.
While there are a ton of CI/CD tools out there, since I am early in my rollout, I decided to write a small script instead of having CI/CD pipelines do it.
Here is a one-click deploy script I wrote using the ecs-deploy script as a dependency to achieve a rolling deploy of a docker image to ECS.
You can run this locally from your dev or build/deployment box or use Jenkins or some local build tool.
#!/bin/bash
# automatically login to AWS
eval $(aws ecr get-login)
# build local docker image and push repo to AWS
docker build -t <yourlocaldockerimagetag> .
docker tag <yourlocaldockerimagetag>:latest <yourECSRepoURL>:latest
docker -D -l debug push <yourECSRepoURL>:latest
# deploy to ECS
ecs-deploy/ecs-deploy -m 50 -k <access-key> -s <secret-key> -r <aws-region> -c <cluster-name> -n <service-name> -i <yourECSRepoURL>:latest
Parameters:
cluster-name: Your cluster name in ECS
service-name: Your service name that you had created in ECS
yourECSRepoURL: ECS Repository URL
yourlocaldockerimagetag: Any local image tag name
access-key: your AWS access key for deployments
secret-key: your AWS secret key
Make sure you install ecs-deploy before this script.
The -m 50 tells it that it can deploy even if the number of nodes drops to 50%. Ideally you would have an extra node to do deployments, but if you can't afford that, this setting ensures that deployments continue to happen.
If you are also using an ELB (load balancer), the default deregistration delay for target groups is 5 minutes, which is a bit excessive. The deregistration delay is the time to wait for existing requests to complete BEFORE ECS sends a SIGTERM or SIGINT to your Docker container. You should lower this by going to Target Groups in the EC2 dashboard and clicking Edit Attributes. Otherwise your deployments may take forever.
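You can also lower it from the CLI; a hedged sketch (the target group ARN is a placeholder):

# cut the deregistration delay from the 300s default down to 30s
aws elbv2 modify-target-group-attributes \
  --target-group-arn <your-target-group-arn> \
  --attributes Key=deregistration_delay.timeout_seconds,Value=30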
I think nobody has mentioned CodePipeline from AWS; it really integrates easily with many AWS services, including ECS and CodeCommit:
Push commit to CodeCommit Repo, triggering the pipeline execution.
(Optional) Configure a Manual Approval step that needs you to take an action before Build.
Run a CodeBuild project that builds your Dockerfile and pushes the image to an ECR repo.
Run a "Deploy" step that deploys to a specific ECS Service. It updates the services with a new Task Definition that points to the new ECR Image.
I have used this flow with Bitbucket too: just configure a Bitbucket pipeline that pushes all new code to a CodeCommit repo as a previous step.
Exactly as @minamiyojo's and @astav's answers suggest, we ended up gluing ecs-deploy to a template engine to power our CD pipeline with some reusable components, which we just open-sourced as well:
https://github.com/GuccioGucci/yoke
Please refer to the Motivation section in the README; hope this helps your scenario too.
Currently we build our Docker containers and publish them to Amazon ECR. We have created task definitions and are able to deploy them manually on an ECS cluster, so a new deployment involves manually updating the task definition.
Now we would like to automate the deployment, so that when a Docker image is successfully built with Jenkins and published to the ECR repo, the currently running version is replaced with the newly built one.
Next to this, we would like to give people the opportunity to launch a specific version of one or more combinations of Docker containers. Any suggestion on how we could implement a continuous cycle without manually updating the task definitions?
A simpler solution for this might be to use the ecs-deploy script from here:
https://github.com/silinternational/ecs-deploy
After my container has been built and pushed to Docker Hub, it's simply a matter of:
ecs-deploy -k $AWS_KEY -s $AWS_SECRET -r $AWS_REGION -c $CLUSTER_NAME -n $SERVICE_NAME -i $DOCKER_IMAGE_NAME
and that does it.
This article describes how to do continuous deployments to ECS with Jenkins. It uses a shell script after the image has been built and pushed, to update an ECS service with a new task definition revision. Hope it helps.
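For reference, a hedged sketch of what such a script does under the hood (assumes jq is installed; cluster, service, family, and image names are placeholders):

# fetch the current task definition and swap in the new image
NEW_IMAGE=123456789012.dkr.ecr.us-east-2.amazonaws.com/my-app:latest
CURRENT=$(aws ecs describe-task-definition --task-definition my-task --query 'taskDefinition' --output json)
NEW_DEF=$(echo "$CURRENT" | jq --arg IMG "$NEW_IMAGE" '{family, containerDefinitions: (.containerDefinitions | map(.image = $IMG))}')
# register the new revision and point the service at the latest revision of the family
aws ecs register-task-definition --cli-input-json "$NEW_DEF"
aws ecs update-service --cluster my-cluster --service my-service --task-definition my-task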