Jenkins: deploy to AWS ECS with docker compose

I have a point of confusion. I have a Jenkins instance, an ECS cluster, and a docker-compose config file to compose my images and link up the containers.
After a git push, my Jenkins instance grabs the sources from all repos (webapp, api, lb) and runs a batch of operations: build, copy files, etc.
After that, all the folders with Dockerfiles are in a "ready for compose" state.
At this stage I can't figure out how to get my ECS cluster on AWS to grab all the images from Jenkins and compose them with my docker-compose.yml config.
I would be glad of any useful information.
Thanks.

First, you need to push your images from the Jenkins server into ECR, pushing each image to its own repository.
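For example, with AWS CLI v2 on the Jenkins box, the push step might look like this (the account ID, region, and repo name below are placeholders):

    # Authenticate Docker against ECR (hypothetical account/region).
    aws ecr get-login-password --region us-east-1 \
      | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

    # Tag and push one image; repeat for each image/repo.
    docker tag webapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/webapp:latest
    docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/webapp:latest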
Next, you create an ECS cluster and, inside it, an ECS Service. For this service, you create and configure a Task Definition to run your linked containers. You don't need docker-compose for this: in ECS you define the links between containers in the Task Definition's configuration. You can add several container definitions to the same Task Definition, and ECS will link them together.
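As a minimal sketch, a two-container Task Definition with a link between them could be written to a file and registered like this (the family, names, images, and memory values are assumptions; links require bridge network mode on the EC2 launch type):

    # Hypothetical task definition; adjust names and images to your setup.
    cat > task-definition.json <<'EOF'
    {
      "family": "webapp-stack",
      "networkMode": "bridge",
      "containerDefinitions": [
        {
          "name": "api",
          "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/api:latest",
          "memory": 512,
          "essential": true
        },
        {
          "name": "webapp",
          "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/webapp:latest",
          "memory": 512,
          "essential": true,
          "links": ["api"],
          "portMappings": [{"containerPort": 80, "hostPort": 80}]
        }
      ]
    }
    EOF
    aws ecs register-task-definition --cli-input-json file://task-definition.json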
You can automate all of this from your Jenkins server by attaching an Instance Profile to it that allows it to call the ECS API. To deploy new images, all you need to do is push them to their ECR repos, create a new Task Definition revision pointing at them, and update the service that runs that task definition. ECS will then perform a rolling deployment on your behalf, starting tasks with the new definition before stopping the old ones.
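The deploy step from Jenkins then reduces to a single CLI call (the cluster and service names here are hypothetical):

    # Point the service at the latest revision of the task definition family.
    aws ecs update-service --cluster my-cluster --service my-service \
      --task-definition webapp-stack

ECS resolves a bare family name to its newest active revision, so re-running this after each register-task-definition rolls out the new images.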

Related

Deploy Django Code with Docker to AWS EC2 instance without ECS

Here is my setup:
1. My Django code is hosted on GitHub.
2. I have Docker set up on my EC2 instance.
3. My deployment process is manual: I have to run git pull, docker build, and docker run on every single code change. I am using a dockerservice account to perform this step.
How do I automate step 3 with AWS CodeDeploy or something similar? Every example I see on the internet involves ECS or Fargate, which I am not ready to use yet.
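For reference, the manual steps in item 3 boil down to a script like the one below (the path, image name, and ports are assumptions); anything that can run this script on the instance, such as a CodeDeploy hook or a webhook-triggered job, automates the deployment:

    #!/bin/bash
    # Hypothetical sketch of the manual deploy steps from the question.
    set -e
    cd /home/ec2-user/myapp                 # assumed app directory
    git pull origin main
    docker build -t myapp:latest .
    docker stop myapp 2>/dev/null || true   # ignore if not running
    docker rm myapp 2>/dev/null || true
    docker run -d --name myapp -p 80:8000 myapp:latest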
Check out the guide on how to Use Docker Images from a Private Registry (e.g. Docker Hub) for Your Build Environment.

How to run commands in a Fargate task

I have a requirement where I have to create a Fargate task that can clone a GitLab repository (source code) and run a Maven build command to build the code.
And there would be another Fargate task that would create a Docker image out of it.
GitLab is on an EC2 instance.
Since we do not have exec access into the containers on Fargate, how and what would be the best way to do this? (I have multiple repos on GitLab, so the repo that I want to clone and build is not going to be the same every time.)
I have been reading about the Amazon Elastic Container Service (ECS) / Fargate plugin for Jenkins, but I'm not sure if Jenkins can be used to get into a Fargate container and run commands.
Nowadays you can use ECS Exec. Here's how to set it up: https://aws.amazon.com/blogs/containers/new-using-amazon-ecs-exec-access-your-containers-fargate-ec2/
Or, in short: https://www.ernestchiang.com/en/posts/2021/using-amazon-ecs-exec/
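Once ECS Exec is enabled, you can open a shell in a running Fargate task with the AWS CLI. The cluster, service, task, and container names below are placeholders, and the Session Manager plugin must be installed on the machine running the CLI:

    # Enable exec on the service, then redeploy so tasks pick it up.
    aws ecs update-service --cluster my-cluster --service my-service \
      --enable-execute-command --force-new-deployment

    # Open an interactive shell in a specific task's container.
    aws ecs execute-command --cluster my-cluster \
      --task <task-id> \
      --container my-container \
      --interactive \
      --command "/bin/sh"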

Deploy Docker-Compose YML to AWS ECS

One of the projects has shared its docker-compose.yml file. It contains various services, and each service forms a container. I can easily deploy this on EC2 and get going; however, I want to use AWS ECS only.
How can I deploy that YML file on AWS ECS?
AWS ECS is a little different from a normal Docker environment, where you start containers directly.
In ECS you need to create a task definition with the Docker image and then create a service to run that task.
So you cannot directly apply a docker-compose.yml file to ECS.
Here's how you can do this manually: https://aws.amazon.com/getting-started/hands-on/deploy-docker-containers/
You can always automate this using Terraform, the AWS CLI, etc.
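As a sketch of the CLI route, assuming you have already translated each compose service into a container definition in a task-definition.json file (all names here are placeholders):

    # Create the cluster, register the task definition, and run it as a service.
    aws ecs create-cluster --cluster-name demo-cluster
    aws ecs register-task-definition --cli-input-json file://task-definition.json
    aws ecs create-service --cluster demo-cluster --service-name demo-service \
      --task-definition demo-task --desired-count 1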

Are ECS Task Definitions supposed to be committed into version control?

I'm fairly new to AWS ECS and am unsure whether you are supposed to commit to version control things that AWS already hosts for you. Specifically, should task definitions created on AWS be committed to GitHub? Or do we just use AWS ECS/ECR as the version control for ECS task definitions?
First, ECS/ECR are not used for version control like GitHub.
ECS is a container management service, while ECR is a Docker registry on AWS.
A task definition is used to run Docker containers in ECS. You can create one when, or after, you create the ECS cluster, and you can create or modify it in many ways: the console, the AWS CLI, AWS CloudFormation, Terraform, etc. It depends on how you want to work and how frequently you change the task definition. Yes, you can keep your task definition on GitHub and create an automated job to register it from there, but there is no need to store it anywhere once your task is running.
ECR is the Elastic Container Registry, which is used to store container images.
A task definition is required to run Docker containers in Amazon ECS. Some of the parameters you can specify in a task definition include the Docker images to use with the containers in your task.
You have to provide your ECR URI when creating the task definition, or it will look in the default Docker Hub registry for the container image.
Also, you can keep your task definition JSON in any version control system if you want to reuse it later on.
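If you do keep it in Git, a CI job can simply re-register the committed file; ECS stores each registration as a new revision of the family (the file path and family name below are assumptions):

    # Register the committed task definition as a new revision.
    aws ecs register-task-definition --cli-input-json file://ecs/task-definition.json

    # Optionally dump the currently active revision for comparison.
    aws ecs describe-task-definition --task-definition my-task \
      --query taskDefinition > current-task-definition.json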

How to ensure the Docker image is updated on AWS ECS?

I use Docker Hub to store a private Docker image. The repository has a webhook that, once the image is updated, calls a service I built to:
update the ECS task definition
update the ECS service
deregister the old ECS task definition
The service runs accordingly: after it runs, ECS creates a new task with the new task definition, stops the task with the old task definition, and the service comes back up with the new definition.
The problem is that the Docker image is not updated: once the service starts with the new task definition, it still runs the old image.
Am I doing something wrong? How do I ensure the Docker image is updated?
After analysing the AWS ECS logs, I found out that the problem was in the ECS Docker authentication.
To solve it, I added the following to the file /etc/ecs/ecs.config:
    ECS_CLUSTER=default
    ECS_ENGINE_AUTH_TYPE=dockercfg
    ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"auth":"YOUR_DOCKER_HUB_AUTH","email":"YOUR_DOCKER_HUB_EMAIL"}}
Just replace YOUR_DOCKER_HUB_AUTH and YOUR_DOCKER_HUB_EMAIL with your own information and it should work properly.
To find this information, you can run docker login on your own computer and then look for the data in the file ~/.docker/config.json.
For more information on the Private Registry Authentication topic, please see http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html
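For example, on your workstation (the base64 auth string below is made up; depending on your Docker version, credentials may be stored in a credential helper instead of this file):

    docker login
    cat ~/.docker/config.json
    # Expect something like:
    # { "auths": { "https://index.docker.io/v1/": { "auth": "bXl1c2VyOm15cGFzcw==" } } }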