Are ECS Task Definitions supposed to be committed into version control? - amazon-web-services

I'm fairly new to AWS ECS and am unsure if you are supposed to commit into version control some things that AWS already hosts for you. Specifically, should task-definitions created on AWS be committed to GitHub? Or do we just use AWS ECS/ECR as the version control FOR ECS Task definitions?

First, ECS/ECR are not version control systems like GitHub.
ECS is a container management service, while ECR is a Docker registry on AWS.
A task definition is used to run Docker containers in ECS. You can create a task definition before or after creating an ECS cluster, and you can create or modify it in many ways: the console, the AWS CLI, AWS CloudFormation, Terraform, etc. It depends on how you want to work and how frequently you change the task definition. Yes, you can keep your task definition in GitHub and create an automated job that applies it from there, but strictly speaking there is no need to store the task definition anywhere once your task is running.

ECR (Elastic Container Registry) is used to store container images.
A task definition is required to run Docker containers in Amazon ECS. Some of the parameters you can specify in a task definition include the Docker images to use with the containers in your task.
You have to provide your ECR image URI when creating the task definition; otherwise ECS will look in the default Docker Hub registry for the container image.
Also, you can keep your task definition JSON in any version control system if you want to reuse it later on.
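As an illustrative sketch, a task definition JSON kept in Git could be loaded and registered with boto3 like this (the family name, image URI, and file path are hypothetical examples, not from the question):

```python
import json

# Hypothetical example of a task definition kept in Git (e.g. taskdef.json);
# none of these names come from the question.
TASK_DEF = {
    "family": "my-web-app",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:1.0",
            "memory": 512,
            "essential": True,
        }
    ],
}

def load_task_definition(path):
    """Load a task definition JSON file as stored in version control."""
    with open(path) as f:
        return json.load(f)

def register(task_def):
    """Register the task definition with ECS (requires AWS credentials)."""
    import boto3  # deferred so the pure helper above works without boto3
    ecs = boto3.client("ecs")
    return ecs.register_task_definition(**task_def)
```

This way Git stays the source of truth for the JSON, and ECS simply holds whatever revision was registered last.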

Related

Multiple containers in one ECS cluster

I have a single Cluster running on ECS. At the moment, I have a single task definition and a single docker container running in the cluster. I would like to deploy a second container to the same cluster that will be running entirely separately. What is the best way to achieve this?
I have tried setting up a second task definition pointing to the second image in ECR, and then setting up a second service for that definition. I'm not sure if this is the best approach.
Are there any good examples online of how to achieve the running of two separate containers in a single ECS cluster?
"I have tried setting up a second task definition pointing to the second image in ECR, and then setting up a second service for that definition. I'm not sure if this is the best approach."
Yes, this is the correct way to do it, so I don't think you need any examples.
Each service uses a task definition that refers to a container in ECR (or other repo, like Docker Hub, but typically ECR).
Each service could be running any number of tasks (instances of the task definition).
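As a minimal boto3 sketch of that setup (the cluster and task definition family names are hypothetical):

```python
def second_service_params(cluster, family, desired_count=1):
    """Build create_service parameters for an independent second service
    running its own task definition family."""
    return {
        "cluster": cluster,
        "serviceName": f"{family}-service",
        "taskDefinition": family,  # uses the latest ACTIVE revision
        "desiredCount": desired_count,
    }

def create_second_service(cluster, family):
    import boto3  # deferred so the builder above stays testable offline
    ecs = boto3.client("ecs")
    return ecs.create_service(**second_service_params(cluster, family))
```

Each additional container you want to run independently gets its own task definition family and its own service in the same cluster.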

ECS equivalent of docker-compose's command

I have an application running using docker-compose.
Now I'm migrating the application to be hosted on ECS.
I'm translating the docker-compose settings to the boto3 ECS equivalents.
Unfortunately I don't find an equivalent of docker-compose's command in the AWS CLI.
You can use container-transform with boto3; it will convert a docker-compose file to an equivalent ECS task definition. It is also based on Python.
container-transform is a small utility to transform various docker container formats to one another.
Currently, container-transform can parse and convert:
Kubernetes Pod specs
ECS task definitions
Docker-compose configuration files
Marathon Application Definitions or Groups of Applications
Chronos Task Definitions
container-transform
cat docker-compose.yml | container-transform -v
compose-to-ecs
This tool is also suggested on the AWS ECS public roadmap:
we're unlikely to support the docker-compose format directly in our APIs. But, would a tool like container-transform to transform a docker-compose file into an ECS task definition work for you? Then you can use the resulting ECS task definition file in boto.
ECS does not contain a docker-compose command. Instead you will specify a task definition file that contains all the definitions of a service and the containers that reside within it.
The ECS service will then deploy this based on the task definition, you simply define parameters such as how many of these tasks are operating at once.
You can, however, use the ecs-cli tool to perform this migration for you: the ecs-cli compose command can take the docker-compose file and perform those translations.
Take a look at the Using Docker Compose File Syntax page to see which parameters are supported from a docker-compose file.
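Note also that docker-compose's command key itself has a direct counterpart: the command field of an ECS container definition. A hand-translation sketch for a small subset (the service name and image here are hypothetical examples):

```python
import shlex

def compose_service_to_container_def(name, compose_service):
    """Translate a small subset of a docker-compose service into an ECS
    container definition. docker-compose's `command` maps directly onto the
    `command` list in the container definition."""
    container = {
        "name": name,
        "image": compose_service["image"],
        "essential": True,
    }
    command = compose_service.get("command")
    if command is not None:
        # docker-compose accepts a string or a list; ECS expects a list
        container["command"] = (
            shlex.split(command) if isinstance(command, str) else list(command)
        )
    return container

# Hypothetical docker-compose service with a `command` override
web = compose_service_to_container_def(
    "web", {"image": "nginx:alpine", "command": "nginx -g 'daemon off;'"}
)
```

So even if you hand-write the task definition rather than converting it, the command override itself carries straight over.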
You can also use ECS ComposeX, which lets you keep using your docker-compose definitions as they exist for local purposes (it does not introduce any extensions unsupported by docker-compose), while also letting you define RDS/DocDB/DynamoDB/Kinesis and plenty of other resources that you can automatically link to your services.
When ready, ComposeX transforms all of that into CloudFormation templates containing the AWS ECS definitions and all necessary resources, logically linked to work together but equally self-sufficient (so you can deploy things separately, such as the databases).
All templates are automatically parsed and validated through the CloudFormation API (to the best of its abilities).
It is purposely aimed at working with AWS services and follows best practices, including letting you define least-privileged access between services and AWS resources.
It supports autoscaling and the creation or reuse of existing ECS clusters, and it is aimed at running workloads primarily on Fargate, but also on EC2 instances.

AWS ECS Cluster: how to update a single docker if its task definition has many other containers

I'm using AWS ECS to deploy a Spring Cloud MSA across 3 EC2 instances.
Two instances are used to run the Spring Cloud services: the Consul server, AP1, AP2, and the Spring gateway.
The other one is a small instance that only runs the Consul server.
I have 2 task definitions:
One for deploying Consul, which acts as service discovery and config server.
Another task definition is used to deploy the containers AP1, AP2, and the Spring gateway.
My question is: in the second task definition, how can I update only AP1 among the multiple containers when a new Docker image is created?
I tried creating a new revision of the second task definition and then updating the service. But it seems like all the Docker containers defined in the task definition are updated, not just the single one whose Docker image changed.
You cannot update a single container if the containers share the same task definition; it is better to place these containers in separate task definitions.
This is because all of the containers are part of a single task definition, and each task definition is run by a service.
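If the containers must stay in one task definition, a common workaround is to register a new revision in which only the one container's image changes, carrying the other container definitions over unchanged. A boto3 sketch (the family, container, cluster, and service names are hypothetical, and a real re-registration would also need to copy over fields like cpu, memory, and task roles):

```python
def bump_image(container_defs, container_name, new_image):
    """Return a copy of the container definitions where only the named
    container's image is changed; the others are carried over as-is."""
    updated = []
    for c in container_defs:
        c = dict(c)
        if c["name"] == container_name:
            c["image"] = new_image
        updated.append(c)
    return updated

def redeploy_one_container(family, container_name, new_image, cluster, service):
    import boto3  # deferred so bump_image stays testable offline
    ecs = boto3.client("ecs")
    current = ecs.describe_task_definition(taskDefinition=family)["taskDefinition"]
    new_revision = ecs.register_task_definition(
        family=family,
        containerDefinitions=bump_image(
            current["containerDefinitions"], container_name, new_image
        ),
    )["taskDefinition"]["taskDefinitionArn"]
    # Point the service at the new revision; the unchanged containers keep
    # the images they already had.
    ecs.update_service(cluster=cluster, service=service, taskDefinition=new_revision)
```

Note that ECS still restarts the whole task (all containers in it) on deployment, which is another reason to split independently-deployed containers into separate task definitions.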

Jenkins: deploy to AWS ECS with docker compose

I have some misunderstanding. I have a Jenkins instance, an ECS cluster, and a docker-compose config file to compose my images and link up the containers.
After a git push, my Jenkins instance grabs the sources from all the repos (webapp, api, lb) and performs a batch of operations like building, copying files, etc.
After that I have all the folders with Dockerfiles in a "ready to compose" state.
And at this stage I can't figure out how I should point my ECS cluster on AWS at all the images from Jenkins and compose them with my docker-compose.yml config.
I would be glad of any useful information.
Thanks.
First, you need to push your images from the Jenkins server into ECR. Push every image individually to an independent repo.
Next, you have to create an ECS cluster. In this cluster, you create an ECS service that runs in the cluster. For this Service, you create and configure a Task Definition to run your linked containers. You don't need to use docker-compose for this: in ECS you define the links between containers in the configuration of the Task Definition. You can add several container definitions to the same Task Definition, and ECS will link them together.
You can automate all of this from your Jenkins server by attaching an Instance Profile to it that allows it to call the ECS API. In order to deploy new images, all you need to do is push them to their ECR repos, create a new Task Definition revision pointing at them, and update the service that runs these task definitions. ECS will then automatically roll out the new tasks on your behalf.

AWS when we have to update task definition

As a newbie to AWS, I have been updating the task definition file every time I have to update the service.
Say I have a Task Definition 1 that uses a Docker image: if I update the Docker image, will refreshing the service pull the latest Docker image?
Or do I need to update the Task Definition file to make the service pull the latest Docker image?
In AWS ECS, after updating the Docker image, you need to register a new revision of the task definition and update the service to use it.
A tool such as ecs-deploy is useful to simplify this type of work.
https://github.com/silinternational/ecs-deploy
If you want to update the Docker image frequently, you can automate this further, for example by deploying from a CI service.
You could try using a tag within the Docker repository, like "latest" or "dev", referenced within the task definition.
However, you would not be able to use the update-service deployment orchestration, because the task definition revision does not change.
With ECS, the task definition revision is key to the orchestration of deployments.
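For completeness: if you do pin a floating tag like "latest" in the task definition, the forceNewDeployment flag of update_service can still make the service restart its tasks and re-pull the image without registering a new revision. A boto3 sketch (the cluster and service names are hypothetical):

```python
def force_redeploy_params(cluster, service):
    """update_service parameters that restart a service's tasks without a new
    task definition revision, so a floating tag like "latest" is re-pulled."""
    return {"cluster": cluster, "service": service, "forceNewDeployment": True}

def force_redeploy(cluster, service):
    import boto3  # deferred so the builder above stays testable offline
    ecs = boto3.client("ecs")
    return ecs.update_service(**force_redeploy_params(cluster, service))
```

Immutable tags plus a new task definition revision per deploy is still the more traceable approach, since the revision history then records exactly which image each deployment ran.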