How to use existing docker-compose.yml file in AWS Elastic Container Service?

I have a multi-container application and use a docker-compose.yml file to start it. I'm planning to use AWS Elastic Container Service for deployment. From the tutorials, I gathered that in the AWS ECS console you create container definitions inside a task definition instead of supplying a user-defined Compose file.
How to use existing docker-compose.yml file in AWS Elastic Container Service?

There is support for using a docker-compose file through the ECS CLI, via the `ecs-cli compose` command.
This will translate your docker-compose file and create a task definition that can be used with ECS.
The task definition in ECS is a mapping file for all container-based configuration. Within this file you can define the container definitions for each container within the task, set port mappings, specify custom commands, mount volumes, etc.
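As a sketch of what that mapping looks like when built programmatically (the family name, image, and port values here are illustrative placeholders, not taken from the question):

```python
import json

# Minimal ECS task definition sketch. Family, image, and port values
# are hypothetical placeholders for illustration.
task_definition = {
    "family": "my-app",  # hypothetical task family name
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",
            "memory": 256,
            "portMappings": [
                {"containerPort": 80, "hostPort": 80, "protocol": "tcp"}
            ],
            # custom command, equivalent to docker-compose's `command`
            "command": ["nginx", "-g", "daemon off;"],
            "mountPoints": [
                {"sourceVolume": "shared-data",
                 "containerPath": "/usr/share/nginx/html"}
            ],
        }
    ],
    "volumes": [{"name": "shared-data", "host": {}}],
}

print(json.dumps(task_definition, indent=2))
```

Each entry in `containerDefinitions` describes one container of the task, mirroring one service from a compose file.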

Related

How do I change Grafana docker container environment variables on AWS Fargate to use a mysql database

How do I change Grafana Docker container environment variables on AWS Fargate to use a MySQL database? I am very new to AWS and containers. Is it done through the command line? When I type "docker ps -a", my containers do not show up. I've tried looking through the Grafana documentation, but I think that covers running the containers locally. Would it be the same for AWS? I've looked into SSHing into the containers, but that requires me to alter the task definition in Terraform.
You need to define correct variables in the ECS task definition, e.g.:
GF_DATABASE_TYPE: mysql
GF_DATABASE_HOST: my-mysql:3306
GF_DATABASE_USER: my-user
GF_DATABASE_PASSWORD: my-password
Of course you may need to set more options if you have a specific DB configuration. All options are documented: https://grafana.com/docs/grafana/latest/administration/configuration/#database
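In task-definition form, those variables go into the container definition's `environment` list. A minimal sketch, where the host and credential values are placeholders exactly as in the answer above:

```python
import json

# Sketch of a Grafana container definition's environment section.
# Host, user, and password values are placeholders.
grafana_container = {
    "name": "grafana",
    "image": "grafana/grafana:latest",
    "environment": [
        {"name": "GF_DATABASE_TYPE", "value": "mysql"},
        {"name": "GF_DATABASE_HOST", "value": "my-mysql:3306"},
        {"name": "GF_DATABASE_USER", "value": "my-user"},
        {"name": "GF_DATABASE_PASSWORD", "value": "my-password"},
    ],
}

print(json.dumps(grafana_container, indent=2))
```

Note that ECS expects each variable as a `{"name": ..., "value": ...}` pair rather than the `KEY: value` map form used in docker-compose.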

ECS equivalent of docker-compose's command

I have an application running using docker-compose.
Now I'm migrating the application to be hosted on ECS.
I'm translating the docker-compose settings to the boto3 ECS equivalents.
Unfortunately I don't find an equivalent of docker-compose's command in the AWS CLI.
You can use container-transform together with boto3; it will convert a docker-compose file into an equivalent ECS task definition. It is also based on Python.
container-transform is a small utility to transform various docker container formats to one another.
Currently, container-transform can parse and convert:
Kubernetes Pod specs
ECS task definitions
Docker-compose configuration files
Marathon Application Definitions or Groups of Applications
Chronos Task Definitions
container-transform
cat docker-compose.yml | container-transform -v
compose-to-ecs
This tool is also suggested on the AWS ECS roadmap:
we're unlikely to support the docker-compose format directly in our APIs. But, would a tool like container-transform to transform a docker-compose file into an ECS task definition work for you? Then you can use the resulting ECS task definition file in boto.
ECS does not contain a docker-compose command. Instead you will specify a task definition file that contains all the definitions of a service and the containers that reside within it.
The ECS service will then deploy this based on the task definition, you simply define parameters such as how many of these tasks are operating at once.
You can, however, use the ecs-cli tool to perform this migration for you: the `ecs-cli compose` command takes the docker-compose file and performs those translations.
Take a look at the Using Docker Compose File Syntax page to see which parameters are supported from a docker-compose file.
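For the original question about docker-compose's `command` specifically: the equivalent field in an ECS container definition is also named `command`, but it takes a list of argument strings rather than a single shell string. A sketch of that translation (the example compose values are illustrative):

```python
import shlex

def compose_command_to_ecs(command):
    """Translate a docker-compose `command` value to the ECS
    containerDefinitions `command` field, which is a list of strings."""
    if isinstance(command, list):
        return command           # compose also accepts list form; pass through
    return shlex.split(command)  # string form: split like a shell would

print(compose_command_to_ecs("python app.py --port 80"))
# → ['python', 'app.py', '--port', '80']
print(compose_command_to_ecs(["python", "app.py"]))
# → ['python', 'app.py']
```

`shlex.split` handles quoted arguments the way a shell would, which plain `str.split` would not.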
You can also use ECS ComposeX, which lets you keep using your docker-compose definitions as they exist for local development, since it does not introduce any extensions unsupported by docker-compose. It also lets you define RDS/DocDB/DynamoDB/Kinesis and plenty of other resources that are automatically linked to your services.
When ready, ComposeX transforms all of that into CloudFormation templates containing the AWS ECS definitions and all necessary resources, logically linked to work together but equally self-sufficient (so you can deploy things separately, like the DBs, for example).
All templates are automatically parsed and validated through the CloudFormation API (to the best of its abilities).
It is purposely aimed at working with AWS services and follows best practices, including letting you define least-privileged access between services and AWS resources.
It supports autoscaling and the creation or reuse of existing ECS clusters, and is aimed at running workloads primarily on Fargate, but also on EC2 instances.

Deploy Docker-Compose YML to AWS ECS

One of the projects has shared its docker-compose.yml file. It contains various services. Each service forms a container. I can easily deploy this image in EC2 and get going. However, I want to use AWS ECS only.
How can I deploy that YML file in AWS ECS?
AWS ECS is a little different from a normal Docker environment, where you directly start the container.
In ECS you need to create a task with the Docker image and then create a service to run that task.
So you cannot directly apply your docker-compose.yml file to ECS.
Here's how you can do this manually: https://aws.amazon.com/getting-started/hands-on/deploy-docker-containers/
You can always automate this using Terraform, the AWS CLI, etc.
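A sketch of what that automation looks like with boto3, with the actual API call commented out since it requires AWS credentials (the cluster, service, task family, and subnet names are placeholders):

```python
# Placeholders throughout: cluster, service, task family, and subnet ID.
create_service_params = {
    "cluster": "my-cluster",
    "serviceName": "my-service",
    "taskDefinition": "my-task-family",  # an already-registered task definition
    "desiredCount": 2,                   # how many copies of the task to run
    "launchType": "FARGATE",
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet ID
            "assignPublicIp": "ENABLED",
        }
    },
}

# With credentials configured, the real call would be:
# import boto3
# ecs = boto3.client("ecs")
# ecs.create_service(**create_service_params)

print(create_service_params["serviceName"])
```

This mirrors the two manual steps from the answer: the task definition describes the containers, and the service keeps `desiredCount` copies of the task running.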

Are ECS Task Definitions supposed to be committed into version control?

I'm fairly new to AWS ECS and am unsure if you are supposed to commit into version control some things that AWS already hosts for you. Specifically, should task-definitions created on AWS be committed to GitHub? Or do we just use AWS ECS/ECR as the version control FOR ECS Task definitions?
First, ECS/ECR are not used for version control like GitHub.
ECS is a container management service, whereas ECR is essentially a Docker registry on AWS.
A task definition is used to run Docker containers in ECS. You can create a task definition on or after creating an ECS cluster, and you can create or modify it in many ways: the console, the AWS CLI, AWS CloudFormation, Terraform, etc. It depends on how you want to do this and how frequently you change the task definition. Yes, you can keep your task definition in GitHub and create an automated job to apply it from there each time, but there is no need to store the task definition anywhere once your task is running.
ECR is the Elastic Container Registry, which is used to store container images.
ECS task definition: a task definition is required to run Docker containers in Amazon ECS. Some of the parameters you can specify in a task definition include the Docker images to use with the containers in your task.
You have to provide your ECR URI while creating the task definition, or it will look in the default Docker Hub registry for the container image.
Also, you can keep your task definition JSON in any version control system if you want to use it later on.
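As a sketch of that workflow: keep the task definition JSON in your repo and load it before registering. The boto3 call is commented out since it requires AWS credentials, and the family name, account ID, and image URI below are placeholders:

```python
import json

# Illustrative task definition as it might be kept in version control,
# e.g. in a file like taskdef.json. All names/IDs are placeholders.
TASKDEF_JSON = """
{
  "family": "my-service",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
      "memory": 512
    }
  ]
}
"""

taskdef = json.loads(TASKDEF_JSON)

# With credentials configured you could register it via boto3:
# import boto3
# ecs = boto3.client("ecs")
# ecs.register_task_definition(**taskdef)

print(taskdef["family"])
```

Note the `image` field uses a full ECR URI; without a registry host, ECS would look up the image on Docker Hub instead.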

Jenkins: deploy to AWS ECS with docker compose

I have some confusion. I have a Jenkins instance, an ECS cluster, and a docker-compose config file to compose my images and link up containers.
After a git push, my Jenkins instance grabs the sources from all repos (webapp, api, lb) and performs a batch of operations like build, copy files, etc.
After that I have all the folders with Dockerfiles in a "ready for compose" state.
At this stage I can't figure out how to get my ECS cluster on AWS to grab all the images from Jenkins and compose them with my docker-compose.yml config.
I would be glad of any useful information.
Thanks.
First, you need to push your images from the Jenkins server into ECR. Push every image individually to an independent repo.
Next, you have to create an ECS cluster. In this cluster, you create an ECS service that runs in the cluster. For this Service, you create and configure a Task Definition to run your linked containers. You don't need to use docker-compose for this: in ECS you define the links between containers in the configuration of the Task Definition. You can add several container definitions to the same Task Definition, and ECS will link them together.
You can automate all of this from your Jenkins server by attaching an Instance Profile to it that allows it to call the ECS API. To deploy new images, all you need to do is push them to their ECR repos, create a new Task Definition revision pointing at them, and update the service that runs these task definitions. ECS will then automatically roll out the new revision on your behalf.
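A sketch of that deployment step from Jenkins, with the boto3 calls commented out since they require AWS credentials (the cluster, service, family, and image names are placeholders):

```python
import json

# Placeholders: replace with your cluster, service, and ECR image URI.
CLUSTER = "my-cluster"
SERVICE = "my-service"
NEW_IMAGE = "123456789012.dkr.ecr.us-east-1.amazonaws.com/webapp:build-42"

# 1) New task definition revision pointing at the freshly pushed image.
taskdef_params = {
    "family": "webapp",
    "containerDefinitions": [
        {"name": "webapp", "image": NEW_IMAGE, "memory": 512}
    ],
}

# 2) Service update that rolls the new revision out.
update_params = {
    "cluster": CLUSTER,
    "service": SERVICE,
    "taskDefinition": "webapp",  # latest ACTIVE revision of the family
}

# With credentials configured, the real calls would be:
# import boto3
# ecs = boto3.client("ecs")
# ecs.register_task_definition(**taskdef_params)
# ecs.update_service(**update_params)

print(json.dumps(update_params, indent=2))
```

The Instance Profile attached to the Jenkins server is what authorizes these API calls, so no credentials need to be stored in the job itself.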