I am at a bit of a crossroads here. My goal is to automate creating my ECS architecture and deploying my docker-compose services to ECS Fargate, but there are so many ways to do it!
Hoping to get some insight from the community on picking the right tool for the job. What are the use cases for each of these? When should I pick one over the other?
Docker ECS integration
https://docs.docker.com/cloud/ecs-integration/
ecs-cli
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_CLI.html
AWS Copilot
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Copilot.html
AWS CLI ecs
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ecs/index.html
This is how I am breaking it down:
Docker Compose / ECS integration: use this if you are deep into Docker already and are in love with the compose syntax. If you are not already in love with it, you should be, because it's a very simple syntax.
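For reference, the whole workflow is just a handful of commands; a minimal sketch, assuming your AWS credentials are already configured and you have a compose file in the current directory:

```sh
# Create a Docker context backed by ECS (it will prompt for which
# AWS credentials to use) and switch to it.
docker context create ecs myecscontext
docker context use myecscontext

# Against the ECS context, "up" converts the compose file into
# CloudFormation and provisions Fargate tasks, networking, and an LB.
docker compose up

# Tear the whole stack down again.
docker compose down
```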
ECS CLI: this is the predecessor of the Copilot CLI. ECS CLI had native ECS components/workflows plus some level of Docker Compose compatibility. We have since improved those workflows in Copilot and forked the compose support into the Docker Compose / ECS integration above. TL;DR: don't use ECS CLI.
Copilot CLI: this is a very easy way to start with containers on AWS ECS. If you don't know much about Docker and don't need to become an expert in containers, Copilot is the right approach. The only drawback (IMO) is that it doesn't really support a proper IaC pattern yet (example).
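To illustrate how little ceremony it needs, a sketch of a first deployment (the app and service names here are made up):

```sh
# Initialize an application and a load-balanced web service from a
# local Dockerfile, then deploy it straight to a test environment.
copilot init --app demo \
  --name api \
  --type "Load Balanced Web Service" \
  --dockerfile ./Dockerfile \
  --deploy
```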
AWS CLI (ecs namespace): this is just the CLI manifestation of the 1:1 mapping of the core ECS APIs. This is most likely way too low level for you to extract value from (unless you are doing very deep things). For example, one Copilot command or a few Docker Compose lines could easily explode into dozens of AWS CLI commands.
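To give a sense of how low level this gets, even a bare-bones Fargate service takes several calls, on top of IAM roles, networking, and a task definition JSON you must prepare separately; a hypothetical sketch where every name and ID is a placeholder:

```sh
# Assumes IAM roles, a VPC, subnets, security groups, and
# taskdef.json have all been created beforehand.
aws ecs create-cluster --cluster-name demo

aws ecs register-task-definition \
  --cli-input-json file://taskdef.json

aws ecs create-service \
  --cluster demo \
  --service-name web \
  --task-definition web \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-012345],securityGroups=[sg-012345],assignPublicIp=ENABLED}"
```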
I'd add CDK to your list: CDK lets you express AWS constructs (including ECS) in standard programming languages. The good thing about CDK is that it can map 1:1 to raw API/CloudFormation constructs, but it also ships with higher-level, more abstracted libraries that allow you to express in a few lines of code what would require hundreds of lines of CFN.
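As a taste of those higher-level libraries, a minimal sketch (TypeScript, all names illustrative) using the ecs-patterns module, which stands up a cluster, task definition, Fargate service, and internet-facing load balancer in one construct:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'DemoStack');

// One high-level construct; the equivalent raw CFN would run to
// hundreds of lines of template.
new ecsPatterns.ApplicationLoadBalancedFargateService(stack, 'WebService', {
  cpu: 256,
  memoryLimitMiB: 512,
  desiredCount: 2,
  taskImageOptions: {
    image: ecs.ContainerImage.fromRegistry('amazon/amazon-ecs-sample'),
  },
  publicLoadBalancer: true,
});

app.synth();
```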
If you are deep into compose (as it seems from your original message) you may want to have a look at this approach.
Related
I need to deploy an application in AWS using ECS Fargate. This application has multiple services and a docker-compose file. I see there are two main ways to do this:
Using Docker's ECS context CLI; the official docs I found are the Docker doc and the AWS doc.
Using Amazon's ECS cli as described here.
I am trying to understand the following but didn't find any comparison on the web:
What are the advantages/disadvantages of each approach?
Can the same result be achieved with both options, or is there something one can do that the other can't?
What should I take into consideration when I choose one?
Thanks,
So I've been trying both approaches this past week, and here's what I have found.
ecs-cli and docker support different sets of tags for nontrivial features, even things such as how much CPU and memory your container needs.
For example, docker wants the deploy config in a deploy tag under the service. See https://docs.docker.com/compose/compose-file/deploy/#memory
However, ecs-cli wants these settings in a separate ecs-params.yml file. There are almost no examples out there other than trivial ones, and the ones that are published don't actually work with the current tooling.
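To make the mismatch concrete, here is roughly how the same CPU/memory intent is expressed in each tool (values are illustrative, and the ecs-params keys reflect my reading of the docs):

```yaml
# Compose v3 syntax, as read by the Docker ECS integration:
services:
  web:
    image: nginx
    deploy:
      resources:
        limits:
          cpus: "0.25"
          memory: 512M
```

```yaml
# The separate ecs-params.yml that ecs-cli expects instead:
version: 1
task_definition:
  task_size:
    cpu_limit: 256
    mem_limit: 0.5GB
```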
Docker ECS integration works and handles tons of details for you, including VPC, subnet creation, LBs, security groups, everything. This is an amazing part.
ecs-cli offers tons more options than Docker CLI but you need to do a lot more work yourself, manual security group config, etc.
I was never able to get ecs-cli to really work. It kept choking on the CPU config, and what was written in the AWS docs did not actually work.
docker compose logs doesn't work
Overall, neither CLI seems to be in production shape, but the Docker one seems to be far ahead of where ecs-cli is, IMO.
I developed a Spring Boot microservices application in which each microservice is packaged into a separate Docker container. The databases for these services are also in separate Docker containers. Currently, all of these are hosted and running in AWS ECS. If I need to migrate to Lambda, can I reuse the same docker containers as such? (Of course, I will add the AWS serverless dependency in all the pom.xml files and do the repackaging.) Kindly let me know if I can run the modified docker images as such in Lambda.
Thank You
I don't think you can share the same docker image between your ECS task and Lambda, because they differ in a few aspects, some of which are very specific to Lambda: how you write the handler, as well as how you package it.
New for AWS Lambda – Container Image Support
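The packaging difference shows up right in the Dockerfile: a Lambda container image must implement the Lambda Runtime API (the AWS-provided base images do this) and name a handler instead of starting a web server. A hypothetical sketch for a Java function, with made-up class and path names:

```dockerfile
# The AWS base image bundles the Lambda runtime interface client.
FROM public.ecr.aws/lambda/java:11

# Function classes and dependencies go into the Lambda task root.
COPY target/classes ${LAMBDA_TASK_ROOT}
COPY target/dependency/ ${LAMBDA_TASK_ROOT}/lib/

# CMD names the handler; an ECS image would instead start the app server.
CMD ["com.example.StreamLambdaHandler::handleRequest"]
```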
Your wording is also a bit confusing:
can I reuse the same docker containers as such?
and then you say
can run the modified docker images as such in Lambda?
Example task definitions
For Spring Boot applications specifically, you can take a look here:
Running APIs Written in Java on AWS Lambda
Java AWS Lambda Container Image Support (Complete Guide)
There are so many options:
Docker Compose with the ECS CLI looks like the easiest solution
Terraform
CloudFormation (looks complex!)
Ansible
I am only interested in setting up a basic ECS docker set-up with ELB and easily updating the Docker image version.
We all love technology here, but we're not all super geniuses when it comes to tech. So I'm looking to keep my set-up as simple as possible. We run Jenkins, 2 NodeJS applications, 2 Java applications in ECS and I know it involves IAM, Security Groups, EBS, ELB, ECS Service/Task, ECS Task Definition, but that already gets complex quickly in CloudFormation.
What are good technologies that will allow us to use Docker, keep things simple and don't require us to be very intelligent to understand our own programming code?
I would suggest you start by trying to set up your pipeline using Terraform. Learning it will give you experience with a non-vendor-specific infrastructure-as-code tool.
Another possibility is to avoid using CloudFormation directly and to prefer the AWS CDK (https://docs.aws.amazon.com/cdk/latest/guide/home.html) as your IaC tool.
Best regards
I've been trying to find an efficient way to handle continuous deployment with a Docker compose setup and AWS hosting.
So far I've looked into CodeDeploy, S3 buckets, and ECS. My application is relatively small with only 3 docker services, a Django app, NGINX, and PostgreSQL. I was unable to find any reliable information for using CodeDeploy with Docker compose and because of the small scale ECS seems impractical. I've considered an S3 bucket but that seems no better than just deploying my application with something like git or scp.
What is a standard way of handling deploying a docker compose setup on AWS? If possible I would like to use Bitbucket Pipelines or CircleCI to perform the deployment in a manually triggered step after running tests. But I've been unable to find a solution that would easily let me copy over the code (which is in a git repo on a production branch and is how I get the code onto the production server at the moment).
I would like to add some possibilities to @gasc's answer.
It would be better if you make a CloudFormation template for deploying your EC2 resources, with all the required security groups, auto scaling, and other pieces.
Then create an AMI with Docker Compose installed, plus anything else you require in your EC2 environment.
Then you can use a CodeDeploy pipeline. AWS also provides a private container registry (ECR); maybe you want to use that.
The rest of the steps are the same: just SCP the compose file onto the EC2 instance, run the docker-compose up command, and you are done.
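In pipeline terms, that last step might look something like this (the host, key, and paths are placeholders):

```sh
# Copy the compose file to the instance, then pull fresh images
# and restart the stack in detached mode.
scp -i deploy-key.pem docker-compose.yml ec2-user@<instance-ip>:/home/ec2-user/app/
ssh -i deploy-key.pem ec2-user@<instance-ip> \
  "cd /home/ec2-user/app && docker-compose pull && docker-compose up -d"
```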
Let me know if you want more help; I'm open for discussion.
What I will do in your case is:
1 - If needed, update your docker-compose.yml file (or however you called it) to version 3 or higher, to use swarm.
2 - During your pipeline build all images needed, and push them to a registry.
3 - In your pipeline scp your compose file to a manager node.
4 - Deploy your application using swarm (docker stack deploy -c <your-docker-compose-file> your_app_name). This way you can handle rolling updates and scale easily; see the sketch after the note below.
Note that if you want to use multiple nodes, you will need to open a few ports on them (swarm uses TCP 2377, TCP and UDP 7946, and UDP 4789).
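A rough sketch of steps 2 through 4 as pipeline commands, assuming a registry you can push to and SSH access to a manager node (all names here are placeholders):

```sh
# Build and push an image tagged with the current commit.
TAG=$(git rev-parse --short HEAD)
docker build -t registry.example.com/myapp/web:$TAG .
docker push registry.example.com/myapp/web:$TAG

# Ship the compose file to a manager node and deploy the stack;
# --with-registry-auth forwards credentials so nodes can pull.
scp docker-compose.yml deploy@manager-node:/srv/myapp/
ssh deploy@manager-node \
  "docker stack deploy -c /srv/myapp/docker-compose.yml --with-registry-auth myapp"
```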
I see you mentioned that ECS might seem impractical for such a small scale; in my opinion, not necessarily. It would require you to rewrite your docker-compose.yml into task and service definitions, but since there aren't a lot of services, that shouldn't take much time.
I am working on a project using a microservices architecture.
Each service lives in its own docker container and has a separate git repository in order to ensure loose coupling.
It is my understanding that AWS recently announced support for Multi-Container Docker environments in Elastic Beanstalk. This is great for development because I can launch all services with a single command and test everything locally on my laptop. Just like Docker Compose.
However, it seems I only have the option to deploy all services at once, which I am afraid defeats the initial purpose of having a microservices architecture.
I would like to be able to deploy/version each service independently to AWS. What would be the best way to achieve that while keeping infrastructure management to a minimum?
We are currently using Amazon ECS to accomplish exactly what you are trying to achieve. You can define your Docker container in a task definition and then create an ECS service, which will handle the number of instances, scaling, etc.
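On the independent-deployment point: each microservice gets its own task definition family and service, so a pipeline can roll out one service without touching the others; a hypothetical sketch where the cluster and service names are made up:

```sh
# Register a new task definition revision for just one microservice...
aws ecs register-task-definition \
  --cli-input-json file://users-service-taskdef.json

# ...and point only that service at it (using the family name picks up
# the latest revision); the other services are left untouched.
aws ecs update-service \
  --cluster prod \
  --service users-service \
  --task-definition users-service
```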
One thing to note is that Amazon mentions the word container a lot in the documentation. They may be talking about the EC2 instances that make up the cluster hosting your Docker containers.