AWS ECS CLI vs Docker Context ECS CLI

I need to deploy an application in AWS using ECS Fargate. This application has multiple services and a docker-compose file. I see there are two main ways to do this:
Using Docker's Context ECS CLI; the official docs I found are the Docker doc and the AWS doc.
Using Amazon's ECS CLI as described here.
I am trying to understand the following but didn't find any comparison on the web:
Which are the advantages/disadvantages of each way?
Can the same result be achieved with both options, or is there something one can do that the other can't?
What should I take into consideration when choosing one?
Thanks,

So I've been trying both approaches this past week, and here's what I found.
ecs-cli and Docker support different sets of compose tags for nontrivial features, even basics such as how much CPU and memory your container needs.
For example, Docker wants the deploy config in a deploy tag under the service. See https://docs.docker.com/compose/compose-file/deploy/#memory
ECS CLI, however, wants these settings in an ecs-params.yml. There are almost no published examples beyond trivial ones, though, and the ones that are published don't actually work with the current tooling.
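For concreteness, here is roughly what that split looks like. Treat this as a sketch rather than a tested config: the compose side follows the Compose spec, and the ecs-params.yml field names are from the ECS CLI docs as I understand them.

```yaml
# docker-compose.yml -- the Docker ECS integration reads resources
# from the standard compose "deploy" tag under the service:
services:
  web:
    image: nginx:alpine
    deploy:
      resources:
        limits:
          cpus: "0.25"
          memory: 512M
```

```yaml
# ecs-params.yml -- ecs-cli wants the same information here instead:
version: 1
task_definition:
  task_execution_role: ecsTaskExecutionRole
  ecs_network_mode: awsvpc
  task_size:
    cpu_limit: 256
    mem_limit: 512
```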
Docker ECS integration works and handles tons of details for you: VPC, subnet creation, LBs, security groups, everything. This part is amazing.
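For reference, the happy path with the Docker ECS integration is just a few commands; a sketch (the context name is my own placeholder):

```bash
docker context create ecs myecscontext   # prompts for an AWS profile/credentials
docker context use myecscontext
docker compose up     # converts the compose file to CloudFormation and deploys to Fargate
docker compose down   # tears the whole stack back down
```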
ecs-cli offers tons more options than the Docker CLI, but you need to do a lot more work yourself: manual security group config, etc.
I was never able to get ecs-cli to really work. It kept choking on CPU config, and what was written in the AWS docs did not actually work.
docker compose logs doesn't work
Overall, neither CLI seems to be production-ready, but the Docker one seems far ahead of where ecs-cli is, IMO.

Related

How to decide which ECS tool to use?

I am at a bit of a crossroads here. My goal is to automate creating my ECS architecture and deploying my docker-compose services to ECS Fargate, but there are so many ways to do it!
Hoping to get some insight from the community on picking the right tool for the job. What are the use cases for each of these? When should I pick one over the other?
Docker ECS integration
https://docs.docker.com/cloud/ecs-integration/
ecs-cli
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_CLI.html
AWS Copilot
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Copilot.html
AWS CLI ecs
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ecs/index.html
This is how I am breaking it down:
Docker Compose / ECS integration: use this if you are deep into Docker already and are in love with the compose syntax. If you are not already in love with it, you should be, because it's a very simple syntax.
ECS CLI: this is the previous version of the Copilot CLI. ECS CLI had native ECS components/workflows to it plus some level of docker compose compatibility. We have since improved the workflows in Copilot and forked the compose support into the Docker Compose / ECS integration above. TL;DR: don't use ECS CLI.
Copilot CLI: this is a very easy way to start with containers on AWS ECS. If you don't know much about Docker and don't need to become an expert in containers, Copilot is the right approach. The only drawback (IMO) is that it doesn't really support a proper IaC pattern yet (example).
AWS CLI (ecs namespace): this is just the CLI manifestation of the 1:1 mapping of the core ECS APIs. This is most likely way too low-level for you to extract value from (unless you are doing very deep things). For example, one Copilot command or a few Docker Compose lines could easily explode into dozens of AWS CLI commands, as the sketch below shows.
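To illustrate that point, here is a sketch of a minimal Fargate service at the raw CLI level (all names, IDs, and the contents of taskdef.json are placeholders):

```bash
aws ecs create-cluster --cluster-name demo
aws ecs register-task-definition --cli-input-json file://taskdef.json
aws ecs create-service \
  --cluster demo \
  --service-name web \
  --task-definition web-task \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-abc123],securityGroups=[sg-abc123],assignPublicIp=ENABLED}"
# ...plus separate calls for the VPC, subnets, security groups, load balancer,
# listeners, IAM roles, and log groups that Copilot or the Compose integration
# would have created for you.
```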
I'd add CDK to your list: CDK lets you express AWS constructs (including ECS) in standard programming languages. The good thing about CDK is that it can map 1:1 to raw API/CloudFormation constructs, but it also ships with higher-level, more abstracted libraries that allow you to express in a few lines of code what would require hundreds of lines of CFN.
If you are deep into compose (as it seems from your original message), you may want to have a look at this approach.

What is the best way to set up a CI/CD pipeline on ECS?

There are so many options:
Docker Compose with the ECS CLI looks like the easiest solution
Terraform
CloudFormation (looks complex!)
Ansible
I am only interested in setting up a basic ECS docker set-up with ELB and easily updating the Docker image version.
We all love technology here, but we're not all super geniuses when it comes to tech. So I'm looking to keep my set-up as simple as possible. We run Jenkins, 2 NodeJS applications, 2 Java applications in ECS and I know it involves IAM, Security Groups, EBS, ELB, ECS Service/Task, ECS Task Definition, but that already gets complex quickly in CloudFormation.
What are good technologies that will allow us to use Docker, keep things simple and don't require us to be very intelligent to understand our own programming code?
I would suggest you start by trying to set up your pipeline using Terraform. Learning it will give you experience with non-vendor-specific infrastructure as code.
Another possibility is to avoid using CloudFormation directly and instead use the AWS CDK (https://docs.aws.amazon.com/cdk/latest/guide/home.html) for IaC.
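Whichever IaC tool builds the underlying infrastructure, the "easily updating the Docker image version" part of a Jenkins job usually reduces to a few commands. A sketch, assuming you push to ECR and the service uses a floating tag (account, region, and names are placeholders):

```bash
docker build -t 123456789012.dkr.ecr.eu-west-1.amazonaws.com/myapp:latest .
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/myapp:latest
# force the service to pull the new image and roll the tasks
aws ecs update-service --cluster my-cluster --service my-service --force-new-deployment
```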
Best regards

Continuous Deployment of Docker Compose App to AWS/EC2

I've been trying to find an efficient way to handle continuous deployment with a Docker compose setup and AWS hosting.
So far I've looked into CodeDeploy, S3 buckets, and ECS. My application is relatively small, with only 3 docker services: a Django app, NGINX, and PostgreSQL. I was unable to find any reliable information on using CodeDeploy with Docker compose, and because of the small scale, ECS seems impractical. I've considered an S3 bucket, but that seems no better than just deploying my application with something like git or scp.
What is a standard way of handling deploying a docker compose setup on AWS? If possible I would like to use Bitbucket Pipelines or CircleCI to perform the deployment in a manually triggered step after running tests. But I've been unable to find a solution that would easily let me copy over the code (which is in a git repo on a production branch and is how I get the code onto the production server at the moment).
I would like to add some possibilities to #gasc's answer:
It would be better if you make a CloudFormation template for deploying your EC2 resources with all the required security groups, Auto Scaling, and other stuff.
Then create an AMI with docker compose installed, plus anything else required for your EC2 environment.
Then you can use a CodeDeploy pipeline. AWS also provides a private container registry (ECR); maybe you want to use that.
The rest of the steps are the same: just SCP the compose file onto the EC2 instance at launch, run the
docker-compose up
command, and you are done.
Let me know if you want more help; I'm open to discussion.
What I would do in your case is:
1 - If needed, update your docker-compose.yml file (or whatever you called it) to version 3 or higher, to use swarm.
2 - During your pipeline, build all the images needed and push them to a registry.
3 - In your pipeline, scp your compose file to a manager node.
4 - Deploy your application using swarm (docker stack deploy -c <your-docker-compose-file> your_app_name). This way you can handle rolling updates and scale easily.
Note that if you want to use multiple nodes, you need to open a few ports between them (2377/tcp, 7946/tcp+udp, and 4789/udp for swarm traffic).
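A sketch of steps 2-4 as pipeline commands (the registry URL, manager host, and app name are placeholders):

```bash
# 2 - build and push the images your compose file references
docker build -t registry.example.com/myapp/web:latest .
docker push registry.example.com/myapp/web:latest

# 3 - copy the compose file to a swarm manager node
scp docker-compose.yml deploy@manager.example.com:~/

# 4 - deploy the stack; swarm handles the rolling updates from here
ssh deploy@manager.example.com "docker stack deploy -c docker-compose.yml your_app_name"
```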
I see you mentioned that ECS might seem impractical for such a small scale; in my opinion, not necessarily. It would require you to rewrite your docker-compose.yml into task and service definitions, but since there aren't a lot of services, that shouldn't take you much time.

What commands/config can I use to deploy a django+postgres+nginx using ecs-cli?

I have a basic django/postgres app running locally, based on the Docker Django docs. It uses docker compose to run the containers locally.
I'd like to run this app on Amazon Web Services (AWS), and to deploy it using the command line, not the AWS console.
My Attempt
When I tried this, I ended up with:
this yml config for ecs-cli
these notes on how I deployed from the command line.
Note: I was trying to fire up the Python dev server in a janky way, hoping that would work before I added nginx. The cluster (RDS+server) would come up, but then the instances would die right away.
Issues I Then Failed to Solve
I realized over the course of this:
the setup needs another container for a web server (nginx) to run on AWS (like this blog post, but the tutorial uses the AWS Console, which I wanted to avoid)
ecs-cli uses a different syntax for its yml/json config than docker-compose, so you need separate (though similar) config from your local docker.yml (and I'm not sure if my file above was correct)
Question
So, what ecs-cli commands and config do I use to deploy the app, or am I going about things all wrong?
Feel free to say I'm doing it all wrong. I could also use Elastic Beanstalk - the tutorials on this don't seem to use docker/docker-compose, but it seems easier overall (and at least well documented).
I'd like to understand why any given approach is a good way to do this.
One alternative you may wish to consider in lieu of ECS, if you just want to get the app up in the Amazon cloud, is to use docker-machine with the amazonec2 driver.
When executing docker-compose, just ensure the remote Amazon host machine is ACTIVE, which you can check with docker-machine ls.
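That flow, as a minimal sketch (the machine name and instance type here are my own placeholders):

```bash
docker-machine create --driver amazonec2 --amazonec2-instance-type t2.medium aws-box
eval $(docker-machine env aws-box)   # point the local docker client at the EC2 host
docker-compose up -d                 # compose now runs against the remote machine
docker-machine ls                    # the machine should show as ACTIVE (*)
```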
One item you will have to revisit in the AWS Management Console is opening the applicable ports, such as port 80 and any other ports exposed in the compose file. Once the security group is in place for the VPC, you should be able to simply refer to the VPC ID on subsequent executions, bypassing any need to use the console to add the ports. You may wish to bump up the instance size from the default t2.micro to match the t2.medium specified in your notes.
If ECS orchestration is needed, then a task definition will need to be created containing the container definitions you require, as defined in your docker compose file. My recommendation would be to take advantage of the Management Console to construct the definition, then grab the accompanying JSON definition it makes available and store it in your source code repository for future executions on the command line, where it can be referenced when registering task definitions and running tasks and services within a given cluster.
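Once that JSON is in your repository, the command-line side is short; a sketch with placeholder names:

```bash
aws ecs register-task-definition --cli-input-json file://taskdef.json
aws ecs run-task --cluster my-cluster --task-definition my-task-family --count 1
```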

Deploying on AWS ECS via task definitions without Docker Hub

Currently I'm using task definitions that refer to custom images on Docker Hub to deploy my webapp on ECS (Amazon EC2 Container Service). Is there a way to do this without going through Docker Hub, i.e. build/deploy the Dockerfile locally across cluster nodes?
At the moment, I can only think of sending shell commands over ssh or using a tool like ansible.
Perhaps I'm missing something totally obvious here...
This is a little late for an answered question, but I just figured this out myself. The EC2 Container Registry (ECR, Amazon's repository equivalent) is working well for me; maybe it didn't exist at the time?
I build the containers locally, tag them, and push them to Amazon's ECR using the AWS CLI (later versions of which include support for ECR), and then refer to them at that location in the ECS task definitions. Works like a charm.
http://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html
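That workflow, sketched with a current AWS CLI (the account ID, region, and repo name are placeholders):

```bash
aws ecr create-repository --repository-name webapp
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker build -t webapp .
docker tag webapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/webapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/webapp:latest
# then use that full ECR URI as the "image" in your ECS task definition
```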
ECS is a service to run containers, not to build them. It has no native support for it, so you're not missing something obvious.
As you suggest, you could distribute a Dockerfile to the container instances and build locally, but that will actually be more difficult since the container instances must have everything needed to build the image, plus you'd have to distribute the image to the other container instances.
You could run a repository yourself and specify a different repository-url for the image parameter in your ECS task definition. You'd still be responsible for building the images, and you'd now have the added burden of running a repository as well.
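If you did go that route, the moving parts are small; a sketch using the open-source registry image (the hostname is a placeholder, and securing the registry is left as an exercise):

```bash
# run a private registry somewhere reachable by your container instances
docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker tag webapp:latest myregistry.internal:5000/webapp:latest
docker push myregistry.internal:5000/webapp:latest
# task definition: "image": "myregistry.internal:5000/webapp:latest"
```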
Sorry to be the bearer of bad news but there's not a simpler workflow for this at the moment.