Deploy docker-compose project into ECS

I am attempting to create a pipeline to build and deploy docker-compose applications. I have already completed the CodeBuild portion to push the images to ECR; I'm just not sure what's next. Do I need to use CodeDeploy? The code I'm attempting to push is generated from the cookiecutter at https://github.com/pydanny/cookiecutter-django
I might be missing something simple, but I haven't seen much documentation on deploying docker-compose applications. This is a question I have been attempting to solve for the last few hours, and I would appreciate any help.

ECS does not natively support docker-compose. Instead, it uses its own task-definition format, so you have to translate your docker-compose.yaml into a task definition.
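For illustration, a single web service from a compose file might translate into a task definition roughly like the one below. This is a minimal sketch: the family name, container name, image URI, memory, and port values are all hypothetical placeholders, not taken from the question.

    {
      "family": "web-app",
      "containerDefinitions": [
        {
          "name": "django",
          "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/django:latest",
          "memory": 512,
          "essential": true,
          "portMappings": [
            { "containerPort": 8000, "hostPort": 8000 }
          ]
        }
      ]
    }

You can register it with aws ecs register-task-definition --cli-input-json file://taskdef.json.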
That said, there is an ongoing effort from the AWS and Docker communities to make docker-compose work. One approach is described in a recent AWS blog post:
Deploy applications on Amazon ECS using Docker Compose
The approach used in the blog post involves Docker Desktop.
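With that integration in place, deploying looks roughly like this (a minimal sketch; myecscontext is an arbitrary context name):

    # Sketch: Docker Desktop's ECS integration adds an "ecs" context type.
    docker context create ecs myecscontext
    docker context use myecscontext
    docker compose up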

Related

AWS Fargate: run docker inside docker

I need to run Docker inside Docker because I am using Fargate as a build pipeline for AWS SAM builds.
Similar questions have been asked here before, such as
Run docker inside of docker on AWS Fargate
As stated there, accessing the Docker daemon is not currently supported, as it violates the principle of isolation and has security implications. Please read here for more information about this. Additionally, it should be noted that the issue of building containers on Fargate was already raised and closed.
At present, a workaround is possible using Kaniko. An example can be found here.
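To give a flavor of the workaround, here is a minimal sketch of invoking the Kaniko executor image to build and push an image without a Docker daemon. The ECR repository URI and workspace path are hypothetical, and on Fargate you would run the executor as a task rather than via docker run:

    # Sketch only: repository URI and paths are hypothetical placeholders.
    docker run --rm -v "$PWD":/workspace \
      gcr.io/kaniko-project/executor:latest \
      --context dir:///workspace \
      --dockerfile /workspace/Dockerfile \
      --destination 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest

Kaniko ships with an ECR credential helper, so on AWS it can usually authenticate to ECR via the task's IAM role.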
An alternative for a build pipeline could be AWS CodeBuild.

AWS CD with CodeDeploy for Docker Images

I have a scenario and am looking for feedback and best approaches. We create and build our Docker images using Azure DevOps (VSTS) and push those images to our AWS repository. I can deploy those images just fine manually, but I would like to automate the process in a continuous deployment model. Is there an approach to use CodePipeline with a build step to just create and zip the imagedefinitions.json file before it goes to the deploy step?
Or is there a better alternative that I am overlooking?
Thanks!
You can definitely use a build step (e.g. CodeBuild) to automate generating your imagedefinitions.json file; there's an example here.
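As a minimal sketch, the relevant buildspec fragment could look like this; the container name web and the $REPOSITORY_URI variable are assumptions for illustration:

    # buildspec.yml (sketch): emit imagedefinitions.json as a build artifact.
    version: 0.2
    phases:
      post_build:
        commands:
          # "web" must match the container name in the ECS task definition.
          - printf '[{"name":"web","imageUri":"%s"}]' "$REPOSITORY_URI:latest" > imagedefinitions.json
    artifacts:
      files:
        - imagedefinitions.json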
You might also want to look at the recently announced CodeDeploy ECS deployment option. It works a little differently to the ECS deployment action but allows blue/green deployments via CodeDeploy. There's more information in the announcement and blog post.

Continuous Deployment of Docker Compose App to AWS/EC2

I've been trying to find an efficient way to handle continuous deployment with a Docker compose setup and AWS hosting.
So far I've looked into CodeDeploy, S3 buckets, and ECS. My application is relatively small with only 3 Docker services: a Django app, NGINX, and PostgreSQL. I was unable to find any reliable information on using CodeDeploy with Docker Compose, and because of the small scale, ECS seems impractical. I've considered an S3 bucket, but that seems no better than just deploying my application with something like git or scp.
What is a standard way of handling deploying a docker compose setup on AWS? If possible I would like to use Bitbucket Pipelines or CircleCI to perform the deployment in a manually triggered step after running tests. But I've been unable to find a solution that would easily let me copy over the code (which is in a git repo on a production branch and is how I get the code onto the production server at the moment).
I would like to add some possibilities to @gasc's answer.
It would be better if you made a CloudFormation template for deploying your EC2 resources with all the required groups, auto scaling, and other infrastructure.
Then create an AMI with Docker Compose installed, along with anything else your EC2 environment requires.
Then you can use a CodeDeploy pipeline. AWS also provides a private container registry (ECR); maybe you want to use that.
The rest of the steps are the same: just SCP the compose file onto the EC2 instance, launch the docker-compose up command, and you are done.
Let me know if you want more help; I'm open to discussion.
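For concreteness, that last step from a pipeline might look like this; the ec2-user login, $EC2_HOST variable, and /opt/app path are hypothetical placeholders:

    # Sketch only: user, host variable, and remote path are hypothetical.
    scp docker-compose.yml ec2-user@$EC2_HOST:/opt/app/docker-compose.yml
    # -d runs the services detached so the SSH session can exit.
    ssh ec2-user@$EC2_HOST 'cd /opt/app && docker-compose up -d'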
What I would do in your case is:
1 - If needed, update your docker-compose.yml file (or whatever you called it) to version 3 or higher, in order to use swarm mode.
2 - During your pipeline, build all needed images and push them to a registry.
3 - In your pipeline scp your compose file to a manager node.
4 - Deploy your application using swarm (docker stack deploy -c <your-docker-compose-file> your_app_name). This way you can handle rolling updates and scale easily; see the sketch after these steps.
Note that if you want to use multiple nodes, you need to open a few ports on them (Swarm uses 2377/tcp for cluster management, 7946/tcp and /udp for node communication, and 4789/udp for overlay network traffic).
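Here is a sketch of steps 3 and 4 from a CI job, assuming SSH access to a swarm manager; the deploy user, $MANAGER_HOST variable, remote path, and stack name my_app are all hypothetical:

    # Sketch only: user, host variable, remote path, and stack name are hypothetical.
    scp docker-compose.yml deploy@$MANAGER_HOST:app/docker-compose.yml
    ssh deploy@$MANAGER_HOST \
      'docker stack deploy --with-registry-auth -c app/docker-compose.yml my_app'

The --with-registry-auth flag forwards your registry credentials to the swarm nodes so they can pull the images you pushed in step 2.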
I see you mentioned that ECS might seem impractical at such a small scale; in my opinion, not necessarily. It would require you to rewrite your docker-compose.yml into task and service definitions, but since there aren't a lot of services, that shouldn't take you much time.

What is easiest way to deploy using docker-compose.yml on GCP in 2018?

Times change, and they change fast in the cloud environment.
I'm developing my project with docker-compose and want to deploy it on GCP using my docker-compose.yml. What is the easiest way to deploy it with docker-compose, given that there seem to be many options on GCP at the moment?
Try Kompose to deploy your docker-compose.yml on Kubernetes (on GCP here, or OpenShift).
kompose can convert a docker-compose file into Deployment, Service, and other Kubernetes manifest files.
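For example, a minimal sketch (assumes kubectl is already pointed at your cluster; the k8s/ output directory is an arbitrary choice):

    # Convert the compose file into Kubernetes manifests, then apply them.
    kompose convert -f docker-compose.yml -o k8s/
    kubectl apply -f k8s/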
Kompose is a great option to migrate to K8s. You may also want to use Google Container Builder if you don't want to run an entire cluster. Here is an article that addresses this solution.

deploying on AWS ECS via task definitions without dockerhub

Currently I'm using task definitions that refer to custom images on Docker Hub to deploy my webapp on ECS (Amazon EC2 Container Service). Is there a way to do this without going through Docker Hub, i.e. build/deploy the Dockerfile locally across cluster nodes?
At the moment, I can only think of sending shell commands over ssh or using a tool like ansible.
Perhaps I'm missing something totally obvious here...
This is a little late for an already-answered question, but I just figured this out myself. The EC2 Container Registry (ECR, Amazon's repository equivalent) is working well for me; maybe it didn't exist at the time?
I build the containers locally, tag them, and push them to Amazon's ECR using the AWS CLI (later versions of which include support for ECR), then refer to them at that location in the ECS task definitions. Works like a charm.
http://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html
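With a current AWS CLI (v2), that workflow looks roughly like this; the account ID, region, and repository name are hypothetical placeholders:

    # Sketch only: account ID, region, and repository name are placeholders.
    aws ecr get-login-password --region us-east-1 \
      | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
    docker build -t my-webapp .
    docker tag my-webapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-webapp:latest
    docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-webapp:latest

The task definition then references the full ECR image URI instead of a Docker Hub name.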
ECS is a service to run containers, not to build them. It has no native support for it, so you're not missing something obvious.
As you suggest, you could distribute a Dockerfile to the container instances and build locally, but that will actually be more difficult, since each container instance must have everything needed to build the image, and you'd then have to distribute the built image to the other container instances anyway.
You could run a repository yourself and specify a different repository-url for the image parameter in your ECS task definition. You'd still be responsible for building the images, plus you'd take on the added burden of running a repository as well.
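A minimal sketch of that self-hosted option, using the official registry image; the host name and port are illustrative, and in practice the registry must be reachable from every container instance (and either serve TLS or be listed under each daemon's insecure-registries setting):

    # Sketch only: host name and port are illustrative placeholders.
    docker run -d -p 5000:5000 --name registry registry:2
    docker tag my-webapp registry.internal.example:5000/my-webapp
    docker push registry.internal.example:5000/my-webapp

Your task definition's image parameter would then point at registry.internal.example:5000/my-webapp.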
Sorry to be the bearer of bad news but there's not a simpler workflow for this at the moment.