I need to run Docker inside Docker because I am using Fargate as the build pipeline for an AWS SAM build.
Similar questions have been asked here before, such as
Run docker inside of docker on AWS Fargate
As stated there, accessing the Docker daemon is not currently supported, as it violates the principle of isolation and has security implications. Please read here for more information about this. Additionally, it should be noted that the issue of building containers on Fargate was already raised and closed.
At present, a workaround is possible using Kaniko. An example can be found here.
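For context: Kaniko builds images entirely in userspace, so it can run as an ordinary Fargate container rather than needing a Docker daemon. A minimal sketch of such a container definition (the bucket, account ID, and repository below are placeholders, not values from the linked example):

    {
      "name": "kaniko-builder",
      "image": "gcr.io/kaniko-project/executor:latest",
      "command": [
        "--context", "s3://my-build-bucket/context.tar.gz",
        "--dockerfile", "Dockerfile",
        "--destination", "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest"
      ]
    }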
An alternative for a build pipeline could be AWS CodeBuild.
Related
I need to deploy an application in AWS using ECS Fargate. This application has multiple services and a docker-compose file. I see there are two main ways to do this:
Using Docker's ECS context integration in the Docker CLI; the official docs I found: Docker doc and AWS doc.
Using Amazon's ECS CLI, as described here.
I am trying to understand the following, but I haven't found any comparison on the web:
Which are the advantages/disadvantages of each way?
Can the same result be achieved with both options, or is there something one can do that the other can't?
What should I take into consideration when choosing one?
Thanks,
So I've been trying both approaches this past week, and here's what I have found.
ecs-cli and docker support different sets of configuration keys for nontrivial features, even basics such as how much CPU and memory your container needs.
For example, docker wants the deploy config in a deploy key under the service. See https://docs.docker.com/compose/compose-file/deploy/#memory
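In compose terms, that looks roughly like this (the service name and limits are illustrative):

    services:
      web:
        image: my-app:latest    # placeholder image
        deploy:
          resources:
            limits:
              cpus: "0.50"
              memory: 512M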
ECS CLI, however, wants these settings in an ecs-params.yml file. There are almost no examples out there other than trivial ones, though, and the published ones don't actually work with the current tooling.
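For reference, the general shape the tooling expects is roughly this (the subnet and security group IDs are placeholders):

    version: 1
    task_definition:
      task_execution_role: ecsTaskExecutionRole
      ecs_network_mode: awsvpc
      task_size:
        mem_limit: 0.5GB
        cpu_limit: 256
    run_params:
      network_configuration:
        awsvpc_configuration:
          subnets:
            - subnet-0123456789abcdef0   # placeholder
          security_groups:
            - sg-0123456789abcdef0       # placeholder
          assign_public_ip: ENABLED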
The Docker ECS integration works and handles tons of details for you: VPC, subnet creation, load balancers, security groups, everything. This part is amazing.
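For reference, the whole Docker flow is essentially just this (the context name is a placeholder):

    docker context create ecs myecscontext   # prompts for AWS credentials
    docker context use myecscontext
    docker compose up                        # provisions VPC wiring, LB, security groups

and the integration provisions the surrounding AWS resources from your compose file.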
ecs-cli offers many more options than the Docker CLI, but you need to do a lot more work yourself: manual security group configuration, etc.
I was never able to get ecs-cli to really work. It kept choking on the CPU config, and what was written in the AWS docs did not actually work.
docker compose logs doesn't work
Overall, neither CLI seems to be in production shape, but the docker one seems far ahead of where ecs-cli is, IMO.
I am attempting to create a pipeline to build and deploy docker-compose applications. I have already completed the CodeBuild portion to push the images to ECR; I'm just not sure what's next, or whether I need to use CodeDeploy. The code I'm attempting to deploy is a cookiecutter of https://github.com/pydanny/cookiecutter-django
I might be missing something simple, but I haven't seen much documentation on deploying docker-compose applications. I have been trying to solve this for the last few hours and would appreciate any help.
ECS does not natively support docker-compose. Instead, it uses its own task definition format, so you have to translate your docker-compose.yaml into a task definition.
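For example, a single compose service maps roughly to a container definition like this (all names and values here are illustrative):

    {
      "family": "my-app",
      "networkMode": "awsvpc",
      "containerDefinitions": [
        {
          "name": "web",
          "image": "nginx:latest",
          "memory": 512,
          "essential": true,
          "portMappings": [
            { "containerPort": 80 }
          ]
        }
      ]
    }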
Having said that, there is an ongoing effort from the AWS and Docker communities to make docker-compose work. One possible way is described in a recent AWS blog post:
Deploy applications on Amazon ECS using Docker Compose
The approach used in the blog post involves Docker Desktop.
I've been trying to find an efficient way to handle continuous deployment with a Docker compose setup and AWS hosting.
So far I've looked into CodeDeploy, S3 buckets, and ECS. My application is relatively small, with only 3 Docker services: a Django app, NGINX, and PostgreSQL. I was unable to find any reliable information on using CodeDeploy with Docker Compose, and because of the small scale, ECS seems impractical. I've considered an S3 bucket, but that seems no better than just deploying my application with something like git or scp.
What is a standard way of handling deployment of a Docker Compose setup on AWS? If possible, I would like to use Bitbucket Pipelines or CircleCI to perform the deployment in a manually triggered step after running tests. But I've been unable to find a solution that would easily let me copy over the code (which lives in a git repo on a production branch; that is how I currently get the code onto the production server).
I would like to add some possibilities to #gasc's answer.
It would be better to make a CloudFormation template for deploying your EC2 resources, with all the required security groups, auto scaling, and other pieces.
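A minimal sketch of such a template (the AMI ID, key name, and instance type are placeholders; auto scaling is omitted for brevity):

    AWSTemplateFormatVersion: "2010-09-09"
    Resources:
      AppSecurityGroup:
        Type: AWS::EC2::SecurityGroup
        Properties:
          GroupDescription: Allow SSH and HTTP
          SecurityGroupIngress:
            - IpProtocol: tcp
              FromPort: 22
              ToPort: 22
              CidrIp: 0.0.0.0/0
            - IpProtocol: tcp
              FromPort: 80
              ToPort: 80
              CidrIp: 0.0.0.0/0
      AppInstance:
        Type: AWS::EC2::Instance
        Properties:
          ImageId: ami-0123456789abcdef0   # your AMI with Docker Compose baked in
          InstanceType: t3.small
          KeyName: my-deploy-key
          SecurityGroupIds:
            - !GetAtt AppSecurityGroup.GroupId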
Then create an AMI with Docker Compose installed, plus anything else required for your EC2 environment.
Then you can use a CodeDeploy pipeline. AWS also provides a private container registry (ECR), which you may want to use.
The rest of the steps are the same: scp the compose file onto the EC2 instance, run the docker-compose up command, and you are done.
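A minimal sketch of that deploy step (the key, host, and paths are placeholders):

    scp -i deploy-key.pem docker-compose.yml ec2-user@my-ec2-host:/home/ec2-user/app/
    ssh -i deploy-key.pem ec2-user@my-ec2-host \
      "cd /home/ec2-user/app && docker-compose pull && docker-compose up -d"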
Let me know if you want more help; I'm open to discussion.
What I will do in your case is:
1 - If needed, update your docker-compose.yml file (or whatever you called it) to version 3 or higher, to use swarm.
2 - During your pipeline, build all needed images and push them to a registry.
3 - In your pipeline, scp your compose file to a manager node.
4 - Deploy your application using swarm (docker stack deploy -c <your-docker-compose-file> your_app_name). This way you can handle rolling updates and scale easily.
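Put together, steps 2-4 look roughly like this (the registry, user, host, and stack name are placeholders):

    docker build -t registry.example.com/my-app:latest .
    docker push registry.example.com/my-app:latest
    scp docker-compose.yml deploy@manager-node:~/
    ssh deploy@manager-node "docker stack deploy -c docker-compose.yml my_app"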
Note that if you want to use multiple nodes, you need to open a few ports on them (for swarm mode: TCP 2377 for cluster management, TCP and UDP 7946 for node-to-node communication, and UDP 4789 for overlay network traffic).
I see you mentioned that ECS might seem impractical for such a small scale; in my opinion, not necessarily. It would require you to rewrite your docker-compose.yml into task and service definitions, but since there aren't many services, that shouldn't take much time.
Currently I'm using task definitions that refer to custom images on Docker Hub to deploy my webapp on ECS (Amazon EC2 Container Service). Is there a way to do this without going through Docker Hub, i.e. build/deploy the Dockerfile locally across cluster nodes?
At the moment, I can only think of sending shell commands over ssh or using a tool like ansible.
Perhaps I'm missing something totally obvious here...
This is a little late for an answered question, but I just figured this out myself. The EC2 Container Registry (ECR, Amazon's registry equivalent) is working well for me; maybe it didn't exist at the time?
I build the images locally, tag them, and push them to Amazon's ECR using the AWS CLI (later versions of which include support for ECR), then refer to them at that location in the ECS task definitions. Works like a charm.
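A sketch of that workflow with the current AWS CLI (the account ID, region, and repository name are placeholders):

    aws ecr create-repository --repository-name my-webapp --region us-east-1
    aws ecr get-login-password --region us-east-1 \
      | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
    docker build -t my-webapp .
    docker tag my-webapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-webapp:latest
    docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-webapp:latest

The image field in the task definition then points at that ECR URI.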
http://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html
ECS is a service to run containers, not to build them. It has no native support for it, so you're not missing something obvious.
As you suggest, you could distribute a Dockerfile to the container instances and build locally, but that will actually be more difficult since the container instances must have everything needed to build the image, plus you'd have to distribute the image to the other container instances.
You could run a registry yourself and specify a different repository URL for the image parameter in your ECS task definition. You'd still be responsible for building the images, with the added burden of running a registry as well.
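For instance, the image parameter in the container definition would then simply point at your own registry (the host and port are placeholders):

    {
      "name": "webapp",
      "image": "registry.example.com:5000/my-webapp:latest",
      "memory": 512,
      "essential": true
    }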
Sorry to be the bearer of bad news but there's not a simpler workflow for this at the moment.
I have most of my app designed and deployed. We have gone for RabbitMQ over SQS for several reasons.
I currently have my consumer running in a docker container and would like to deploy this on AWS. I am fairly familiar with Elastic Beanstalk since our web-tier is running there, but it seems all workers deployed this way have to use SQS?
The other option I am aware of is to use ECS for the docker component, but I do not want to make the image publicly available and don't have access to a private repository.
Is there some basic functionality I am missing, a document describing how to deploy to ECS using a Dockerfile and source code locally, or a way of deploying to EB using a Dockerfile without being locked into SQS?
Edit:
So I have found the note in the EB docs which says that Dockerfiles are not supported when building a multi-container environment, and that repositories currently have to be used for that purpose, so EB is out for me.