How to pull a Docker image from DockerHub to Elastic Beanstalk? - amazon-web-services

I wanted to set up CI/CD for a project on GitHub using GitHub Actions. I used this tutorial:
https://www.blog.labouardy.com/elastic-beanstalk-docker-tips/
But I still do not understand how Elastic Beanstalk will pull the Docker image from Docker Hub.
How should this happen?
And why do we need a Dockerrun.aws.json file, and how do we use it?

There are different approaches that can be followed. The blogger chose the Dockerrun.aws.json + Dockerfile + zip file approach. In other words, every time CircleCI builds, it uploads a zip file containing the Dockerrun.aws.json. (The Dockerfile is not really needed in this case, since he's building the image remotely, and neither is the rest of the application code, since he's not mapping anything into the container.)
The CircleCI pipeline executes the following steps:
build image
push image
send zip file to AWS Elastic Beanstalk
AWS Elastic Beanstalk will simply follow the configuration inside the Dockerrun.aws.json and update using the tag ${CIRCLE_SHA1}.
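For reference, a minimal single-container Dockerrun.aws.json (version 1) looks roughly like this; the image name and port are placeholders, and in the blog's setup the image tag would be the ${CIRCLE_SHA1} commit hash:

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "mydockerhubuser/myapp:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": "8080" }
  ]
}

Elastic Beanstalk reads this file from the uploaded bundle and pulls the referenced image from Docker Hub (or whatever registry is named) onto each instance - that is how the pull from Docker Hub happens.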
Is the Dockerrun.aws.json necessary? No, you can also use a docker-compose.yml file.
I suggest you check the AWS documentation on this topic.
EDIT: IMHO it's better to use docker-compose.yml, since it lets you start the containers locally and make sure they're OK before updating the application remotely.
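For local testing, a minimal docker-compose.yml could look roughly like this (service name, image, and ports are placeholders):

version: "3"
services:
  web:
    image: mydockerhubuser/myapp:latest
    ports:
      - "80:8080"

You can run docker-compose up against this file locally before handing the same file to Elastic Beanstalk.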

As another member said, you can follow the AWS documentation, which has complete steps.

Related

Deploy docker-compose project into ECS

I am attempting to create a pipeline to build and deploy docker-compose applications. I have already completed the CodeBuild portion to push the images to ECR. I'm just not sure what's next, or whether I need to use CodeDeploy. The code I'm attempting to push is a cookiecutter of https://github.com/pydanny/cookiecutter-django
I might be missing something simple, but I haven't seen much documentation on deploying docker-compose applications. This is a question I have been attempting to solve for the last few hours, and I would appreciate any help.
ECS does not natively support docker-compose. Instead it uses its own task-definition format, so you have to translate your docker-compose.yaml into a task definition.
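As a rough sketch (account ID, region, repository, memory, and ports are placeholders), a single compose service ends up as a container definition inside a task definition like this:

{
  "family": "web",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",
      "memory": 512,
      "essential": true,
      "portMappings": [
        { "containerPort": 8000, "hostPort": 8000 }
      ]
    }
  ]
}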
Having said that, there is an ongoing effort from the AWS and Docker communities to make docker-compose work. One way of doing it is described in a recent AWS blog post:
Deploy applications on Amazon ECS using Docker Compose
The approach used in the blog post relies on Docker Desktop.
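With that integration, the workflow is roughly the following (the context name is a placeholder, and this assumes a Docker Desktop / Compose CLI version that ships the ECS integration):

docker context create ecs myecscontext
docker context use myecscontext
docker compose up

Behind the scenes, the compose file is translated into CloudFormation and ECS resources for you.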

Continuous Deployment of Docker Compose App to AWS/EC2

I've been trying to find an efficient way to handle continuous deployment with a Docker compose setup and AWS hosting.
So far I've looked into CodeDeploy, S3 buckets, and ECS. My application is relatively small, with only 3 Docker services: a Django app, NGINX, and PostgreSQL. I was unable to find any reliable information on using CodeDeploy with Docker Compose, and because of the small scale, ECS seems impractical. I've considered an S3 bucket, but that seems no better than just deploying my application with something like git or scp.
What is a standard way of handling the deployment of a Docker Compose setup on AWS? If possible I would like to use Bitbucket Pipelines or CircleCI to perform the deployment in a manually triggered step after running tests. But I've been unable to find a solution that would easily let me copy over the code (which is in a git repo on a production branch, and is how I get the code onto the production server at the moment).
I would like to add some possibilities to @gasc's answer.
It would be better if you make a CloudFormation template for deploying your EC2 resources with all the required security groups, auto scaling, and other things.
Then create an AMI with Docker Compose installed, plus anything else you need for your EC2 environment.
Then you can use a CodeDeploy pipeline. AWS also provides a private container registry (ECR); maybe you want to use that.
The rest of the steps are the same: just SCP the compose file onto the EC2 instance, run the docker-compose up command, and you are done.
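For example, the last deployment step could look roughly like this (key, host, and paths are placeholders):

scp -i deploy-key.pem docker-compose.yml ec2-user@<ec2-host>:/home/ec2-user/app/
ssh -i deploy-key.pem ec2-user@<ec2-host> "cd /home/ec2-user/app && docker-compose pull && docker-compose up -d"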
Let me know if you want more help; I'm open for discussion.
What I would do in your case is:
1 - If needed, update your docker-compose.yml file (or whatever you called it) to version 3 or higher, to use swarm.
2 - During your pipeline, build all needed images and push them to a registry.
3 - In your pipeline, scp your compose file to a manager node.
4 - Deploy your application using swarm (docker stack deploy -c <your-docker-compose-file> your_app_name). This way you can handle rolling updates and scale easily.
Note that if you want to use multiple nodes, you need to open a few ports between them.
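For Docker Swarm those are typically 2377/tcp (cluster management), 7946/tcp and 7946/udp (node communication), and 4789/udp (overlay network traffic). As a sketch, with a placeholder security group ID:

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 2377 --source-group sg-0123456789abcdef0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 7946 --source-group sg-0123456789abcdef0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 7946 --source-group sg-0123456789abcdef0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 4789 --source-group sg-0123456789abcdef0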
I see you mentioned that ECS might seem impractical for such a small scale - in my opinion, not necessarily. It would require you to rewrite your docker-compose.yml into task and service definitions, but since there aren't many services, that shouldn't take you much time.

Dockerfile vs Dockerrun.aws.json on AWS Elastic Beanstalk

Can anyone explain use cases for using either or both of these in an EB project? I understand from the docs that using one makes the other optional, but I'm not exactly clear on pros vs cons. Any good scenarios for using both? For my EB app, I'm building a Docker Image manually through a Dockerfile (not a repo) and launching a node.js server.

Deploy to elasticbeanstalk via CLI deploy command with Dockerrun.aws.json

I am running an elasticbeanstalk application, with multiple environments. This particular application is hosting docker containers which host a webservice.
To upload and deploy a new version of the application to one of the environments, I can go through the web client and click on "Upload and Deploy" and from the file option I select my latest Dockerrun.aws.json file, which references the latest version of the container that is privately hosted. The upload and deploy works fine and without issue.
To make it simpler for myself and others to deploy, I'd like to be able to use the CLI to upload and deploy the Dockerrun.aws.json file. If I use the CLI eb deploy command without any special configuration, the normal process of zipping up the whole application and sending it to the host occurs, and it fails (it cannot work out that it only needs to read the Dockerrun.aws.json file).
I found a documentation tidbit about controlling what is uploaded using the .elasticbeanstalk/config.yml file.
Using this syntax:
deploy:
  artifact: Dockerrun.aws.json
The file is uploaded and actually deploys successfully to the first batch of instances, and then always fails to deploy to the second set of instances.
The failure error is of the flavor: 'container exited unexpectedly...'
Can anyone explain, or provide link to the canonical approach for using the CLI to deploy single docker container applications?
So it turns out that the method I listed above with the config.yml was correct. The reason I was seeing a partially successful deployment was that the previously running Docker container on the hosts was not being stopped by EB.
I think that what was happening was that EB sends something like
sudo docker kill --signal=SIGTERM $CONTAINER_ID
instead of the more common
sudo docker stop $CONTAINER_ID
The specific container I was running didn't respond to SIGTERM and so it would just sit there. When I tested it locally with SIGKILL it would (obviously) stop properly, but SIGTERM alone wouldn't stop it.
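If you run into the same behaviour, one common culprit is starting the process with the shell form of CMD/ENTRYPOINT, so PID 1 is a shell that doesn't forward signals to the application. A hedged Dockerfile sketch (the command is a placeholder):

# Shell form: PID 1 is /bin/sh and SIGTERM never reaches the app
# CMD node server.js

# Exec form: the app itself is PID 1 and receives SIGTERM directly
CMD ["node", "server.js"]

# Or tell Docker to send a signal the application actually handles
# STOPSIGNAL SIGINT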
The issue wasn't the deployment methodology but rather confusion in the output that EB generated and my misinterpretation.
Since you have asked for a link, I am providing the one I initially used to successfully test and deploy Docker using the Elastic Beanstalk CLI.
Kindly see if this helps you as well: https://fangpenlin.com/posts/2014/11/25/running-docker-with-aws-elastic-beanstalk/

How to configure Amazon container service without docker hub integration

I am trying to set up a new Spring Boot + Docker (microservices) based project. The deployment is targeted at AWS. Every service has a Dockerfile associated with it. I am thinking of using the Amazon container service for deployment, but as far as I can see it only pulls images from Docker Hub. I don't want ECS to pull from Docker Hub; rather, I want it to build the images from the Dockerfiles and then take over deploying those containers. Is this possible to do? If yes, how?
This is not possible yet with the Amazon EC2 Container Service (ECS) alone - while ECS meanwhile supports private registries (see also the introductory blog post), it doesn't yet offer an image build service (as usual, AWS is expected to add such notable additional features over time, see e.g. the Feature Request: ECS container dream service for more on this).
However, it can already be achieved with AWS Elastic Beanstalk's built in initial support for Single Container Docker Configurations:
Docker uses a Dockerfile to create a Docker image that contains your source bundle. [...] Dockerfile is a plain text file that contains instructions that Elastic Beanstalk uses to build a customized Docker image on each Amazon EC2 instance in your Elastic Beanstalk environment. Create a Dockerfile when you do not already have an existing image hosted in a repository. [emphasis mine]
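As a sketch of that single-container option for a Spring Boot service (base image, jar path, and port are placeholders), the Dockerfile that Elastic Beanstalk builds on each instance could look like:

FROM openjdk:8-jre
COPY target/service.jar /app/service.jar
# Elastic Beanstalk forwards traffic to the EXPOSE'd port
EXPOSE 8080
CMD ["java", "-jar", "/app/service.jar"]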
In an ironic twist, Elastic Beanstalk has now added Multicontainer Docker Environments based on ECS, but this highly desired, more versatile Docker deployment option doesn't offer the ability to build images either:
Building custom images during deployment with a Dockerfile is not supported by the multicontainer Docker platform on Elastic Beanstalk. Build your images and deploy them to an online repository before creating an Elastic Beanstalk environment. [emphasis mine]
As mentioned above, I would expect this to be added to ECS in a not too distant future due to AWS' well known agility (see e.g. the most recent ECS updates), but they usually don't commit to roadmap details, so it is hard to estimate how long we need to wait on this one.
Meanwhile, Amazon has introduced the EC2 Container Registry: https://aws.amazon.com/ecr/
It is a private Docker repository, for those who do not like Docker Hub, and it is nicely integrated with the ECS service.
However, it does not build your Docker images, so it does not solve the entire problem.
I use a Bamboo server for building images (the source is in Git repositories in Bitbucket). Bamboo pushes the images to Amazon's container registry.
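The push itself is just standard Docker commands against the ECR endpoint, roughly as follows (account ID, region, and repository name are placeholders; older AWS CLI versions used aws ecr get-login instead):

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker build -t myservice .
docker tag myservice:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myservice:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myservice:latest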
I am hoping Bitbucket Pipelines will make the process smoother, with less configuration of build servers. From the videos I have seen, all your build configuration sits right in your repository. It is still in a closed beta, so I guess we will have to wait a bit longer to see what it ends up being.