Dockerfile vs Dockerrun.aws.json on AWS Elastic Beanstalk

Can anyone explain use cases for using either or both of these in an EB project? I understand from the docs that using one makes the other optional, but I'm not exactly clear on the pros vs cons. Any good scenarios for using both? For my EB app, I'm building a Docker image manually from a Dockerfile (not pulling it from a repository) and launching a Node.js server.
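For context, here is a minimal sketch of the file in question, assuming a hypothetical Node.js image and port (single-container Dockerrun.aws.json, version 1). A Dockerfile tells Elastic Beanstalk how to build the image on the instance, while Dockerrun.aws.json points it at an already-built image and describes how to run it:

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "mynamespace/node-app:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": "3000" }
  ]
}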

Related

How to pull a Docker image from DockerHub to Elastic Beanstalk?

I wanted to set up CI/CD for a project on GitHub using GitHub Actions. I used this tutorial:
https://www.blog.labouardy.com/elastic-beanstalk-docker-tips/
But I still do not understand how Elastic Beanstalk will pull the Docker image from Docker Hub.
How is this supposed to happen?
And why do we need a Dockerrun.aws.json file, and how do we use it?
There are different approaches you can follow. The blogger chose the Dockerrun.aws.json + Dockerfile + zip file approach. In other words, every time CircleCI builds, it uploads a zip file containing the Dockerrun.aws.json. (The Dockerfile is not really needed in this case, since the image is built remotely, and neither is the rest of the application source, since he isn't mapping any volumes.)
CircleCI executes the following steps:
build image
push image
send zip file to AWS Elastic Beanstalk
AWS Elastic Beanstalk will simply follow the configuration inside the Dockerrun.aws.json and update the environment to the image tagged ${CIRCLE_SHA1}.
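As a hedged sketch of what that deploy step might look like (the bucket, application, and environment names are hypothetical, and the <TAG> placeholder is an assumption about how the blogger's template is written):

# Substitute the commit SHA into the Dockerrun.aws.json template
sed -i "s/<TAG>/${CIRCLE_SHA1}/" Dockerrun.aws.json
zip deploy.zip Dockerrun.aws.json
aws s3 cp deploy.zip "s3://my-eb-bucket/deploy-${CIRCLE_SHA1}.zip"
# Register the bundle as a new application version, then roll the environment to it
aws elasticbeanstalk create-application-version \
  --application-name my-app \
  --version-label "${CIRCLE_SHA1}" \
  --source-bundle "S3Bucket=my-eb-bucket,S3Key=deploy-${CIRCLE_SHA1}.zip"
aws elasticbeanstalk update-environment \
  --environment-name my-app-env \
  --version-label "${CIRCLE_SHA1}"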
Is the Dockerrun.aws.json necessary? No, you can also use a docker-compose.yml file.
I suggest you check AWS documentation on this topic.
EDIT: IMHO it's better to use docker-compose.yml, since it allows you to start the containers locally and make sure they're OK before updating the application remotely.
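For example, a minimal docker-compose.yml that the Elastic Beanstalk Docker platform (Amazon Linux 2) can run directly, with a hypothetical image name:

# Run locally first with: docker-compose up
version: "3.8"
services:
  web:
    image: mynamespace/node-app:latest
    ports:
      - "80:3000"  # EB sends traffic to host port 80; the app listens on 3000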
As another member said, you can follow the AWS documentation, which has complete steps.

Continuous Deployment of Docker Compose App to AWS/EC2

I've been trying to find an efficient way to handle continuous deployment with a Docker Compose setup and AWS hosting.
So far I've looked into CodeDeploy, S3 buckets, and ECS. My application is relatively small, with only 3 Docker services: a Django app, NGINX, and PostgreSQL. I was unable to find any reliable information on using CodeDeploy with Docker Compose, and because of the small scale, ECS seems impractical. I've considered an S3 bucket, but that seems no better than just deploying my application with something like git or scp.
What is a standard way to deploy a Docker Compose setup on AWS? If possible I would like to use Bitbucket Pipelines or CircleCI to perform the deployment in a manually triggered step after running tests. But I've been unable to find a solution that would easily let me copy over the code (which lives in a git repo on a production branch; that's how I get the code onto the production server at the moment).
I would like to add some possibilities to @gasc's answer.
It would be better if you make a CloudFormation template for deploying your EC2 resources with all the required security groups, auto scaling, and other pieces.
Then create an AMI with Docker Compose installed, plus anything else you need for your EC2 environment.
Then you can use a CodeDeploy pipeline; AWS also provides a private container registry (ECR), which you may want to use.
The rest of the steps are the same: just SCP the compose file onto the EC2 instance and launch the
docker-compose up
command, and you are done.
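A rough sketch of that last step (the key, user, host, and path are all hypothetical):

# Copy the compose file to the instance, then (re)start the stack over SSH
scp -i deploy-key.pem docker-compose.yml ec2-user@ec2-host.example.com:/opt/app/
ssh -i deploy-key.pem ec2-user@ec2-host.example.com \
  "cd /opt/app && docker-compose pull && docker-compose up -d"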
Let me know if you want more help; I'm open for discussion.
What I would do in your case is:
1 - If needed, update your docker-compose.yml file (or whatever you called it) to version 3 or higher, to use swarm mode.
2 - During your pipeline build all images needed, and push them to a registry.
3 - In your pipeline scp your compose file to a manager node.
4 - Deploy your application using swarm (docker stack deploy -c <your-docker-compose-file> your_app_name), as sketched below. This way you can handle rolling updates and scale easily.
Note that if you want to use multiple nodes, you need to open a few ports between them (2377/tcp for cluster management, 7946/tcp and 7946/udp for node-to-node communication, and 4789/udp for overlay network traffic).
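Steps 2-4 might look roughly like this in a pipeline script; the registry, image name, and manager host are hypothetical:

# 2 - Build and push the image to a registry
docker build -t registry.example.com/myapp/web:${CI_COMMIT_SHA} .
docker push registry.example.com/myapp/web:${CI_COMMIT_SHA}
# 3 - Copy the compose file to a manager node
scp docker-compose.yml deploy@manager.example.com:~/
# 4 - Deploy the stack in swarm mode
ssh deploy@manager.example.com \
  "docker stack deploy -c docker-compose.yml my_app"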
I see you mentioned that ECS might seem impractical for such a small scale; in my opinion, not necessarily. It would require you to rewrite your docker-compose.yml into task and service definitions, but since there aren't a lot of services, that shouldn't take you much time.

What's the advantages of using Docker with AWS Elastic Beanstalk?

I've deployed a couple of websites on AWS Elastic Beanstalk, and then I heard of Docker, so I thought I'd try it this time for a small business e-commerce website (Lumen + AngularJS). I searched all over the Internet, but having no experience with Docker, it is still hard to get a good understanding of the advantages of using Docker on AWS. All I can find are descriptions like this:
Pros
Separated management of dependencies and server hardware
Development environment is identical (internally) to production environment
Dependency management means that not everyone needs intimate knowledge about every part of your technology stack
Easy custom task and service scheduling with AWS SDK or a third-party tool
Make good use of available resources, with ECS assigning tasks to EC2 instances with enough free resources
Use auto-scaling when tasks need more resources
Cons
Build produces a large file that needs to be uploaded
Docker NAT can increase network latency (use docker run --net=host; for more Docker performance info see here)
Some developers have fits when the word docker is mentioned
Some applications need to be fixed to work on Docker
Can someone give me some simple examples or explanations?
I think the primary advantage of Docker on Elastic Beanstalk is the flexibility it gives you when compared to running your app on one of the specific runtime environments Elastic Beanstalk supports.
From the official documentation:
Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools) that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.
For example I've seen lots of people ask how to deploy Java applications that use something other than Tomcat on Elastic Beanstalk. You couldn't do that before they added Docker support.
If you are using one of the officially supported Elastic Beanstalk runtimes, then it is harder for me to recommend Docker. It would somewhat decouple your app from AWS specifics, theoretically allowing you to run it more easily outside of AWS; so if you want to avoid vendor lock-in at all costs, or if you just want to stay up to date on the latest technologies, then Docker is a good choice. Otherwise, if you already have your app running on Elastic Beanstalk, there isn't much reason to convert it to Docker.
Edit: Note that my reply is related to using Docker specifically with Elastic Beanstalk, as your question title asks. I see in your question that you also refer to the ECS service and the more general use of Docker on AWS. That's a much bigger discussion and there are definitely some advantages to using Docker instead of plain EC2 instances for certain things.
Edit 10/5/2019: AWS seems to be pushing people towards Docker now, so that they don't have to maintain updates to the official runtimes. For example, the latest Java runtime for Elastic Beanstalk at the time of this edit is Java 8, so if you want to run a more modern version of Java on Elastic Beanstalk, you have to use Docker.
Elastic Beanstalk offers a fixed set of preconfigured platform environments. To add extra configuration and build something custom on top of these environments, you have to use .ebextensions.
However, .ebextensions are executed when the Elastic Beanstalk instance is created, and they are not as easy to write and maintain as a Docker image.
By using Docker on Elastic Beanstalk, your image is set up and ready to be deployed without any extra configuration, and Docker is great when you need an immutable architecture.
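For comparison, a minimal .ebextensions config file looks something like this (the file name and its contents are hypothetical); note it runs at instance provisioning time rather than being baked into an image:

# .ebextensions/01-setup.config (hypothetical example)
packages:
  yum:
    git: []
commands:
  01_hello:
    command: echo "runs while the environment instance is being provisioned"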

Deploying on AWS ECS via task definitions without Docker Hub

Currently I'm using task definitions that refer to custom images on Docker Hub to deploy my webapp on ECS (Amazon EC2 Container Service). Is there a way to do this without going through Docker Hub, i.e. build/deploy the Dockerfile locally across cluster nodes?
At the moment, I can only think of sending shell commands over ssh or using a tool like ansible.
Perhaps I'm missing something totally obvious here...
This is a little late for an answered question, but I just figured this out myself. The EC2 Container Registry (ECR, Amazon's repository equivalent) is working well for me; maybe it didn't exist at the time?
I build the containers locally, tag them, and push them to Amazon's ECR using the AWS CLI (later versions of which include support for ECR), and then refer to them at that location in the ECS task definitions. Works like a charm.
http://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html
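That flow boils down to something like the following; the account ID, region, and repository name are placeholders. (Older AWS CLI versions used aws ecr get-login instead of get-login-password.)

# Authenticate Docker against the ECR registry
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# Build locally, tag with the ECR repository URI, and push
docker build -t my-webapp .
docker tag my-webapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-webapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-webapp:latest

The ECS task definition then references the image by that same URI.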
ECS is a service to run containers, not to build them. It has no native support for it, so you're not missing something obvious.
As you suggest, you could distribute a Dockerfile to the container instances and build locally, but that will actually be more difficult since the container instances must have everything needed to build the image, plus you'd have to distribute the image to the other container instances.
You could run a repository yourself and specify a different repository-url for the image parameter in your ECS task definition. You'd still be responsible for building the images, with the added burden of running a repository as well.
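For illustration, a self-hosted registry would show up in the task definition like this; the registry host, ports, and memory value are hypothetical:

{
  "family": "my-webapp",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "registry.example.com:5000/my-webapp:latest",
      "memory": 256,
      "portMappings": [
        { "containerPort": 8000, "hostPort": 80 }
      ]
    }
  ]
}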
Sorry to be the bearer of bad news but there's not a simpler workflow for this at the moment.

Docker consumer on AWS while using RabbitMQ

I have most of my app designed and deployed. We have gone for RabbitMQ over SQS for several reasons.
I currently have my consumer running in a docker container and would like to deploy this on AWS. I am fairly familiar with Elastic Beanstalk since our web-tier is running there, but it seems all workers deployed this way have to use SQS?
The other option I am aware of is to use ECS for the docker component, but I do not want to make the image publicly available and don't have access to a private repository.
Is there some basic functionality I am missing, a document describing how to deploy to ECS using a Dockerfile and source code locally, or a way of deploying to EB using a Dockerfile without being locked into SQS?
Edit:
So I have found the note in the EB docs which says that Dockerfiles are not supported when building a multi-container environment, and that repositories currently have to be used for that purpose, so EB is out for me.