Elastic Beanstalk - Update App based on DockerHub "Automated Build"

I'm currently experimenting with Elastic Beanstalk and Docker, and I'm wondering: is there some way to get an Elastic Beanstalk Docker app to auto-update based on a DockerHub Automated Build image?
I'm using the following setup.
1) GitHub repository with Dockerfile and associated files.
2) DockerHub Automated Build image linked to the GitHub repository.
3) Elastic Beanstalk app built using a Dockerrun.aws.json like so:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "my_repo/my_image:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ]
}
If I commit changes to my GitHub repository, I can see that they're picked up in DockerHub and a new image is built. However, I'm not sure how best to trigger an update in Elastic Beanstalk.
I can rebuild the environment, but that's a fairly expensive operation, and it takes the application offline while it's happening.
What I'd like is for it to automatically trigger a rolling update, so that my instances are upgraded one at a time and nothing goes offline.

You may want to try blue-green deployments to avoid downtime while deploying a new version of your application image.
This article might help you get started with it.
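If you'd rather keep a single environment and have pushes roll out automatically, one approach is to point a DockerHub webhook at a small service you control that simply re-deploys the current application version; with Update set to "true", each instance then re-pulls the :latest image. A rough sketch of the call that service could make (the environment and version names are placeholders, not something Elastic Beanstalk sets up for you):
# Re-deploy the existing version so instances pull the fresh :latest image.
# The option setting requests a Rolling deployment, one batch at a time.
aws elasticbeanstalk update-environment \
--environment-name my-env \
--version-label my-current-version \
--option-settings Namespace=aws:elasticbeanstalk:command,OptionName=DeploymentPolicy,Value=Rolling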

Related

How to use multi-container Docker in Elastic Beanstalk using Amazon Linux 2?

Amazon has deprecated Multi-container Docker running on 64bit Amazon Linux, and we need to migrate to Docker running on 64bit Amazon Linux 2. On the 1st version, we used Dockerrun.aws.json v2 to manage multi-container Docker. On the latest version (Docker running on 64bit Amazon Linux 2), we need to use Dockerrun.aws.json v3 or docker-compose, but there are no working examples or blog posts available. Can I get working samples?
Regarding Elastic Beanstalk and the Docker running on 64bit Amazon Linux 2 platform:
I was struggling too and finally got to the bottom of it. What confused me is that the documentation makes it seem like you can choose to use either the Dockerrun.aws.json (v3) or a docker-compose.yml in your EB application package.
Then you go looking for the documentation on Dockerrun.aws.json (v3), and you won't find it anywhere.
The reason for this is that you don't get a choice. If you want to run multiple containers, you must include a docker-compose.yml in your application package. The only thing the Dockerrun.aws.json (v3) allows you to do is configure the S3 bucket and key pointing to your container repository authentication file, .dockercfg.
This is essentially the entire documentation for Dockerrun.aws.json (v3); it doesn't support anything similar to Dockerrun.aws.json (v2):
{
  "AWSEBDockerrunVersion": "3",
  "Authentication": {
    "bucket": "DOC-EXAMPLE-BUCKET",
    "key": "mydockercfg"
  }
}
Include a docker-compose.yml, and you'll need the Dockerrun.aws.json (v3) only if your Docker images are in a private repository.
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/single-container-docker-configuration.html
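For the containers themselves, a minimal docker-compose.yml sketch (the service names, image names, and ports below are assumptions for illustration, not from the question):
version: "3.8"
services:
  web:
    image: my-account/my-web-image:latest
    ports:
      - "80:8080"
  worker:
    image: my-account/my-worker-image:latest
Elastic Beanstalk runs this with docker-compose on each instance, so anything Compose supports on a single host should work here.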
According to the AWS docs, Multi-container Docker running on Amazon Linux can be migrated to ECS on Amazon Linux 2.
This option seems easier to apply with the CLI than through the Elastic Beanstalk console, because it requires a single command:
aws elasticbeanstalk update-environment \
--environment-name ${my-env} \
--solution-stack-name "64bit Amazon Linux 2 ${version} running ECS" \
--region ${my-region}
I'd suggest that you first clone the environment you'd like to upgrade, apply the command mentioned above to the copied environment, and test it. If everything works as expected, you can then use a blue/green deployment to avoid downtime, as sketched below.
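A rough sketch with the EB CLI (the environment and clone names are placeholders, and the solution stack string still needs the real ${version}):
# Clone first, then upgrade only the copy.
eb clone my-env --clone_name my-env-al2-test
aws elasticbeanstalk update-environment \
--environment-name my-env-al2-test \
--solution-stack-name "64bit Amazon Linux 2 ${version} running ECS" \
--region ${my-region}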
I hope this helps someone!

Aurelia, Docker, Nginx, AWS Elastic Beanstalk Showing 502 Bad Gateway

I've deployed an Aurelia application to AWS Elastic Beanstalk via AWS ECR and have run into some difficulty. The Docker container, when run locally, works perfectly (see below for the Dockerfile).
FROM nginx:1.15.8-alpine
COPY dist /usr/share/nginx/html
The deployment works quite well; however, when I navigate to the AWS-provided endpoint http://docker-tester.***.elasticbeanstalk.com/ I get a 502 Bad Gateway error from nginx/1.12.1.
I can't figure out what might be the issue. The Docker container in question is a simple Hello World example created via the au new command; it's nothing fancy at all.
Below is my Dockerrun.aws.json file:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "***.dkr.ecr.eu-central-1.amazonaws.com/tester:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ],
  "Logging": "/var/log/nginx"
}
My Elastic Beanstalk configuration is rather small with an EC2 instance type of t2.micro. I'm using the free tier as an opportunity to learn.
I greatly appreciate any help, or links to some reading that may point me in the right direction.
It has nothing to do with your Aurelia application. You are missing an EXPOSE statement (which is mandatory) in your Dockerfile. You can change it like this:
FROM nginx:1.15.8-alpine
EXPOSE 80
COPY dist /usr/share/nginx/html
If you try to run it without EXPOSE, you will get an error:
ERROR: ValidationError - The Dockerfile must list ports to expose on the Docker container. Specify at least one port, and then try again.
You should test your application before pushing it to Elastic Beanstalk.
Install the EB CLI (assuming that you have pip; if not, you need to install it as well):
pip install awsebcli --upgrade --user
Then initialize a local repository for deployment:
eb init -p docker <application-name>
And you can test it:
eb local run --port <port-number>
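Once the container is running locally, a quick sanity check (the port is whatever you passed to --port):
curl -i http://localhost:<port-number>/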

Writing a file from a Docker container to the host instance on AWS

So I am using Travis CI to automatically deploy my application to an AWS Elastic Beanstalk environment. My issue is that I need to update the nginx.conf file that is located on the host machine.
I'm running a single-container Docker image on that host machine.
How can I copy or link the nginx.conf file from the Docker container to the host machine's nginx.conf file?
Currently my Dockerrun.aws.json looks like this:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "some:image:url:here",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8001"
    }
  ],
  "Volumes": [
    {
      "HostDirectory": "/etc/nginx/nginx.conf",
      "ContainerDirectory": "/home/node/app/nginx.conf"
    }
  ]
}
When I tried to use AWSEBDockerrunVersion 2, the build gave me an error saying the version is wrong.
How can I link those two files with a single-container Docker application?
The "Volumes" key is used to map full volumes, not individual files. See Dockerrun.aws.json file specifications for an explanation.
I know of two ways you can solve this problem: 1) build a custom AMI, or 2) use a Dockerfile with your Dockerrun.aws.json.
1. Build a Custom AMI
The idea behind building a custom AMI is to launch an instance from one of Amazon's existing AMIs. You make the changes you need to it (in your case, changing the nginx.conf). Finally, you create a new AMI from this instance, and it will be available to you when you create your environment in Elastic Beanstalk. Here are the detailed steps to create your own AMI and how to use it with Elastic Beanstalk.
2. Use a Dockerfile with your Dockerrun.aws.json
If you don't build your own AMI, you can copy your conf file with the help of a Dockerfile. A Dockerfile is a text file of commands that Elastic Beanstalk runs to build your custom image. The Dockerfile reference details the commands that can be added to a Dockerfile to build your image. You are going to need to use the COPY command, or, if the file is simple, you can use RUN and echo to build it like in the example here.
Once you create your Dockerfile, you will need to put the Dockerfile and your Dockerrun.aws.json into a directory and create a zip file with both. Provide this to Elastic Beanstalk as your source bundle. Follow this guide to build the source bundle correctly.
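As a rough sketch of option 2 (the base image name is a placeholder, and it assumes your nginx.conf sits next to the Dockerfile in the source bundle):
# Extend the application image and bake the custom conf into it.
FROM my-node-app:latest
COPY nginx.conf /etc/nginx/nginx.conf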

AWS Beanstalk docker image automatic update doesn't work

I have a Node.js application packaged in a Docker image hosted in a public repository.
I have deployed that image in an AWS Beanstalk Docker application successfully.
The problem is that I was expecting the Beanstalk application to be automatically updated when I update the image in the public repository, as the following configuration suggests.
Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "peveuve/dynamio-payment-service",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8000"
    }
  ],
  "Logging": "/var/log/dynamio"
}
The Dockerfile is very simple:
FROM node:4.2.1-onbuild
# Environment variables
ENV NODE_ENV test
ENV PORT 8000
# expose application port outside
EXPOSE $PORT
The Amazon documentation is pretty clear on that:
Optionally include the Update key. The default value is "true" and
instructs Elastic Beanstalk to check the repository, pull any updates
to the image, and overwrite any cached images.
But I have to update the Beanstalk application manually by uploading a new version of the Dockerrun.aws.json descriptor. Did I miss something? Is it supposed to work like that?
You can use the aws command-line tool to trigger the update:
aws elasticbeanstalk update-environment --application-name [your_app_name] --environment-name [your_environment_name] --version-label [your_version_label]
You specify the version that contains the Dockerrun.aws.json file; that way a new version won't be added to the application. In this case the Dockerrun file works as the "source" for the application, but it only tells AWS to pull the Docker image, so it would be redundant to create new versions for the application in Elastic Beanstalk (unless you use specifically tagged Docker images in the Dockerrun file).
Links:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_image.html
http://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_UpdateEnvironment.htm
The documentation should be clearer. What they are saying is that, with Update set to "true":
Elastic Beanstalk will do a docker pull before it does a docker run when the application is first started. It will not continually poll Docker Hub.
In contrast, issuing a docker run without first doing a docker pull will always use the locally stored version of the image, which may not be the latest.
In order to achieve what you want, you'll need to set up a webhook on Docker Hub that calls an application you control, which in turn redeploys your EB app.
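The application behind the webhook doesn't need to do much. A sketch of the single call it could make (all names here are placeholders), which re-deploys the current version and so forces a fresh pull:
# Triggered by the Docker Hub webhook on each image push.
aws elasticbeanstalk update-environment \
--application-name my-app \
--environment-name my-env \
--version-label my-current-version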

How to deploy Django 1.8 on Elastic Beanstalk using Docker

My restrictions are:
Django is to be deployed using uWSGI with nginx
the Django app is to use PostgreSQL hosted on RDS
the Dockerfile will use ubuntu:14.04 as the container OS
This is what I have for my Docker setup:
https://github.com/simkimsia/aws-docker-django
It contains a Dockerfile and other configuration files. I have tested it on a Linux box, and it works.
This is what I have tried: I logged into the AWS console, selected Elastic Beanstalk, and chose to create a new application using Docker as the environment.
A new environment is created, and it prompts me to upload and deploy.
I zipped up all the files you see in https://github.com/simkimsia/aws-docker-django and uploaded the zip file.
I got an error when deploying.
I have also subsequently tried using the following JSON file:
{
  "AWSEBDockerrunVersion": "1",
  "Volumes": [
    {
      "ContainerDirectory": "/var/app",
      "HostDirectory": "/var/app"
    }
  ],
  "Logging": "/var/eb_log"
}
I have seen answers such as this, but they go against at least one of the 3 restrictions I have.
How do I go about achieving deployment on AWS Elastic Beanstalk using Docker?
Have you been able to run any Docker images on Elastic Beanstalk? I was having several issues, but eventually documented my solution here: https://github.com/dkarchmer/aws-eb-docker-django
It does not use nginx, but that should all be in your Dockerfile, so you should be able to reverse-engineer my example and ideally just use your own Dockerfile instead of mine.
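If you'd rather build it up yourself, here is a very rough Dockerfile sketch within the stated restrictions; the package list, requirements.txt, uwsgi.ini, and nginx.conf are all assumptions, not taken from either repository:
# Rough sketch: ubuntu:14.04 base, uWSGI behind nginx, RDS settings via env vars.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python-pip python-dev build-essential libpq-dev nginx
COPY . /var/app
WORKDIR /var/app
# requirements.txt should pin Django==1.8, uWSGI, and psycopg2.
RUN pip install -r requirements.txt
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
# Start nginx in the background and uWSGI in the foreground.
CMD service nginx start && uwsgi --ini uwsgi.ini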