What triggers Elastic Beanstalk to pull in an updated Docker image

I have an Elastic Beanstalk application running and configured to serve a Docker container ("generic Docker" configuration) and linked to a private image on Docker Hub.
How can I prompt the Elastic Beanstalk application to download the latest version of the Docker Hub image after pushing up a new version with docker push?
Do I need to "restart the app server", "rebuild the environment", something else, or is it supposed to pull the image in automatically? I'm not seeing this addressed in the docs.
** EDIT **
To be clear, eb deploy does NOT pull in an updated Docker image, but it does push up the files from your application directory to your EC2 instances.
So, at the end of the day, I'm probably not going to use docker push for deployments. I'll push the image just to keep it up to date, so that when you actually need to make environment configuration changes (not code changes), or when bringing a new developer on board, you can use docker pull.
Currently, eb deploy my-environment-name is working great for Docker-based Elastic Beanstalk deployments.
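For reference, the full cycle I ended up with looks roughly like this (image name and environment name are placeholders):

# Keep the Docker Hub copy of the image current
docker build -t myorg/myapp:latest .
docker push myorg/myapp:latest

# Push the application files to the EC2 instances; this is the actual deployment
eb deploy my-environment-name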

You just need to run eb deploy from the command line. There is a nice tutorial here: http://victorlin.me/posts/2014/11/26/running-docker-with-aws-elastic-beanstalk.
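If the goal is specifically to make the environment pull the newest image from Docker Hub on each deploy, the single-container platform's Dockerrun.aws.json can declare that. A minimal sketch, assuming a private image and a .dockercfg-style auth file already uploaded to S3 (all names are placeholders):

# Write a minimal Dockerrun.aws.json at the root of the application directory
cat > Dockerrun.aws.json <<'EOF'
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "myorg/myapp:latest",
    "Update": "true"
  },
  "Authentication": {
    "Bucket": "my-config-bucket",
    "Key": "dockercfg"
  },
  "Ports": [
    { "ContainerPort": "80" }
  ]
}
EOF

# "Update": "true" asks Elastic Beanstalk to check the repository and pull
# a newer image on each deployment; then deploy as usual:
eb deploy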

Related

Deploy app to AWS Elastic Beanstalk using docker-compose

I'm trying to deploy a multi-container app to AWS Elastic Beanstalk using docker-compose. My folder structure is as follows:
AppDirectory/
    app/
    proxy/
    scripts/
    static/
    docker-compose.yml
    Dockerfile
    requirements.txt
I've got the images built and pushed to Docker Hub. The environment also works as expected when running docker-compose up in development. Using the AWS Elastic Beanstalk dashboard, I create an application and then proceed to create an environment, using Docker as the platform. I have a .zip file with the structure mentioned above.
When creating the environment, there's first a console log telling me: "Configuration files cannot be extracted from the application version appname-source. Check that the application version is a valid zip or war file."
I'm not sure what this means, as I'm uploading a .zip file that contains the Dockerfile and docker-compose.yml.
After it says the app deployed successfully, I get a 502 Bad Gateway error from nginx, and the environment is left in severe health, "updating" for hours without change. I've been following the documentation on this, and I believe it is possible to deploy with a docker-compose.yml, but I'm wondering whether my configuration is enough. I also linked the Docker Hub images inside my docker-compose file. I'm not able to request logs either, as the state must be "ready" and it is constantly "updating".
Does anyone with experience with such a deployment want to share any tips or configuration settings? Thank you.
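Not an answer as such, but one thing worth checking: the "Configuration files cannot be extracted" message often shows up when docker-compose.yml is not at the root of the zip, which happens if you zip the AppDirectory folder itself rather than its contents. A sketch of creating the bundle from inside the directory (the bundle name is a placeholder):

# Zip the contents of the project directory, not the directory itself,
# so that docker-compose.yml sits at the root of the bundle
cd AppDirectory
zip -r ../appname-source.zip . -x '*.git*'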

Deploy Django Code with Docker to AWS EC2 instance without ECS

Here is my setup:
1. My Django code is hosted on GitHub.
2. I have Docker set up on my EC2 instance.
3. My deployment process is manual: I have to run git pull, docker build, and docker run on every single code change. I am using a dockerservice account to perform this step.
How do I automate step 3 with AWS CodeDeploy or something similar?
Every example I am seeing on the internet involves ECS or Fargate, which I am not ready to use yet.
Check out this guide: How to Use Docker Images from a Private Registry (e.g. Docker Hub) for Your Build Environment.
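If you do wire this up with CodeDeploy, the hook script it runs on the instance can simply repeat your three manual steps. A rough sketch, assuming a hypothetical checkout path, image name, and port mapping:

#!/usr/bin/env bash
# deploy.sh - run from a CodeDeploy lifecycle hook (e.g. ApplicationStart)
set -euo pipefail

cd /home/dockerservice/app               # hypothetical checkout path
git pull origin master                   # update the code

docker build -t myapp:latest .           # rebuild the image

docker stop myapp 2>/dev/null || true    # stop and remove the old container, if any
docker rm myapp 2>/dev/null || true
docker run -d --name myapp -p 80:8000 myapp:latest   # start the new one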

Elastic Beanstalk Always Updates Environment On Deploy From CodeBuild

I have a large, multi-component Django application I am trying to deploy to Elastic Beanstalk, using the multi-container Docker environment. This is my current workflow:
1. A Git commit triggers AWS CodePipeline.
2. AWS CodeBuild builds the Docker image (docker-compose build), runs some tests, and pushes the image to AWS Elastic Container Registry.
3. AWS CodeBuild calls eb deploy.
The issue I am running into is that when I call eb deploy from my local box, it simply updates the application, but when I call it from CodeBuild, it upgrades the environment every time, which takes about 30 minutes for some reason.
I ran the deploy command with -v and confirmed that the same files are being zipped. Any ideas on what is going on here? Is my setup incorrect?
I also tried deploying the application from CodeDeploy in the pipeline and can confirm that it, too, always upgrades the entire environment.
I think that if you use CodeBuild to update your EB environment, it simply replaces it, because it is treated as a new environment. On your local workstation you are working with one and the same environment, just with a new application version.
I would consider replacing CodeBuild, for the purpose of updating your EB environment, with the Elastic Beanstalk deploy action provider in your CodePipeline. This should simply upload your new application version to the existing EB environment.
CodePipeline natively supports a number of deploy action providers, one of them being Elastic Beanstalk:
You can configure CodePipeline to use Elastic Beanstalk to deploy your code. You can create the Elastic Beanstalk application and environment to use in a deploy action in a stage either before you create the pipeline or when you use the Create Pipeline wizard.
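Concretely, that means the Deploy stage of the pipeline definition points at the ElasticBeanstalk action provider instead of calling eb deploy from a build step. A rough sketch of retrofitting an existing pipeline with the AWS CLI (pipeline, application, environment, and artifact names are placeholders):

# Pull down the current pipeline structure
aws codepipeline get-pipeline --name my-pipeline --query pipeline > pipeline.json

# Edit the Deploy stage's action in pipeline.json so it looks roughly like:
#   "actionTypeId": {
#     "category": "Deploy", "owner": "AWS",
#     "provider": "ElasticBeanstalk", "version": "1"
#   },
#   "configuration": {
#     "ApplicationName": "my-application",
#     "EnvironmentName": "my-environment"
#   },
#   "inputArtifacts": [ { "name": "BuildArtifact" } ]

# Push the updated definition back
aws codepipeline update-pipeline --pipeline file://pipeline.json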

Deploying a Dockerfile to AWS Fargate (building Docker images on AWS)

I have the end goal of deploying a Docker container on AWS Fargate. As it happens, my Dockerfile has no local dependencies and my upload connection is very slow, so I want to build the image in the cloud. What would be the easiest way to build the image on AWS? Creating an EC2 Linux instance, installing Docker and the AWS CLI on it, building the image, and then uploading it to AWS ECR, if that's possible?
The easiest way is to use AWS CodeBuild; it will do everything for you, even push the image to AWS ECR.
Basic instructions: here
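For the Docker-build-and-push part, a minimal buildspec.yml might look like the following; the account ID, region, and repository name are placeholders, and the CodeBuild project needs privileged mode enabled to run Docker:

# Write the build specification at the root of the source repository
cat > buildspec.yml <<'EOF'
version: 0.2
phases:
  pre_build:
    commands:
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
  build:
    commands:
      - docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest .
  post_build:
    commands:
      - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
EOF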

How to configure Amazon Container Service without Docker Hub integration

I am trying to set up a new Spring Boot + Docker (microservices) based project. The deployment is targeted at AWS. Every service has a Dockerfile associated with it. I am thinking of using Amazon Container Service for deployment, but as far as I can see it only pulls images from Docker Hub. I don't want ECS to pull from Docker Hub; rather, I want it to build the images from the Dockerfiles and then take over deploying those containers. Is that possible? If yes, how?
This is not possible yet with the Amazon EC2 Container Service (ECS) alone - while ECS now supports private registries (see also the introductory blog post), it doesn't yet offer an image build service (as usual, AWS can be expected to add such notable features over time; see e.g. the Feature Request: ECS container dream service for more on this).
However, it can already be achieved with AWS Elastic Beanstalk's built in initial support for Single Container Docker Configurations:
Docker uses a Dockerfile to create a Docker image that contains your source bundle. [...] Dockerfile is a plain text file that contains instructions that Elastic Beanstalk uses to build a customized Docker image on each Amazon EC2 instance in your Elastic Beanstalk environment. Create a Dockerfile when you do not already have an existing image hosted in a repository. [emphasis mine]
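In practice, the single-container setup needs little more than a Dockerfile at the root of the source bundle plus the EB CLI. A sketch, with the base image, JAR path, port, and names all placeholders:

# Write a minimal Dockerfile for one Spring Boot service
cat > Dockerfile <<'EOF'
FROM openjdk:8-jre
COPY target/service.jar /service.jar
EXPOSE 8080
CMD ["java", "-jar", "/service.jar"]
EOF

# Elastic Beanstalk builds the image on each EC2 instance from this Dockerfile
eb init -p docker my-springboot-app
eb create my-springboot-env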
In an ironic twist, Elastic Beanstalk has now added Multicontainer Docker Environments based on ECS, but this highly desired, more versatile Docker deployment option doesn't offer the ability to build images either:
Building custom images during deployment with a Dockerfile is not supported by the multicontainer Docker platform on Elastic Beanstalk. Build your images and deploy them to an online repository before creating an Elastic Beanstalk environment. [emphasis mine]
As mentioned above, I would expect this to be added to ECS in the not too distant future given AWS' well-known agility (see e.g. the most recent ECS updates), but they usually don't commit to roadmap details, so it is hard to estimate how long we need to wait on this one.
Meanwhile, Amazon has introduced the EC2 Container Registry: https://aws.amazon.com/ecr/
It is a private Docker registry, if you do not like Docker Hub, and it is nicely integrated with the ECS service.
However, it does not build your Docker images, so it does not solve the entire problem.
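Pushing an image there from any build server boils down to a handful of CLI calls. A sketch with the current AWS CLI (account ID, region, and repository name are placeholders):

# One-time: create the repository
aws ecr create-repository --repository-name myapp

# Authenticate the Docker client against the ECR registry
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com

# Tag the locally built image and push it
docker tag myapp:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/myapp:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/myapp:latest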
I use a Bamboo server for building images (the source is in Git repositories on Bitbucket). Bamboo pushes the images to Amazon's container registry.
I am hoping that Bitbucket Pipelines will make the process smoother, with less configuration of build servers. From the videos I have seen, all your build configuration sits right in your repository. It is still in a closed beta, so I guess we will have to wait a bit longer to see what it ends up being.