I'm trying to deploy a multi-container app to AWS Elastic Beanstalk using docker-compose. My folder structure is as follows:
AppDirectory/
    app/
    proxy/
    scripts/
    static/
    docker-compose.yml
    Dockerfile
    requirements.txt
I've built the images and pushed them to Docker Hub, and the environment works as expected when running docker-compose up in development. Using the AWS Elastic Beanstalk dashboard, I create an application and then create an environment with Docker as the platform, uploading a .zip file with the structure shown above.
When creating the environment, the console first logs: "Configuration files cannot be extracted from the application version appname-source. Check that the application version is a valid zip or war file."
I'm not sure what this means, since I am uploading a .zip file that contains both the Dockerfile and the docker-compose.yml.
After it reports that the app deployed successfully, I get a 502 Bad Gateway error from nginx, and the environment then sits at Severe health and stays in an 'Updating' state for hours without change. I've been following the documentation on this and I believe deploying with a docker-compose.yml is supported, but I'm wondering whether my configuration is enough. I also reference the Docker Hub images inside my docker-compose file. I'm not able to request logs either, as the environment state must be 'Ready' and it is constantly 'Updating'.
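For reference, a stripped-down sketch of roughly what my docker-compose.yml looks like (the service names, image names, and ports below are placeholders, not my real ones):

version: "3.8"
services:
  app:
    image: mydockerhubuser/app:latest      # placeholder Docker Hub image
    expose:
      - "8000"
  proxy:
    image: mydockerhubuser/proxy:latest    # placeholder nginx image that proxies to app
    ports:
      - "80:80"                            # publishes HTTP on the host
    depends_on:
      - app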
Anyone with any experience with such deployment that wants to share any tips or configuration settings? Thank you.
Related
I'm trying to build a basic Spring Boot Java web app and deploy it to AWS Elastic Beanstalk (EB). Most tutorials I've read suggest uploading a JAR when creating an application; however, I'm unable to deploy a JAR successfully using the AWS web UI. When I try, the environment's health goes to 'Severe' and the environment's web link returns a 502 Bad Gateway response.
To make sure it wasn't an issue with my code, I downloaded the Java sample app from AWS (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/samples/corretto.zip), fixed the Jetty inconsistency in the pom.xml file (the plugin didn't match the dependency), set the JDK version to match the environment I had created (Corretto 11), and then ran the following command to create the JAR:
mvn package
When I upload the created JAR onto the EB environment, the deployment fails. If, however, I simply upload the downloaded zip file (with the corrected pom and correct JDK set), the deployment works.
The following AWS page talks about how to create a source bundle to upload and only mentions being able to upload ZIP and WAR files, not JARs:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/applications-sourcebundle.html
My question, then, is: can I upload JAR files to Elastic Beanstalk via the EB web UI, or do I need to stick with WAR and ZIP files? Knowing this will help me when navigating the various Spring Boot / Elastic Beanstalk tutorials out there. It's proving quite difficult to get any Spring Boot web app to actually work on Elastic Beanstalk. Any advice would be much appreciated.
It is definitely possible to upload a JAR to Elastic Beanstalk using the AWS Management Console, and once it is deployed, the app's health shows Green.
If you get Red, it means something is not set correctly.
See this basic Spring Boot app example to follow the process:
Creating your first AWS Java web application
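One common reason a JAR deploys but then returns a 502 is the port: on the Java SE platform, the built-in nginx proxy forwards requests to port 5000 by default, so the application has to listen there (or you set the PORT environment property in the EB console to whatever port it actually uses). A minimal way to do that in a Spring Boot app, assuming you use an application.properties file, is:

# src/main/resources/application.properties
# Elastic Beanstalk's nginx proxy forwards to port 5000 by default,
# so listen on 5000 (or instead set the PORT environment property in EB).
server.port=5000

Alternatively, leave the application on its default port and set the PORT environment property in the environment's software configuration to match it.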
I'm learning Elastic Beanstalk. We have an application which is deployed with EB.
I'm able to download the zip of the deployment.
Now my question is: can I adapt the content of the zip, recreate the zip, and deploy it again? Because this does not seem to work for me:
Infra-WriteRuntimeConfig, Infra-WriteApplication1, Infra-WriteApplication2, Infra-EmbeddedPreBuild, Hook-PreAppDeploy, Hook-EnactAppDeploy, Infra-EmbeddedPostBuild, Hook-PostAppDeploy ..
The application (which I'm not familiar with) is also in a git repository. Is the zip file on AWS the application as it exists after the build + deploy, so that the approach I'm taking is not possible? Am I trying to redeploy an already built application with my changes on top?
In the logs I also see an AWSDeployment.log. Is that the part that changes the content that ends up in the zip on AWS?
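For reference, this is roughly how I'm recreating the bundle (the paths are placeholders). I've read that Elastic Beanstalk expects the files at the root of the archive rather than inside a wrapping folder, so I zip from inside the extracted directory:

cd extracted-bundle                          # directory where I unpacked the downloaded zip
zip -r ../modified-bundle.zip . -x '*.git*'  # zip the contents, not the parent folder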
I'm migrating a Heroku project to Elastic Beanstalk, and for a few weeks I need it running in parallel on both platforms. The problem is that the requirements.txt at the root of the project contains some libraries that are specific to certain Heroku plugins.
Is there a way to configure Elastic Beanstalk not to install the requirements.txt in the root folder?
My idea is to create an aws_requirements.txt and install it through .ebextensions/, but I'm still getting an error because Elastic Beanstalk tries to install the main requirements.txt as well.
One solution is to use a Docker-based environment, where you only need to add an extra Dockerfile and Dockerrun.aws.json file, but you control everything.
Another solution may be to keep the AWS requirements file on S3 and copy it down from your .ebextensions (a rough sketch follows below).
And yet another solution is to handle this on your side: keep separate AWS and Heroku requirements files and copy the right one into place before running eb deploy or heroku deploy.
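As a rough sketch of the S3 approach mentioned above (the config file name, bucket URL, and pip invocation are placeholders, and the exact pip/virtualenv path depends on your platform version):

# .ebextensions/aws-requirements.config
files:
  "/tmp/aws_requirements.txt":
    mode: "000644"
    owner: root
    group: root
    # assumes the object is publicly readable or the instance role has access
    source: https://my-bucket.s3.amazonaws.com/aws_requirements.txt

container_commands:
  01_install_aws_requirements:
    command: "pip install -r /tmp/aws_requirements.txt"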
I have an Elastic Beanstalk application running and configured to serve a Docker container ("generic Docker" configuration) and linked to a private image on Docker Hub.
How can I prompt the Elastic Beanstalk application to download the latest version of the Docker Hub image after pushing up a new version with docker push?
Do I need to "restart the app server", "rebuild the environment", or something else, or is it "supposed" to pull it in automatically? I'm not seeing this addressed in the docs.
** EDIT **
To be clear, eb deploy does NOT pull in an updated Docker image, but it does push up the files from your application directory to your EC2 instances.
So, at the end of the day, I'm probably not going to use docker push for deployments. I'll use it just to keep the image up to date for the cases where you need to make environment configuration changes (not code changes), or where a developer joining the project can run docker pull.
Currently eb deploy my-environment-name is working great for Docker based Elastic Beanstalk deployments.
You just need to run eb deploy from the command line. Here is a nice tutorial: http://victorlin.me/posts/2014/11/26/running-docker-with-aws-elastic-beanstalk.
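For a single-container Docker environment, a minimal Dockerrun.aws.json along these lines (the image name and port are placeholders) makes Elastic Beanstalk pull the image from Docker Hub on each deployment, because "Update": "true" tells it to check the repository rather than reuse a cached copy:

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "myorg/myimage:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ]
}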
I am running an Elastic Beanstalk application with multiple environments. This particular application hosts Docker containers that serve a web service.
To upload and deploy a new version of the application to one of the environments, I can go through the web client and click on "Upload and Deploy" and from the file option I select my latest Dockerrun.aws.json file, which references the latest version of the container that is privately hosted. The upload and deploy works fine and without issue.
To make it simpler for myself and others to deploy, I'd like to be able to use the CLI to upload and deploy the Dockerrun.aws.json file. If I use the CLI eb deploy command without any special configuration, the normal process of zipping up the whole application and sending it to the host occurs, and it fails (it cannot work out that it only needs to read the Dockerrun.aws.json file).
I found a documentation tidbit about controlling what is uploaded using the .elasticbeanstalk/config.yml file.
Using this syntax:
deploy:
  artifact: Dockerrun.aws.json
The file is uploaded and actually deploys successfully to the first batch of instances, and then always fails to deploy to the second set of instances.
The failure error is of the flavor: 'container exited unexpectedly...'
Can anyone explain, or provide a link to, the canonical approach for using the CLI to deploy single-container Docker applications?
So it turns out that the method I listed above with the config.yml was correct. The reason I was seeing a partially successful deployment was that the previously running Docker container on the hosts was not being stopped by EB.
I think what was happening was that EB was sending something like
sudo docker kill --signal=SIGTERM $CONTAINER_ID instead of the more common sudo docker stop $CONTAINER_ID
The specific container I was running didn't respond to SIGTERM and so it would just sit there. When I tested it locally with SIGKILL it would (obviously) stop properly, but SIGTERM alone wouldn't stop it.
The issue wasn't the deployment methodology, but rather my misinterpretation of the confusing output that EB generated.
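A common reason a container ignores SIGTERM is that its main process is started through a shell, so PID 1 is /bin/sh and the signal never reaches the application. A sketch of that fix in the Dockerfile, assuming a simple Python web service (the base image and command are placeholders):

FROM python:3.11-slim
WORKDIR /app
COPY . /app

# Exec form: the process runs as PID 1 and receives SIGTERM directly.
# The shell form (ENTRYPOINT python app.py) wraps it in /bin/sh,
# which does not forward the signal.
ENTRYPOINT ["python", "app.py"]

# Optional: declare the signal sent by "docker stop" and EB shutdown.
STOPSIGNAL SIGTERM

Locally, docker stop <container> should then terminate the container within its grace period instead of hanging until the kill timeout.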
Since you asked for a link, here is the one I initially used to successfully test and deploy Docker with the Elastic Beanstalk CLI.
Kindly see if this helps you as well: https://fangpenlin.com/posts/2014/11/25/running-docker-with-aws-elastic-beanstalk/
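For a quick orientation, the basic EB CLI flow for a single-container Docker app looks roughly like this (the application, environment, and region names are placeholders):

# run from the project directory containing the Dockerfile / Dockerrun.aws.json
eb init my-docker-app --platform docker --region us-east-1   # one-time setup
eb create my-docker-env                                      # create the environment
eb deploy my-docker-env                                      # redeploy after changes
eb logs my-docker-env                                        # fetch logs if something fails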