I'm migrating a Heroku project to Elastic Beanstalk, and for a few weeks I need it running in parallel on both platforms. The problem is that the requirements.txt at the root of the project has some libraries specific to certain Heroku plugins.
Is there a way to configure Elastic Beanstalk not to install the requirements.txt in the root folder?
My idea is to create an aws_requirements.txt and install it through .ebextensions/, but I'm still getting errors because Elastic Beanstalk tries to install the main requirements.txt as well.
One solution is to use a Docker-based environment, where you just need to add an extra Dockerfile and Dockerrun file, and you can control everything, as in the sketch below.
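For instance, something along these lines gives you full control over which requirements file gets installed (a minimal sketch; the base image, file names, port, and start command are placeholders for whatever the app actually uses):

    # Hypothetical Dockerfile: install only the AWS-specific requirements,
    # so the Heroku-only list in requirements.txt is never touched.
    FROM python:3.9
    WORKDIR /app
    COPY . .
    RUN pip install -r aws_requirements.txt
    EXPOSE 8000
    CMD ["python", "application.py"]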
And another solution may be to keep the AWS requirements file on S3 and copy it down from your .ebextensions.
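A rough sketch of what that could look like in an .ebextensions config file (the bucket name and paths are placeholders, the instance profile needs read access to the bucket, and depending on the platform version you may need to call pip from the platform's virtualenv rather than the system one):

    container_commands:
      01_fetch_aws_requirements:
        command: "aws s3 cp s3://my-bucket/aws_requirements.txt /tmp/aws_requirements.txt"
      02_install_aws_requirements:
        command: "pip install -r /tmp/aws_requirements.txt"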
And yet another solution is to handle this on your side: keep an AWS requirements file and a Heroku requirements file, and copy the right one into place before running eb deploy or the Heroku deploy.
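For example, a small wrapper along these lines before the EB deploy (file names are just examples; note that by default eb deploy ships the last git commit, so the swapped file has to be committed or staged):

    cp aws_requirements.txt requirements.txt   # swap in the AWS-specific list
    git add requirements.txt
    eb deploy --staged                         # deploy the staged change
    git checkout -- requirements.txt           # restore the Heroku version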
Related
I'm trying to deploy a multi-container app to AWS Elastic Beanstalk using docker-compose. My folder structure is as follows:
AppDirectory/
app/
proxy/
scripts/
static/
docker-compose.yml
Dockerfile
requirements.txt
I've built the images and pushed them to Docker Hub. Also, the environment works as expected when running docker-compose up in development. Using the AWS Elastic Beanstalk dashboard, I create an application and then proceed to create an environment, using Docker as the platform. I have a .zip file with the structure mentioned above.
When creating the environment, the console log first tells me: "Configuration files cannot be extracted from the application version appname-source. Check that the application version is a valid zip or war file."
I'm not sure what this means, as I'm uploading a .zip file that contains the Dockerfile and docker-compose.yml.
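In case it matters, I'm building the zip from inside the project directory, so those files end up at the root of the archive rather than inside a top-level AppDirectory/ folder (EB reportedly can't find them otherwise):

    cd AppDirectory
    zip -r ../app.zip . -x '*.git*'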
After it says the app deployed successfully, I get a 502 Bad Gateway error from nginx, and then the environment sits in Severe health, "updating" for hours without changes. I've been following the documentation on this and I believe it is possible to deploy with a docker-compose.yml, but I'm wondering if my configuration is enough? I also linked the Docker Hub images inside my docker-compose.yml, roughly as shown below. I'm not able to request logs either, as the state must be "ready" and it is constantly "updating".
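The docker-compose.yml looks roughly like this (the image names are placeholders for my Docker Hub images):

    version: "3.8"
    services:
      app:
        image: myuser/app:latest        # application image from Docker Hub
        expose:
          - "8000"
      proxy:
        image: myuser/proxy:latest      # nginx proxy image from Docker Hub
        ports:
          - "80:80"                     # EB sends traffic to port 80 on the host
        depends_on:
          - app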
Does anyone with experience with such a deployment want to share any tips or configuration settings? Thank you.
I have an Elastic Beanstalk environment where, on the first upload, I used .ebextensions to set up all the configuration.
Now, if I want to update the environment again (only changing the code) and the .ebextensions stay the same, do I need to include the .ebextensions in the zip file that I upload to update the environment? Or can I ignore the .ebextensions and upload the zip as is?
I create the zip file using Visual Studio, and I put the .ebextensions inside the code.
Thanks
It depends on what is in your .ebextensions. For example, if you just install some rpm packages, they will still be installed. But generally you should always include the config files anyway, because EB can deploy your application to new instances, and then the entire configuration has to be re-done from scratch.
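For example, a config file like this (a minimal sketch; the file name and package are arbitrary) is re-applied on every deployment and on every new instance EB launches:

    # .ebextensions/01-packages.config
    packages:
      yum:
        htop: []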
I have a project on EB linked with CodePipeline from GitHub, and I have some files inside my project in the .ebextensions & .platform folders.
My question is: does AWS Elastic Beanstalk deploy these files each time I deploy a new version to EB, or just once?
How does EB work behind the scenes?
The .ebextensions and .platform folders need to be part of your application code. So if you're using the CodePipeline Elastic Beanstalk deployment task, the input artifact needs to contain these folders. This also lets you keep these files in your version control system.
How EB deploys to new servers depends on how you have configured it to deploy the application. You can find more information in the EB developer guide.
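As an illustration, a layout like this ships with every application version (file names are just examples):

    .ebextensions/
        options.config          # option_settings, packages, etc.
    .platform/
        hooks/
            postdeploy/
                01_marker.sh    # must be executable (chmod +x)

And the hook itself could be as simple as:

    #!/bin/bash
    # 01_marker.sh - hypothetical postdeploy hook, runs after every deployment
    echo "deployed at $(date)" >> /var/log/deploy-marker.log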
I have an app that I deploy through the AWS EB CLI. When I initially set up the environment, I used .ebextensions files and ran eb deploy from the terminal on my machine. This set everything up correctly, including environment variables and the Node version (8.9), in the Beanstalk environment.
Now, if I deploy the app without the .ebextensions directory in CI, the Beanstalk environment gets spun up with default values, which sets Node back to v6.3 and wipes out the environment variables.
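For reference, the relevant part of my .ebextensions looks roughly like this (the values here are placeholders, not what I actually deploy):

    option_settings:
      aws:elasticbeanstalk:container:nodejs:
        NodeVersion: 8.9.4
      aws:elasticbeanstalk:application:environment:
        API_KEY: placeholder-not-the-real-value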
1) Is there a way to keep the current configuration of the beanstalk environment without having to deploy the .ebextensions files every time?
2) If I must deploy the .ebextensions files every time, what is the best approach for sensitive data like passwords?
Side note
I have another app in a different Beanstalk environment that I deploy Docker containers to. In that case, the Beanstalk environment doesn't get nuked every time I deploy a new container update, and I don't send the .ebextensions files with that deployment either.
I am running an Elastic Beanstalk application with multiple environments. This particular application hosts Docker containers that serve a web service.
To upload and deploy a new version of the application to one of the environments, I can go through the web client, click "Upload and Deploy", and select my latest Dockerrun.aws.json file, which references the latest version of the privately hosted container. The upload and deploy works fine and without issue.
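For reference, the file is a standard single-container definition, roughly like this (the bucket, key, and image name are placeholders; the Authentication block points at the docker credentials for the private registry):

    {
      "AWSEBDockerrunVersion": "1",
      "Authentication": {
        "Bucket": "my-bucket",
        "Key": "dockercfg"
      },
      "Image": {
        "Name": "myorg/myservice:latest",
        "Update": "true"
      },
      "Ports": [
        { "ContainerPort": "80" }
      ]
    }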
To make it simpler for myself and others to deploy, I'd like to be able to use the CLI to upload and deploy the Dockerrun.aws.json file. If I use the eb deploy CLI command without any special configuration, the normal process of zipping up the whole application and sending it to the host occurs and fails (it cannot work out that it only needs to read the Dockerrun.aws.json file).
I found a documentation tidbit about controlling what is uploaded using the .elasticbeanstalk/config.yml file.
Using this syntax:
    deploy:
      artifact: Dockerrun.aws.json
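For context, the full .elasticbeanstalk/config.yml then looks something like this (application name and region are placeholders):

    deploy:
      artifact: Dockerrun.aws.json
    global:
      application_name: my-application
      default_region: us-east-1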
The file is uploaded and actually deploys successfully to the first batch of instances, but then always fails to deploy to the second batch.
The failure error is of the flavor: 'container exited unexpectedly...'
Can anyone explain, or provide a link to, the canonical approach for using the CLI to deploy single-container Docker applications?
So it turns out that the method I listed above with the config.yml was correct. The reason I was seeing a partially successful deployment was that the previously running Docker container on the hosts was not being stopped by EB.
I think that what was happening was that EB was sending something like

    sudo docker kill --signal=SIGTERM $CONTAINER_ID

instead of the more common

    sudo docker stop $CONTAINER_ID
The specific container I was running didn't respond to SIGTERM, so it would just sit there. When I tested it locally, SIGKILL would (obviously) stop it, but SIGTERM alone wouldn't.
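As an aside, one common reason a container ignores SIGTERM (I can't say for sure it was the cause here) is starting the process with the shell form of CMD, so that /bin/sh is PID 1 and never forwards the signal to the app; the exec form avoids that:

    # shell form: /bin/sh is PID 1 and may swallow SIGTERM
    # CMD python server.py

    # exec form: the app itself is PID 1 and receives SIGTERM directly
    CMD ["python", "server.py"]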
The issue wasn't the deployment methodology but rather confusion in the output that EB generated and my misinterpretation.
Since you asked for a link, here is the one I initially used to successfully test and deploy Docker with the Elastic Beanstalk CLI.
Kindly see if this helps you as well: https://fangpenlin.com/posts/2014/11/25/running-docker-with-aws-elastic-beanstalk/