I am working on deploying a Laravel application to AWS Elastic Beanstalk. I have configured the CLI and I can deploy the application to an Elastic Beanstalk environment from the command line. This is what I have done so far.
I created an Elastic Beanstalk application and an environment in it.
Then I initialised the application for deployment using "eb init" and deployed it using "eb deploy". But I would like to add some additional commands to be run during the deployment. For example, I might run "gulp build" or other commands. Where and how can I configure this? I know that there is an .ebextensions folder, but as far as I can tell it does not allow us to add custom commands to be run on deployment.
I'm not sure what you mean by saying that you can't run commands in .ebextensions during deployment. The extensions are commonly used precisely for running commands or scripts when you are deploying your app. There are special sections for that:
commands: You can use the commands key to execute commands on the EC2 instance. The commands run before the application and web server are set up and the application version file is extracted.
container_commands: You can use the container_commands key to execute commands that affect your application source code. Container commands run after the application and web server have been set up and the application version archive has been extracted, but before the application version is deployed.
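For instance, a minimal sketch of a config file such as .ebextensions/build.config that runs your "gulp build" after the source is extracted (the file name, command names, and the assumption that Node and gulp are available on the instance are mine, not anything EB mandates):

container_commands:
  01_install_dependencies:
    command: "npm install"
  02_build_assets:
    command: "node_modules/.bin/gulp build"

Container commands run from the staging directory of your source bundle and are processed in alphabetical order of their names, which is why the numeric prefixes are used to control ordering.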
There are also platform hooks on Amazon Linux 2 to further fine-tune the deployment of your applications.
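On Amazon Linux 2 platforms, a hook is just an executable script dropped into a hook directory in your source bundle. A rough sketch (the script name and its body are placeholders of mine):

#!/bin/bash
# File: .platform/hooks/predeploy/01_build_assets.sh (must be executable)
set -e
cd /var/app/staging   # EB extracts the new application version here before it goes live
npm install
node_modules/.bin/gulp build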
Finally, if none of these are suited, you could create a dedicated build step in CodePipeline for your application. That step could produce a fully built deployment version of your application for EB, leaving a minimal amount of work to do on the EB instances.
I have a large, multi-component Django application I am trying to deploy to Elastic Beanstalk. I am using the multi-Docker environment. This is my current workflow:
A Git commit triggers AWS CodePipeline
AWS CodeBuild builds the Docker image (docker-compose build), runs some tests, and pushes the image to AWS Elastic Container Registry
AWS CodeBuild calls eb deploy
The issue I am running into is that when I call eb deploy from my local box, it simply updates the application version, but when I call it from CodeBuild, it rebuilds the entire environment every time, which takes about 30 minutes for some reason.
I ran the deploy command with -v and confirmed that the same files are being zipped. Any ideas on what is going on here? Is my setup incorrect?
I also tried to deploy the application from CodeDeploy in the pipeline and can confirm that it also always rebuilds the entire environment.
I think that if you use CB to update your EB environment, it just replaces it, as it is being treated as a new environment. On your local workstation you are using one single environment, just with a new application version.
I would consider replacing CB for updating your EB environment with the EB deploy action provider in your CP. This should just upload your new application version to the existing EB environment.
CP natively supports a number of deploy action providers, one of them being Elastic Beanstalk:
You can configure CodePipeline to use Elastic Beanstalk to deploy your code. You can create the Elastic Beanstalk application and environment to use in a deploy action in a stage either before you create the pipeline or when you use the Create Pipeline wizard.
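As a sketch, the deploy stage of such a pipeline expressed as a CloudFormation fragment could look like this (the application, environment, and artifact names are placeholders of mine):

- Name: Deploy
  Actions:
    - Name: DeployToEB
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: ElasticBeanstalk
        Version: "1"
      Configuration:
        ApplicationName: my-application    # existing EB application
        EnvironmentName: my-docker-env     # existing EB environment
      InputArtifacts:
        - Name: BuildOutput                # artifact produced by the CodeBuild stage

With this in place, the pipeline only submits a new application version to the existing environment instead of rebuilding it.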
I'm seeing so many different sources on how to achieve CI with Jenkins and EC2, and strangely none seem to fit my needs.
I have 2 EC2 Ubuntu instances. One is empty and the other has Jenkins installed on it.
I want to perform a build on the Jenkins machine and copy the jar to the other Ubuntu machine. Once the jar is there, I want to run mvn spring-boot:run.
That's it: a very simple flow, for which I can't find a good source that doesn't involve slaves, Docker, etc.
AWS CodeDeploy lets you take a Jenkins build and deploy it to your EC2 instances.
A quick Google search gave me this very detailed instruction on how to set up CodePipeline with AWS CodeDeploy.
The pipeline uses a GitHub -> Jenkins -> EC2 flow, as you need.
Set up Jenkins to do a build, then scp the artifact to the other machine.
There's an answer here, "how to setup ssh keys for jenkins to publish via ssh", about setting up the keys for SSH.
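A minimal sketch of that post-build shell step, assuming the Jenkins user's key is authorized on the target host (the host address and paths are placeholders of mine, and I run the built jar with java -jar, since mvn spring-boot:run would need the whole project on the target machine):

# Copy the built artifact to the target machine
scp target/myapp.jar ubuntu@10.0.0.2:/home/ubuntu/app/
# Stop any old instance, then start the new jar (nohup so it survives the SSH session)
ssh ubuntu@10.0.0.2 'pkill -f myapp.jar || true; cd /home/ubuntu/app && nohup java -jar myapp.jar > app.log 2>&1 &'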
I am running an Elastic Beanstalk application with multiple environments. This particular application hosts Docker containers which serve a web service.
To upload and deploy a new version of the application to one of the environments, I can go through the web client and click on "Upload and Deploy" and from the file option I select my latest Dockerrun.aws.json file, which references the latest version of the container that is privately hosted. The upload and deploy works fine and without issue.
To make it simpler for myself and others to deploy, I'd like to be able to use the CLI to upload and deploy the Dockerrun.aws.json file. If I use the eb deploy CLI command without any special configuration, the normal process of zipping up the whole application and sending it to the host occurs, and fails (it cannot work out that it only needs to read the Dockerrun.aws.json file).
I found a documentation tidbit about controlling what is uploaded using the .elasticbeanstalk/config.yml file.
Using this syntax:
deploy:
  artifact: Dockerrun.aws.json
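For context, a fuller sketch of .elasticbeanstalk/config.yml with that deploy key in place (the application, environment, and region values are placeholders of mine):

branch-defaults:
  default:
    environment: my-docker-env
global:
  application_name: my-docker-app
  default_region: us-east-1
deploy:
  artifact: Dockerrun.aws.json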
The file is uploaded and actually deploys successfully to the first batch of instances, and then always fails to deploy to the second set of instances.
The failure error is of the flavor: 'container exited unexpectedly...'
Can anyone explain, or provide a link to, the canonical approach for using the CLI to deploy single-container Docker applications?
So it turns out that the method I listed above with the config.yml was correct. The reason I was seeing a partially successful deployment was that the previously running Docker container on the hosts was not being stopped by EB.
I think that what was happening was that EB was sending something like

  sudo docker kill --signal=SIGTERM $CONTAINER_ID

instead of the more common

  sudo docker stop $CONTAINER_ID
The specific container I was running didn't respond to SIGTERM and so it would just sit there. When I tested it locally with SIGKILL it would (obviously) stop properly, but SIGTERM alone wouldn't stop it.
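This is easy to reproduce locally; a sketch, with a placeholder image and container name:

docker run -d --name myservice my-registry/my-service:latest
docker kill --signal=SIGTERM myservice   # keeps running if the process ignores SIGTERM
docker stop myservice                    # sends SIGTERM, then SIGKILL after a 10s grace period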
The issue wasn't the deployment methodology, but rather the confusing output that EB generated and my misinterpretation of it.
Since you have asked for a link, I am providing the one I initially used to successfully test and deploy Docker using the Elastic Beanstalk CLI.
Kindly see if this helps you as well: https://fangpenlin.com/posts/2014/11/25/running-docker-with-aws-elastic-beanstalk/
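For reference, a minimal single-container Dockerrun.aws.json (version 1) of the kind being uploaded in this workflow might look like the following (the image name and port are placeholders of mine):

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "my-registry/my-service:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": "80" }
  ]
}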
I have configured my .ebextensions folder to download and install a Windows service on the leader EC2 instance.
The problem is that every time I want to update to a new version of the web application (not the Windows service), those commands execute again and try to reinstall the service.
On the other side, every time I want to update only the Windows service, I have to do the work manually through SSH or RDP, or re-deploy the whole application, which triggers the .ebextensions commands.
Is there a more elegant workflow for this that I am missing?
You are encountering Elastic Beanstalk's weakest link. You are hosting two different services on the same EB instance, which is unsupported by EB (which is lame, I agree).
I resolved the "setup only once" need by adding a test to the setup extension config file. In my case it's a Linux box, but you can do something similar:
commands:
  10_setup_win_service:
    test: test ! -f /opt/elasticbeanstalk/.post-provisioning-complete
    command: <...>
Now to complete this hack I have a file called .ebextensions/99_finalize_setup.config:
commands:
  99_write_post_provisioning_complete_file:
    command: touch /opt/elasticbeanstalk/.post-provisioning-complete
This approach ensures the Windows service is installed only once.
Now for your maintenance issue of the Windows service: you cannot use the EB toolset for that. Your understanding of the options here is correct - either use SSH to automate the work, or do it manually by logging into the server.
I am hosting a Django application on AWS Elastic Beanstalk. I recently made changes to my urls.py and apparently (according to this thread: Django ignoring changes made to URLS.py file - Amazon AWS) I need to 'reload the Django process / restart the thread'. I figured that meant I should run
eb stop
and then
eb start
again, but when I ran
eb stop
it needed to first terminate my database as well as my EC2 instance, CloudWatch alarm, etc. Is there any way for me to restart the Django process so that it picks up the urls.py changes without me having to run
eb stop
eb start
?
You do not need to stop and start your environment. From what I understand, you need to update your environment with your updated source code. Did you try git commit followed by git aws.push?
Take a look here:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-reference-get-started.html
Let me know if you run into any issues with git aws.push.
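Roughly, with the legacy eb CLI tooling that guide covers (the commit message is a placeholder):

git add -A
git commit -m "Update urls.py"
git aws.push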
You can also try restarting the app server on your environment using the AWS CLI:
http://docs.aws.amazon.com/cli/latest/reference/elasticbeanstalk/restart-app-server.html
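For example (the environment name is a placeholder):

aws elasticbeanstalk restart-app-server --environment-name my-env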
But as far as I can tell, git aws.push will suffice.
I've had troubles with my Django files not updating after using:
$ eb deploy
The eb CLI tool does not have a restart command; however, you can log in to the AWS console and restart your environment through the Actions menu on the dashboard for your EB environment.
This generally fixes any issues that I have. However, sometimes I've had to SSH directly into the instance and enable debugging through the settings.
The other command that Rohit referenced is from a different tool, the AWS CLI. I haven't personally tried it, but here is more documentation on the command and how to install the tool:
http://docs.aws.amazon.com/cli/latest/userguide/installing.html