AWS Beanstalk Restarts Instance

I have created a pipeline using AWS CodePipeline, GitHub, Jenkins and AWS Elastic Beanstalk (Docker) running a Node.js application. Every time a build is triggered in AWS CodePipeline and a deployment is done on the Elastic Beanstalk environment, its corresponding EC2 instance is terminated and a new one is created from scratch. We only want the app to be deployed, without the EC2 instance being terminated. What could be causing the termination on every build/deploy?

How many instances do you have in your Beanstalk environment, and which deployment method are you using: All at Once, Rolling, Rolling with an Additional Batch, or Immutable?
With those answers we can dig into this further.

I switched to Immutable deployments and stopped experiencing the issue, as explained here: Difference between rolling, rolling with additional batch and immutable deployments in AWS?
It turns out that Rolling deployments can cause the timeouts, especially since my environment only needed a single instance.
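For reference, the same switch can be made in configuration rather than through the console. A minimal sketch of an .ebextensions file, assuming the file name is arbitrary and everything else in the environment stays as it is:

# .ebextensions/deployment-policy.config
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Immutable

With this in the application bundle, Elastic Beanstalk applies the Immutable policy on every deployment without anyone having to change it in the console.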

Related

Deploying to bare EC2 instances in an ASG?

I have a service that needs to run on our own EC2 instances, since it requires some support from the kernel. My previous experience is all with containers in AWS. The application itself is distributed as a single JAR file, and I'm looking for advice on how to automate deployments. The architecture is:
An ALB in front of the ASG.
EC2 instance running a single Java application.
Open sockets stay open for an hour at most, and to avoid causing trouble we have to drain connections to the EC2 instances before performing an update, so a hard requirement is that the ALB stops opening new connections for an hour before the software is updated. The application is mission critical and ECS had some issues last year, so I want to minimize the AWS services I depend on. While I could do what I want on my own ECS cluster with custom AMIs, I don't want to, since I will run a single instance of the app per host and don't need the extra layer.
My question: What is the simplest method to achieve this using CodePipeline? My understanding is that I need to use a CodeDeploy deployment step to push something to bare EC2 instances. How does draining with an ALB work in this case? We're using CloudFormation for the deployment.
You need to use CodeDeploy. You can find a tutorial in the AWS CodeDeploy documentation.
See the CodeDeploy deployment lifecycle hooks for EC2:
https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#appspec-hooks-server
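A CodeDeploy deployment to EC2 instances is driven by an appspec.yml in the revision bundle plus the hook scripts it points to. A minimal sketch, assuming hypothetical artifact and script names:

version: 0.0
os: linux
files:
  - source: build/my-service.jar        # assumed artifact path inside the bundle
    destination: /opt/my-service
hooks:
  ApplicationStop:
    - location: scripts/stop_service.sh      # stop the old JVM
      timeout: 300
  AfterInstall:
    - location: scripts/start_service.sh     # start the new JAR
      timeout: 300
  ValidateService:
    - location: scripts/validate_health.sh   # fail the deployment if the app is unhealthy
      timeout: 300

As for draining: if the deployment group is associated with the ALB's target group, CodeDeploy deregisters each instance (the BlockTraffic lifecycle event) before ApplicationStop runs and waits for the target group's deregistration delay, which can be raised to 3600 seconds to satisfy the one-hour requirement.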

AWS Elastic Beanstalk - how to stop previous docker before starting new one

I have a set of AWS Elastic Beanstalk environments using a Docker-based configuration for both the web server and the worker server. The way we have it set up, the Java process inside Docker allocates 70% of the box's memory when starting.
The first deployment works fine, but when I try to update the application version with an in-place Rolling update, Elastic Beanstalk tries to start an additional Docker container with the Java process before stopping the existing one. This fails the deployment because the Java server is not able to allocate the required memory. Is there a way to set up AWS to kill the old Docker container before starting the new one during deployment?
I even tried Rolling with additional batch, but that only works for the first batch and then fails for the subsequent ones.
Immutable updates may be the way to go for you; they recreate the EC2 instances completely on every deploy:
Open the Elastic Beanstalk console.
Navigate to the management page for your environment.
Choose Configuration.
In the Rolling updates and deployments configuration category, choose Modify.
Select Immutable as the deployment policy.
Choose Apply.
You can read more about how it works here.

AWS CodeDeploy Hook After Finishing Deployment to All Instances?

I have AWS CodeDeploy deploying to a Deployment Group that targets an AutoScalingGroup of EC2 instances, which can have anywhere between its minimum and maximum number of instances.
CodeDeploy hooks can be specified on individual instances to launch scripts on those instances at various stages of the deployment process.
Is there a way to launch a script, Lambda function, etc... after CodeDeploy successfully finishes deploying to the final instance in the ASG? In other words, is there an "All Done With Everything" hook that I can use? How are others tackling and solving this problem?
If you're using CodePipeline, how about adding another stage after the CodeDeploy stage?
Alternatively, you can have AWS CodeDeploy deployment-state changes trigger an SNS topic (or another target) via CloudWatch Events.
Here: https://docs.aws.amazon.com/codedeploy/latest/userguide/monitoring-cloudwatch-events.html
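For the CloudWatch Events route, a rule can match successful deployments and fan out to whatever should run afterwards. A minimal CloudFormation sketch, assuming the SNS topic is defined elsewhere in the template:

Resources:
  DeploymentSucceededRule:
    Type: AWS::Events::Rule
    Properties:
      Description: Fire when a CodeDeploy deployment finishes successfully
      EventPattern:
        source:
          - aws.codedeploy
        detail-type:
          - CodeDeploy Deployment State-change Notification
        detail:
          state:
            - SUCCESS
      Targets:
        - Arn: !Ref DeploymentNotificationTopic   # assumed SNS topic; a Lambda ARN works too
          Id: notify-on-deployment-success

Note that an SNS target also needs a topic policy that allows events.amazonaws.com to publish to it.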

How to deploy code to multiple instances in an Amazon EC2 Auto Scaling group?

We are launching an ecommerce store built on Magento. We are looking to deploy it on Amazon EC2 using RDS as the database service, with Auto Scaling and an Elastic Load Balancer to scale the application when needed.
What I don't understand is this:
I have installed and configured my production Magento environment on an EC2 instance (the database is in RDS), and this much is working fine. But now, when I want to dynamically scale the number of instances:
How will I deploy the code on the dynamically generated instances each time?
Will AWS copy the whole instance, assign it a new IP and spawn it as a new instance, or will I have to write some code to automate this process?
Also, won't it be an overhead to pull code from Git and deploy it every time a new instance is spawned?
A detailed explanation or direction towards some resources on the topic will be greatly appreciated.
You do this in the AutoScalingGroup's Launch Configuration. There is a UserData section in the LaunchConfiguration in CloudFormation where you write a script that is run whenever the ASG scales up and launches a new instance.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html#cfn-as-launchconfig-userdata
This is the same as the UserData section of an EC2 instance. You can also use lifecycle hooks to tell the ASG not to put the EC2 instance into service until everything you want configured has been set up.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-as-lifecyclehook.html
I linked the CloudFormation pages, but you may be using some other CI/CD tool for deploying your infrastructure; hopefully this gets you started.
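A minimal CloudFormation sketch of the idea above; the AMI, instance type, artifact location, service name and ASG logical ID are all assumptions, not part of the question:

Resources:
  AppLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-0123456789abcdef0      # assumed AMI with Java and the AWS CLI baked in
      InstanceType: t3.medium             # assumed instance size
      IamInstanceProfile: !Ref AppInstanceProfile   # assumed profile allowing the S3 read below
      UserData:
        Fn::Base64: |
          #!/bin/bash
          # Runs on every instance the ASG launches:
          # pull the latest artifact and start the service.
          aws s3 cp s3://my-artifact-bucket/app.jar /opt/app/app.jar   # assumed bucket/key
          systemctl start app.service                                  # assumed unit name
  AppLaunchHook:
    Type: AWS::AutoScaling::LifecycleHook
    Properties:
      AutoScalingGroupName: !Ref AppAutoScalingGroup   # assumed ASG defined elsewhere
      LifecycleTransition: autoscaling:EC2_INSTANCE_LAUNCHING
      HeartbeatTimeout: 900        # give the bootstrap script time to finish
      DefaultResult: ABANDON       # replace the instance if it never signals completion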
To start, check AWS CloudFormation. You will be creating templates that describe how the infrastructure of your application works, i.e. infrastructure as code. With these templates in place, you can roll out an update to your infrastructure by pushing changes to your templates and/or to your application code.
In my current project we have a GitHub repository dedicated to these infrastructure templates and a separate repository for our application code. Create a pipeline for your AWS resources that rolls out an update every time you push to a specific branch of the repository.
Create an infrastructure pipeline
Have the first stage of the pipeline trigger a build whenever there are code changes to your infrastructure templates. See AWS CodePipeline and AWS CodeBuild. These aren't the only AWS resources you'll need, but they are probably the main ones, aside from everything being defined in CloudFormation templates as mentioned earlier.
how will I deploy the code on the dynamically generated instances each time?
Check how containers work; it will greatly supplement your understanding of how launching a new version of an application works. To begin, see Docker, but feel free to use any resources at your disposal.
Continuing with my current project: we have a separate pipeline dedicated to our application, which also gets triggered after an infrastructure pipeline update. Our application pipeline builds a new version of the application via AWS CodeBuild; this creates an image that becomes a container, as described in the Docker documentation.
We have two sources that trigger a rollout through our application pipeline: one is when there are changes to the infrastructure pipeline and it builds successfully, and the other is when there are code changes in our GitHub repository connected via AWS CodeBuild.
Check AWS Auto Scaling; it covers dynamically launching new instances, shutting down instances when they are no longer needed, and replacing unhealthy ones. See also AWS CloudWatch: you can design criteria with it to trigger scaling up/down and in/out, as sketched below.
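For example, a target-tracking scaling policy is one of the simplest CloudWatch-driven scaling criteria. A minimal CloudFormation sketch, assuming the ASG logical name is defined elsewhere:

Resources:
  CpuTargetTrackingPolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AutoScalingGroupName: !Ref WebAutoScalingGroup   # assumed ASG defined elsewhere
      PolicyType: TargetTrackingScaling
      TargetTrackingConfiguration:
        PredefinedMetricSpecification:
          PredefinedMetricType: ASGAverageCPUUtilization
        TargetValue: 60     # keep average CPU across the group around 60%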
Will AWS copy the whole instance, assign it a new IP and spawn it as a new instance, or will I have to write some code to automate this process?
See AWS Elastic Load Balancing and check out more on AWS Auto Scaling. As for the automation process, if you go with CloudFormation, instances and/or containers (depending on your design) will be managed gracefully.
Also, won't it be an overhead to pull code from Git and deploy it every time a new instance is spawned?
As mentioned earlier, having a pipeline that rolls out new versions of your application via CodeBuild creates an image with the new code changes and, when everything is ready, deploys it as a container. The old EC2 instance or the old container (depending on how you want your application deployed) is gracefully shut down after the new version of your application is up and running. This gives you zero downtime.

AWS CodeDeploy?

My app is created using the AWS Elastic Beanstalk service; do I need to use the AWS CodeDeploy service to deploy it?
Currently I just do:
eb deploy myApp
Then, a new application version is deployed without using AWS CodeDeploy.
So, am I doing something wrong?
Elastic Beanstalk does this on your behalf. During the deployment process you define some policies and roles, which allow Elastic Beanstalk to call other services for you; CodeDeploy is one of those services.
Elastic Beanstalk only automates the process and sets up the whole deployment environment for you (PHP with nginx/Apache in the case of a web server). If you look in /opt/elasticbeanstalk/ you can see a codedeploy folder there, which means you do not need to set it up manually.
AWS CodeDeploy is a different approach and gives you more control: how you want your changes pushed, whether they go to all instances at once or one by one, and the minimum number of healthy instances (see the sketch after the links below).
Check here:
http://cloudacademy.com/blog/how-to-deploy-application-code-from-s3-using-aws-codedeploy/
http://blog.powerupcloud.com/2016/03/24/deployment-automation-using-aws-code-depoly/
https://blogs.aws.amazon.com/application-management/post/Tx33XKAKURCCW83/Automatically-Deploy-from-GitHub-Using-AWS-CodeDeploy
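To illustrate the minimum-healthy-hosts control mentioned above, here is a minimal sketch of a custom CodeDeploy deployment configuration in CloudFormation; the name and the 75% value are just examples:

Resources:
  KeepMostHostsHealthyConfig:
    Type: AWS::CodeDeploy::DeploymentConfig
    Properties:
      DeploymentConfigName: keep-75-percent-healthy   # assumed name
      MinimumHealthyHosts:
        Type: FLEET_PERCENT     # or HOST_COUNT for an absolute number of instances
        Value: 75               # at least 75% of the fleet stays in service during a deployment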
You can also update your application to a specific version with the CLI as follows:
eb deploy --version <version-label>
You are not doing anything wrong. eb deploy lets you deploy apps that are served from Elastic Beanstalk. AWS CodeDeploy, on the other hand, is more flexible and gives you more control; you can, for example, deploy apps served from EC2 instances that are not managed by Elastic Beanstalk.
With it you can, for example, deploy to multiple environments, i.e. development, staging and production.
Elastic Beanstalk and CodeDeploy are different AWS services, independent of each other, and they follow different deployment approaches.
What you're doing is the correct way to deploy a new version of your code.
AWS Elastic Beanstalk itself has good deployment capabilities. You don't need to use AWS CodeDeploy on top of it; that would be superfluous. You can use the Beanstalk tooling itself to deploy the code.
AWS CodeDeploy is a building block service focused on helping developers deploy and update software on any instance, including Amazon EC2 instances and instances running on-premises.
AWS Elastic Beanstalk (as well as AWS OpsWorks, by the way) is an end-to-end application management solution.
When it comes to deploying a new software release on Beanstalk, you are better off using the deployment process Beanstalk itself provides:
eb deploy myApp