AWS EC2 lifecycle hook: transition after first reboot

Somehow I cannot find an answer to a seemingly simple question. The setup: an Auto Scaling group, a user data script, and a CodeDeploy deployment group attached to the ASG. The user data script installs packages, makes kernel configuration changes, and then reboots the instance. I want to transition the instance to the LAUNCHED state and trigger code deployment only after the first reboot. What is the easiest way to accomplish that?
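One way to sketch this with a lifecycle hook: the user data script does its work while the hook holds the instance in Pending:Wait, registers a per-boot script, and reboots; the per-boot script then completes the lifecycle action so CodeDeploy's deployment follows. The hook name `first-boot-hook`, ASG name `my-asg`, and region are assumptions, not values from the question.

```shell
#!/bin/bash
# User data sketch: hold the instance in Pending:Wait across a reboot,
# then release it afterwards. Names and region are placeholders.

# ... install packages and apply kernel configuration changes here ...

# Register a one-shot job that cloud-init runs on every boot.
cat > /var/lib/cloud/scripts/per-boot/complete-hook.sh <<'EOF'
#!/bin/bash
# Only complete the lifecycle action once, on the first post-setup boot.
if [ ! -f /var/run/hook-completed ]; then
  INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
  aws autoscaling complete-lifecycle-action \
    --lifecycle-hook-name first-boot-hook \
    --auto-scaling-group-name my-asg \
    --lifecycle-action-result CONTINUE \
    --instance-id "$INSTANCE_ID" \
    --region us-east-1
  touch /var/run/hook-completed
fi
EOF
chmod +x /var/lib/cloud/scripts/per-boot/complete-hook.sh

reboot
```

This assumes a cloud-init based AMI (such as Amazon Linux) where scripts in `/var/lib/cloud/scripts/per-boot/` run on each boot, and an instance profile allowing `autoscaling:CompleteLifecycleAction`.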

Related

How to manage OS updates in an AWS autoscaling environment?

I am looking to implement Autoscaling for my website using 'AWS Autoscaling Target Groups and Load Balancer'.
As the first step, I have created an Amazon Machine Image (AMI) based on the current EC2 instance, and I am using it as a scale base.
My question is: when Auto Scaling kicks in and duplicates the instances, it uses the AMI I created, so I wonder how we can manage OS updates such as kernel security updates (e.g. yum update, apt-get update).
In this type of scenario, what is the simplest way to manage OS updates in an Auto Scaling environment?
When you create a Launch Template, you can specify a User Data script in the Advanced Details section. This script runs when the new EC2 instance is starting up.
The script can contain OS updates and other environment setup. You can read further about user data scripts in the AWS docs.
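A minimal user data script along those lines might look like this (Amazon Linux shown; substitute apt-get on Debian/Ubuntu):

```shell
#!/bin/bash
# User data sketch: apply pending security updates on first boot of
# every instance launched from the (possibly stale) AMI.
yum update -y --security

# ... any other environment setup goes here ...
```

The trade-off is slower instance startup, since updates are downloaded on every launch rather than baked into the AMI.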

Is it possible to determine the exact EC2 targets of a CodeDeploy deployment via CloudWatch events?

AWS CodeDeploy's model defines an Application, which is a long-lived high-level object and represents software that needs to be deployed somewhere. An application can have many Deployment Groups, which represent targets (e.g. particular EC2 servers that have a particular combination of tags). A deployment is the release of one particular revision of software onto a deployment group defined within an application.
It is possible to get feedback on the progress of CodeDeploy via CloudWatch events. Given that EC2 servers can be up or down at the time of a deployment, and given that the tags on EC2 servers may vary over time, is there a way of determining from a CloudWatch CodeDeploy event the exact set of EC2 servers that were targeted by a particular deployment?
Specifically:
If a server is down at the time a deployment is launched, will it be targeted for release when it comes back up?
If I add a new server with identical tags to the first one after I have done the deployment, or I change the tags on the first server, will the CloudWatch event associated with my CodeDeploy event contain details of exactly which servers were targeted for deployment at the time, even if their current state means that they would not be targeted for deployment if I were to re-release the same deployment?
I tested a few scenarios using a simple CodeDeploy setup. The deployment group was identified by instance tags only (no ASG). My observations are as follows:
Server down at the time a deployment is launched
I simulated this scenario with a stopped instance. The deployment hung on the stopped instance; it would probably have timed out if I had left it long enough. Once the instance was restarted, the deployment continued.
New instances started with the same tag
CodeDeploy did not detect them automatically. I had to redeploy the last deployment so that the new instances were detected and ran the up-to-date application version.
Changing a tag of an instance
The instance with the changed tag is not included in a new deployment, so you end up with one instance running an old version of your application while the rest run the new version.
Deployment id and list-deployment-targets AWS CLI
The list-deployment-targets command prints the IDs of the instances the deployment targeted at the time it ran. When you redeploy (the deployment ID does not change in this case), the list includes the instances from the redeployment; the original list of instances is lost.
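For reference, the inspection described above can be done like this (the deployment and instance IDs are placeholders):

```shell
# List the targets of a given deployment.
aws deploy list-deployment-targets --deployment-id d-EXAMPLE111

# Inspect the status of one specific target.
aws deploy get-deployment-target \
  --deployment-id d-EXAMPLE111 \
  --target-id i-0123456789abcdef0
```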
Note
Deployments to an ASG will behave differently, since CodeDeploy integrates with ASG through its lifecycle hooks.
Hope this helps.

Updating user-data for EC2 instances under an autoscaling group

I want to modify / update the user data for an EC2 instance. The instance is attached to an Auto Scaling group.
I understand that the instance needs to be stopped before the user data can be updated. The problem I am facing is that when I stop the instance to update the user data, the autoscaler automatically brings a new instance back up.
Is there a way to update user-data without removing the EC2 instance from the autoscaling group?
For instances in an autoscaling group, the user data is generally updated by creating a new launch configuration with your new user data.
Your AutoScaling group should be associated with a launch configuration already. There is an easy option to copy launch configurations from the AWS web console that will replicate all of your existing options. Simply find this launch configuration, copy it, and then replace the old user data before you save the new configuration.
Once the new launch configuration is created, apply it to your autoscaling group. You can begin using it immediately by increasing the desired size of the group to launch a new instance with the new configuration, and then detach the old instance once you're satisfied that the new instance (and any hosted applications) are operational.
You can likewise use this method to change any property of a launch configuration without causing an interruption to your application.
Further Resources:
AWS Documentation - Creating a Launch Configuration
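The copy-and-replace flow described above can also be scripted with the CLI; all names and IDs below are placeholders:

```shell
# Create a new launch configuration carrying the updated user data,
# then point the Auto Scaling group at it. New instances launched by
# the group will use the new configuration; existing ones are untouched.
aws autoscaling create-launch-configuration \
  --launch-configuration-name my-lc-v2 \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --user-data file://new-user-data.sh

aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --launch-configuration-name my-lc-v2
```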
The only way you can achieve this is by temporarily suspending Auto Scaling processes, programmatically via the AWS SDK.
You can restart the servers once Auto Scaling is suspended.
(Node API: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/AutoScaling.html#suspendProcesses-property)
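The same suspend/resume cycle is available from the CLI; a sketch, with the group name as a placeholder:

```shell
# Suspend the processes that would replace a stopped instance.
aws autoscaling suspend-processes \
  --auto-scaling-group-name my-asg \
  --scaling-processes Launch Terminate HealthCheck ReplaceUnhealthy

# ... stop the instance, edit its user data, start it again ...

# Resume normal scaling behaviour afterwards.
aws autoscaling resume-processes \
  --auto-scaling-group-name my-asg \
  --scaling-processes Launch Terminate HealthCheck ReplaceUnhealthy
```

Forgetting the resume step leaves the group unable to replace unhealthy instances, so it is worth wrapping this in error handling.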

AWS autoscaling starts not ready instances because of userdata script

I have an Auto Scaling group that works great, with a launch configuration where I defined a user data script that is executed when a new instance launches.
The user data script updates the code base and generates caches, which takes some time. But as soon as the instance is "created" (and not "ready"), Auto Scaling adds it to the load balancer.
That is a problem because while the user data script is running, the instance does not return good responses (basically, 500 errors are thrown).
I would like to avoid that, of course I saw this documentation : http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/InstallingAdd
As with a standalone EC2 instance, you have the option of configuring instances launched into an Auto Scaling group using user data. For example, you can specify a configuration script using the User data field in the AWS Management Console, or the --userdata parameter in the AWS CLI.
If you have software that can't be installed using a configuration script, or if you need to modify software manually before Auto Scaling adds the instance to the group, add a lifecycle hook to your Auto Scaling group that notifies you when the Auto Scaling group launches an instance. This hook keeps the instance in the Pending:Wait state while you install and configure the additional software.
It looks like I'm not in this case. Also, managing the pending hook from the user data script seems complicated. There must be a simple solution to fix my problem.
Thank you for your help!
EC2 user data does not by itself use a lifecycle hook, so a newly launched instance can be brought into service before the script has finished executing.
Stopping your web server at the start of your user data script sounds a little unreliable to me, and therefore I would urge you to use the features Auto Scaling provides that were designed to solve this very problem.
I have two suggestions:
Option 1:
Using lifecycle hooks isn't at all complicated, once you read through the docs. And in your user data, you can easily use the CLI to control the hook, check this out. In fact, a hook can be controlled from any supported language or scripting language.
Option 2:
If manually taking care of lifecycle hooks doesn't appeal to you, then I would recommend scrapping your user data script and working around it with AWS CodeDeploy. You could have CodeDeploy deploy nothing (e.g. an empty S3 folder) and use the deployment hook scripts to replace your user data script. CodeDeploy integrates with Auto Scaling seamlessly and handles lifecycle hooks automatically. A newly launched instance won't be brought into service by Auto Scaling until a deployment has succeeded. Read the docs here and here for more info.
However, I would urge you to go with option 1. Lifecycle hooks were designed to solve the very problem you have. They're powerful, robust, awesome and free. Use them.
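Option 1 can be sketched in two parts: create the hook once, and complete it at the end of the user data script. The hook name, group name, and timeout below are assumptions:

```shell
# One-time setup: hold new instances in Pending:Wait while user data runs.
aws autoscaling put-lifecycle-hook \
  --lifecycle-hook-name wait-for-userdata \
  --auto-scaling-group-name my-asg \
  --lifecycle-transition autoscaling:EC2_INSTANCE_LAUNCHING \
  --heartbeat-timeout 600

# At the very end of the user data script, release the instance:
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws autoscaling complete-lifecycle-action \
  --lifecycle-hook-name wait-for-userdata \
  --auto-scaling-group-name my-asg \
  --lifecycle-action-result CONTINUE \
  --instance-id "$INSTANCE_ID"
```

If the script never completes the action, the instance is terminated or abandoned once the heartbeat timeout expires, which is the behaviour you want for a failed setup.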
@Brooks said the easiest way to "wait" before the ELB serves the instance is to work with the ELB health status.
I solved my problem by shutting down the HTTP server at the start of the user data script, so the ELB never sees a healthy status and does not send clients to the instance. I restart the HTTP server at the end of the script; the health check then passes and the ELB serves it.
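The workaround described above amounts to something like this (assuming Apache on a systemd-based distribution; adjust the service name to your stack):

```shell
#!/bin/bash
# User data sketch of the health-check workaround: keep the web server
# down so the ELB marks the instance unhealthy, do the slow setup,
# then bring the server back up so health checks start passing.
systemctl stop httpd

# ... update the code base and generate the cache here ...

systemctl start httpd
```

As noted in the accepted answer, this is fragile compared to a lifecycle hook: if the script dies midway, the instance stays registered but permanently unhealthy.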

How does the AWS EC2 Auto Scaling synchronisation work automatically?

We started our WordPress blog some time ago with only a single EC2 instance and a Multi-AZ RDS database.
Traffic increased with heavy ups and downs (up to 1,500 users per minute), so we decided to use EC2 Auto Scaling. Here is our problem: every time we change some code, we have to create a new AMI for the Auto Scaling group and terminate all instances so that new instances start with the new AMI data.
Is there an easy way to synchronize all instances automatically when changing some code on one of them? Perhaps OpsWorks could do that, but I have no experience with it. I have already searched for a couple of days for a tutorial, but could not find anything helpful.
You could configure your AMI to download the latest code on startup, so that you don't have to constantly update the AMI.
Or you could just use Elastic Beanstalk and let it manage all this stuff for you.
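The first suggestion, pulling the latest code on startup so the AMI never goes stale, could look like this; the bucket name, document root, and service name are placeholders:

```shell
#!/bin/bash
# User data sketch: sync the current release from S3 on every launch,
# then restart the web server to pick it up.
aws s3 sync s3://my-code-bucket/current/ /var/www/html/ --delete
systemctl restart httpd
```

The instance profile needs `s3:GetObject` and `s3:ListBucket` on the code bucket for this to work.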
If you want an easy way to deploy changes to instances in your Auto Scaling group, I would recommend using CodeDeploy.
CodeDeploy integrates nicely with Auto Scaling. When a scale-up event occurs, it starts a deployment to the newly launched instance and won't bring that instance into service in the Auto Scaling group until the deployment has finished.
Deployments can be as simple as changing the code, or they can do much more thanks to CodeDeploy's deployment hooks.
You can also have CodeDeploy grab your code from S3, GitHub, or CodeCommit.
CodeDeploy is pretty easy to set up and the documentation is great:
AWS Docs - Auto Scaling Integration
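The deployment hooks mentioned above are shell scripts that appspec.yml maps to lifecycle events. A sketch of an ApplicationStart hook (the path `scripts/start_server.sh` and the Apache service are assumptions):

```shell
#!/bin/bash
# Sketch of a CodeDeploy ApplicationStart hook script, referenced from
# appspec.yml. Restarts the web server and smoke-tests the new release
# so a broken deploy fails loudly instead of serving errors.
set -e
systemctl restart httpd

# Fail the deployment if the site does not respond with a 2xx status.
curl -sf http://localhost/ > /dev/null
```

Because the script exits non-zero on failure, CodeDeploy marks the deployment as failed and, in an Auto Scaling integration, the instance is not brought into service.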