AWS Auto Scaling - Instance Lifecycle

Just trying to get a bit of info on AWS ASGs.
Here is my scenario:
launch an ASG from a launch configuration using a default Ubuntu AMI
provision the instances in the ASG (install all required packages and config) using Ansible
deploy code to the instances using Python
Happy days, everything is set up.
The problem is that if I change the metrics of my ASG so that all instances are terminated, and then change them back to the original metrics, the new instances come up blank! No packages, no code!
1 - Is this expected behaviour?
2 - Also, if the ASG has 2 instances and scales up to 5, will it add 3 blank instances, or take a copy of one of the running instances (with code and packages) and spin up the 3 new ones from that?
If the answer to 1 is yes, how do I get around this? Do I need to use a pre-baked image? But then even that won't have the latest code.
Basically, at off-peak hours I want to be able to 'zero out' my ASG so that no instances are running, and then bring them back up during peak hours. It doesn't make sense to have to provision and deploy code again every day.
Any help/advice appreciated.

The problem with your approach is that you deployed the code to the running instances, not to the AMI. When you change the ASG metrics, the instances are terminated and replaced, and the replacements are launched from the AMI, so they are missing your code and configuration. Remember: in Auto Scaling, newly launched instances always come from the AMI referenced by the ASG's launch configuration.
To avoid this, use user data: a script that runs on each instance as it is launched from the AMI, applies your configuration, and pulls the code from your repository, so no blank instance is ever put into service.
Read this developer guide for User Data
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
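As a minimal sketch, the user-data script can be composed programmatically and passed to the launch configuration. The package list, repo URL and deploy step below are placeholders for whatever your own provisioning actually does:

```python
# Sketch: compose a user-data bootstrap script for an Ubuntu-based AMI.
# PACKAGES, REPO_URL and the deploy step are placeholders, not real values.
PACKAGES = ["nginx", "git", "python3-pip"]
REPO_URL = "https://github.com/example/myapp.git"   # hypothetical repo

def build_user_data(packages=PACKAGES, repo_url=REPO_URL):
    """Return a bash script suitable for the EC2 user-data field."""
    lines = [
        "#!/bin/bash",
        "set -e",
        "apt-get update -y",
        "apt-get install -y " + " ".join(packages),
        f"git clone {repo_url} /opt/myapp",
        "/opt/myapp/deploy.sh",   # your own configure/deploy step
    ]
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(build_user_data())
```

Every instance the ASG launches then runs this at boot, so it comes up provisioned and with current code instead of blank.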

Yes, this is expected behaviour
It will add 3 blank instances (if by "blank" you mean launched from your base image)
Usual options are:
Bake a new image every time there is a new code version (this can easily be automated as a step in your build/deploy process), but it probably only makes sense if releases are not too frequent.
Configure your instances to pull the latest software version on launch. There are several options, all of which involve user-data scripts: pull the latest version directly from SCM, or, for example, place the latest package in an S3 bucket as part of the build process and have the instance pull it from S3 and deploy itself on launch, or whatever else you find suitable.

Related

Automatic start of an EC2 instance with auto-scaling

I do not know much about how AWS works since the person who set the whole thing up does not work with us anymore, and I do not specialize in Amazon at all.
I need to set up auto scaling on my EC2 instances. I am currently reading all the available tutorials to learn how, but there is one thing I cannot find at all: auto scaling automatically starts new EC2 instances, but I cannot find anything about how to run things inside those instances.
Currently, to start our web services, we need to log into the instance, pull the code from Git and launch the whole thing with PM2. I cannot find anything about how to do all of that automatically when the instance starts.
I think this is supposed to be basic stuff, but as I said, I know next to nothing about where to start, and I do not have much time to learn (my boss just told me it has to be done by the end of the week!)
So if anyone know where to learn this, that would be really helpful. Thanks!
You need a Launch Configuration to set up an Auto Scaling Group (ASG). The Launch Configuration is where you define all your instance settings such as type, disk size, security groups, etc. One of these settings is the AMI ID, which refers to the image used when launching a new instance in the ASG. So you basically need to launch a machine, install everything needed on it, create an image from it, create a launch configuration using that image, and use that launch configuration in your ASG. This way you do not need to log into the newly added servers every time. But if you want them to run the latest version of your application, you should have a scheduled job in your image that is triggered on start. This job is responsible for copying the files (e.g. compiled files) from somewhere (a deployment machine, for instance) to the newly added instance and then starting the application.
The method for configuring an Amazon EC2 instance does not actually require Auto Scaling. The two main options for configuring an instance are:
Launching from a pre-configured AMI that already contains the desired software, or
Running a startup script via User Data, which executes once the instance has launched
You can choose one of the above and then test it by launching an instance via the management console or from a script that calls the AWS Command-Line Interface (CLI).
To incorporate it into Auto Scaling, configure the Auto Scaling Launch Configuration with the same parameters and then each new instance launched by Auto Scaling will automatically be configured.

How to update custom AMIs using packer and integrated with auto-scaling group?

Goal: to keep the startup period for bringing up load-balanced instances to a minimum, and to reduce troubleshooting time.
Approach:
Create a base custom AMI for the EC2 instances.
Update/rebundle the custom AMI on every release and software patch (code and software updates matching a healthy running instance).
2.a. Is it possible to use Packer (or any CI tool) for the update? If so, how? (I was unable to find a step-by-step approach in Packer's documentation.)
Automate steps 1 and 2 using Chef.
Use this AMI in the Auto Scaling group (experimented with this).
Map the load balancer to the ASG [done].
Maintain the desired instance count by bringing up instances from the updated AMI in the ASG, behind the LB, upon failure.
Crux: terminate the unhealthy instance and bring up a healthy instance from the AMI as soon as possible.
--
P.S:
I have gone through many posts, from http://blog.kik.com/2016/03/09/using-packer-io-to-optimize-and-manage-ami-creation/ to https://alestic.com/.
Using Docker has been ruled out.
But I am still unable to figure out a clear way to do this.
The simplest way to swap a new AMI into an existing ASG is to update the launch config and then kill, one by one, any instance using the old AMI ID. The ASG will bring up replacement instances as needed, and these will use the new AMI. If you want to get fancier (like keeping the old instances alive for quick rollback), check out tools like Spinnaker, which brings up each new AMI as a new corresponding ASG, remaps the ELB to swap traffic over if no problems are detected, and then later, once you are sure the deploy is good, kills off the old ASG and all associated instances.
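A sketch of driving that one-by-one swap. The fleet data mirrors the shape of describe-instances output, and the instance/AMI IDs are made up; the emitted AWS CLI calls would be run with a pause between each so the ASG can bring up a replacement first:

```python
def instances_to_replace(instances, new_ami):
    """Return the IDs of instances still running an AMI other than new_ami.
    `instances` is a list of {'InstanceId': ..., 'ImageId': ...} dicts,
    as you'd get from describe-instances output."""
    return [i["InstanceId"] for i in instances if i["ImageId"] != new_ami]

def replacement_commands(instances, new_ami):
    """Emit one terminate call per out-of-date instance. The ASG keeps its
    desired capacity and launches a replacement from the new launch config."""
    return [
        "aws autoscaling terminate-instance-in-auto-scaling-group"
        f" --instance-id {iid} --no-should-decrement-desired-capacity"
        for iid in instances_to_replace(instances, new_ami)
    ]
```

Terminating through the Auto Scaling API (rather than plain EC2 terminate) lets the group handle deregistration and replacement for you.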

Update autoscaling group AMIs and running instances

I'm trying to setup the following for a project
EC2 instances in an auto scaling group, behind an elastic load balancer
A CodeDeploy application to deploy new versions of my application to the EC2 instances
I have a question regarding the AMIs on which the EC2 instances are based. If I want to make some changes to the system configuration (say, update the libssl package), I see a few options:
(1) Run Packer (or manually create a new AMI) and set my auto scaling group to use it, then restart the instances so they use the new AMI. This is obviously really slow and causes downtime.
(2) Use a configuration management tool such as Ansible to run yum update libssl on the instances; but this would not persist the change to instances launched in the future.
(3) Create a new AMI (manually or with Packer), then use a configuration management tool to shut down the old instances and launch new ones from the new AMI. This is the option I think is best, but I'm not sure how to do it in detail, nor how to avoid downtime. Also, it would remain quite slow (~10 min, I guess).
What would be the best way to do this while avoiding downtime? Are there best practices I should stick to?
Thanks
[Edit] I came across aws-ha-release from aws-missing-tools, which can restart all instances in an auto scaling group without any downtime. I guess this could be used in conjunction with Packer to force the running instances onto the new AMI. Any feedback on this? It feels a little bit hacky.
Here are some options:
1 Use Two Autoscale Groups
If you are trying to prevent downtime while deploying new code, take advantage of the fact that an ELB can have multiple autoscale groups/launch configs associated with it.
You can have:
autoscale-A and launchconfig-A: the autoscale group and launch config for version "A" of your servers.
autoscale-B and launchconfig-B: the autoscale group and launch config for version "B" of your servers.
A represents version X of the code, and B represents version X+1 (including any changes to O/S configuration such as libssl)
Now when you want to roll out version X + 1 of your code, simply "bake" a new AMI, configured exactly how you like it, and add autoscale group B to the ELB. Once that group and its instances are in service, set the max/capacity of autoscale group A to 0, taking the version X servers out of the ELB. Only version X + 1 will be running. When new instances come up in the future (e.g. if a server fails), they'll use your X + 1 AMI and have all of its configuration changes.
Note: if your application talks to a database, you will need to ensure that version X of the code and version X + 1 can operate on the same version of the database; e.g. if version X + 1 removes a table that version X uses, then users hitting version X of your application will get errors. Option 1 works well when there are either no database changes in your code release, or you've built in backwards compatibility when rolling out a new version.
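The cutover above can be sketched as a small helper that toggles between the two groups and lists the ordered steps. The group and ELB names are placeholders matching the A/B naming used here:

```python
def next_color(live):
    """Toggle between the 'A' and 'B' autoscale groups."""
    return "B" if live == "A" else "A"

def cutover_plan(live, elb="my-elb"):
    """Ordered steps for cutting over from the live group to the other one.
    Group and ELB names are placeholders."""
    new = next_color(live)
    return [
        f"bake new AMI, create launchconfig-{new} and autoscale-{new}",
        f"attach autoscale-{new} to {elb} and wait for its instances to be in service",
        f"set autoscale-{live} min/max/desired to 0",
    ]
```

Keeping the idle group's launch config around makes rollback a matter of scaling the old group back up before deleting anything.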
2 Combine Config Management Tool with the Health Check
If all you are wanting to do is update the O/S e.g. patch a version, then you can combine your thought of using a tool like Ansible with the ELB health check.
When you want to patch a server, temporarily scale up your number of instances, e.g. if you were running 3 instances, scale up to 6.
As part of their user data, run Ansible, and only once it successfully completes (e.g. updating libssl) do you allow the health check to pass and the EC2 instance to serve traffic from the ELB.
Once the ELB is successfully seeing the new EC2 instances, scale the auto scale group back down to its original capacity (in this case, 3).
Note: with an appropriate termination policy (e.g. OldestInstance), the oldest instances will be the ones AWS terminates, meaning that the only instances left running are your 3 new ones.
If an instance fails and a new one spins up, it will start from your base AMI and apply the Ansible changes (and only once those changes are present will the health check pass and the instance be put in service).
(This is your option (2), but it fixes the issue of new instances not containing the libssl change.)
Note on speed
Option 1 will allow failed instances to be in service faster than Option 2 (since you are not waiting on Ansible to run) at the expense of having to "pre-bake" your AMI.
Option 2 gives you greater flexibility and speed for patching production servers; if you need to "patch something now", it might be the quickest way. Having something like Ansible in place, with the ability to patch the O/S (separating that task from deploying code), can bring additional advantages depending on your use case. An agent-less hook into your servers' configuration (libraries, user management, etc.) is quite powerful, especially in the cloud.
Why not evaluate using the userdata field of your Launch Configuration?
All in all it is 16 KB of pure love, built into your recipe for spawning new machines.
On Linux you use a Bash script; on Windows you can use PowerShell.
No additional tools, all integrated, and free.
P.S. If you need more than 16 KB, just have your core script wget the additional scripts and execute them in a chain.
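A sketch of that chaining trick: check whether a script fits in the 16 KB field, and if not, ship only a tiny stub that fetches and runs the real thing. The script URL is a placeholder for wherever you host it:

```python
LIMIT = 16 * 1024   # EC2 accepts at most 16 KB of user data

def fits(user_data: str) -> bool:
    """True if the script fits in the user-data field."""
    return len(user_data.encode()) <= LIMIT

def chained_bootstrap(script_url: str) -> str:
    """A tiny stub that downloads and runs the real, larger script.
    `script_url` is a placeholder for your own hosting location."""
    return (f"#!/bin/bash\n"
            f"wget -qO /tmp/boot.sh {script_url}\n"
            f"bash /tmp/boot.sh\n")
```

The stub itself is only a few dozen bytes, so it always fits regardless of how large the real bootstrap grows.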

aws - upgrade autoscale base AMI

I have setup a HA architecture using Autoscaling, load balancer , code deploy.
I have a base image from which auto scaling launches any new instance. This base image will get outdated over time and I may have to upgrade it.
My confusion is: how can I provision this base AMI to install the desired versions of packages? And how do I provision the already in-service instances?
For example: currently my base AMI has PHP 5.3, but if in the future I need PHP 5.5, how can I provision the in-service fleet of EC2 instances as well as the base AMI?
I have Chef as a provisioning server. How should I proceed with the above problem?
The AMI that an instance uses is determined through the launch configuration when the instance boots. Therefore, the only way to change the AMI of an instance is to terminate it and launch it again.
In an autoscaling scenario, this is relatively easy: update the launch configuration of the autoscaling group to use the new AMI and terminate all instances you want to upgrade. You can do a rolling upgrade by terminating the instances one-by-one.
When your autoscaling group scales up and down frequently and it's OK for you to have multiple versions of the AMI in your autoscaling group, you can just update the launch configuration and wait. Every time the autoscaling process kicks in and a new instance is launched, the new AMI is used. When the autoscaling group has the right 'termination policy' ("OldestInstance", for example), every time the autoscaling process scales down, an instance running the old AMI will be terminated.
So, let's say you have 4 instances running. You update the launch config to use a new AMI. After 4 scale-up actions and 4 scale-down actions, all instances are running the new AMI.
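That rolling behaviour can be sketched as a small simulation, assuming the OldestInstance termination policy so each scale-down removes the longest-running instance:

```python
def scale_cycle(amis, new_ami):
    """One scale-up followed by one scale-down. The scale-up launches an
    instance from the updated launch config; with the OldestInstance
    termination policy, the scale-down removes the oldest instance.
    `amis` is the per-instance AMI list, ordered oldest-first."""
    return amis[1:] + [new_ami]

def cycles_until_converged(amis, new_ami):
    """How many scale-up/scale-down cycles until the whole group runs new_ami."""
    n = 0
    while any(a != new_ami for a in amis):
        amis = scale_cycle(amis, new_ami)
        n += 1
    return n
```

As described above, a group of 4 instances converges after 4 such cycles.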
Auto Scaling has a feature called the Launch Configuration, which lets you pass in user data that is executed at launch time. The user data is stored in the Launch Configuration, so the process can be automated.
I have never worked with Chef and I'm sure there is a Chef-centric way of doing this, but the quick-and-dirty way is to use user data.
Your user-data script (e.g. Bash) would then include the necessary sudo apt-get remove / install commands (assuming an Ubuntu OS).
The documentation for this is here:
http://docs.aws.amazon.com/AutoScaling/latest/APIReference/API_CreateLaunchConfiguration.html
UserData - The user data to make available to the launched EC2 instances. For more information, see Instance Metadata and User Data in the Amazon Elastic Compute Cloud User Guide.
At this time, launch configurations don't support compressed (zipped) user data files.
Type: String
Length constraints: Minimum length of 0. Maximum length of 21847.
Pattern: [\u0020-\uD7FF\uE000-\uFFFD\uD800\uDC00-\uDBFF\uDFFF\r\n\t]*
Required: No
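Those documented constraints can be checked locally before creating the launch configuration. A sketch; the regex covers only the BMP portion of the documented pattern (the surrogate-pair range is omitted), which is enough for typical shell scripts:

```python
import re

MAX_LEN = 21847   # documented maximum length of the UserData string

# BMP portion of the documented character pattern, plus \r \n \t.
PATTERN = re.compile(r"[\u0020-\uD7FF\uE000-\uFFFD\r\n\t]*")

def valid_user_data(s: str) -> bool:
    """True if `s` satisfies the documented length and character limits."""
    return len(s) <= MAX_LEN and PATTERN.fullmatch(s) is not None
```

Validating up front turns a cryptic API rejection into an immediate local error.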

Best way to manage code changes for application in Amazon EC2 with Auto Scaling

I have multiple instances running behind Load balancer with Auto Scaling in AWS.
Now, if I have to push some code changes to these instances and any new instances that might launch because of auto scaling policy, what's the best way to do this?
The only way I am aware of is to create a new AMI with the latest code, modify the launch configuration to use this new AMI, and then terminate the existing instances. But this might involve longer downtime and I am not sure whether the whole process can be automated.
Any pointers in this direction will be highly appreciated.
The way I do my code changes is to have a master server which I edit the code on. All the slave servers that scale then rsync over SSH, via a cron job, to bring all their files up to date. The servers sync every 30 minutes, plus or minus a few random seconds, to keep them from all hitting the master at the exact same second. (Note: I leave the master out of the load balancer so users are always served the same code. Similarly, when I decide to publish my code changes, I do an rsync from my test server to my master server.)
Using this approach, you merely have to put the sync command in the start-up script, and you don't have to worry about the code state on the slave image, as it will be up to date after boot.
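A sketch of generating that crontab entry; the master host name, paths and the 30-minute cadence are placeholders for your own setup:

```python
import random

def cron_sync_line(master_host, src="/var/www/", dest="/var/www/", seed=None):
    """Build a crontab line that rsyncs from the master twice an hour, at a
    per-host random minute so the slaves don't all hit the master at once.
    Host name and paths are placeholders; pass a per-host `seed` (e.g. the
    hostname hash) to make the offset stable across reboots."""
    minute = random.Random(seed).randrange(30)          # e.g. 7 and 37 past
    cmd = f"rsync -az --delete -e ssh {master_host}:{src} {dest}"
    return f"{minute},{minute + 30} * * * * {cmd}"
```

Spreading the sync minute per host achieves the same goal as the random-seconds jitter described above.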
EDIT:
We have stopped using this method now and started using the new service AWS CodeDeploy which is made for this exact purpose:
http://aws.amazon.com/codedeploy/
Hope this helps.
We configure our Launch Configuration to use a "clean" off-the-shelf AMI - we use these: http://aws.amazon.com/amazon-linux-ami/
One of the features of these AMIs is CloudInit - https://help.ubuntu.com/community/CloudInit
This feature enables us to deliver to the newly spawned plain vanilla EC2 instance some data. Specifically, we give the instance a script to run.
The script (in a nutshell) does the following:
Upgrades the OS packages (to make sure all security patches and bug fixes are applied).
Installs Git and Puppet.
Clones a Git repo from Github.
Applies a puppet script (which is part of the repo) to configure itself. Puppet installs the rest of the needed software modules.
It does take longer than booting from a pre-configured AMI, but we skip the process of actually making these AMIs every time we update the software (a couple of times a week) and the servers are always "clean" - no manual patches, all software modules are up to date etc.
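The boot script described above could be composed like this (a sketch; the repo URL and manifest path are placeholders, and since the off-the-shelf images mentioned are Amazon Linux, the package manager here is yum):

```python
def cloudinit_script(repo="https://github.com/example/puppet-config.git"):
    """The boot-time script described above, as one string. The repo URL and
    manifest path are placeholders; swap yum for apt-get on Ubuntu."""
    return "\n".join([
        "#!/bin/bash",
        "yum -y update",                                     # security patches
        "yum -y install git puppet",
        f"git clone {repo} /etc/puppet-config",
        "puppet apply /etc/puppet-config/manifests/site.pp", # rest of the setup
    ]) + "\n"
```

The ordering matters: patch first, install the tooling, fetch the repo, then let Puppet converge the rest of the software.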
Now, to upgrade the software, we use a local boto script.
The script kills the servers running the old code one by one. The Auto Scaling mechanism launches new (and upgraded) servers.
Make sure to use as-terminate-instance-in-auto-scaling-group because using ec2-terminate-instance will cause the ELB to continue to send traffic to the shutting-down instance, until it fails the health check.
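The kill-one-by-one order can be made explicit: terminate the longest-running instances first, so the freshly upgraded replacements are the ones that survive. A sketch over describe-style data with made-up IDs:

```python
from datetime import datetime

def oldest_first(instances):
    """Order instance IDs for termination, longest-running first.
    `instances` is a list of {'InstanceId': ..., 'LaunchTime': datetime}
    dicts, mirroring describe-instances output."""
    return [i["InstanceId"]
            for i in sorted(instances, key=lambda i: i["LaunchTime"])]
```

Walking this list with the Auto Scaling terminate call, waiting for each replacement to pass its health check, gives the rolling upgrade described above.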
Interesting related blog post: http://blog.codento.com/2012/02/hello-ec2-part-1-bootstrapping-instances-with-cloud-init-git-and-puppet/
It appears you can also manually double the auto scaling group size; the group will create EC2 instances using the AMI from the current Launch Configuration. If you then decrease the group back to its previous size, the old instances will be terminated (given an appropriate termination policy, such as OldestInstance) and only the instances created from the new AMI will survive.
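A sketch of that double-then-shrink trick, assuming a termination policy that removes the oldest instances first:

```python
def refresh_by_doubling(amis, new_ami):
    """Simulate doubling the group then shrinking it back. `amis` is the
    per-instance AMI list, ordered oldest-first; assumes an oldest-first
    termination policy on scale-down."""
    n = len(amis)
    grown = amis + [new_ami] * n   # scale up: new instances use the new AMI
    return grown[n:]               # scale back down: the oldest n are killed
```

Note this briefly runs (and bills) twice the normal fleet, so it trades cost for a very simple rollout.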