I have set up an HA architecture using Auto Scaling, a load balancer and CodeDeploy.
I have a base image from which Auto Scaling launches any new instance. This base image will get outdated over time and I may have to upgrade it.
My confusion is: how can I provision this base AMI to install the desired versions of packages? And how will I provision the instances that are already in service?
For example: currently my base AMI has PHP 5.3, but if in the future I need PHP 5.5, how can I provision the in-service fleet of EC2 instances as well as the base AMI?
I have Chef as a provisioning server, so how should I proceed with the above problem?
The AMI that an instance uses is determined by the launch configuration at launch time. Therefore, the only way to change the AMI of an instance is to terminate it and launch a new one.
In an autoscaling scenario, this is relatively easy: update the launch configuration of the autoscaling group to use the new AMI and terminate all instances you want to upgrade. You can do a rolling upgrade by terminating the instances one-by-one.
When your autoscaling group scales up and down frequently and it's OK for you to have multiple versions of the AMI in your autoscaling group, you can just update the launch configuration and wait. Every time the autoscaling process kicks in and a new instance is launched, the new AMI is used. When the autoscaling group has the right 'termination policy' ("OldestInstance", for example), every time the autoscaling process scales down, an instance running the old AMI will be terminated.
So, let's say you have 4 instances running. You update the launch config to use a new AMI. After 4 scale-up actions and 4 scale-down actions, all instances are running the new AMI.
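For illustration, the rolling approach could be scripted with the AWS CLI roughly like this (the group name, launch configuration name and instance IDs are placeholders):

```bash
#!/usr/bin/env bash
# Point the Auto Scaling group at a launch configuration that uses the new AMI,
# and prefer terminating the oldest instances when scaling in.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --launch-configuration-name my-lc-v2 \
  --termination-policies OldestInstance

# Roll the fleet: terminate old instances one by one and let the ASG replace them.
for id in i-0aaa1111 i-0bbb2222 i-0ccc3333 i-0ddd4444; do
  aws autoscaling terminate-instance-in-auto-scaling-group \
    --instance-id "$id" \
    --no-should-decrement-desired-capacity
  sleep 300   # crude wait; in practice, poll the group until the replacement is InService
done
```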
Auto Scaling has a feature called a launch configuration, which includes the ability to pass in user data that gets executed at launch time. The user data is saved within the launch configuration, so you can automate the process.
I have never worked with Chef and I'm sure there is a Chef-centric way of doing this, but the quick and dirty way would be to use user data.
Your user data script (e.g. Bash) would then include the necessary sudo apt-get remove / install commands (assuming an Ubuntu OS).
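As a rough illustration, the user data might look like this (the package names are placeholders and depend on your distribution and repositories):

```bash
#!/bin/bash
# User data runs as root on the first boot of every instance the ASG launches.
set -e
apt-get update -y

# Remove the old PHP packages baked into the AMI and install the desired version.
apt-get remove -y php5 || true
apt-get install -y php5.5
```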
The documentation for this is here:
http://docs.aws.amazon.com/AutoScaling/latest/APIReference/API_CreateLaunchConfiguration.html
UserData: The user data to make available to the launched EC2 instances. For more information, see Instance Metadata and User Data in the Amazon Elastic Compute Cloud User Guide.
At this time, launch configurations don't support compressed (zipped) user data files.
Type: String
Length constraints: Minimum length of 0. Maximum length of 21847.
Pattern: [\u0020-\uD7FF\uE000-\uFFFD\uD800\uDC00-\uDBFF\uDFFF\r\n\t]*
Required: No
So the requirement here is: I have an instance that is brought up via an Auto Scaling group. I added a cron job to the instance, but due to some unavoidable situation it crashed, and my Auto Scaling group brought up a new instance which doesn't have the cron job.
This has happened multiple times, so I am planning to add the cron job to the AMI, so that whenever an instance is brought up from the AMI it contains that cron job.
I am looking for the commands I can use to actually add the cron job to the AMI.
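For illustration, one way to do this is to drop a file into /etc/cron.d on the instance before creating the AMI from it (a rough sketch; the schedule, user and script path are placeholders):

```bash
#!/bin/bash
# Run this on the instance you are about to turn into the AMI.
# Files in /etc/cron.d are part of the image, so every instance
# launched from the AMI will have the job.
cat <<'EOF' | sudo tee /etc/cron.d/my-job
# m h dom mon dow user  command
*/5 * * * * root /usr/local/bin/my-script.sh >> /var/log/my-job.log 2>&1
EOF
sudo chmod 644 /etc/cron.d/my-job
```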
In my workplace, we have a process of replacing the EC2 AMI every month with a newly patched private AMI.
Our internal operations team makes these patched AMIs available to us as private AMIs for EC2.
In our Terraform script, we change the name of the AMI to the new one before executing the script via Jenkins.
However, we have noticed that after the script is executed, the EC2 instances are not affected by the AMI change; we have to manually terminate each EC2 instance for the AMI change to take effect.
What I want to know is:
Is this a problem someone has faced before?
Is there a way to remove the manual termination of instances, or is there a way in Terraform by which the change will be taken care of automatically?
The instances in the ASG are not being updated with the new AMI because, by default, only your launch configuration (LC) or launch template (LT) is updated with the new AMI. This does not automatically cause the instances to be updated to use the new LC/LT.
However, not too long ago, AWS introduced instance refresh to combat this specific issue. Subsequently, this functionality was added to Terraform, where it is configured using the instance_refresh block of the aws_autoscaling_group resource.
Thus, you could set up instance_refresh in your aws_autoscaling_group and specify what triggers it. Usually the trigger would be a change to the associated launch_configuration or launch_template.
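A minimal sketch of what that could look like (resource names and values are placeholders; check the provider documentation for the full set of options):

```hcl
resource "aws_autoscaling_group" "example" {
  name                 = "example-asg"
  min_size             = 2
  max_size             = 6
  launch_configuration = aws_launch_configuration.example.name
  vpc_zone_identifier  = var.subnet_ids

  # Roll the instances automatically whenever the launch configuration changes.
  instance_refresh {
    strategy = "Rolling"
    preferences {
      min_healthy_percentage = 90
    }
  }
}
```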
I have the following flow for patch updates of web app code in AWS:
1) An m4.xlarge instance (the patch instance) on which the development team updates code at a particular interval.
2) I then create an AMI and a launch configuration from that instance.
3) Using the newly created launch configuration, I update the Auto Scaling group to add new instances (m4.xlarge) with the latest AMI.
Now the questions I have are:
1) Can I make my patch instance a t2.micro and have Auto Scaling create the new instances as m4.xlarge? This is just for optimization, as the patch instance is underutilized.
2) Is there a better way to do the patch update?
Yes, you can create an AMI from any instance type and use it on a bigger or smaller one.
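For example, a rough sketch with the AWS CLI (instance ID, AMI ID and names are placeholders): create the image from the t2.micro patch instance, then reference it in a launch configuration that uses the larger type:

```bash
# Create an AMI from the (t2.micro) patch instance
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "webapp-patch-ami" \
  --no-reboot

# Use that AMI in a launch configuration with a bigger instance type
aws autoscaling create-launch-configuration \
  --launch-configuration-name webapp-lc-new \
  --image-id ami-0abc1234 \
  --instance-type m4.xlarge

# Point the Auto Scaling group at the new launch configuration
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name webapp-asg \
  --launch-configuration-name webapp-lc-new
```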
Could you elaborate on what the development team is patching?
We have a terraform deployment that creates an auto-scaling group for EC2 instances that we use as docker hosts in an ECS cluster. On the cluster there are tasks running. Replacing the tasks (e.g. with a newer version) works fine (by creating a new task definition revision and updating the service -- AWS will perform a rolling update). However, how can I easily replace the EC2 host instances with newer ones without any downtime?
I'd like to do this so that a change to the ASG launch configuration takes effect, for example switching to a different EC2 instance type.
I've tried a few things, here's what I think gets closest to what I want:
Drain one instance. The tasks will be distributed to the remaining instances.
Once no tasks are running in that instance anymore, terminate it.
Wait for the ASG to spin up a new instance.
Repeat steps 1 to 3 until all instances are new.
This almost works. The problems are that:
It's manual and therefore error prone.
After this process one of the instances (the last one that was spun up) is running 0 (zero) tasks.
Is there a better, automated way of doing this? Also, is there a way to re-distribute the tasks in an ECS cluster (without creating a new task revision)?
Prior to making changes, make sure the ASG spans multiple Availability Zones, and that the containers do too. This ensures high availability when instances are down in one zone.
You can configure an update policy on the Auto Scaling group with AutoScalingRollingUpdate, where you can set MinInstancesInService and MinSuccessfulInstancesPercent to higher values to maintain a slow and safe rolling upgrade.
You may go through the documentation to find further tweaks. To automate this process, you can use Terraform to update the ASG launch configuration; this will update the ASG with a new version of the launch configuration and trigger a rolling upgrade.
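If you prefer to keep the drain-and-replace approach from the question, it can also be scripted with the AWS CLI so it is less error prone (a rough sketch; the cluster name is a placeholder, and in practice you would add timeouts and health checks):

```bash
#!/usr/bin/env bash
CLUSTER=my-ecs-cluster

# Iterate over the container instances currently registered in the cluster.
for arn in $(aws ecs list-container-instances --cluster "$CLUSTER" \
               --query 'containerInstanceArns[]' --output text); do
  # Drain the instance so ECS moves its tasks elsewhere.
  aws ecs update-container-instances-state --cluster "$CLUSTER" \
    --container-instances "$arn" --status DRAINING

  # Wait until no tasks are left on it.
  while [ "$(aws ecs describe-container-instances --cluster "$CLUSTER" \
               --container-instances "$arn" \
               --query 'containerInstances[0].runningTasksCount' --output text)" != "0" ]; do
    sleep 30
  done

  # Terminate it and let the ASG bring up a replacement.
  ec2_id=$(aws ecs describe-container-instances --cluster "$CLUSTER" \
             --container-instances "$arn" \
             --query 'containerInstances[0].ec2InstanceId' --output text)
  aws autoscaling terminate-instance-in-auto-scaling-group \
    --instance-id "$ec2_id" --no-should-decrement-desired-capacity
  sleep 120   # crude wait for the replacement; poll the ASG in real usage
done
```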
Just trying to get a bit of info on AWS ASGs.
Here is my scenario:
Launch an ASG from a launch config using a default Ubuntu AMI.
Provision the instances in the ASG (install all required packages and config) using Ansible.
Deploy code to the instances using Python.
Happy days, everything is set up.
The problem is that if I change the metrics of my ASG so that all instances are terminated, and then change them back to the original metrics, the new instances come up blank! No packages, no code!
1 - Is this expected behaviour?
2 - Also, if the ASG has 2 instances and scales up to 5, will it add 3 blank instances, or will it take a copy of one of the running instances (with code and packages) and spin up the 3 new ones from that?
If 1 is yes, how do I get around this? Do I need to use a pre-baked image? But then even that won't have the latest code.
Basically, at off-peak hours I want to be able to 'zero out' my ASG so no instances are running, and then bring them back up again during peak hours. It doesn't make sense to have to provision and deploy code again every day.
Any help/advice appreciated.
The problem with your approach is that you are provisioning and deploying the code only to the running instances. So when you change the ASG metrics, the instances are terminated and come up again launched from the AMI, so they are missing the code and configuration. Always remember that in Auto Scaling, newly launched instances are created from the AMI in the launch configuration, not from one of the running instances.
To avoid this, you can use user data that runs your configuration script and pulls the code from your repo onto each instance when it is launched from the AMI, so that blank instances don't get launched.
Read this developer guide for User Data
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
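As a rough illustration, a user data script along those lines might look like this (the repo URL, paths and bootstrap command are placeholders for whatever your Ansible/Python deployment actually does):

```bash
#!/bin/bash
# Runs as root on the first boot of every instance the ASG launches.
set -e

apt-get update -y
apt-get install -y git python3 python3-pip

# Pull the latest code and configuration from your repo.
git clone https://github.com/example/myapp.git /opt/myapp

# Run whatever installs packages and starts the app.
/opt/myapp/deploy/bootstrap.sh
```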
Yes, this is expected behaviour
It will add 3 'blank' instances (if by blank you mean launched from the base image)
Usual options are:
Bake a new image every time there is a new code version (this can easily be automated as a step in the build/deploy process), but this probably makes sense only if releases are not too frequent.
Configure your instances to pull the latest software version on launch. There are several options, which all involve user data scripts. You can pull the latest version directly from SCM, or for example place the latest package in an S3 bucket as part of the build process and configure the instance, on launch, to pull it from S3 and self-deploy, or whatever you find suitable. A rough sketch of the S3 variant follows.
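A minimal user data sketch of the S3 option (the bucket, key and deploy command are placeholders, and it assumes the AWS CLI is installed and the instance profile can read the bucket):

```bash
#!/bin/bash
# User data: fetch the artifact the build pipeline uploaded and deploy it.
set -e

aws s3 cp s3://my-build-artifacts/myapp/latest.tar.gz /tmp/myapp.tar.gz
mkdir -p /opt/myapp
tar -xzf /tmp/myapp.tar.gz -C /opt/myapp

# Hand off to whatever starts/deploys the app.
/opt/myapp/bin/deploy.sh
```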