In my workplace, we have a monthly process of replacing each EC2 instance's AMI with a newly patched private AMI.
Our internal operations team makes these patched AMIs available to us as private EC2 AMIs.
In our Terraform script, we change the AMI name to the new one before executing the script via Jenkins.
However, we have noticed that after the script is executed, the running EC2 instances are not affected by the AMI change; we have to manually terminate each EC2 instance for the AMI change to take effect.
What I want to know is:
Is this a problem someone has faced before?
Is there a way in Terraform to remove the manual termination of instances, or a way in Terraform by which the change will be taken care of automatically?
The instances in the ASG are not being updated with the new AMI because, by default, only your launch configuration (LC) or launch template (LT) is updated with the new AMI. This does not automatically cause the instances to be updated to use the new LC/LT.
However, AWS has since introduced instance refresh to address this specific issue. This functionality was subsequently added to Terraform and is configured using the instance_refresh block of the aws_autoscaling_group resource.
Thus, you could set up instance_refresh in your aws_autoscaling_group and specify what triggers it. Usually the trigger would be a change to the associated launch_configuration or launch_template.
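As a minimal sketch (the subnet ID is a placeholder, and aws_launch_template.example is assumed to be defined elsewhere and updated by Jenkins to point at the new patched AMI):

    resource "aws_autoscaling_group" "example" {
      name                = "patched-asg"
      min_size            = 2
      max_size            = 4
      vpc_zone_identifier = ["subnet-0123456789abcdef0"] # placeholder subnet

      launch_template {
        id      = aws_launch_template.example.id
        version = aws_launch_template.example.latest_version
      }

      # A change to the launch template (e.g. pointing it at the new
      # patched AMI) triggers a rolling replacement of the instances.
      instance_refresh {
        strategy = "Rolling"
        preferences {
          min_healthy_percentage = 50
        }
      }
    }

With this in place, the Jenkins run that updates the AMI should roll the instances automatically instead of requiring manual termination.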
Related
So the requirement here is: I have an instance that is brought up via an Auto Scaling Group. I have added a cron job to the instance, but due to some unavoidable situation the instance crashed, and my Auto Scaling Group brought up a new instance which doesn't have the cron job.
This has happened multiple times, so I am planning to add the cron job to the AMI, so that whenever an instance is brought up from the AMI it contains that cron job.
I am looking for the commands that I can use to actually add the cron job to the AMI.
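As a hedged sketch (the schedule, script path, and AMI ID are all placeholders): the cron job amounts to dropping a file under /etc/cron.d, either on the instance just before the image is created, or at boot via user data so that every instance the ASG launches gets it:

    resource "aws_launch_configuration" "with_cron" {
      name_prefix   = "with-cron-"
      image_id      = "ami-0123456789abcdef0" # existing AMI (placeholder)
      instance_type = "t2.micro"

      # Install the cron job at first boot so every instance the ASG
      # launches gets it, even without re-baking the AMI.
      user_data = <<-EOF
        #!/bin/bash
        # placeholder schedule and script path
        echo '*/5 * * * * root /opt/scripts/myjob.sh' > /etc/cron.d/myjob
        chmod 644 /etc/cron.d/myjob
      EOF
    }

To bake it into the AMI instead, run the same two shell commands on a running instance and then create the image from that instance.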
I am using an ASG for my application, and I created the ASG by attaching a running instance.
Now the main problem is: whenever the ASG creates a new instance, it does not replicate the EC2 instance attached to it, so I cannot get the updated or point-in-time binaries available on that running instance.
I get a fresh, blank AMI to work with, which won't work for me.
Is there any way that I can replicate the attached instance, or use it as a base image in the ASG while scaling out?
Just trying to get a bit of info on AWS ASGs.
Here is my scenario:
launch an ASG from a launch config using a default Ubuntu AMI
provision (install all required packages and config) the instances in the ASG using Ansible
deploy code to the instances using Python
Happy days, everything is set up.
The problem is that if I change the metrics of my ASG so that all instances are terminated, and then change them back to the original metrics, the new instances come up blank! No packages, no code!
1 - Is this expected behaviour?
2 - Also, if the ASG has 2 instances and scales up to 5, will it add 3 blank instances, or take a copy of one of the running instances with code and packages and spin up the 3 new ones?
If 1 is yes, how do I get around this? Do I need to use a pre-baked image? But then even that won't have the latest code.
Basically, at off-peak hours I want to be able to 'zero' out my ASG so no instances are running, and then bring them back up again during peak hours. It doesn't make sense to have to provision and deploy code again every day.
Any help/advice appreciated.
The problem with your approach is that you are deploying the code to the launched instances only. So when you change the ASG metrics and the instances are terminated and come up again, they are launched from the AMI, so they are missing the code and configuration. Always remember that in Auto Scaling, newly launched instances are created from the AMI configured in the ASG, not copied from a running instance.
To avoid this, you can use user data that runs your configuration script and pulls the code from your repo onto each instance as it is launched from the AMI, so that blank instances don't get launched.
Read this developer guide for User Data
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
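As a minimal sketch of that suggestion (the AMI ID, repo URL, and deploy script are placeholders):

    resource "aws_launch_configuration" "app" {
      name_prefix   = "app-"
      image_id      = "ami-0123456789abcdef0" # default Ubuntu AMI (placeholder)
      instance_type = "t2.medium"

      # Runs once at first boot: pull the code and deploy it, so instances
      # launched by the ASG never come up blank.
      user_data = <<-EOF
        #!/bin/bash
        apt-get update -y
        apt-get install -y git
        git clone https://example.com/your/repo.git /opt/app # placeholder repo
        /opt/app/deploy.sh                                   # placeholder deploy script
      EOF

      lifecycle {
        create_before_destroy = true
      }
    }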
Yes, this is expected behaviour.
It will add 3 blank instances (if by blank you mean launched from your base image).
Usual options are:
Bake a new image every time there is a new code version (this can easily be automated as a step in the build/deploy process), but this probably makes sense only if releases are not too frequent.
Configure your instance to pull the latest software version on launch. There are several options, which all involve user-data scripts: you can pull the latest version directly from SCM, or, for example, place the latest package in an S3 bucket as part of the build process and configure the instance on launch to pull it from S3 and deploy itself (sketched below), or whatever you find suitable...
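A possible sketch of the S3 variant (the AMI ID, bucket, key, deploy script, and instance profile are placeholders; the profile must allow s3:GetObject on the bucket):

    resource "aws_launch_template" "app" {
      name_prefix   = "app-"
      image_id      = "ami-0123456789abcdef0" # pre-baked base AMI (placeholder)
      instance_type = "t2.medium"

      iam_instance_profile {
        name = "app-instance-profile" # placeholder profile with S3 read access
      }

      # Launch templates expect user data base64-encoded.
      user_data = base64encode(<<-EOF
        #!/bin/bash
        # Fetch the artifact uploaded by the build process and deploy it.
        aws s3 cp s3://my-build-bucket/app-latest.tar.gz /tmp/app.tar.gz # placeholder bucket/key
        mkdir -p /opt/app
        tar -xzf /tmp/app.tar.gz -C /opt/app
        /opt/app/deploy.sh # placeholder deploy script
      EOF
      )
    }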
I have set up an HA architecture using Auto Scaling, a load balancer, and CodeDeploy.
I have a base image from which Auto Scaling launches any new instance. This base image will get outdated over time and I may have to upgrade it.
My confusion is: how can I provision this base AMI to install the desired versions of packages? And how will I provision the already in-service instances?
For example: currently my base AMI has PHP 5.3, but if in the future I need PHP 5.5, how can I provision the in-service fleet of EC2 instances as well as the base AMI?
I have Chef as a provisioning server, so how should I proceed with the above problem?
The AMI that an instance uses is determined through the launch configuration when the instance boots. Therefore, the only way to change the AMI of an instance is to terminate it and launch it again.
In an autoscaling scenario, this is relatively easy: update the launch configuration of the autoscaling group to use the new AMI and terminate all instances you want to upgrade. You can do a rolling upgrade by terminating the instances one-by-one.
When your autoscaling group scales up and down frequently and it's OK for you to have multiple versions of the AMI in your autoscaling group, you can just update the launch configuration and wait. Every time the autoscaling process kicks in and a new instance is launched, the new AMI is used. When the autoscaling group has the right 'termination policy' ("OldestInstance", for example), every time the autoscaling process scales down, an instance running the old AMI will be terminated.
So, let's say you have 4 instances running. You update the launch config to use a new AMI. After 4 scale-up actions and 4 scale-down actions, all instances are running the new AMI.
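A minimal sketch of that setup (names and availability zones are placeholders; aws_launch_configuration.app is assumed to be the launch configuration whose AMI you update):

    resource "aws_autoscaling_group" "app" {
      name                 = "app-asg"
      min_size             = 2
      max_size             = 6
      availability_zones   = ["us-east-1a", "us-east-1b"] # placeholder AZs
      launch_configuration = aws_launch_configuration.app.name

      # Scale-in terminates the longest-running instances first, so the
      # fleet gradually converges on instances launched from the new AMI.
      termination_policies = ["OldestInstance"]
    }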
Auto Scaling has a feature called a launch configuration, which includes the ability to pass in user data that gets executed at launch time. The user data is saved within the launch configuration so that you can automate the process.
I have never worked with Chef, and I'm sure there is a Chef-centric way of doing this, but the quick and dirty way would be to use user data.
Your user data script (i.e. Bash) would then include the necessary sudo apt-get remove / install commands (assuming an Ubuntu OS).
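A hedged sketch of what that could look like (the AMI ID is a placeholder, and the exact PHP package names depend on the Ubuntu release):

    resource "aws_launch_configuration" "php_upgrade" {
      name_prefix   = "php-upgrade-"
      image_id      = "ami-0123456789abcdef0" # current base AMI (placeholder)
      instance_type = "t2.medium"

      # Swap the PHP version at boot; package names are placeholders.
      user_data = <<-EOF
        #!/bin/bash
        apt-get update -y
        apt-get remove -y php5.3-common # placeholder package
        apt-get install -y php5.5       # placeholder package
      EOF
    }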
The documentation for this is here:
http://docs.aws.amazon.com/AutoScaling/latest/APIReference/API_CreateLaunchConfiguration.html
UserData
The user data to make available to the launched EC2 instances. For more information, see Instance Metadata and User Data in the Amazon Elastic Compute Cloud User Guide.
At this time, launch configurations don't support compressed (zipped) user data files.
Type: String
Length constraints: Minimum length of 0. Maximum length of 21847.
Pattern: [\u0020-\uD7FF\uE000-\uFFFD\uD800\uDC00-\uDBFF\uDFFF\r\n\t]*
Required: No
I wanted to take a pre-existing t2.medium EC2 instance and essentially clone it. Knowing that no exact feature for this exists within AWS, I scoured the documentation, asked people I know, and came up with:
Create snapshot of said EC2 instance
Create AMI from snapshot
Build EC2 instance from AMI
When I went to build the new EC2 instance, I was only given the option of a t1.micro or m3.medium and above. I tried both (in and out of the same region as the original) and kept getting "insufficient data" under Status Checks.
Any ideas on what is going on here?
It sounds like you selected "paravirtual" as the virtualization type when you created the AMI, when you should have selected "hvm".
If you are doing this from the console, you can create an AMI directly from the pre-existing EC2 instance, skipping the manual snapshot step, which should automatically use the correct settings for the AMI.
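If you would rather script it than use the console, a minimal sketch with Terraform's aws_ami_from_instance resource (the instance ID and names are placeholders), which likewise derives the correct settings from the source instance:

    resource "aws_ami_from_instance" "clone" {
      name               = "clone-of-t2-medium"  # placeholder AMI name
      source_instance_id = "i-0123456789abcdef0" # the pre-existing instance (placeholder)
    }

    resource "aws_instance" "copy" {
      ami           = aws_ami_from_instance.clone.id
      instance_type = "t2.medium"
    }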