AWS Autoscaling Not Cloning Correct Instance

I have an instance in AWS on which I set up my entire environment (I'll call it my ready instance), and it is running perfectly. I then created a load balancer (ELB) with an autoscaling policy.
When I created the load balancer with an autoscaling policy (minimum of 2 instances), 2 instances sprang up. The instances were empty, however. For the launch configuration, I specified my ready instance's AMI. Isn't this supposed to tell the autoscaling policy which instance to clone? In that case, shouldn't my ready instance be cloned into them so they have the same content?

Instances are not created as clones of a running instance; they are launched from the disk image stored in the AMI. You likely need to create a new AMI from your running instance and use that AMI as the basis for your autoscaling group.
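For example, a minimal sketch of that step with the AWS CLI (the instance ID and image name here are hypothetical):

$ aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name ready-instance-v2

The command returns a new AMI ID, which you then reference in the launch configuration for your autoscaling group. Note that by default this briefly reboots the instance unless you pass --no-reboot.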

I was in this phase once, too. Basically, Auto Scaling will just boot the AMI you specified in your Auto Scaling configuration. If that AMI contains old code, new instances will boot and serve your clients from the outdated code. To solve this, you can automate the code-management process: boot the AMI with a user-data script that performs certain actions during boot. The Auto Scaling launch configuration also has a provision to accept a user-data script.
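As an illustration, a minimal user-data sketch that fetches the latest code at boot (the repository, path, and service name are hypothetical placeholders, and it assumes the repo already exists in the AMI):

#!/bin/bash
# Runs at first boot; pull the current application code so the
# instance never serves whatever stale code was baked into the AMI.
cd /opt/myapp
git pull origin main
systemctl restart myapp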

Related

Production level Auto-scaling in AWS

I understand the concept of Auto Scaling in AWS. My only question is: what AMI will the launch configuration use in a production environment?
According to my understanding, an image of an existing instance should be used. Let's say I have used an image of an existing instance.
What if there are changes to that instance in the future? In that scenario we have to update the AMI.
Is there any way to automate this process?
When you create a new AMI, you set it in a new launch configuration (LC; an LC can't be edited) or a new version of a launch template (LT), and then you have to update the ASG configuration with the new LC/LT.
However, by default the ASG will not update existing instances with the new LC/LT. Only new instances that the ASG launches will use the new LC/LT and, consequently, the new AMI. You can therefore end up with an ASG in which some instances run the old AMI and others run the new one.
You can deal with this in two commonly used ways:
Create your LC/LT and ASG using CloudFormation and specify an UpdatePolicy. The update policy is triggered when the LC/LT changes, and existing instances in the ASG are updated based on the rules you specify in the policy (a minimal template sketch follows this list).
Perform a blue/green deployment of your ASG. How to perform the deployment is described and explained in detail in an excellent AWS whitepaper:
Blue/Green Deployments on AWS
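To illustrate the first option, a minimal, hypothetical CloudFormation fragment with an UpdatePolicy that rolls instances one at a time when the LC changes (resource names, sizes, and the subnet ID are placeholders):

MyAsg:
  Type: AWS::AutoScaling::AutoScalingGroup
  UpdatePolicy:
    AutoScalingRollingUpdate:
      MinInstancesInService: 1   # keep at least one instance serving during the update
      MaxBatchSize: 1            # replace one instance at a time
      PauseTime: PT5M            # wait up to 5 minutes between batches
  Properties:
    LaunchConfigurationName: !Ref MyLaunchConfig
    MinSize: '2'
    MaxSize: '4'
    VPCZoneIdentifier:
      - subnet-0123456789abcdef0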
Auto Scaling uses AMIs, which are a point-in-time snapshot of your instance. Any changes made thereafter will not be applied to the AMI.
If you want any change in your base image, you will need to recreate the image and roll it out across your Launch Configuration/Launch Template again.
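With a launch template, for example, that rollout is a new template version plus an ASG update; a rough CLI sketch with hypothetical names and IDs:

$ aws ec2 create-launch-template-version \
    --launch-template-name my-template \
    --source-version 1 \
    --launch-template-data '{"ImageId":"ami-0abcdef1234567890"}'
$ aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --launch-template LaunchTemplateName=my-template,Version='$Latest'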
There are many tools people use to provision the configuration of instances for AMIs such as Ansible, Chef and Puppet.
AWS also launched an automation tool for building images last year, EC2 Image Builder.
For some additional reading, take a look at the golden AMI pipeline.

The best way to add post configuration to an ECS Instance

I wonder what the best way is to add a post-config step after instance creation when instances are created automatically by an ECS Cluster.
It seems there is no way to add user data to an ECS instance?
Note: the instances are created automatically by the ECS Cluster itself.
EDIT:
When using ECS, you configure a Cluster. While configuring the cluster you select the instance type and other settings (SSH key, ...), but there is nowhere to supply user data for the instances that ECS will create. So the question is how to do some post-configuration on instances automatically created by ECS.
When using the management console, it's more of a wizard that creates everything needed for you, including the instances using the Amazon Linux ECS optimized AMI, and doesn't give you a whole lot of control beyond that.
To get more fine-grained control, you have to use another method of creating your cluster, such as the AWS CLI or CloudFormation. These methods allow you (or rather, require you) to create each piece one at a time.
Example:
$ aws ecs create-cluster --cluster-name MyEcsCluster
The above command creates a cluster, and only a cluster. You still have to create an ECS task definition and an ECS service (although you could use the management console for those) and, here's the real answer to your question, the EC2 instances you want to attach to the cluster (either individually or through an Auto Scaling group). You could create those instances from the Amazon Linux ECS optimized AMI and add user data at that time to further configure them. In this scenario you would probably also use the user data to create the /etc/ecs/ecs.config file so the instance attaches to the ECS cluster you've created, e.g. echo "ECS_CLUSTER=MyEcsCluster" > /etc/ecs/ecs.config.
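Putting that together, a minimal user-data sketch for such an instance (the cluster name matches the example above; the extra package is just illustrative):

#!/bin/bash
# Point the ECS agent at the cluster created earlier, before it starts.
echo "ECS_CLUSTER=MyEcsCluster" >> /etc/ecs/ecs.config
# Any other post-configuration can happen here, e.g.:
yum install -y htop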
The short answer is, it's more work to gain that sort of flexibility, but it is doable.
Edit: Thinking about it further, you could likely use the management console wizards to create everything once, then manually terminate the instances it created for the cluster (or, rather, delete the Auto Scaling group that creates them) and add your own. This would save you some work.

Updating user-data for EC2 instances under an autoscaling group

I want to modify/update the user data for an EC2 instance that is attached to an autoscaling group.
I understand that the instance needs to be stopped before the user data can be updated. The problem I am facing is that when I stop the instance to update the user data, the autoscaler automatically brings up a new instance in its place.
Is there a way to update user-data without removing the EC2 instance from the autoscaling group?
For instances in an autoscaling group, the user data is generally updated by creating a new launch configuration with your new user data.
Your AutoScaling group should be associated with a launch configuration already. There is an easy option to copy launch configurations from the AWS web console that will replicate all of your existing options. Simply find this launch configuration, copy it, and then replace the old user data before you save the new configuration.
Once the new launch configuration is created, apply it to your autoscaling group. You can begin using it immediately by increasing the desired size of the group to launch a new instance with the new configuration, and then detach the old instance once you're satisfied that the new instance (and any hosted applications) are operational.
You can likewise use this method to change any property of a launch configuration without causing an interruption to your application.
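There's no single CLI equivalent of the console's copy button, but the same flow looks roughly like this (all names, IDs, and file names are hypothetical):

$ aws autoscaling create-launch-configuration \
    --launch-configuration-name my-lc-v2 \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --user-data file://new-user-data.sh
$ aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --launch-configuration-name my-lc-v2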
Further Resources:
AWS Documentation - Creating a Launch Configuration
You can also achieve this by temporarily suspending the autoscaling processes programmatically via the AWS SDK.
Once autoscaling is suspended, you can stop and restart the servers without the group replacing them.
(node API http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/AutoScaling.html#suspendProcesses-property)
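The same suspend/resume flow is also available in the AWS CLI; a rough sketch (the group name, instance ID, and file name are hypothetical, and the user-data file must contain the new value base64-encoded):

$ aws autoscaling suspend-processes --auto-scaling-group-name my-asg
$ aws ec2 stop-instances --instance-ids i-0123456789abcdef0
$ aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
    --attribute userData --value file://new-user-data.b64
$ aws ec2 start-instances --instance-ids i-0123456789abcdef0
$ aws autoscaling resume-processes --auto-scaling-group-name my-asg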

For AWS ASG, how to set up custom readiness check for new instances?

We have an AutoScaling Group that runs containers (using ECS). When we add OR replace EC2 instances in the ASG, they don't have the docker images we want on them. So, we run a few docker pull commands using cloud-init to fetch the images when they boot up.
However, the ASG thinks that the new instance is ready, and terminates an old instance. But in reality, this new instance isn't ready until all docker images have been pulled.
E.g.
Let's say my ASG's desired count is 5, and I have to pull 5 containers using cloud-init. Now, I want to replace all the EC2 instances in my ASG.
As new instances boot up, the ASG will terminate old instances. But because of the docker pull lag, there will be a window during the deploy when the number of actually ready instances drops below the desired count, perhaps to 3 or 2.
How can I "mark an instance ready" only when cloud-init is finished?
Note: I think CloudFormation can bridge this communication gap using CFN-Bootstrap, but I'm not using CloudFormation.
What you're looking for is Auto Scaling Lifecycle Hooks. You can keep an instance in the Pending:Wait state until your docker pull has completed, and then move the instance to InService. All of this can be done with the AWS CLI, so it should be achievable with an Auto Scaling command before and after your docker commands.
The link to the documentation I have provided explains this feature in detail and provides great examples on how to use it.
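As a rough sketch of how that could look (the hook name, group name, and timeout are hypothetical), first register a launch lifecycle hook on the group:

$ aws autoscaling put-lifecycle-hook \
    --lifecycle-hook-name wait-for-docker-pull \
    --auto-scaling-group-name my-asg \
    --lifecycle-transition autoscaling:EC2_INSTANCE_LAUNCHING \
    --heartbeat-timeout 900 \
    --default-result ABANDON

Then, at the end of the instance's cloud-init script, once the docker pulls have finished:

# All images pulled; tell the ASG this instance is ready for service.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws autoscaling complete-lifecycle-action \
    --lifecycle-hook-name wait-for-docker-pull \
    --auto-scaling-group-name my-asg \
    --lifecycle-action-result CONTINUE \
    --instance-id "$INSTANCE_ID"

The instance profile needs permission to call autoscaling:CompleteLifecycleAction for the second command to work.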

Is there any way to edit AMI being used for auto scaling in AWS?

I have created an Auto Scaling group in AWS using a customized AMI. Now, to roll out my new code, I could update all the running instances, but then any new instance that comes up won't have the update. So I need a way to update the AMI. One way could be creating a new AMI and a new Auto Scaling group.
Thanks in advance.
This is one way to go about it (a CLI sketch follows the list):
Spin up a stand-alone instance using the AMI
Make changes
Stop instance
Create new AMI from this instance
Create a new Launch Configuration that uses the new AMI
Update the Autoscaling Group to use the new Launch Configuration
Slowly terminate the old instances in the Autoscaling Group, and let them be automatically replaced with instances using the new AMI
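The middle steps map onto the CLI roughly like this (all IDs and names below are hypothetical):

$ aws ec2 create-image --instance-id i-0123456789abcdef0 --name my-app-v2
# Create a launch configuration that uses the new AMI ID returned above:
$ aws autoscaling create-launch-configuration \
    --launch-configuration-name my-lc-v2 \
    --image-id ami-0abcdef1234567890 \
    --instance-type t3.micro
# Point the group at the new launch configuration:
$ aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --launch-configuration-name my-lc-v2
# Then terminate old instances one at a time; the ASG replaces each
# with an instance built from the new AMI:
$ aws autoscaling terminate-instance-in-auto-scaling-group \
    --instance-id i-0fedcba9876543210 \
    --no-should-decrement-desired-capacity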
Of course, all this is a pain to deal with manually every time you need to make a change. Elastic Beanstalk and CloudFormation both provide mechanisms to deal with this in a more automated way.
If you are just changing the code you are deploying to your servers, then there are other ways to handle this, such as using AWS CodeDeploy. You could also update the running servers in some automated or manual fashion, and configure the AMI such that any new instances that are created will go get the latest code on startup.