Can we stop an EC2 instance in an Auto Scaling Group on AWS - amazon-web-services

I have an Auto Scaling Group and I want to stop an instance in that group rather than terminate it. Is it possible to do so?

No. From the official definition:
Auto Scaling is a web service designed to launch or terminate Amazon EC2 instances automatically based on user-defined policies, schedules, and health checks.
When scaling-out, new instances are launched into the Auto Scaling group.
When scaling-in, instances are terminated.
Auto Scaling does not stop/start instances.
Some benefits of this approach are:
Instances can be launched in different Availability Zones in case there is a failure in a particular AZ
Failed instances can be easily replaced
There is no limit to the number of instances that could be launched (compared to running out of 'stopped' instances)
Launch Configurations can be updated, so any newly-launched instances will use the new configuration (as opposed to recycling old instances)
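As a small illustration of that model, here is a minimal AWS CLI sketch (the instance ID and group name are placeholders); both paths end in a termination, never a stop:
# Remove one specific instance from the group and let the desired capacity drop with it
aws autoscaling terminate-instance-in-auto-scaling-group --instance-id i-0123456789abcdef0 --should-decrement-desired-capacity
# Or scale the whole group in and let Auto Scaling pick which instances to terminate
aws autoscaling set-desired-capacity --auto-scaling-group-name my-asg --desired-capacity 2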

Related

Where will a new instance be launched in Auto Scaling?

There is a set of rules that determines which instance Auto Scaling terminates when we have multiple AZs. In the same way, if we want to scale out across multiple Availability Zones, where exactly will the instances be created? Is there any hierarchy?
According to the AWS docs, if you have multiple Availability Zones for an Auto Scaling group, AWS tries to distribute the instances evenly. So if your desired capacity is 8 and there are 4 instances in az-1 and 3 in az-2, the remaining instance will be created in az-2.
When one Availability Zone becomes unhealthy or unavailable, Amazon EC2 Auto Scaling launches new instances in an unaffected Availability Zone. When the unhealthy Availability Zone returns to a healthy state, Amazon EC2 Auto Scaling automatically redistributes the application instances evenly across all the Availability Zones for your Auto Scaling group. Amazon EC2 Auto Scaling does this by attempting to launch new instances in the Availability Zone with the fewest instances. If the attempt fails, however, Amazon EC2 Auto Scaling attempts to launch in other Availability Zones until it succeeds.
You can read more about this here.
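If you want to verify how a group is currently spread across zones, here is a quick sketch with the AWS CLI (the group name is a placeholder):
# List every instance in the group together with its Availability Zone
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names my-asg --query 'AutoScalingGroups[0].Instances[].[InstanceId,AvailabilityZone]' --output table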

Why do I need to enable instance scale-in protection in my Auto Scaling Group when enabling a capacity provider within a cluster?

I am making an AWS ECS cluster using EC2 and trying to use capacity providers. I don't really understand why I need to enable instance scale-in protection in my AWS Auto Scaling group.
Isn't the point of Auto Scaling the termination of needless EC2 instances?
why do I need to enable instance scale-in protection
This is only needed when you choose to use managed scaling:
When managed scaling is enabled, Amazon ECS manages the scale-in and scale-out actions of the Auto Scaling group used when creating the capacity provider. On your behalf, Amazon ECS creates an AWS Auto Scaling scaling plan with a target tracking scaling policy based on the target capacity value you specify.
Managed scaling ensures that ECS controls when instances are removed. By doing this it protects any instances that have tasks running on them from being terminated:
When managed termination protection is enabled, Amazon ECS prevents Amazon EC2 instances that contain tasks and that are in an Auto Scaling group from being terminated during a scale-in action.
The entire idea is that you enable instance scale-in protection on your ASG so that ECS has control over which instances to terminate based on the tasks they run. Without this, your ASG could terminate instances based on other criteria, not necessarily related to "needless EC2 instances". For example, the ASG can choose to terminate instances due to the AZRebalance process. This could lead to the ASG terminating instances with running tasks, which may not be what you want.
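As a rough sketch of how the two settings fit together (the group name, capacity provider name and ARN below are placeholders):
# Protect newly launched instances from ASG-initiated scale-in
aws autoscaling update-auto-scaling-group --auto-scaling-group-name ecs-asg --new-instances-protected-from-scale-in
# Create a capacity provider with managed scaling and managed termination protection enabled
aws ecs create-capacity-provider --name ecs-cp --auto-scaling-group-provider "autoScalingGroupArn=arn:aws:autoscaling:region:account-id:autoScalingGroup:...,managedScaling={status=ENABLED,targetCapacity=100},managedTerminationProtection=ENABLED"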

How can I scale EC2 instances in ASG in Zone Sequence

How can I make sure the ASG scales EC2 instances in the correct zone sequence? I.e. when I scale the ASG from 3 instances to 5 instances, it should have 2 nodes in Zone-A, 2 in Zone-B and 1 in Zone-C. But in our case, it ends up with 2 nodes in Zone-A, 1 node in Zone-B and 2 nodes in Zone-C.
AWS ASG launches new instances in all Availability Zones you enabled for that particular ASG. This is an extract from the official documentation:
Amazon EC2 Auto Scaling attempts to distribute instances evenly between the Availability Zones that are enabled for your Auto Scaling group. Amazon EC2 Auto Scaling does this by attempting to launch new instances in the Availability Zone with the fewest instances. If the attempt fails, however, Amazon EC2 Auto Scaling attempts to launch the instances in another Availability Zone until it succeeds
If you increase the desired capacity to, say, 9 (and you have 3 AZs), there's a high chance there will be 3 instances in each AZ.
There is no way to control which AZ the AutoScaling Group will launch instances in.
The only workaround I can think of is to create 1 ASG per AZ and then control the desired capacity on your own via a script instead of using a scaling policy, as sketched below. I would recommend making your application as ephemeral as possible, without zonal dependencies, so that instances can be added in any zone.
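A minimal sketch of that per-AZ approach, assuming three hypothetical groups named asg-zone-a, asg-zone-b and asg-zone-c:
#!/bin/bash
# Set an explicit desired capacity on each per-AZ Auto Scaling group instead of using a scaling policy
set_capacity() {
  aws autoscaling set-desired-capacity --auto-scaling-group-name "$1" --desired-capacity "$2"
}
set_capacity asg-zone-a 2
set_capacity asg-zone-b 2
set_capacity asg-zone-c 1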

How to debug EC2 instances where custom health checks fail

I have an auto-scaling group with EC2 instances that implement a custom health check.
From time to time, the health check fails and instances are terminated and replaced.
The health check itself is implemented as a shell script that runs on the instances. If it detects problems, it will inform the auto scaling group via the AWS API:
aws autoscaling set-instance-health --instance-id $instance --health-status Unhealthy
The problem is that I have no information about which check failed, besides the notification:
Cause: At 2017-06-13T09:11:47Z an instance was taken out of service in response to a user health-check
What is the recommended way to debug this type of problem? Is there a way to make AWS only stop instances and not terminate them, so their disk state could be inspected?
(First I thought about "enable termination protection", but from my understanding this will not make a difference here. The Auto Scaling group will still terminate the instances when the shutdown was requested by a failing custom health check.)
Using the set-instance-health command tells Auto Scaling that the instance is unhealthy and needs to be replaced. Auto Scaling will then terminate the unhealthy instance and launch a new instance to replace it.
If you wish to perform forensic analysis on an unhealthy instance, remove it from the Auto Scaling group with the aws autoscaling detach-instances command:
Removes one or more instances from the specified Auto Scaling group. After the instances are detached, you can manage them independent of the Auto Scaling group.
If you do not specify the option to decrement the desired capacity, Auto Scaling launches instances to replace the ones that are detached.
If there is a Classic Load Balancer attached to the Auto Scaling group, the instances are deregistered from the load balancer. If there are target groups attached to the Auto Scaling group, the instances are deregistered from the target groups.
So, instead of calling set-instance-health, call detach-instances (and optionally replace it). You can then debug the instance. If you wish to send it back into service, use aws autoscaling attach-instances.
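A rough sketch of that flow from inside the health-check script, reusing the $instance variable from above (the group name is a placeholder):
# Pull the instance out of the group for forensics instead of marking it unhealthy;
# without the decrement option, Auto Scaling launches a replacement automatically
aws autoscaling detach-instances --instance-ids "$instance" --auto-scaling-group-name my-asg --no-should-decrement-desired-capacity
# After debugging, send the instance back into service
aws autoscaling attach-instances --instance-ids "$instance" --auto-scaling-group-name my-asg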

OpsWorks load-based instance vs auto scaling group?

Does anyone know the difference between Automatic Load-based Scaling and having explicit auto scaling groups on OpsWorks?
this: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-autoscaling-loadbased.html
vs https://aws.amazon.com/blogs/devops/auto-scaling-aws-opsworks-instances/
With load-based instances, how does one add one to a target group?
Can you have multiple auto scaling groups in one layer of OpsWorks?
I’m looking at going with an ALB to route our traffic, which cannot act as an independent layer in Opsworks.
So I would need to pipe requests to one auto scaling group for one type of request and the rest to the other auto scaling group.
I just am not sure what load-based instances are, and I am perplexed that they do not come with a default number of machines to start with.
Which one should I use for ALB routing traffic between the two groups?
OpsWorks is a configuration management tool that utilises Chef to configure your infrastructure. OpsWorks takes a different approach to scaling out than an auto-scaling group.
Unlike with an auto-scaling group, you have these instances pre-defined on your OpsWorks stack (layer), and they are started when a certain metric (threshold) is triggered (CloudWatch data: CPU, memory, load, etc.).
OpsWorks will not spawn (create) any new instances; it is only capable of starting instances you have previously created and set as load-based instances. This is also only available for OpsWorks and cannot be used for any other service outside of OpsWorks.
AWS EC2 auto-scaling, on the other hand, can launch a very large number of instances (which do not need to be created beforehand) into your AWS environment, and like OpsWorks load-based scaling, it can be triggered by CloudWatch alarms (CPU, memory, load, etc.).
Auto-scaling is not available on OpsWorks by default, and there is no built-in way to have an auto-scaling group associated with your OpsWorks stack, but it's possible with a bit of work. Read about it here.
Let me divide the answer for you.
Does anyone know what the difference between Automatic Load-based Scaling vs having explicit auto scaling groups on OpsWorks is?
Automatic Load-based Scaling:
The Amazon OpsWorks service provides the feature of automatic load-based scaling, where you can add instances to a layer in your stack and set the scaling configuration policies directly.
Load-based scaling scales the instances up or down based upon the load thresholds you have set. You need to set the threshold parameters and define the scaling policies, as in the sketch below.
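For illustration, a hedged sketch of enabling load-based scaling on a layer with the AWS CLI (the layer ID and thresholds are placeholders):
# Start 2 instances when average CPU goes above 80%, stop 1 when it drops below 30%
# (wait and ignore times are in minutes)
aws opsworks set-load-based-auto-scaling --layer-id 12345678-aaaa-bbbb-cccc-123456789012 --enable --up-scaling "InstanceCount=2,ThresholdsWaitTime=5,IgnoreMetricsTime=5,CpuThreshold=80" --down-scaling "InstanceCount=1,ThresholdsWaitTime=5,IgnoreMetricsTime=5,CpuThreshold=30"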
Explicit Auto Scaling groups on OpsWorks:
The Amazon OpsWorks service also allows you to add existing instances to a layer in your stack. This means you can create an Auto Scaling launch configuration, set the scale-up and scale-down events based on load, create an Auto Scaling group and launch instances in it. Then you can go to OpsWorks and add these existing instances to the layer in your stack. So when the load increases or decreases beyond the thresholds you set, the Auto Scaling group handles the scaling.
With load-based instances, how does one add one to a target group?
Once you have the load-based instances ready, whether you launched them directly via Automatic Load-based Scaling in OpsWorks or explicitly via Auto Scaling groups, you can go to the Application Load Balancer in the EC2 console, configure it as necessary, and then register the load-based instances you have just created with the ALB in the Register targets tab.
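The same registration can also be done from the CLI; a small sketch (the target group ARN and instance IDs are placeholders):
# Register two instances with an ALB target group
aws elbv2 register-targets --target-group-arn arn:aws:elasticloadbalancing:region:account-id:targetgroup/my-targets/1234567890abcdef --targets Id=i-0123456789abcdef0 Id=i-0fedcba9876543210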
Can you have multiple auto scaling groups in one layer of OpsWorks?
Yes, you can have multiple auto scaling groups in one layer of OpsWorks.
Which one should I use for ALB routing traffic between the two groups?
You can use either group,
so that you can pipe requests to one auto scaling group for one type of request and the rest to the other auto scaling group.
Please refer to the Auto Scaling documentation.
I just am not sure what load-based instances are
Load-based instances are instances configured with a load-based scaling configuration. You need to set the thresholds, the configuration and the events that define when to scale up and scale down.
For example: suppose you have 5 instances running initially and you want your application to keep running even when the load increases, to minimize downtime. You would set the auto scaling configuration such that if the average CPU utilization of the instances rises above 70%, 2 more instances are launched. You can base scale-up and scale-down on many more factors. A rough sketch of such a policy is shown below.
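A hedged sketch of that CPU rule using an Auto Scaling step scaling policy plus a CloudWatch alarm (all names and the policy ARN are placeholders):
# Step scaling policy: add 2 instances whenever the attached alarm fires
aws autoscaling put-scaling-policy --auto-scaling-group-name my-asg --policy-name cpu-scale-out --policy-type StepScaling --adjustment-type ChangeInCapacity --step-adjustments MetricIntervalLowerBound=0,ScalingAdjustment=2
# Alarm on average CPU above 70% across the group, wired to the policy ARN returned by the command above
aws cloudwatch put-metric-alarm --alarm-name my-asg-high-cpu --namespace AWS/EC2 --metric-name CPUUtilization --dimensions Name=AutoScalingGroupName,Value=my-asg --statistic Average --period 300 --evaluation-periods 2 --threshold 70 --comparison-operator GreaterThanThreshold --alarm-actions arn:aws:autoscaling:region:account-id:scalingPolicy:...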
Hope it Helps:)