What is the difference between an Instance and an Instance group - google-cloud-platform

I was wondering what the difference between an instance and an instance group is.
Can anyone explain the difference to me?
Why can we use Autoscaler with an instance group but not with an instance?

In GCE, an instance is a single virtual machine that you can customise (CPU, network, endpoint, disk, etc.) and manage yourself (shut down, run, rebuild with a new image, etc.)
https://cloud.google.com/compute/docs/instances/
An instance is a virtual machine hosted on Google's infrastructure.
Instances can run the Linux images provided by Google, or any
customized versions of these images. You can also build and run images
of other operating systems.
An instance group, by contrast, is a collection of such instances, grouped for easier management, e.g. one group of slow instances and one of fast instances.
Autoscaler is something that you can use with a Managed Instance Group. This is a group created by the Instance Group Manager from an instance template.
https://cloud.google.com/compute/docs/instance-groups/
A Compute Engine Autoscaler automatically adds or removes virtual
machines from a managed instance group based on increases or decreases
in load. This allows your applications to gracefully handle increases
in traffic and reduces cost when the need for resources is lower. You
just define the autoscaling policy and the autoscaler performs
automatic scaling based on the measured load.
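To make the distinction concrete, here is a minimal gcloud sketch (the template/group names, zone, sizes and CPU target are made-up placeholders) that creates a managed instance group from a template and attaches an autoscaler to it:

    # Template describing what every instance in the group should look like
    gcloud compute instance-templates create web-template \
        --machine-type=e2-medium \
        --image-family=debian-11 --image-project=debian-cloud

    # Managed instance group of 2 identical instances built from the template
    gcloud compute instance-groups managed create web-group \
        --template=web-template --size=2 --zone=us-central1-a

    # Autoscaler: keep 2-10 instances, targeting 60% average CPU utilisation
    gcloud compute instance-groups managed set-autoscaling web-group \
        --zone=us-central1-a \
        --min-num-replicas=2 --max-num-replicas=10 \
        --target-cpu-utilization=0.6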
Hope it helps.
Cheers.

Related

Choosing Specific instances for scaling in and scaling out in AWS

Using Auto Scaling with a load balancer in AWS, we can do the following things, according to my understanding:
we can scale up and scale down according to load.
all instances have the same image.
But I have a different problem:
if we have less load then we should terminate a big machine and start a small machine, and vice versa.
the small machine and the big machine have different images
but I can't find any way to do this in the AWS UI.
Can anyone help me on this issue?
Amazon EC2 Auto Scaling can launch new instances and can terminate instances. It only ever adds or removes instances -- it never changes the size of an instance. This is why you'll often see it referred to as "scale-out and scale-in" rather than "scale-up and scale-down".
When a scaling policy is triggered and Auto Scaling needs to launch a new instance, it uses the provided Launch Configuration or Launch Template to determine what type of instance to launch, which network to use, etc.
Therefore, an Auto Scaling group typically consists of instances of the same size, since they are all launched from the same Launch Configuration. This is actually a good thing because it makes it easier for the scaling alarms to know when to add/remove instances, and it also helps Load Balancers distribute load amongst instances, since they assume that all instances are sized the same.
So, rather than "terminate a big machine and start a small machine and vice versa", Auto Scaling simply launches the same sized instance or terminates an instance.
Also, all instances should use the same AMI since load balancers will send traffic to every instance, expecting them to behave the same way.
You could, if you wish, modify the Launch Configuration associated with the Auto Scaling group so that, when it next launches an instance, it launches a different-sized instance. However, Auto Scaling and Load Balancers will not 'know' that it is a different-sized instance.
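For illustration, the switch itself is a single call (group and launch configuration names here are hypothetical):

    # Future launches use the new configuration; running instances are unchanged
    aws autoscaling update-auto-scaling-group \
        --auto-scaling-group-name my-asg \
        --launch-configuration-name my-new-launch-config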
Basically John answered this question.
As an alternative, you can implement more sophisticated scaling logic in any compute resource. For example, a CloudWatch alarm can send an SNS notification, which a Lambda function reads and then scales in or out using whatever sophisticated logic you have (big or small instances, etc.).
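A sketch of the first link in that chain, assuming an SNS topic that a Lambda function subscribes to (the alarm, group and topic names and the account ARN are placeholders):

    # Notify an SNS topic when the group's average CPU stays above 80%
    aws cloudwatch put-metric-alarm \
        --alarm-name my-asg-high-cpu \
        --namespace AWS/EC2 --metric-name CPUUtilization \
        --dimensions Name=AutoScalingGroupName,Value=my-asg \
        --statistic Average --period 300 --evaluation-periods 2 \
        --threshold 80 --comparison-operator GreaterThanThreshold \
        --alarm-actions arn:aws:sns:us-east-1:123456789012:scaling-topic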

How to use spot instances with Amazon Elastic Beanstalk?

I have an infrastructure that uses Amazon Elastic Beanstalk to deploy my application.
I need to scale my app by adding some spot instances, which EB does not support.
So I created a second Auto Scaling group from a launch configuration with spot instances.
This Auto Scaling group uses the same load balancer created by Beanstalk.
To bring up instances with the latest version of my app, I copy the user data from the original launch configuration (created by Beanstalk) to the launch configuration with spot instances (created by me).
This works fine, but:
how do I update the spot instances brought up by the second Auto Scaling group when Beanstalk updates the instances it manages with a new version of the app?
is there another way, as easy and elegant, to use spot instances and enjoy the benefits of Beanstalk?
UPDATE
Elastic Beanstalk has supported spot instances since November 2019; see:
https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html
I was asking this myself and found a built-in solution in Elastic Beanstalk. It was described here as follows:
Add a file under the .ebextensions folder (for our setup we've named the file spot_instance.config; the .config extension is important) and paste in the content available below:
https://gist.github.com/rahulmamgain/93f2ad23c9934a5da5bc878f49c91d64
The value for EC2_SPOT_PRICE can be set through the Elastic Beanstalk environment configuration. To disable the usage of spot instances, just delete the variable from the environment settings.
If the environment already exists and the above settings are updated, the older Auto Scaling group will be destroyed and a new one created.
The environment then submits a request for spot instances which can be seen under Spot Instances tab on the EC2 dashboard.
Once the request is fulfilled the instance will be added to the new cluster and auto scaling group.
You can use the Spot Instance Advisor tool to ascertain the best price for the instances in use.
A price point of 30% of the original price seems like a decent level.
I personally would just use the on-demand price for the given instance type, since this price is the upper bound of what you would be willing to pay. This reduces the likelihood of being outbid and thus having your instances terminated.
This might not be the best approach for production systems, as it is not possible to split capacity between a number of on-demand instances and an additional number of spot instances, and there is a small chance that no spot instances will be available because someone else is buying up the whole market with high bids.
For production use cases I would look into https://github.com/AutoSpotting/AutoSpotting, which actively manages all your auto-scaling groups and tries to meet the balance between the lowest prices and a configurable number or percentage of on-demand instances.
As of 25th November 2019, AWS natively supports using Spot Instances with Beanstalk.
Spot instances can be enabled in the console by going to the desired Elastic Beanstalk environment, then selecting Configuration > Capacity and changing the Fleet composition to "Spot instance enabled".
There you can also set options such as the On-Demand vs Spot percentage and the instance types to use.
More information can be found on the Beanstalk Auto Scaling Group support page.
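The same settings can also be applied from the CLI through the aws:ec2:instances option namespace; a sketch with a placeholder environment name and example percentages:

    # Enable spot, keep 1 on-demand instance as a base,
    # then 50% on-demand for capacity above that base
    aws elasticbeanstalk update-environment \
        --environment-name my-env \
        --option-settings \
        Namespace=aws:ec2:instances,OptionName=EnableSpot,Value=true \
        Namespace=aws:ec2:instances,OptionName=SpotFleetOnDemandBase,Value=1 \
        Namespace=aws:ec2:instances,OptionName=SpotFleetOnDemandAboveBasePercentage,Value=50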
Here at Spotinst, we were dealing with exactly that dilemma for our customers.
As Elastic Beanstalk creates a whole stack of services (load balancers, ASGs, a Route 53 access point, etc.) that are tied together, it isn't a simple task to manage Spots within it.
After a lot of research, we figured that removing the ASG would always be prone to errors, as keeping the configuration intact gets complex. Instead, we simply replicate the ASG and let our Elastigroup and the ASG live side by side, with all the scaling policies affecting only the Elastigroup and the ASG configuration updates feeding into it as well.
With the instances running inside Elastigroup, you achieve managed Spot instances with full SLA.
Some of the benefits of running your Spot instances in Elastigroup include:
1) Our algorithm makes live choices for the best Spot markets in terms of price and availability whenever new instances spin up.
2) When an interruption happens, we predict it about 15 minutes in advance and take all the necessary steps to ensure (and insure) the capacity of your group.
3) In the extreme case that none of the markets have Spot availability, we simply fall back to an on-demand instance.
Since AWS clearly states that Beanstalk does not support spot instances out of the box, you need to tinker a bit. My customer wanted both a mixed environment (on-demand + spot) and a full-spot one. What I created for the customer was the following (I had access to the GUI only):
For the mixed env:
start the environment with a regular instance;
copy the respective launch configuration and choose spot instances during the process;
edit the Auto Scaling group and choose the launch configuration you just created; be sure to change the Termination Policy to NewestInstance.
Such a setup will allow you to have a basic on-demand fleet (not terminable) plus some extra spots if required, e.g., during higher-than-usual traffic. Remember that if you terminate the environment and recreate it, all of your edits will be removed.
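The termination-policy change from the last step can also be made from the CLI (the group name is a placeholder):

    # Terminate the newest (spot) instances first when scaling in
    aws autoscaling update-auto-scaling-group \
        --auto-scaling-group-name my-asg \
        --termination-policies NewestInstance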
For the full-spot env:
similar steps as before, with one difference: terminate the running instance and wait for the ASG to launch a new one. If you want to do it without downtime, just add an extra instance to the Desired number, wait for it to launch and then terminate the on-demand one.

Single instance on Google Cloud to a group of instances

I have one instance on Google Cloud, but the CPU usage is over 99%. How can I scale, or move that instance into a group? I am using Node.js and MySQL. Is this possible?
You have different options to create an instance group. You can use an unmanaged instance group to combine dissimilar instances that you can arbitrarily add and remove.
Or you can create an instance template and use a managed instance group of identical instances, where you will be able to autoscale the number of instances.
But to give you a better answer, could you explain your application a little? Because if the only issue is CPU usage, you can change the number of CPUs with the edit button once the instance has stopped.
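If resizing the single instance is enough, a rough CLI equivalent looks like this (instance name, zone and machine type are placeholders; the instance must be stopped first):

    # Stop the instance, change its machine type, then start it again
    gcloud compute instances stop my-instance --zone=us-central1-a
    gcloud compute instances set-machine-type my-instance \
        --zone=us-central1-a --machine-type=e2-standard-4
    gcloud compute instances start my-instance --zone=us-central1-a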

What is the kill behaviour for resizing an instance group in Google Cloud Platform?

If I have an instance group with 10 machines and I resize it to 9, what determines which instance will go down? Is it FIFO, LIFO, or random? Is it possible to configure this behaviour?
The Instance Group Manager, when resized, arbitrarily chooses the VMs that will get deleted first. It takes into consideration aspects like:
Provisioning status, as it's better to delete a VM that is not yet ready/serving
Health of the VM, as it's better to delete an unhealthy VM rather than a serving one
Version (instance template) the VM is based on, to prefer converging to the desired configuration of target versions
Additional aspects and their relative priority are subject to change as new features are added to Managed Instance Groups.
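You cannot configure the selection order for a plain resize, but you can make the choice yourself by deleting a specific instance, which also shrinks the group; a sketch with placeholder names:

    # Let the Instance Group Manager pick which VM to remove
    gcloud compute instance-groups managed resize my-group \
        --size=9 --zone=us-central1-a

    # Or remove a specific VM yourself (this also reduces the group size by one)
    gcloud compute instance-groups managed delete-instances my-group \
        --instances=my-instance-3 --zone=us-central1-a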

Is there a way to STOP not TERMINATE instances using auto-scaling in AWS?

I am looking at using AWS auto-scaling to scale my infrastructure up and down based on various performance metrics (CPU, etc.). I understand how to set this up; however, I don't like that instances are terminated rather than stopped when it is scaled down. This means that when I scale back up, I have to start from scratch with a new instance and re-install my software, etc. I'd rather just start/stop my instances as needed rather than create/terminate. Is there a way to do this?
No, it is not possible to Stop an instance under Auto Scaling. When a Scaling Policy triggers the removal of an instance, Auto Scaling will always Terminate the instance.
However, here's some ideas to cope with the concept of Termination...
Option 1: Use pre-configured AMIs
You can configure an Amazon EC2 instance with your desired software, data and settings. Then, select the EC2 instance in the Management Console and choose the Create Image action. This will create a new Amazon Machine Image (AMI). You can then configure Auto Scaling to use this AMI when launching a new instance. Each new instance will contain exactly the same disk contents.
It's worth mentioning that EBS-backed instances start up very quickly from an AMI: instead of copying the whole AMI to the boot disk up front, blocks are copied across on first access. This means the new instance can start immediately rather than waiting for the whole disk to be copied.
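For example (instance ID and image name are hypothetical), baking such an AMI from an already-configured instance:

    # Create an AMI from a configured instance; the returned ImageId can then
    # be referenced in the Launch Configuration / Launch Template
    aws ec2 create-image \
        --instance-id i-0123456789abcdef0 \
        --name my-baked-app-v1 \
        --description "Instance with app pre-installed"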
Option 2: Use a startup (User Data) script
Each Amazon EC2 instance has a User Data field, which is accessible from the instance. A script can be passed through the User Data field, which is then executed when the instance starts. The script could be used to install software, download data and configure the instance.
The script could do something very simple, like download a configuration script from a source code repository, then execute the script. This means that machine configuration can be centrally managed and version-controlled. Want to update your app? Just launch a new instance with the updated script and throw away the old instance (which is much easier than "updating" an app).
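A minimal sketch of such a User Data script (the repository URL and script name are placeholders):

    #!/bin/bash
    # Runs at first boot: fetch the real, version-controlled configuration
    # script from a repository and execute it
    yum install -y git
    git clone https://github.com/example/config-scripts.git /opt/config
    /opt/config/bootstrap.sh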
Option 3: Add/Remove instances to an Auto Scaling group
Rather than using Scaling Policies to Launch/Terminate instances for an Auto Scaling group, it is possible to attach/detach specific instances. Thus, you could 'simulate' auto scaling:
When you want to scale-down, detach an instance from the Auto Scaling group, then stop it.
When you want to add an instance, start the instance then attach it to the Auto Scaling group.
This would require your own code, but it is very simple (basically two API calls). You would be responsible for keeping track of which instance to attach/detach.
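The two API calls, sketched with the CLI and placeholder identifiers:

    # Scale in: detach the instance (lowering desired capacity), then stop it
    aws autoscaling detach-instances \
        --auto-scaling-group-name my-asg \
        --instance-ids i-0123456789abcdef0 \
        --should-decrement-desired-capacity
    aws ec2 stop-instances --instance-ids i-0123456789abcdef0

    # Scale out: start the instance, then attach it to the group again
    aws ec2 start-instances --instance-ids i-0123456789abcdef0
    aws autoscaling attach-instances \
        --auto-scaling-group-name my-asg \
        --instance-ids i-0123456789abcdef0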
You can suspend scaling processes, see documentation here:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html#as-suspend-resume
Add scale-in protection to the instance and then stop it; Auto Scaling will not terminate the instance because it has scale-in protection.
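Sketched with the CLI (identifiers are placeholders); note that scale-in protection does not cover health-check replacement, so you may also need to suspend the health check processes while the instance is stopped:

    # Protect the instance from scale-in, then stop it
    aws autoscaling set-instance-protection \
        --auto-scaling-group-name my-asg \
        --instance-ids i-0123456789abcdef0 \
        --protected-from-scale-in
    aws ec2 stop-instances --instance-ids i-0123456789abcdef0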
Actually, you have three official AWS options to reboot or even stop an instance that belongs to an Auto Scaling group:
Put the instance into the Standby state
Detach the instance from the group
Suspend the health check process
Ref.: https://aws.amazon.com/premiumsupport/knowledge-center/reboot-autoscaling-group-instance/
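The standby route, for instance, looks roughly like this (identifiers are placeholders; reverse it later with exit-standby):

    # Move the instance to Standby (decrementing desired capacity so it is
    # not replaced), then stop it
    aws autoscaling enter-standby \
        --auto-scaling-group-name my-asg \
        --instance-ids i-0123456789abcdef0 \
        --should-decrement-desired-capacity
    aws ec2 stop-instances --instance-ids i-0123456789abcdef0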
As of April 2021:
Option 4: Use Warm Pools and an Instance Reuse Policy
By default, Amazon EC2 Auto Scaling terminates your instances when your Auto Scaling group scales in. Then, it launches new instances into the warm pool to replace the instances that were terminated.
If you want to return instances to the warm pool instead, you can specify an instance reuse policy. This lets you reuse instances that are already configured to serve application traffic.
This mostly automates option 3 from John's answer.
Release announcement: https://aws.amazon.com/blogs/compute/scaling-your-applications-faster-with-ec2-auto-scaling-warm-pools/
Documentation: https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-warm-pools.html
This is to expand a little on #mwalling's answer, because that is the right direction, but needs a little extra work to prevent instance termination.
There is now a way to stop or hibernate scaled-in instances!
By default, AWS Auto Scaling's scale-in behaviour is to terminate an instance even if you have a warm pool configured; Auto Scaling will create a fresh instance to put into the warm pool, presumably to make sure you start with a fresh machine every time. However, with an instance reuse policy you can make AWS Auto Scaling either stop or hibernate a running instance and store that instance in the warm pool.
Advantages include:
Local caches stay populated (use hibernate for an in-memory cache).
Burstable EC2 instances (the T* types) keep their built-up burst credits, unlike newly created instances, which have limited or no credits.
Practical example:
We use a burstable EC2 instance for CI/CD work that we scale to 0 instances outside working hours. With a reuse policy, our local image repository stays populated with the most important Docker images. We also keep the burst credit built up from the previous day, which speeds up the automatic jobs we run first thing every morning.
How to implement:
There's currently no way of doing this completely via the management console, so you will need to use the AWS CLI or an SDK.
First create a warm pool as described in the AWS Documentation
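Creating the pool might look like this (the group name is a placeholder):

    # Create a warm pool of stopped, pre-initialized instances
    # (Stopped is also the default pool state)
    aws autoscaling put-warm-pool \
        --auto-scaling-group-name my-asg \
        --pool-state Stopped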
Then execute this command to add a reuse policy:
    aws autoscaling put-warm-pool --auto-scaling-group-name <Name-of-autoscaling-group> --instance-reuse-policy ReuseOnScaleIn=true
Reference docs for the command: AWS CLI Autoscaling put-warm-pool documentation
Flow diagram of possible life cycles of EC2 instances (image from the AWS documentation: Lifecycle state transitions for instances in a warm pool).