AWS EKS - use spot nodes with failover to on-demand

Could you please kindly suggest something?
I run a GPU-based workload, and my instances are g4dn.large. I would like to use spot instances, since on-demand costs a lot :-)
But it is very common that spot GPU instances are not available for long periods. Initially I configured the original AWS Cluster Autoscaler with priority-based scaling, so I had 2 node groups, one spot and one on-demand, and the autoscaler scaled the spot group first and, if capacity was not available, scaled the on-demand group.
But after some time all instances became on-demand, due to the lack of spot capacity.
The autoscaler tries to scale up the spot node group, there is no capacity, so it scales the on-demand group instead, the pod is running, happy time!
But there is no logic to retry spot capacity and rebalance the pods, so that my pod would be rescheduled onto a spot instance when one becomes available again. Yes, I can delete the on-demand node after some time, and if spot capacity can be fulfilled, a spot instance will be created.
I have tried Karpenter, and it seems to do some of the work, but not the way I would like. It is possible to configure node expiration in Karpenter, so that it will, for example, expire a node every 5 minutes. But the expiration logic doesn't distinguish between spot and on-demand. So if we have a spot instance, it will be expired. And if we have no spot capacity at the moment and we get an on-demand node, it will expire it after 5 minutes and try to get spot capacity again.
Can you please kindly suggest how I can achieve a setup where my EKS cluster has GPU instances, and if there is no spot capacity it creates an on-demand instance and keeps trying to obtain spot capacity, and when it succeeds it reschedules the pod onto the spot instance and terminates the on-demand one?
Any help will be extremely appreciated!

These should help:
https://github.com/weaveworks/eksctl/blob/main/examples/08-spot-instances.yaml
https://eksctl.io/usage/spot-instances/
https://eksctl.io/usage/schema/#nodeGroups-instancesDistribution
You can adjust onDemandBaseCapacity or onDemandPercentageAboveBaseCapacity to control how the node group fails over to on-demand capacity, based on your needs.
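If the fail-over alone doesn't move pods back once spot capacity returns, the "retry spot periodically, then retire the on-demand node" idea from the question can be scripted against the two node-group Auto Scaling groups. Below is a minimal sketch with boto3; the group names, region, and timings are placeholders, and draining the node before shrinking the on-demand group is left out.

```python
# Rough sketch of "keep retrying spot, then retire the on-demand node".
# Group names, region and timings are placeholders; pod draining and
# error handling are intentionally left out.
import time
import boto3

asg = boto3.client("autoscaling", region_name="eu-west-1")  # assumed region

SPOT_GROUP = "gpu-spot-ng"          # hypothetical spot node group ASG
ONDEMAND_GROUP = "gpu-ondemand-ng"  # hypothetical on-demand node group ASG


def in_service(group_name):
    """Count the instances of the group that are actually InService."""
    resp = asg.describe_auto_scaling_groups(AutoScalingGroupNames=[group_name])
    instances = resp["AutoScalingGroups"][0]["Instances"]
    return sum(1 for i in instances if i["LifecycleState"] == "InService")


while True:
    ondemand = in_service(ONDEMAND_GROUP)
    if ondemand > 0:
        spot_before = in_service(SPOT_GROUP)
        # Ask the spot group for one more node; if capacity exists it will join.
        asg.set_desired_capacity(AutoScalingGroupName=SPOT_GROUP,
                                 DesiredCapacity=spot_before + 1)
        time.sleep(300)  # give the spot request time to be fulfilled
        if in_service(SPOT_GROUP) > spot_before:
            # Spot is back: drain the on-demand node first (kubectl drain, not
            # shown), then shrink the on-demand group so the pod moves to spot.
            asg.set_desired_capacity(AutoScalingGroupName=ONDEMAND_GROUP,
                                     DesiredCapacity=ondemand - 1)
    time.sleep(600)  # retry every 10 minutes
```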

Related

AWS spot instances are evicted much sooner and more often than expected

I am trying to use AWS spot instances (m5.large in the eu-west-2 region) with a maximum bid equal to the price of on-demand instances. According to https://aws.amazon.com/ec2/spot/instance-advisor/ these instances should have a < 5% frequency of interruption; however, after launching 40 such instances this morning, I have found that within the hour 34 of them were evicted by AWS ("instance-terminated-no-capacity" according to the spot requests page on the EC2 dashboard).
This eviction rate looks much too high compared to both Amazon's own advisor and other users' experiences. Does anybody know what could be causing this behaviour, whether there is any better way to debug or predict it, or if this is just what I should expect from spot instances?
Thank you!
Actually, for the m5.large instance in the eu-west-2 (London) region it's a 5%-10% frequency of interruption, so you can expect up to 10%. I'm not saying that the issue you are facing is because of this.
AWS terminates your spot instances for any of these reasons:
The Spot price is above the maximum price.
There isn't enough capacity.
Amazon EC2 can't meet the constraints you placed on your Spot request.
In your case, since you are seeing the instance-terminated-no-capacity message, it is definitely the second reason. Since you asked for 40 such instances, the Amazon spot instance pool might not have had enough capacity at that time.
The capacity of the available spot instance pool depends on the demand for regular instances: when users ask for on-demand instances and there is not enough capacity, AWS will start terminating spot instances to fulfil those requests.
In my experience spot interruption is highly variable, and for some instance types it is more or less likely at different times of the day.
If you need 40 instances and they do not need to be in the same availability zone (AZ) as each other, you may reduce the chance of a mass interruption of all or most instances by spreading the machines across different AZs within the region, as each availability zone has its own pool of machines. You will, however, likely increase the chance that some individual machines will be interrupted.
Note this is not an option if you are using EMR; in that case they have to be in the same AZ.
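For what it's worth, one way to express that spread programmatically is an EC2 Fleet request with one launch-template override per subnet/AZ and a diversified allocation strategy. A rough sketch with boto3; the launch template name and subnet IDs are placeholders:

```python
# Sketch of requesting spot capacity spread across several AZs with EC2 Fleet.
# Launch template name/version and subnet IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

response = ec2.create_fleet(
    Type="request",
    TargetCapacitySpecification={
        "TotalTargetCapacity": 40,
        "DefaultTargetCapacityType": "spot",
    },
    SpotOptions={"AllocationStrategy": "diversified"},
    LaunchTemplateConfigs=[{
        "LaunchTemplateSpecification": {
            "LaunchTemplateName": "m5-large-worker",  # hypothetical template
            "Version": "$Latest",
        },
        # One override per subnet/AZ so the fleet can draw from every pool.
        "Overrides": [
            {"SubnetId": "subnet-aaaa1111", "InstanceType": "m5.large"},
            {"SubnetId": "subnet-bbbb2222", "InstanceType": "m5.large"},
            {"SubnetId": "subnet-cccc3333", "InstanceType": "m5.large"},
        ],
    }],
)
print(response["FleetId"])
```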

How to reduce the downtime while launching an EKS cluster?

I am trying to launch a Kubernetes cluster on EKS that will have multiple pods in it. Once a worker node is running its maximum number of pods, a new node launches and the extra pod is scheduled onto the new node. Launching a new node takes time and creates downtime that I want to reduce. A pod disruption budget is one option, but I am not sure how to use it together with scaling up nodes.
A simpler way to approach this would be to pre-define your scaling policies to scale up at reasonably lower limits. That way, say your server reaches 60% of capacity and triggers a scale-up: you have enough grace time to avoid downtime (the first node can handle requests while the second one bootstraps) and allow the new server to come up.
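As a hedged illustration of "scale up at reasonably lower limits", a target-tracking policy on the worker-node Auto Scaling group can hold average CPU around 60% so new nodes start before the existing ones are saturated. The group name below is a placeholder for whatever EKS created for your node group:

```python
# One way to express "scale up at ~60% instead of waiting for saturation":
# a target-tracking policy on the worker-node Auto Scaling group.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="eks-worker-nodes",        # hypothetical ASG name
    PolicyName="scale-before-saturation",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 60.0,   # start adding nodes well before 100% CPU
    },
)
```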

How to use an existing On-Demand instance in a spot fleet?

I'm trying to reduce my expenses and want to start using AWS's spot pricing service. I'm completely new to it, but as I understand it, I can have instances running for certain amounts of time based on the price, and they will eventually stop running under certain conditions. That's fine. I'm also aware you can have spot fleets, and in these fleets you can have an On-Demand instance for when the spot instance is interrupted.
I currently have an On-Demand instance that hosts an Elastic Beanstalk application (it's an API). Is there a way to use this instance inside the spot fleet, so that when a spot instance is available it serves my Elastic Beanstalk application, and when the spot instance is interrupted it goes back to using my current On-Demand instance until another spot instance is available?
Sadly, spot fleets don't work like this. If your spot instance gets terminated, no on-demand replacement is going to be created for you automatically. If it worked like this, everyone would be using spot instances, in my view.
The on-demand portion of your spot fleet is separate from the spot portion. This way your application will always run at minimum capacity (without spot). When spot is available, your spot instances will run alongside your on-demand ones. This way you will have more computational power for cheap, which is very beneficial for any heavy-processing application (e.g. batch image processing).
Details of how spot fleet and spot instances work are in How Spot Fleet works and How Spot Instances work docs.
Nevertheless, if you would like to have such replacement provisioned you would have to develop a custom solution for that.
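To make that on-demand/spot split concrete, here is a rough boto3 sketch of a Spot Fleet request that keeps one on-demand instance as the always-on base while the rest of the target capacity is filled from spot. The IAM role ARN and launch template details are placeholders:

```python
# Sketch of a Spot Fleet request with a small on-demand base running alongside
# spot capacity (the "on-demand portion" described above). ARNs and the launch
# template name are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        "IamFleetRole": "arn:aws:iam::123456789012:role/spot-fleet-role",  # placeholder
        "TargetCapacity": 4,            # total capacity the fleet tries to keep
        "OnDemandTargetCapacity": 1,    # portion that is always on-demand
        "LaunchTemplateConfigs": [{
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "api-server",   # hypothetical template
                "Version": "$Latest",
            },
        }],
        "ReplaceUnhealthyInstances": True,
    },
)
```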
There's a third-party solution called Spot.io that not only replaces the spot instance with an on-demand instance in a scenario like the one you describe, but also has an algorithm that anticipates the interruption event and stands up an on-demand instance so it is ready before the interruption occurs.

Reduce price on AWS (EC2 and spot instances)

I have a queue of jobs and AWS EC2 instances running that process the jobs. We have an Auto Scaling group for each c4.* instance type, in spot and on-demand versions.
Each instance has a power, which is a number equal to the instance's number of CPUs (for example, c4.large has power=2 since it has 2 CPUs).
The exact power we need is calculated simply from the number of jobs in the queue.
I would like to implement an algorithm that periodically checks the number of jobs in the queue and changes the desired capacity of the particular Auto Scaling groups via the AWS SDK, to save as much money as possible while maintaining the total power of instances needed to keep the jobs processed.
In particular:
I prefer spot instances to on-demand since they are cheaper.
EC2 instances are charged per hour; we would like to turn off an instance only at the very last minute of its one-hour uptime.
We would like to replace on-demand instances with spot instances when possible. So, at 55 min increase the spot group, at 58 min check that the new spot instance is running, and if so, decrease the on-demand group.
We would like to replace spot instances with on-demand ones if the bid would be too high: just turn off the spot one and turn on an on-demand one.
The problem seems really difficult to handle. Does anybody have experience with this or a similar solution implemented?
You could certainly write your own code to do this, effectively telling your Auto Scaling groups when to add/remove instances.
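As a minimal sketch of that custom code (queue length in, desired capacity out), assuming an SQS job queue: the queue URL, group name, and the jobs-per-CPU ratio are made up, and the spot/on-demand hand-off from the question is not shown.

```python
# Minimal sketch: translate queue backlog into required power and push it to
# the spot group first. Names and ratios below are assumptions.
import math
import boto3

sqs = boto3.client("sqs")
asg = boto3.client("autoscaling")

QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/jobs"  # placeholder
POWER_PER_INSTANCE = 2   # e.g. c4.large has 2 CPUs
JOBS_PER_CPU = 1         # assumed: one job per CPU at a time


def backlog():
    """Number of jobs currently waiting in the queue."""
    attrs = sqs.get_queue_attributes(
        QueueUrl=QUEUE_URL,
        AttributeNames=["ApproximateNumberOfMessages"],
    )
    return int(attrs["Attributes"]["ApproximateNumberOfMessages"])


def rebalance():
    needed_power = backlog() * JOBS_PER_CPU
    needed_instances = math.ceil(needed_power / POWER_PER_INSTANCE)
    # Prefer the cheaper spot group; a fuller version would check how much of
    # this was actually fulfilled and top up the on-demand group with the rest.
    asg.set_desired_capacity(AutoScalingGroupName="c4-large-spot",
                             DesiredCapacity=needed_instances)
```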
Also, please note that a good strategy for lowering costs with Spot Instances is to appreciate that the price for a spot instance varies by:
Region
Availability Zone
Instance Type
So, if the spot price for a c4.xlarge goes up in one AZ, it might still be the same cost in another AZ. Also, the price of a c4.2xlarge might then be lower than a c4.xlarge, with twice the power.
Therefore, you should aim to diversify your spot instances across multiple AZs and multiple instance types. This means that spot price changes will impact only a small portion of your fleet rather than all of it at once.
You could use Spot Fleet to assist with this, or even third-party products such as SpotInst.
It's also worth looking at AWS Batch (not currently available in every region), which is designed to intelligently provide capacity for batch jobs.
Auto Scaling groups allow you to use alarms and metrics that are defined outside of the Auto Scaling group itself.
If you are using SQS, you should be able to set up an alarm on your queue and use that to scale your group up and down.
If you are using a custom queue system, you can push metrics to CloudWatch to create a similar alarm.
You can control how often scaling actions occur, but it may be difficult to get the run time to exactly one hour.
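For the custom queue case mentioned above, publishing the backlog as a custom CloudWatch metric looks roughly like this (the namespace and metric name are made up):

```python
# Publish the job backlog as a custom metric so a CloudWatch alarm on it can
# scale the Auto Scaling group. Namespace/metric names are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")


def publish_backlog(jobs_in_queue: int) -> None:
    cloudwatch.put_metric_data(
        Namespace="JobProcessing",            # hypothetical namespace
        MetricData=[{
            "MetricName": "QueueBacklog",
            "Value": float(jobs_in_queue),
            "Unit": "Count",
        }],
    )


publish_backlog(42)  # an alarm on JobProcessing/QueueBacklog then drives scaling
```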

How to autoscale EMR task instances

I am using EMR with task instance groups as spot instances. I want to always maintain a minimum number of task instances.
Meaning, whenever EMR terminates task instances because the spot price goes higher than the bid we set, my application should launch another task instance with a slightly higher bid price.
My research:
Use CloudWatch to notify when the threshold is breached, and auto-scale task instances. But from what I have studied, there is no concept of auto-scaling in EMR.
Use CloudWatch to notify SQS when the threshold is breached, and have a service that is always consuming from the queue and expands the task instances.
Questions
Is there any auto-scaling available in EMR? If there is, then my effort reduces to just setting the threshold and the corresponding task-instance expansion action.
If you have any other approach to solving this problem, please suggest it.
How Spot Prices Work
When an Amazon EC2 instance is launched with a spot price (including when launched from Amazon EMR), the instance will start if the current spot price is below the provided bid price. If the spot price rises above the bid price, the instance is terminated. Instances are only charged the current spot price.
Therefore, the logic of launching a new spot instance with a "little higher bid price" is not necessary. The instance will always be charged the current spot price, so simply bid as high as you are willing to pay for a spot instance. You will either pay less than the spot price (great!) or your instance will be terminated because the price has gone higher than you are willing to pay (in which case you don't want to pay a "little higher" for the instance).
If you wish to "maintain a minimum number of task instances" at all times, then either pay the normal EMR charge (which means the instances won't be terminated) or bid a particularly large price for the spot instances, such as 2x the normal price. Yes, you might occasionally pay more for instances, but on average your price will be quite low.
If you wish to be particularly sneaky, you could bid up to the normal price for the EC2 instances then, if instances are terminated, launch more task nodes without using spot pricing. That way, your instances won't be terminated and you won't pay more than the normal EC2 price. However, you would have to terminate and replace those instances when the spot price drops, otherwise you are paying too much. That's why it might be better just to provide a high bid price on your spot instances.
Bottom line: Use spot pricing, but bid a high price. You'll get a good price most of the time.
AWS EMR does not have an autoscaling option available, but you can use a workaround and integrate autoscaling using AWS SQS. This is a rough picture of what you can integrate.
Launch your EMR cluster using spot instances.
Set up an SQS queue and create 3 triggers: one for the CPU threshold, a second for the EC2 spot instance termination notice, and a third for changing the spot instance bid prices.
So if the CPU usage increases, SQS will trigger an event to launch a new instance into the cluster; if there is a spot instance termination notice, SQS will trigger the launch of another instance to balance the load and send an event to change the bid price so that another spot instance can be launched. (This is just a rough sketch, but I guess you will understand the logic.)
This is the guide to autoscaling based on AWS SQS:
https://docs.aws.amazon.com/autoscaling/latest/userguide/as-using-sqs-queue.html
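The "service that is always consuming and expands the task instances" could, at its core, be a small consumer that resizes the task instance group through the EMR API. A rough sketch; the cluster and instance group IDs are placeholders:

```python
# Rough sketch of the scale-out part of this workaround: on a scale-out
# message, bump the task instance group by one. IDs are placeholders.
import boto3

emr = boto3.client("emr")


def scale_out_task_group(cluster_id="j-XXXXXXXXXXXXX",
                         instance_group_id="ig-XXXXXXXXXXXXX",
                         current_count=4):
    emr.modify_instance_groups(
        ClusterId=cluster_id,
        InstanceGroups=[{
            "InstanceGroupId": instance_group_id,
            "InstanceCount": current_count + 1,   # add one task instance
        }],
    )
```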
As has been correctly pointed out, the EMR API provides all the necessary ingredients to 1) collect monitoring data, and 2) programmatically scale the cluster up and down.
Basically, there are two main options to implement autoscaling for EMR clusters:
Autoscaling Loop: a process that runs on a server and continuously monitors the cluster for its current load. Performance metrics (memory, CPU, I/O, etc.) can be collected at regular intervals and stored in a database. Autoscaling rules are evaluated against the performance metrics, and the cluster's task nodes are scaled up or down if required.
Event-Based Autoscaling: using CloudWatch metrics (e.g., metrics for EMR or EC2), you can programmatically define triggers that are fired under certain conditions (for instance, add nodes if the average CPUUtilization of all nodes exceeds 80%).
Both options have their pros and cons. The main advantage of option 2 is that it is a server-less approach (it does not require you to run your own server). Option 1, on the other hand, does require a server, but in return gives you more control to customize the logic of your scaling rules. It also allows you to keep searchable records of the history of the scaling decisions.
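For option 2, one hedged example of such a trigger is a CloudWatch alarm on one of EMR's cluster metrics that notifies an SNS topic whose subscriber adds task nodes; the cluster ID and topic ARN below are placeholders, and the CPU example above could be wired up the same way:

```python
# Example trigger for the event-based option: alarm when available YARN memory
# drops low, then let the alarm's SNS subscriber add task nodes.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="emr-scale-out",
    Namespace="AWS/ElasticMapReduce",
    MetricName="YARNMemoryAvailablePercentage",
    Dimensions=[{"Name": "JobFlowId", "Value": "j-XXXXXXXXXXXXX"}],  # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=15.0,
    ComparisonOperator="LessThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:emr-scaling"],  # placeholder
)
```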
You could take a look at Themis, an EMR autoscaling framework developed at Atlassian. Themis implements the autoscaling loop as discussed in option 1 above. Current features include proactive as well as reactive autoscaling, support for spot/on-demand task nodes, it comes with a Web UI, and the tool is very easy to configure.
I have had a similar problem, and I wanted to share one possible alternative. I have written a Java tool to dynamically resize an EMR cluster during the processing. It might help you. Check it out at:
http://www.lopakalogic.com/articles/hadoop-articles/dynamically-resize-emr/
The source code is available on GitHub.