Reduce price on AWS (EC2 and spot instances) - amazon-web-services

I have a queue of jobs and running AWS EC2 instances which process the jobs. We have an Auto Scaling group for each c4.* instance type, in spot and on-demand versions.
Each instance has a power rating equal to its number of CPUs (for example, c4.large has power=2 since it has 2 CPUs).
The exact power we need is calculated directly from the number of jobs in the queue.
I would like to implement an algorithm which periodically checks the number of jobs in the queue and changes the desired capacity of the relevant Auto Scaling groups via the AWS SDK, to save as much money as possible while maintaining enough total instance power to keep the jobs processed.
Specifically:
I prefer spot instances to on-demand since they are cheaper.
EC2 instances are charged per hour, so we would like to turn an instance off only in the very last minutes of its one-hour uptime.
We would like to replace on-demand instances with spot instances when possible. So, at 55 min, increase the spot group; at 58 min, check that the new spot instance is running and, if so, decrease the on-demand group.
We would like to replace spot instances with on-demand if the bid would be too high: just turn off the spot one and turn on the on-demand one.
The problem seems really difficult to handle. Does anybody have experience with this, or a similar solution implemented?

You could certainly write your own code to do this, effectively telling your Auto Scaling groups when to add/remove instances.
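A minimal sketch of such a controller in Python with boto3, assuming an SQS job queue and hypothetical group names (spot-asg, ondemand-asg); the queue URL, power-per-instance value, and jobs-per-power ratio are placeholders you would tune to your workload:

import boto3

sqs = boto3.client('sqs')
asg = boto3.client('autoscaling')

QUEUE_URL = 'https://sqs.eu-west-1.amazonaws.com/123456789012/jobs'  # hypothetical
POWER_PER_INSTANCE = 2    # e.g. c4.large has 2 vCPUs
JOBS_PER_POWER_UNIT = 10  # assumed ratio, tune to your workload

def required_instances():
    # Derive the number of instances needed from the queue backlog.
    attrs = sqs.get_queue_attributes(
        QueueUrl=QUEUE_URL,
        AttributeNames=['ApproximateNumberOfMessages'])
    backlog = int(attrs['Attributes']['ApproximateNumberOfMessages'])
    power = -(-backlog // JOBS_PER_POWER_UNIT)   # ceiling division
    return -(-power // POWER_PER_INSTANCE)

def in_service_count(group_name):
    # Count instances that are actually up in the given group.
    group = asg.describe_auto_scaling_groups(
        AutoScalingGroupNames=[group_name])['AutoScalingGroups'][0]
    return sum(1 for i in group['Instances']
               if i['LifecycleState'] == 'InService')

def rebalance():
    # Prefer the cheaper spot group; cover any shortfall with on-demand.
    needed = required_instances()
    asg.set_desired_capacity(AutoScalingGroupName='spot-asg',
                             DesiredCapacity=needed)
    shortfall = max(0, needed - in_service_count('spot-asg'))
    asg.set_desired_capacity(AutoScalingGroupName='ondemand-asg',
                             DesiredCapacity=shortfall)

You would run rebalance() from a scheduled job every few minutes; the 55/58-minute replacement dance would sit on top of this, keyed off each instance's LaunchTime.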
Also, please note that a good strategy for lowering costs with Spot Instances is to appreciate that the price for a spot instance varies by:
Region
Availability Zone
Instance Type
So, if the spot price for a c4.xlarge goes up in one AZ, it might still be the same cost in another AZ. Also, the price of a c4.2xlarge might then be lower than a c4.xlarge, with twice the power.
Therefore, you should aim to diversify your spot instances across multiple AZs and multiple instance types. This means that spot price changes will impact only a small portion of your fleet rather than all of it at once.
You could use Spot Fleet to assist with this, or even third-party products such as SpotInst.
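As a hedged illustration, a diversified Spot Fleet request via boto3 might look like the following; the AMI, IAM fleet role, prices, and weights are placeholders:

import boto3

ec2 = boto3.client('ec2')

# Spread the fleet across two instance types and two AZs; the
# 'diversified' strategy allocates evenly across all four pools.
response = ec2.request_spot_fleet(SpotFleetRequestConfig={
    'IamFleetRole': 'arn:aws:iam::123456789012:role/fleet-role',  # hypothetical
    'AllocationStrategy': 'diversified',
    'TargetCapacity': 16,             # expressed in capacity units
    'SpotPrice': '0.20',              # fleet-wide default bid, placeholder
    'LaunchSpecifications': [
        {'ImageId': 'ami-12345678',   # hypothetical AMI
         'InstanceType': itype,
         'WeightedCapacity': weight,  # "power": vCPUs per instance
         'Placement': {'AvailabilityZone': az}}
        for itype, weight in (('c4.xlarge', 4.0), ('c4.2xlarge', 8.0))
        for az in ('eu-west-1a', 'eu-west-1b')
    ],
})
print(response['SpotFleetRequestId'])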
It's also worth looking at AWS Batch (not currently available in every region), which is designed to intelligently provide capacity for batch jobs.

Autoscaling groups allow you to use alarms and metrics that are defined outside of the autoscaling group.
If you are using SQS, you should be able to set up an alarm on your queue depth and use that to scale your scaling group up and down.
If you are using a custom queue system, you can push metrics to CloudWatch to create a similar alarm.
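For example, a minimal sketch that publishes the depth of a custom queue as a CloudWatch metric (the namespace and metric name are made up; an alarm on this metric can then drive the scaling policies):

import boto3

cloudwatch = boto3.client('cloudwatch')

def publish_queue_depth(depth):
    # Push the current backlog so a CloudWatch alarm can scale the group.
    cloudwatch.put_metric_data(
        Namespace='MyApp/Jobs',        # hypothetical namespace
        MetricData=[{'MetricName': 'QueueDepth',
                     'Value': float(depth),
                     'Unit': 'Count'}])

publish_queue_depth(42)  # call this periodically from your queue worker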
You can control how often scaling actions occur, but it may be difficult to get the run time to exactly one hour.

Related

CloudWatch period time

CPU metrics cannot be selected with a period below 1 minute in the CloudWatch service. How can I lower this period so that Auto Scaling triggers faster? I just need the Auto Scaling instances to be triggered in a short time. (By the way, the datapoints-to-alarm value is 1 of 1.)
The minimum granularity for the metrics that EC2 provides is 1 minute.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/viewing_metrics_with_cloudwatch.html
I would also say that if you need to scale that quickly, wouldn't the instance startup time be an issue anyway?
You are correct -- basic monitoring of an Amazon EC2 instance provides metrics over 5-minute periods. If you activate EC2 Detailed Monitoring, metrics are provided over 1-minute periods. Extra charges apply for Detailed Monitoring.
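If it helps, Detailed Monitoring can be switched on per instance through the API; a one-call sketch with a hypothetical instance ID:

import boto3

ec2 = boto3.client('ec2')

# Move an instance from 5-minute basic to 1-minute detailed monitoring
# (extra CloudWatch charges apply).
ec2.monitor_instances(InstanceIds=['i-0123456789abcdef0'])  # hypothetical ID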
When launching a new instance via Amazon EC2 Auto-Scaling, it can take a few minutes for the new instance to launch and for the User Data script (if any) to run. Linux instances are quite fast, but Windows instances take a while on their first boot due to sysprep operations.
You mention that you want to react to a metric in less than one minute. I would suggest that this is not an ideal way to trigger auto-scaling. Sometimes a computer can be busy for a while and then drop down again. Reacting too quickly to a high CPU load would cause the Auto Scaling group to flap between adding instances and terminating instances. It is better to provision enough capacity for a reasonable amount of extra load and then gradually add more capacity as it is required over time.
If you have a need to react so quickly, then perhaps you should investigate using AWS Lambda to perform small amounts of work in a highly-parallel fashion rather than relying on Amazon EC2 instances.

AWS spot instances are evicted much sooner and more often than expected

I am trying to use AWS spot instances (m5.large in the eu-west-2 region) with a maximum bid equal to the price of on-demand instances. According to https://aws.amazon.com/ec2/spot/instance-advisor/ these instances should have a <5% frequency of interruption; however, after launching 40 such instances this morning, I found that within the hour 34 of them were evicted by AWS ("instance-terminated-no-capacity" according to the spot requests page on the EC2 dashboard).
This eviction rate looks much too high compared to both Amazon's own advisor and other users' experiences. Does anybody know what could be causing this behaviour, whether there is any better way to debug or predict it, or if this is just what I should expect from spot instances?
Thank you!
Actually, for the m5.large instance in the eu-west-2 (London) region it's a 5%-10% frequency of interruption, so you can expect a maximum of 10%. I'm not saying that the issue you are facing is because of this.
AWS terminates your spot instances for any of these reasons:
The Spot price is above the maximum price.
There isn't enough capacity.
Amazon EC2 can't meet the constraints you placed on your Spot request.
In your case, since you are seeing the instance-terminated-no-capacity message, it is definitely the second reason: since you asked for 40 such instances, the Amazon spot instance pool might not have had enough capacity at that time.
The capacity of the available spot instance pool depends on the demand for regular instances; when users ask for regular on-demand instances, AWS will start terminating spot instances to fulfil those requests if there is not enough capacity.
In my experience, spot interruption is highly variable and, for some instance types, more or less likely at different times of day.
If you need 40 instances and they do not need to be in the same availability zone (AZ) as each other, you might reduce the chance of a mass interruption of all or most of them by spreading the machines across different AZs within the region, as each availability zone has its own pool of machines (although you will likely increase the chance that some machines will be interrupted).
Note this is not an option if you are using EMR; in that case they have to be in the same AZ.
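For the non-EMR case, a hedged sketch of spreading a 40-instance request across a region's AZs (the AMI and AZ names are placeholders):

import boto3

ec2 = boto3.client('ec2')

AZS = ['eu-west-2a', 'eu-west-2b', 'eu-west-2c']

# Split one big request into per-AZ requests so a capacity shortage in
# a single zone's pool cannot take out the whole batch at once.
for az, count in zip(AZS, (14, 13, 13)):
    ec2.request_spot_instances(
        InstanceCount=count,
        LaunchSpecification={
            'ImageId': 'ami-12345678',   # hypothetical AMI
            'InstanceType': 'm5.large',
            'Placement': {'AvailabilityZone': az}})

Leaving SpotPrice unset defaults the maximum price to the on-demand price, which matches the bid described in the question.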

How to use existing On-Demand instance in spot fleet?

I'm trying to reduce my expenses and want to start using AWS's spot pricing service. I'm completely new to it, but as I understand it, I can have instances running at a reduced price that will eventually stop running under certain conditions. That's fine. I'm also aware you can have spot fleets, and in these fleets you can have an On-Demand instance for when the spot instance is interrupted.
I currently have an On-Demand instance that hosts an Elastic Beanstalk application (it's an API). Is there a way to use this instance inside the spot fleet so that when there's an available spot instance it services my Elastic Beanstalk application, and when the spot instance is interrupted it goes back to using my current On-Demand instance until another spot instance is available?
Sadly, spot fleets don't work like this. If your spot instance gets terminated, no on-demand replacement is created for you automatically. If it worked like this, everyone would be using spot instances, in my view.
The on-demand portion of your spot fleet is separate from the spot portion. This way your application will always run at a minimum capacity (without spot). When spot capacity is available, your spot instances will run alongside your on-demand ones. This way you have more computational power for cheap, which is very beneficial for any heavy processing application (e.g. batch image processing).
Details of how spot fleet and spot instances work are in How Spot Fleet works and How Spot Instances work docs.
Nevertheless, if you would like such a replacement to be provisioned, you would have to develop a custom solution for it.
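One possible shape for that custom solution, sketched here under assumptions: a CloudWatch Events/EventBridge rule on the "EC2 Spot Instance Interruption Warning" event invoking a Lambda that launches an on-demand stand-in (the AMI and subnet IDs are placeholders):

import boto3

ec2 = boto3.client('ec2')

def handler(event, context):
    # This event fires roughly two minutes before the spot instance
    # is reclaimed, so the replacement gets a small head start.
    doomed = event['detail']['instance-id']
    print('Spot instance %s is being interrupted; launching on-demand' % doomed)
    ec2.run_instances(
        ImageId='ami-12345678',       # hypothetical AMI
        InstanceType='m5.large',
        MinCount=1, MaxCount=1,
        SubnetId='subnet-12345678')   # hypothetical subnet

You would still need the reverse path (terminating the on-demand stand-in when spot capacity returns), which is where most of the complexity lives.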
There's a third-party solution called Spot.io that not only replaces the spot instance with an on-demand instance in a scenario like the one you describe, but also has an algorithm that anticipates the interruption event and stands up an on-demand instance so that it is ready before the interruption occurs.

What is the advantage of using Spot Fleet Autoscaling instead of AutoScaling Groups with a Spot Price?

Recently Amazon AWS released Auto Scaling for Spot Fleets (https://aws.amazon.com/blogs/aws/new-auto-scaling-for-ec2-spot-fleets/).
Auto Scaling groups already allowed you to set a spot price to get Spot Instances at a discount, instead of using Reserved Instances.
As far as I can see, Spot Fleets allow you to define the fleet capacity in terms of vCPUs, mixing different instance types to achieve it; I don't think this can be done using Auto Scaling.
My use case is pretty simple: use Spot Fleets (or Auto Scaling with Spot instances) to increase the capacity of my cluster at a reduced price while keeping the minimum required Reserved Instances just in case. I could duplicate my ASG, set a spot price and I'd be done, but apparently this functionality now also exists as part of Spot Fleets.
What is the advantage of one over the other? Are there any major reasons to switch?
I might've found a valid reason to keep using Auto Scaling groups with spot instances: you can attach them to a load balancer / target group, while Spot Fleets can't do that as of now.
Not that it matters too much for my use case anyway, as my cluster is an ECS cluster and instances get registered in target groups automatically when launching containers, but it's good to know.
Edit: Apparently Spot Fleets do not support CreationPolicies either, so you're left either using ASGs or having some fun with WaitConditions if you need that.
The major advantage of Spot Fleet is that it re-balances instances across availability zones if an AZ is experiencing a price spike. In contrast, ASG would leave you with fewer running instances than you requested.
This, along with the ability to balance across instance types and not just availability zones, makes Spot Fleets much more reliable for maintaining a target capacity than ASG. They come at an opportune time, since spot prices are seeing ever greater volatility.
That said, Spot Fleet currently is missing a lot of features afforded by ASG, as Ruben mentioned.
Spot Fleet has the ability to express workload resource requirements as a number of vCPUs, which is completely in line with what someone exploiting EC2 Container Service needs.
But Spot Fleet isn't as mature as ASG. We cannot add tags to Spot Fleet instances from the UI, so we had to write a script to take care of it, as sketched below. Also, instances launched by Spot Fleet have no out-of-the-box support for an initial code push via CodeDeploy; in order to deploy code for the very first time, we had to create custom scripts and pass them as user data.
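A script along these lines can handle the tagging (the fleet request ID and tags are placeholders):

import boto3

ec2 = boto3.client('ec2')

FLEET_ID = 'sfr-12345678-90ab-cdef-1234-567890abcdef'  # hypothetical

# Spot Fleet doesn't tag its instances for you, so look them up
# after launch and tag them ourselves.
active = ec2.describe_spot_fleet_instances(SpotFleetRequestId=FLEET_ID)
ids = [i['InstanceId'] for i in active['ActiveInstances']]
if ids:
    ec2.create_tags(Resources=ids,
                    Tags=[{'Key': 'Name', 'Value': 'fleet-worker'},
                          {'Key': 'Team', 'Value': 'data'}])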

How to autoscale EMR task instances

I am using EMR with task instance groups as spot instances. I want to maintain a minimum number of task instances at all times.
That is, whenever EMR terminates task instances because the spot price goes higher than what we set, my application should launch another task instance with a slightly higher bid price.
My research:
Use CloudWatch to tell us when a threshold is breached, and auto-scale task instances. But from what I have studied, there is no concept of auto-scaling in EMR.
Use CloudWatch to notify SQS when the threshold is breached, and have a service that is always consuming from it and expanding the task instances.
Questions
Is there any auto-scaling in EMR? If it is available, my effort reduces to just setting a threshold and the corresponding task instance expansion action.
If you have any other approach to solve this problem, please suggest it.
How Spot Prices Work
When an Amazon EC2 instance is launched with a spot price (including when launched from Amazon EMR), the instance will start if the current spot price is below the provided bid price. If the spot price rises above the bid price, the instance is terminated. Instances are only charged the current spot price.
Therefore, the logic of launching a new spot instance with a "little higher bid price" is not necessary. The instance will always be charged the current spot price, so simply bid as high as you are willing to pay for a spot instance. You will either pay less than the spot price (great!) or your instance will be terminated because the price has gone higher than you are willing to pay (in which case you don't want to pay a "little higher" for the instance).
If you wish to "maintain minimum number of task instances" at all times, then either pay the normal EMR charge (which means the instances won't be terminated) or bid a particularly large price for the spot instances, such as 2 x the normal price. Yes, you might occasionally pay more for instances, but on average your price will be quite low.
If you wish to be particularly sneaky, you could bid up to the normal price for the EC2 instances then, if instances are terminated, launch more task nodes without using spot pricing. That way, your instances won't be terminated and you won't pay more than the normal EC2 price. However, you would have to terminate and replace those instances when the spot price drops, otherwise you are paying too much. That's why it might be better just to provide a high bid price on your spot instances.
Bottom line: Use spot pricing, but bid a high price. You'll get a good price most of the time.
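For instance, adding a spot task group with a deliberately high bid via boto3 might look like this (the cluster ID, instance type, and prices are placeholders; note BidPrice is a string in the API):

import boto3

emr = boto3.client('emr')

# Bid ~2x an assumed on-demand rate; you still only pay the current
# spot price, the high bid just makes termination unlikely.
emr.add_instance_groups(
    JobFlowId='j-1ABCDEFGHIJKL',      # hypothetical cluster ID
    InstanceGroups=[{
        'Name': 'spot-task-nodes',
        'Market': 'SPOT',
        'BidPrice': '0.20',           # placeholder: ~2x an assumed $0.10 rate
        'InstanceRole': 'TASK',
        'InstanceType': 'm5.large',
        'InstanceCount': 4}])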
AWS EMR does not have an autoscaling option available, but you can work around this and integrate autoscaling using AWS SQS. This is a rough picture of what you can integrate:
Launch your EMR cluster using spot instances.
Set up an SQS queue and create 3 triggers: one for the CPU threshold, a second for the EC2 spot instance termination notice, and a third for changing the spot instance bid prices.
So if CPU usage increases, SQS will trigger an event to launch a new instance into the cluster; if there is a spot instance termination notice, SQS will trigger the launch of another instance to balance the load and send an event to change the bid price so that another spot instance is launched. (This is just a rough sketch, but I guess you will understand the logic.)
This is a guide to autoscaling with AWS SQS:
https://docs.aws.amazon.com/autoscaling/latest/userguide/as-using-sqs-queue.html
As has been correctly pointed, the EMR API provides all necessary ingredients to 1) collect monitoring data, and 2) programmatically scale the cluster up and down.
Basically, there are two main options to implement autoscaling for EMR clusters:
Autoscaling Loop: A process running on a server that continuously monitors the cluster for its current load. Performance metrics (memory, CPU, I/O, etc.) can be collected at regular intervals and stored in a database. Autoscaling rules are evaluated against the performance metrics, and the cluster's task nodes are scaled up or down if required.
Event-Based Autoscaling: Using CloudWatch metrics (e.g., metrics for EMR or EC2), you can programmatically define triggers that are fired under certain conditions (for instance, add nodes if average CPUUtilization of all nodes exceeds 80%).
Both options have their pros and cons. The main advantage of option 2 is that it is a serverless approach (it does not require running your own server). Option 1, on the other hand, does require a server, but in return gives you more control to customize the logic of your scaling rules. It also allows you to keep searchable records of the history of scaling decisions.
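A bare-bones sketch of the option-1 loop, assuming a hypothetical cluster and task instance group ID, and a task_backlog() that you would implement against your own metrics store:

import time
import boto3

emr = boto3.client('emr')

CLUSTER_ID = 'j-1ABCDEFGHIJKL'        # hypothetical
TASK_GROUP_ID = 'ig-ABCDEFGHIJKLM'    # hypothetical task instance group

def task_backlog():
    # Placeholder: return the current load from your metrics database.
    return 0

while True:
    # Example rule: one task node per 10 backlog units, bounded 2..20.
    desired = min(20, max(2, task_backlog() // 10))
    emr.modify_instance_groups(
        ClusterId=CLUSTER_ID,
        InstanceGroups=[{'InstanceGroupId': TASK_GROUP_ID,
                         'InstanceCount': desired}])
    time.sleep(300)                   # re-evaluate every 5 minutes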
You could take a look at Themis, an EMR autoscaling framework developed at Atlassian. Themis implements the autoscaling loop discussed in option 1 above. Current features include proactive as well as reactive autoscaling and support for spot/on-demand task nodes; it comes with a Web UI and is very easy to configure.
I have had a similar problem, and I wanted to share one possible alternative. I have written a Java tool to dynamically resize an EMR cluster during the processing. It might help you. Check it out at:
http://www.lopakalogic.com/articles/hadoop-articles/dynamically-resize-emr/
The source code is available on GitHub.