Passenger - Using "Requests in queue" as an AWS metric for autoscaling - amazon-web-services

I am surprised to find little information regarding EC2 autoscaling with Phusion Passenger.
Not so long ago I discovered a "Requests in queue" metric that is exposed when running passenger-status.
I am wondering whether this stat would make a nice metric to help with autoscaling.
Right now most AWS EC2 Auto Scaling guides mention using CPU and memory to write autoscaling rules, but I find this insufficient. When I think about the problem autoscaling should solve, that is, being able to scale up to meet demand, I'd rather base those rules on the number of pending/completed requests to report node health or cluster congestion, and Passenger's "Requests in queue" (and also, for each process, the "Last used" and "Processed" counts) seems useful.
I am wondering if it would be possible to report this "Requests in queue" stat (and possibly others) periodically as an AWS metric. I was thinking the following rule would be ideal for autoscaling: if the average number of "requests in queue" on the autoscaled instances exceeds a threshold value, this would trigger spawning a new machine in the autoscaling group.
Is this possible?
Has anyone ever tried to implement autoscaling rules based on number of requests in queue this way ?

This is totally possible (and a good approach).
Step 1. Create a custom CloudWatch metric for "Requests in queue".
You will have to write your own agent that runs passenger-status, extracts the value and sends it to CloudWatch (a sketch follows after these steps). You can use any AWS SDK or just the AWS CLI: http://docs.aws.amazon.com/cli/latest/reference/cloudwatch/put-metric-data.html
Step 2. Create alarms for scale up and scale down based on your custom metric.
Step 3. Modify scaling policy for your Auto Scaling Group to use your custom alarms to scale up/down.
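For illustration, here is a minimal sketch of such an agent as a cron-driven shell script, plus the alarm for step 2, using the AWS CLI. The "Passenger" namespace, the RequestsInQueue metric name, the group name my-asg, the 50-request threshold, and the policy ARN are placeholders, and the instance role is assumed to allow cloudwatch:PutMetricData.

#!/bin/bash
# Step 1 agent: publish Passenger's "Requests in queue" to CloudWatch (run every minute from cron).
# passenger-status prints a line like "Requests in queue: 3"; take the first occurrence.
QUEUE=$(passenger-status | grep -m1 "Requests in queue" | grep -o '[0-9]\+')

# Publish with only the Auto Scaling group as a dimension, so the Average statistic
# across all instances of the group can be alarmed on directly.
aws cloudwatch put-metric-data \
  --namespace "Passenger" \
  --metric-name RequestsInQueue \
  --unit Count \
  --value "${QUEUE:-0}" \
  --dimensions AutoScalingGroupName=my-asg

# Step 2 (one-time setup): alarm when the group-wide average queue exceeds the threshold.
# <scale-out-policy-arn> is the ARN returned by put-scaling-policy in step 3.
aws cloudwatch put-metric-alarm \
  --alarm-name passenger-queue-high \
  --namespace "Passenger" \
  --metric-name RequestsInQueue \
  --dimensions Name=AutoScalingGroupName,Value=my-asg \
  --statistic Average --period 60 --evaluation-periods 3 \
  --threshold 50 --comparison-operator GreaterThanThreshold \
  --alarm-actions <scale-out-policy-arn>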

Related

Autoscaling policy not launching new instances

I have set up several types of dynamic scaling policies within my autoscaling group (which uses a launch template) to see autoscaling launch new instances when they are triggered, but it doesn't do it. I just set up the following tracking policy, and here are the results:
The tracking policy
The alarm is in the "In alarm" state, and the action was "Successfully executed"
The autoscaling activity history is not reporting anything
The alarm always logs that an autoscaling action was triggered; however, the autoscaling group does not log any activity.
This autoscaling group is set up with min: 1, max: 6; currently there is only 1 instance running.
Where or how can I find the error that is causing this? Is it perhaps something related to permissions? All instances are healthy/in-service.
I've spent the past couple of days going through threads on Stack Overflow and other forums and haven't found anything that helps me locate the issue.
Also, the timestamps differ in the screenshots because I had just done a re-deployment, but the behavior was the same before that: there is never any activity on the autoscaling group from the action launched within the alarm...
I tried to replicate your scaling policy and environment:
Created an ASG
Assigned a tracking policy with a target value of 0.0001
My max capacity is 2, desired 1 and minimum 1.
The key point here is to wait for CloudWatch alarms to collect enough data points; for my scale-out activity it took around 5 minutes (the majority of the time is spent in instance warm-up too).
I just found the reason for this issue. Under the autoscaling group, go to Details, then Advanced configurations, and then Suspended processes. Nothing should be selected in this field; in my case the alarm process was suspended there along with a few other processes, which is why autoscaling wasn't reacting to the CloudWatch alarms.
Autoscaling group advanced configurations section
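If you want to check for (or clear) suspended processes from the command line rather than the console, here is a quick sketch with the AWS CLI, assuming the group is called my-asg:

# List any suspended Auto Scaling processes (e.g. AlarmNotification) on the group
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names my-asg \
  --query "AutoScalingGroups[0].SuspendedProcesses"

# Resume all suspended processes (add --scaling-processes to resume only specific ones)
aws autoscaling resume-processes --auto-scaling-group-name my-asg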

How to setup cloudwatch alarm for beanstalk environment memory

I'm trying to set up a CloudWatch alarm for memory on all instances of an AWS Elastic Beanstalk environment. I've set up the capability to get memory usage into CloudWatch using the following tutorial:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-cw.html
Now I want to set up an alarm that would trigger if the MemoryUtilization of any of these instances goes beyond a certain threshold. I can select them all and set up an alert on each of them separately, but I want to make sure that even if Beanstalk scales up the cluster or swaps an instance, the alert doesn't have to be reconfigured.
Is there a way I can setup alarm for a condition where Instance Name = "env-name" and Metric is MemoryUtilization?
What I understand from your question are the following requirements:
You have multiple metrics and want to use a logical OR condition when configuring an alarm, e.g. (avg metric1 > x || avg metric2 > y) ==> set alarm state to ALARM
You want the alarm to consider new metrics as they become available when new instances are launched by elastic beanstalk during scale out.
You want old metrics to not be considered as soon as elastic beanstalk scales in.
I think this is currently not possible.
There is an ongoing discussion on the AWS discussion forums [1] which reveals that at least requirement (1) is possible using Metric Math. The Metric Math feature supports a maximum of 10 metrics.
Solution
What you need to do is create a single metric that carries the information about whether the alarm should be triggered ('computed metric'). There are multiple ways to achieve this:
For complex metrics you could write a bash script and run it on an EC2 instance using cron. The script would first query the existing metrics using a dimension filter ('list-metrics'), then gather each metric ('get-metric-data'), aggregate them, and then push the computed metric data point ('put-metric-data'). A sketch of such a script follows below.
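A rough sketch of such a cron script is below. It uses get-metric-statistics instead of get-metric-data for brevity, keeps the worst (highest) per-instance average, and publishes it as the computed metric. The Custom/Beanstalk namespace, the Environment dimension, and the assumption that the per-instance memory metrics live in the System/Linux namespace are placeholders to adapt to your setup.

#!/bin/bash
# Sketch: aggregate per-instance MemoryUtilization into one computed metric.
SRC_NAMESPACE="System/Linux"        # where the monitoring scripts publish per-instance data
DST_NAMESPACE="Custom/Beanstalk"    # where the computed metric (and the alarm) will live
START=$(date -u -d '10 minutes ago' +%Y-%m-%dT%H:%M:%S)
END=$(date -u +%Y-%m-%dT%H:%M:%S)

# 1) list-metrics: find which instances are currently reporting
INSTANCE_IDS=$(aws cloudwatch list-metrics \
  --namespace "$SRC_NAMESPACE" --metric-name MemoryUtilization \
  --query "Metrics[].Dimensions[?Name=='InstanceId'].Value" --output text)

# 2) gather each metric and keep the highest average
MAX=0
for ID in $INSTANCE_IDS; do
  VALUE=$(aws cloudwatch get-metric-statistics \
    --namespace "$SRC_NAMESPACE" --metric-name MemoryUtilization \
    --dimensions Name=InstanceId,Value="$ID" \
    --statistics Average --period 300 \
    --start-time "$START" --end-time "$END" \
    --query "Datapoints[0].Average" --output text)
  if [ "$VALUE" != "None" ] && [ "$(echo "$VALUE > $MAX" | bc -l)" = "1" ]; then
    MAX=$VALUE
  fi
done

# 3) put-metric-data: publish the computed metric that the alarm watches
aws cloudwatch put-metric-data --namespace "$DST_NAMESPACE" \
  --metric-name MaxMemoryUtilization --unit Percent --value "$MAX" \
  --dimensions Environment=my-env-name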
If the metric is rather simple, you could try the --aggregated option of the AWS CloudWatch monitoring scripts [2]:
option_settings:
  "aws:elasticbeanstalk:customoption":
    CloudWatchMetrics: "--mem-util --mem-used --mem-avail --disk-space-util --disk-space-used --disk-space-avail --disk-path=/ --auto-scaling --aggregated"
The documentation for the aggregated option says:
Adds aggregated metrics for instance type, AMI ID, and overall for the region.
References
[1] https://forums.aws.amazon.com/thread.jspa?threadID=94984
[2] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html#put-metric-data
In the Elastic Beanstalk console for your environment:
Click the Monitoring link in the left-hand side navigation links.
Underneath the Overview, in the Monitoring section, click the Edit button.
Choose AWSEBAutoScalingGroup for the Resource.
Choose MemoryUtilization under CloudWatch Metric.
Modify Statistic and Description as desired.
Click the Add button, and then click the Save button in the Monitoring section.
Scroll down to find the new panel that was added. Click the bell icon in the upper right hand corner of the panel. This will take you to the settings to set up a new alarm.
If you do not see the MemoryUtilization metric available, verify that you have correctly set up the collection of the memory metrics.
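If you prefer the CLI over the console, the same alarm can be sketched roughly like this, assuming the memory metrics from the tutorial end up in the System/Linux namespace with an AutoScalingGroupName dimension (check list-metrics for the exact names in your account); the group name, SNS topic, and threshold are placeholders:

aws cloudwatch put-metric-alarm \
  --alarm-name eb-env-memory-high \
  --namespace System/Linux \
  --metric-name MemoryUtilization \
  --dimensions Name=AutoScalingGroupName,Value=<your-environment-asg-name> \
  --statistic Average --period 300 --evaluation-periods 2 \
  --threshold 80 --comparison-operator GreaterThanThreshold \
  --alarm-actions <your-sns-topic-arn>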
CloudWatch cannot create alarms in a generic way. There are only two ways to accomplish the task.
1) Create a startup script in your AMI. When a new instance is launched, it is responsible for creating its own CloudWatch alarms (see the sketch after this list). I used this a long time ago, and the approach is solid. However, running scripts on termination isn't reliable, so you'll have to periodically clean out the old alarms.
2) Use a tool that has decent capabilities (ahem.... not CloudWatch). I recommend Blue Matador. With them, you don't even have to set up the alarms or thresholds; the machine learning automatically baselines your resources and creates alerts for you.
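A minimal sketch of option 1, meant to run from user data or an init script at boot; the namespace, threshold, and SNS topic ARN are placeholders, and nothing here removes the alarm at termination, hence the clean-up caveat above.

#!/bin/bash
# Each instance registers a CloudWatch alarm for itself at boot.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

aws cloudwatch put-metric-alarm \
  --alarm-name "mem-high-$INSTANCE_ID" \
  --namespace System/Linux \
  --metric-name MemoryUtilization \
  --dimensions Name=InstanceId,Value="$INSTANCE_ID" \
  --statistic Average --period 300 --evaluation-periods 2 \
  --threshold 90 --comparison-operator GreaterThanThreshold \
  --alarm-actions <your-sns-topic-arn>
# Orphaned "mem-high-*" alarms from terminated instances need periodic clean-up.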
If you got here and don't know Beanstalk or Cloudwatch well enough to contribute, start here: How to Monitor AWS Elastic Beanstalk with CloudWatch

AWS Cloudwatch alarm for each single instance of an auto scaling group

We have configured an Auto Scaling group in AWS, and it works fine. We have configured some alarms for the group, such as: send an alarm if the average CPUUtilization > 60 for 2 minutes ... using the AWS CLI.
The only problem is that if we want to monitor each instance in the group, we have to configure the alarms manually. Is there any way to do it automatically, such as with a config or template?
Amazon CloudWatch alarms can be created on the Auto Scaling group as a whole, such as Average CPUUtilization. This is because alarms are used to tell Auto Scaling when to add/remove instances and such decisions would be based upon the group as a whole. For example, if one machine is 100% busy but another is 0% busy, then on average the group is only 50% busy.
There should be no reason for placing an alarm on the individual instances in an auto-scaling group, at least as far as triggering a scaling action.
There is no in-built capability to specify an alarm that will be applied against each auto-scaled instance individually. You could do it programmatically by responding to an Amazon SNS notification whenever an instance is added/removed by Auto Scaling, but this would require your own code to be written.
You can accomplish this with lifecycle hooks and a little Lambda glue (see the sketch below). When you get lifecycle events for adding or terminating an instance, you can create an alarm on that individual instance or remove it (depending on the event) via a Lambda function.
To John's point, this is a little bit of an anti-pattern with horizontal scaling and load balancing. However, theory and practice sometimes diverge.
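As a rough sketch of the wiring (names and ARNs are placeholders): register lifecycle hooks on the group so launch/terminate events land on an SNS topic that your Lambda subscribes to; the function then creates or deletes the per-instance alarm and completes the lifecycle action.

aws autoscaling put-lifecycle-hook \
  --lifecycle-hook-name alarm-on-launch \
  --auto-scaling-group-name my-asg \
  --lifecycle-transition autoscaling:EC2_INSTANCE_LAUNCHING \
  --notification-target-arn <lifecycle-sns-topic-arn> \
  --role-arn <notification-role-arn>

aws autoscaling put-lifecycle-hook \
  --lifecycle-hook-name alarm-on-terminate \
  --auto-scaling-group-name my-asg \
  --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING \
  --notification-target-arn <lifecycle-sns-topic-arn> \
  --role-arn <notification-role-arn>

# Inside the Lambda, the work boils down to:
#   on launch:    aws cloudwatch put-metric-alarm --alarm-name "cpu-high-$INSTANCE_ID" ...
#   on terminate: aws cloudwatch delete-alarms --alarm-names "cpu-high-$INSTANCE_ID"
# followed by aws autoscaling complete-lifecycle-action so the instance can proceed.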

How to autoscale EMR task instances

I am using EMR with task instance groups as spot instances. I want to maintain minimum number of task instances always.
That means whenever EMR terminates task instances because the spot price goes higher than what we set, my application should launch other task instances with a slightly higher bid price.
My research:
Use CloudWatch to notify when a threshold is breached, and auto-scale the task instances. But from what I've read, there is no concept of auto-scaling in EMR.
Use CloudWatch to notify SQS when the threshold is breached, and have a service that is always consuming from the queue and expanding the task instances.
Questions
Is there any auto-scaling present in EMR? If that is available, then my effort reduces to just setting the threshold and the corresponding task-instance expansion action.
If you have any other approach to solve this problem, please suggest.
How Spot Prices Work
When an Amazon EC2 instance is launched with a spot price (including when launched from Amazon EMR), the instance will start if the current spot price is below the provided bid price. If the spot price rises above the bid price, the instance is terminated. Instances are only charged the current spot price.
Therefore, the logic of launching a new spot instance with a "little higher bid price" is not necessary. The instance will always be charged the current spot price, so simply bid as high as you are willing to pay for a spot instance. You will either pay less than the spot price (great!) or your instance will be terminated because the price has gone higher than you are willing to pay (in which case you don't want to pay a "little higher" for the instance).
If you wish to "maintain minimum number of task instances" at all times, then either pay the normal EMR charge (which means the instances won't be terminated) or bid a particularly large price for the spot instances, such as 2 x the normal price. Yes, you might occasionally pay more for instances, but on average your price will be quite low.
If you wish to be particularly sneaky, you could bid up to the normal price for the EC2 instances then, if instances are terminated, launch more task nodes without using spot pricing. That way, your instances won't be terminated and you won't pay more than the normal EC2 price. However, you would have to terminate and replace those instances when the spot price drops, otherwise you are paying too much. That's why it might be better just to provide a high bid price on your spot instances.
Bottom line: Use spot pricing, but bid a high price. You'll get a good price most of the time.
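For illustration, "bid high" when adding a task instance group could look roughly like this with the AWS CLI; the cluster ID, instance type, count, and bid price are placeholders:

aws emr add-instance-groups \
  --cluster-id j-XXXXXXXXXXXXX \
  --instance-groups Name=spot-tasks,InstanceGroupType=TASK,InstanceType=m4.large,InstanceCount=4,BidPrice=0.50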
AWS EMR does not have an autoscaling option available. But you can use a workaround and integrate autoscaling using AWS SQS. This is a rough picture of what you can integrate:
Launch your EMR cluster using spot instances.
Set up an SQS queue and create 3 triggers: one for the CPU threshold, a second for the EC2 spot instance termination notice, and a third for changing the spot instance bid prices.
So if the CPU usage increases, SQS will trigger an event to launch a new instance into the cluster; if there is a spot instance termination notice, SQS will trigger the launch of another instance to balance the load and send an event to change the bid price so another spot instance can be launched. (This is just a rough sketch, but I guess you will understand the logic.)
This is the guide to autoscaling with AWS SQS:
https://docs.aws.amazon.com/autoscaling/latest/userguide/as-using-sqs-queue.html
As has been correctly pointed out, the EMR API provides all the necessary ingredients to 1) collect monitoring data, and 2) programmatically scale the cluster up and down.
Basically, there are two main options to implement autoscaling for EMR clusters:
Autoscaling Loop: A process that is running on a server and continuously monitors the cluster for its current load. Performance metrics (memory, CPU, I/O, etc) can be collected in regular intervals and stored in a database. Autoscaling rules are evaluated against the performance metrics, and the cluster's task nodes are scaled up or down if required.
Event-Based Autoscaling: Using CloudWatch metrics (e.g., metrics for EMR or EC2), you can programmatically define triggers that are fired under certain conditions (for instance, add nodes if average CPUUtilization of all nodes exceeds 80%).
Both options have their pros and cons. The main advantage of option 2 is that it is a server-less approach (it does not require you to run your own server). Option 1, on the other hand, does require a server, but in return gives you more control to customize the logic of your scaling rules. It also allows you to keep searchable records of the history of scaling decisions.
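Whichever option does the monitoring, the actual resize in both cases is a single API call against the task instance group; here is a sketch with the AWS CLI (the cluster and instance group IDs are placeholders):

# Look up the instance group ID of the TASK group
aws emr describe-cluster --cluster-id j-XXXXXXXXXXXXX \
  --query "Cluster.InstanceGroups[?InstanceGroupType=='TASK'].Id"

# Grow (or shrink) the task group to the desired number of nodes
aws emr modify-instance-groups \
  --instance-groups InstanceGroupId=ig-XXXXXXXXXXXX,InstanceCount=8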
You could take a look at Themis, an EMR autoscaling framework developed at Atlassian. Themis implements the autoscaling loop as discussed in option 1 above. Current features include proactive as well as reactive autoscaling, support for spot/on-demand task nodes, it comes with a Web UI, and the tool is very easy to configure.
I have had a similar problem, and I wanted to share one possible alternative. I have written a Java tool to dynamically resize an EMR cluster during the processing. It might help you. Check it out at:
http://www.lopakalogic.com/articles/hadoop-articles/dynamically-resize-emr/
The source code is available on Github

how to auto resize/scale amazon aws ec2 instance

Currently I am on a t2.micro, and I read that Amazon offers an auto scaling option to allow the server to expand/shrink according to the traffic, which is perfect.
So my questions are:
What exactly should I do in order to enable the auto scaling/resizing of the server when needed or when the traffic starts to spike?
Is there an option to allow changing the instance type automatically?
Auto scaling, I believe, means adding more instances and balancing the load between them, so does this mean I need to have a background in load balancing and all the jargon that comes with it, or does Amazon take care of that automatically?
I am totally new to the whole server maintenance/provisioning land, so please try to explain as simply as possible. Also, the only reason I went with Amazon is the automation capabilities it offers, but sadly their docs are very complex and many things could go wrong.
If you want to scale your instance and you don't mind the downtime, I can suggest this workaround.
TL;DR: set an alarm on AWS CloudWatch to "ping" SNS when a specific condition is met (e.g. CPU or RAM above a given %) and set up a Skeddly action to automatically resize your instance when the SNS endpoint is pinged.
Details:
Subscribe to Skeddly, a service to automate actions on AWS. It's free if you don't use it a lot;
set up a "Change EC2 Instances" action and activate the SNS feature, then copy the SNS endpoint link (screenshot); be sure to clearly define the instance(s) affected by the action!
go to the AWS Simple Notification Service dashboard and create a new "topic", then select it and choose "Subscribe to topic" from the Actions menu;
paste there the SNS endpoint provided by Skeddly, then wait until the subscription is confirmed (it takes a while);
now move to AWS CloudWatch and set up an alarm for any metric that you find meaningful for your instance up/downscaling, e.g. CPU >= 90% for 1 day;
for each alarm add a notification selecting the "topic" previously defined on SNS.
You are done!
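The SNS and CloudWatch parts of this recipe can also be scripted; here is a rough sketch with the AWS CLI, where the topic name, the Skeddly endpoint URL, the instance ID, and the thresholds are placeholders (the alarm below approximates "CPU >= 90% for 1 day" with 24 one-hour periods):

# Create the topic and subscribe the Skeddly HTTPS endpoint to it
TOPIC_ARN=$(aws sns create-topic --name resize-my-instance --query TopicArn --output text)
aws sns subscribe --topic-arn "$TOPIC_ARN" \
  --protocol https \
  --notification-endpoint "https://<skeddly-sns-endpoint>"

# Alarm that notifies the topic when the instance shows sustained high CPU
aws cloudwatch put-metric-alarm \
  --alarm-name upscale-my-instance \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average --period 3600 --evaluation-periods 24 \
  --threshold 90 --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions "$TOPIC_ARN"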
Auto scaling with EC2 assumes "horizontal" scaling, i.e. adding more instances to an auto scaling group.
There is no widely used, standard pattern for "vertical" scaling, i.e. automatically increasing the size of an individual instance.
In order to effectively understand and use auto scaling for your application yes, you "need to have a background about load balancing and all that jargon that comes with it". See http://docs.aws.amazon.com/autoscaling/latest/userguide/GettingStartedTutorial.html
I'm assuming you are using the AWS management console. These operations are also possible using the Command Line Interface or AWS CloudFormation.
To resize an instance, you have to stop it, then go to Actions > Instance Settings > Change Instance Type.
As you can see, this operation is not automatic. In AWS you don't autoscale an instance but an autoscaling group, which is a group of instances. So depending on your memory/CPU usage, you can automatically start new instances (but not increase the size of the current ones).
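The same stop/resize/start sequence can be scripted if you accept the downtime; here is a minimal sketch where the instance ID and target type are placeholders:

aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
  --instance-type "{\"Value\": \"t2.small\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0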
To create an autoscaling group, go to Auto Scaling Groups in the EC2 menu:
To create an autoscaling group, you will need to create a Launch Configuration first, which describes the properties of the instances you want to automatically scale. Then you will be able to define your scaling policies based on your CloudWatch alarms (CPU usage, instance status, ...):
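For completeness, the console steps above map to a few CLI calls; here is a minimal sketch where the AMI, key pair, security group, subnet, and group names are placeholders:

# Launch configuration: the template for the instances to be scaled
aws autoscaling create-launch-configuration \
  --launch-configuration-name web-lc \
  --image-id ami-xxxxxxxx --instance-type t2.micro \
  --key-name my-key --security-groups sg-xxxxxxxx

# The auto scaling group itself
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-configuration-name web-lc \
  --min-size 1 --max-size 4 --desired-capacity 1 \
  --vpc-zone-identifier subnet-xxxxxxxx

# A simple scaling policy; attach its ARN to a CloudWatch alarm (e.g. on CPU usage)
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-asg \
  --policy-name scale-out \
  --scaling-adjustment 1 --adjustment-type ChangeInCapacity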