How to set up a CloudWatch alarm for Elastic Beanstalk environment memory - amazon-web-services

I'm trying to set up a CloudWatch alarm for memory on all instances of an AWS Elastic Beanstalk environment. I've set up the capability to get memory usage into CloudWatch by following this tutorial:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-cw.html
Now I want to set up an alarm that would trigger if the MemoryUtilization of any of these instances goes beyond a certain threshold. I can select them all and set up an alert on each of them separately, but I want to make sure that even if Beanstalk scales up the cluster or swaps an instance, the alert doesn't have to be reconfigured.
Is there a way I can set up an alarm for a condition where Instance Name = "env-name" and Metric is MemoryUtilization?

What I understand from your question are the following requirements:
1. You have multiple metrics and want to use a logical OR condition when configuring an alarm, e.g. (avg metric1 > x || avg metric2 > y) ==> set alarm state to ALARM
2. You want the alarm to consider new metrics as they become available when new instances are launched by Elastic Beanstalk during scale out.
3. You want old metrics to no longer be considered as soon as Elastic Beanstalk scales in.
I think this is currently not possible.
There is an ongoing discussion on aws discussion forums [1] which reveals that at least (1) is possible using Metric Math. The Metric Math feature supports max. 10 metrics.
Solution
What you need to do is create a single metric that carries the information about whether the alarm should be triggered ('computed metric'). There are multiple ways to achieve this:
For complex metrics you could write a bash script and run it on an EC2 instance using cron. The script would first query the existing metrics using a dimension filter ('list-metrics'), then gather each metric ('get-metric-data'), aggregate them, and finally push the computed metric data point ('put-metric-data'). A sketch of this approach is shown after this list.
If the metric is rather simple, you could try the aggregate option of the AWS put-metric-data script [2]:
option_settings:
  "aws:elasticbeanstalk:customoption":
    CloudWatchMetrics: "--mem-util --mem-used --mem-avail --disk-space-util --disk-space-used --disk-space-avail --disk-path=/ --auto-scaling --aggregated"
The documentation for the aggregated option says:
Adds aggregated metrics for instance type, AMI ID, and overall for the region.
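As an illustration of the first option (the cron script), here is a rough sketch using the AWS CLI. It assumes the Elastic Beanstalk memory scripts publish MemoryUtilization into the System/Linux namespace with an InstanceId dimension, and it uses get-metric-statistics rather than get-metric-data for brevity; the Custom/Beanstalk namespace, the MaxMemoryUtilization metric name and the EnvironmentName dimension are placeholders invented for the example:
#!/bin/bash
# Sketch of the 'computed metric' approach: find the per-instance metrics,
# aggregate them, and publish a single metric that a normal alarm can watch.
# Requires the AWS CLI and bc on the instance.
NAMESPACE="System/Linux"          # assumed namespace of the memory scripts
METRIC="MemoryUtilization"
START=$(date -u -d '-5 minutes' +%Y-%m-%dT%H:%M:%SZ)
END=$(date -u +%Y-%m-%dT%H:%M:%SZ)

# 1. Discover which instances currently report the metric ('list-metrics').
INSTANCE_IDS=$(aws cloudwatch list-metrics \
  --namespace "$NAMESPACE" --metric-name "$METRIC" \
  --query 'Metrics[].Dimensions[?Name==`InstanceId`].Value' --output text)

# 2. Gather each metric and keep the worst 5-minute average.
MAX=0
for ID in $INSTANCE_IDS; do
  VALUE=$(aws cloudwatch get-metric-statistics \
    --namespace "$NAMESPACE" --metric-name "$METRIC" \
    --dimensions Name=InstanceId,Value="$ID" \
    --start-time "$START" --end-time "$END" \
    --period 300 --statistics Average \
    --query 'Datapoints[0].Average' --output text)
  if [ "$VALUE" != "None" ] && [ "$(echo "$VALUE > $MAX" | bc -l)" = "1" ]; then
    MAX=$VALUE
  fi
done

# 3. Push the computed data point ('put-metric-data'); a threshold alarm on
#    Custom/Beanstalk MaxMemoryUtilization then covers the whole environment.
aws cloudwatch put-metric-data \
  --namespace "Custom/Beanstalk" --metric-name "MaxMemoryUtilization" \
  --dimensions EnvironmentName=my-env --value "$MAX"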
References
[1] https://forums.aws.amazon.com/thread.jspa?threadID=94984
[2] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html#put-metric-data

In the Elastic Beanstalk console for your environment:
Click the Monitoring link in the left-hand side navigation links.
Underneath the Overview, in the Monitoring section, click the Edit button.
Choose AWSEBAutoScalingGroup for the Resource.
Choose MemoryUtilization under CloudWatch Metric.
Modify Statistic and Description as desired.
Click the Add button, and then click the Save button in the Monitoring section.
Scroll down to find the new panel that was added. Click the bell icon in the upper right hand corner of the panel. This will take you to the settings to set up a new alarm.
If you do not see the MemoryUtilization metric available, verify that you have correctly set up the collection of the memory metrics.
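For reference, the same kind of alarm can be created outside the console with the AWS CLI. This is only a sketch: the namespace depends on how the memory metrics are published, and the Auto Scaling group name, threshold and SNS topic ARN are placeholders. Because the alarm's dimension is the Auto Scaling group rather than an individual instance, it keeps working when Beanstalk scales out or replaces instances, assuming the memory scripts are run with the --auto-scaling option so the metric is also published against the AutoScalingGroupName dimension:
aws cloudwatch put-metric-alarm \
  --alarm-name "my-env-memory-high" \
  --namespace "System/Linux" \
  --metric-name MemoryUtilization \
  --dimensions Name=AutoScalingGroupName,Value=awseb-e-xxxxxxxxxx-stack-AWSEBAutoScalingGroup-XXXXXXXXXXXX \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 80 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alerts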

Cloudwatch cannot create alarms in a generic way. There are only 2 ways to accomplish the task.
1) Create a startup script in your AMI. When a new instance is launched, it is responsible for its own CloudWatch alarms. I used this a long time ago, and the approach is solid. However, running scripts on termination isn't reliable, so you'll have to periodically clean out the old alarms (a cleanup sketch follows this list).
2) Use a tool that has decent capabilities (ahem... not CloudWatch). I recommend Blue Matador. With them, you don't even have to set up the alarms or thresholds; the machine learning automatically baselines your resources and creates alerts for you.
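For the cleanup part of option 1, something along these lines could run on a schedule. It is only a sketch and assumes the per-instance alarms follow a naming convention ("per-instance-") and carry the InstanceId as their single dimension:
#!/bin/bash
# Delete alarms whose instance no longer exists or has been terminated.
# The alarm name prefix and single-dimension layout are assumptions.
aws cloudwatch describe-alarms --alarm-name-prefix "per-instance-" \
  --query 'MetricAlarms[].[AlarmName,Dimensions[0].Value]' --output text |
while read -r ALARM INSTANCE; do
  STATE=$(aws ec2 describe-instances --instance-ids "$INSTANCE" \
    --query 'Reservations[0].Instances[0].State.Name' --output text 2>/dev/null)
  if [ -z "$STATE" ] || [ "$STATE" = "terminated" ]; then
    echo "Deleting stale alarm $ALARM (instance $INSTANCE is gone)"
    aws cloudwatch delete-alarms --alarm-names "$ALARM"
  fi
done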
If you got here and don't know Beanstalk or CloudWatch well enough to set this up, start here: How to Monitor AWS Elastic Beanstalk with CloudWatch

Related

Shut down a group of VMs based on a CloudWatch alarm - Create an AWS Lambda function that checks CloudWatch metric alarms and makes a decision

Use Case:
Shut down a group of EC2 instances when average CPU usage is lower than X percent - not on any single one of them, but across all of them (like a group average).
I have a group of VMs that forms a production setup. I need all of them ON for the product to work. Sometimes nobody uses it, and for cost saving I can power off all the VMs. I have implemented a CloudWatch alarm with an action to shut down when average CPU usage is lower than 20%, and it works, but it is per VM. I don't want one VM from the whole setup powered OFF; I need the whole setup either ON or OFF.
I was wondering about having an AWS Lambda function that checks the alarms on all VMs and makes a decision based on that information.
Do you think this is the way to go? Do you have a different idea to achieve this goal? Is it possible to check CloudWatch alarms from AWS Lambda (to determine whether an alarm is ON or OFF)?
Looking for general ideas.
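A rough sketch of the decision logic described above, using the AWS CLI for clarity (inside a Lambda function you would make the same DescribeAlarms and StopInstances calls through an SDK such as boto3); the instance IDs and alarm names are placeholders:
#!/bin/bash
# Power off the whole setup only when every per-VM "low CPU" alarm is firing.
INSTANCES=(i-0aaaaaaaaaaaaaaa1 i-0bbbbbbbbbbbbbbb2 i-0ccccccccccccccc3)
ALARMS=(low-cpu-vm1 low-cpu-vm2 low-cpu-vm3)

# How many of the alarms are currently in the ALARM state?
IN_ALARM=$(aws cloudwatch describe-alarms \
  --alarm-names "${ALARMS[@]}" --state-value ALARM \
  --query 'length(MetricAlarms)' --output text)

# Stop all VMs only if all of them report low usage, never just one of them.
if [ "$IN_ALARM" -eq "${#ALARMS[@]}" ]; then
  aws ec2 stop-instances --instance-ids "${INSTANCES[@]}"
fi
An alternative that avoids per-VM alarms altogether is a single alarm on a group-level metric (for example the average CPUUtilization across the fleet) whose notification triggers a Lambda that stops every instance.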

Passenger - Using "Requests in queue" as an AWS metric for autoscaling

I am surprised to find little information regarding EC2 autoscaling with Phusion Passenger.
I actually discovered not so long ago a metric, "Requests in queue", that is exposed when running passenger-status.
I am wondering whether this stat would make a nice metric to help with autoscaling.
Right now most AWS EC2 autoscaling guides mention using CPU and memory to write autoscaling rules, but I find this insufficient. When I think about the problem autoscaling should solve, namely being able to scale up to the demand, I'd rather base those rules on the number of pending/completed requests to reflect node health or cluster congestion, and Passenger's "Requests in queue" (and also, for each process, the "Last Used" and "Processed" counts) seems useful.
I am wondering if it would be possible to report this "Requests in queue" stat (and eventually others) periodically as an AWS metric. I was thinking the following rule would be ideal for autoscaling: if the average number of "requests in queue" on the autoscaled instances exceeds a threshold value, this would trigger spawning a new machine from the autoscaling group.
Is this possible?
Has anyone ever tried to implement autoscaling rules based on number of requests in queue this way ?
This is totally possible (and a good approach).
Step 1. Create custom CloudWatch metric for "Requests in queue".
You will have to write your own agent that runs passenger-status, extracts the value and sends it to CloudWatch. You can use any AWS SDK or just the AWS CLI: http://docs.aws.amazon.com/cli/latest/reference/cloudwatch/put-metric-data.html (a sketch follows after these steps).
Step 2. Create alarms for scale up and scale down based on your custom metric.
Step 3. Modify scaling policy for your Auto Scaling Group to use your custom alarms to scale up/down.
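A minimal sketch of such an agent, meant to run from cron every minute. The Custom/Passenger namespace, metric name and Auto Scaling group name are invented for the example, and the grep assumes the default passenger-status output format:
#!/bin/bash
# Scrape the global queue depth from passenger-status and publish it.
# All instances publish to the same metric, so the Average statistic gives
# the per-fleet average the question asks about.
QUEUE=$(passenger-status | grep -i 'Requests in queue' | head -1 | grep -o '[0-9]\+')

aws cloudwatch put-metric-data \
  --namespace "Custom/Passenger" \
  --metric-name RequestsInQueue \
  --dimensions AutoScalingGroupName=my-asg \
  --value "${QUEUE:-0}" \
  --unit Count
The scale-up alarm from step 2 then watches the Average statistic of this metric, and the scaling policy from step 3 is attached to that alarm.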

How to auto resize/scale an Amazon AWS EC2 instance

Currently I am on the t2.micro, and I read that Amazon allows an auto scaling option so the server can expand/shrink according to the traffic, which is perfect.
So my questions are:
What exactly should I do in order to enable the auto scaling/resizing
of the server when needed or when the traffic starts to spike?
Is there an option to allow changing the instance type automatically?
Auto scaling, I believe, means adding more instances and balancing the load between them, so does this mean I need to have a background in load balancing and all that jargon that comes with it, or does Amazon take care of that automatically?
I am totally new to the whole server maintenance/provisioning land, so please try to explain as simply as possible. Also, the only reason I went with Amazon is the automation capabilities it offers, but sadly their docs are very complex and many things could go wrong.
If you want to scale your instance vertically and don't mind some downtime, I can suggest this workaround.
TL;DR: set an alarm in AWS CloudWatch to "ping" SNS when a specific condition is triggered (e.g. CPU or RAM above some percentage) and set up a Skeddly action to automatically resize your instance when the SNS endpoint is pinged.
Details:
Subscribe to Skeddly, a service to automate actions on AWS. It's free if you don't use it a lot.
Set up a "Change EC2 Instances" action and activate the SNS feature, then copy the SNS endpoint link. Be sure to clearly define the instance(s) affected by the action!
Go to the AWS Simple Notification Service dashboard and create a new "topic", then select it and choose "Subscribe to topic" from the Actions menu.
Paste there the SNS endpoint provided by Skeddly, then wait until the subscription is confirmed (it takes a while).
Now move to AWS CloudWatch and set up an alarm for any metric that you find meaningful for your instance up/downscaling, e.g. CPU >= 90% for 1 day.
For each alarm, add a notification selecting the "topic" previously defined in SNS.
You are done!
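For reference, the SNS and CloudWatch part of these steps can also be scripted with the AWS CLI. This is only a sketch; the Skeddly endpoint URL, topic name, instance ID and thresholds are placeholders:
# Create the topic and subscribe the HTTPS endpoint provided by Skeddly
# (Skeddly then has to confirm the subscription).
TOPIC_ARN=$(aws sns create-topic --name scale-instance-trigger \
  --query 'TopicArn' --output text)
aws sns subscribe --topic-arn "$TOPIC_ARN" \
  --protocol https \
  --notification-endpoint "https://example-skeddly-endpoint/placeholder-token"

# Alarm that notifies the topic when CPU stays at or above 90% for a full day.
aws cloudwatch put-metric-alarm \
  --alarm-name "resize-my-instance" \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average --period 3600 --evaluation-periods 24 \
  --threshold 90 --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions "$TOPIC_ARN"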
Auto scaling with EC2 assumes "horizontal" scaling, i.e. adding more instances to an auto scaling group.
There is no widely used, standard pattern for "vertical" scaling, i.e. automatically increasing the size of an individual instance.
In order to effectively understand and use auto scaling for your application, then yes, you "need to have a background about load balancing and all that jargon that comes with it". See http://docs.aws.amazon.com/autoscaling/latest/userguide/GettingStartedTutorial.html
I'm assuming you are using the AWS management console. These operations are also possible using the Command Line Interface or AWS CloudFormation.
To resize an instance, you have to stop it, then go to Actions > Instance Settings > Change Instance Type.
As you can see, this operation is not automatic. In AWS you don't autoscale an individual instance but an autoscaling group, which is a group of instances. So, according to your memory/CPU usage, you can automatically start new instances (but not increase the size of the current ones).
To create an autoscaling group, go to Auto Scaling Groups in the EC2 menu.
You will need to create a Launch Configuration first, which describes the properties of the instances you want to automatically scale. Then you will be able to define your scaling policies based on your CloudWatch alarms (CPU usage, instance status...).
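As a sketch of the equivalent CLI route mentioned above (all names, the AMI ID, subnets and sizes are placeholders):
# 1. Launch configuration: the template for instances the group will start.
aws autoscaling create-launch-configuration \
  --launch-configuration-name web-lc \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro

# 2. The Auto Scaling group itself.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-configuration-name web-lc \
  --min-size 1 --max-size 4 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"

# 3. A simple scaling policy that adds one instance; the policy ARN this
#    returns is what you attach to a CloudWatch CPU alarm as its alarm action.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-asg \
  --policy-name add-one-instance \
  --scaling-adjustment 1 \
  --adjustment-type ChangeInCapacity
If a load balancer fronts the group (e.g. attached via --target-group-arns), traffic is spread across whatever instances are currently running, so the load balancing side is largely handled for you.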

AWS CloudWatch Alarms to multiple EC2 instances

I want to apply a CloudWatch alarm to stop instances which aren't being used in our pre-production environment. We often have instances being spun up, used and then left turned on, which is really starting to cost us a fair amount of money.
CloudWatch alarms have a handy feature whereby they can stop an instance based on certain metrics - this is awesome and exactly what I'd like to use to keep a constant eye on the servers and have it tidy up the instances for me.
The problem with this is that it appears that the CloudWatch alarms need to be created individually against each instance. Is there a way in which I can create one alarm which would share values across all current and future instances which will be started?
ETA - Alternatively, tell me that these options are better than CloudWatch and I'll be happy at that.
AWS EC2 stop all through PowerShell/CMD tools
Add a startup script that creates the CloudWatch alarm to the base image you use to generate your VMs.
http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/CLIReference.html
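A sketch of what such a baked-in startup script might do, using the built-in ec2:stop alarm action; the region, thresholds and naming convention are assumptions:
#!/bin/bash
# At boot, register an alarm that stops this instance after ~6 hours of idling.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
REGION=us-east-1

aws cloudwatch put-metric-alarm \
  --region "$REGION" \
  --alarm-name "auto-stop-$INSTANCE_ID" \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value="$INSTANCE_ID" \
  --statistic Average --period 3600 --evaluation-periods 6 \
  --threshold 5 --comparison-operator LessThanOrEqualToThreshold \
  --alarm-actions "arn:aws:automate:$REGION:ec2:stop"
Alarms left behind by terminated instances still accumulate, so pairing this with a periodic cleanup (as the next answer suggests) is sensible.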
I don't believe this is possible - CloudWatch seems designed to be 'very manual' or 'very automated', i.e. you can't set up one alarm which would go off if any one instance is idle; you have to set up individual alarms for each instance.
A couple of possible solutions, which are probably not what you want to hear:
Script your instance creation, and add a call to CloudWatch to create an alarm for each instance.
Run a service continually that looks for instances, checks that each one has an alarm, creates alarms for new instances, and removes alarms for instances which have been terminated.
I think what you are actually looking for would be auto-scaling:
https://aws.amazon.com/documentation/autoscaling/

Monitor Volume Size on Amazon EBS volume

I was wondering if it is possible to automatically monitor the usage percentage on an EBS volume in AWS (the volume I wish to monitor is attached to an instance). Perhaps this can be done with alarms in CloudWatch? For example, I need to be alerted if the volume usage percentage reaches 95%. Any ideas?
Amazon won't do this for you - from their point of view an EBS volume is just a bunch of blocks.
In the past I've done this by writing a script (run via a cron job) that checked the amount of free space on the volume and posted it to CloudWatch, which was set up to trigger an alarm past a certain threshold; a sketch of such a script is shown below.
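A minimal version of such a script might look like this; the mount point, namespace, metric and dimension names are placeholders, and it assumes GNU df and the AWS CLI are available on the instance:
#!/bin/bash
# Report the volume's used-space percentage to CloudWatch; run from cron.
USED_PCT=$(df --output=pcent /data | tail -1 | tr -dc '0-9')
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

aws cloudwatch put-metric-data \
  --namespace "Custom/Disk" \
  --metric-name DiskUsedPercent \
  --dimensions InstanceId="$INSTANCE_ID",MountPath=/data \
  --value "$USED_PCT" \
  --unit Percent
An alarm on Custom/Disk DiskUsedPercent with a threshold of 95 then covers the 95% requirement.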
Amazon also provides such a script.
AWS provides a Perl script which can be used to create CloudWatch alerts/metrics, as detailed here:
https://serverfault.com/questions/439928/making-alarm-in-disk-space-using-cloudwatch
An update on this question.
All the answers are now outdated and the links posted show deprecated procedures.
The new way to get disk usage on EC2 is to use the unified CloudWatch agent, which has pre-built capabilities to collect these metrics from EC2 instances when configured correctly.
You can follow the instructions in these docs: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html
Once the agent is reporting, you can create a CloudWatch alarm on the volume's disk usage metric (free disk space or used percentage) and have it send notifications to SNS.
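A hedged sketch of the unified-agent route: a minimal agent configuration that collects disk usage, the command that loads it, and an alarm on the resulting metric. The config path, instance ID, dimension values and SNS topic ARN are placeholders, and the alarm's dimensions must exactly match those the agent actually publishes (by default InstanceId, path, device and fstype), so check the metric in the console first.
# Minimal agent config collecting the used-space percentage of the root volume.
sudo tee /opt/aws/amazon-cloudwatch-agent/etc/disk-config.json > /dev/null <<'EOF'
{
  "metrics": {
    "metrics_collected": {
      "disk": {
        "measurement": ["used_percent"],
        "resources": ["/"]
      }
    }
  }
}
EOF

# Load the config and (re)start the agent.
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 \
  -c file:/opt/aws/amazon-cloudwatch-agent/etc/disk-config.json -s

# The agent publishes disk_used_percent under the CWAgent namespace.
aws cloudwatch put-metric-alarm \
  --alarm-name "root-volume-95-percent" \
  --namespace CWAgent --metric-name disk_used_percent \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 Name=path,Value=/ Name=device,Value=xvda1 Name=fstype,Value=xfs \
  --statistic Average --period 300 --evaluation-periods 1 \
  --threshold 95 --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:disk-alerts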