I have been having some difficulty working out the ideal threshold for a few of our CloudWatch alarms. I am looking at metrics for error rate, fault rate, and failure rate. I am loosely aiming for an evaluation period of around 15 minutes, and my metrics are currently recorded at one-minute granularity. I have the following ideas:
To look at the average of the minute-level data over a few days and set the threshold slightly higher than that.
To try different thresholds (t1, t2, ...) and, for a given day, count how many times the datapoints cross each one in 15-minute bins (a rough sketch of this follows below).
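For the second idea, here is roughly what I mean - a quick pandas sketch, assuming the minute-level datapoints have already been exported to a CSV (the file and column names are made up):

```python
import pandas as pd

# Hypothetical export of the minute-level metric (e.g. pulled via
# GetMetricData and saved as CSV); column names are made up.
df = pd.read_csv("error_rate.csv", parse_dates=["timestamp"], index_col="timestamp")

for t in [0.5, 1.0, 2.0]:  # candidate thresholds t1, t2, ...
    # Count breaching minute-level datapoints in each 15-minute bin.
    breaches = (df["error_rate"] > t).resample("15min").sum()
    # How many 15-minute windows would have fired at this threshold?
    print(f"t={t}: {(breaches > 0).sum()} of {len(breaches)} bins breached")
```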
Not sure if this is the right way of going about it; do share if there is a better approach to the problem.
PS 1: I know that thresholds should be based on Service Level Agreements (SLAs), but let's say we do not have an SLA yet.
PS 2: Also, can I export data from CloudWatch to Excel for some easier manipulation? Currently I am running a few queries in Logs Insights to calculate error rates.
In your case, maybe you could also try Amazon CloudWatch Anomaly Detection instead of static thresholds:
You can create an alarm based on CloudWatch anomaly detection, which mines past metric data and creates a model of expected values. The expected values take into account the typical hourly, daily, and weekly patterns in the metric.
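For example, a rough sketch of creating such an alarm with boto3 - the namespace and metric name are placeholders, and the band width of two standard deviations is just a starting point:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="error-rate-anomaly",       # illustrative name
    ComparisonOperator="GreaterThanUpperThreshold",
    EvaluationPeriods=15,                 # 15 x 1-minute periods ~ your 15-minute window
    DatapointsToAlarm=15,
    TreatMissingData="notBreaching",
    ThresholdMetricId="ad1",              # alarm against the model's upper band
    Metrics=[
        {
            "Id": "m1",
            "MetricStat": {
                "Metric": {
                    "Namespace": "MyService",   # placeholder
                    "MetricName": "ErrorRate",  # placeholder
                },
                "Period": 60,
                "Stat": "Average",
            },
            "ReturnData": True,
        },
        {
            "Id": "ad1",
            # Band width of 2 standard deviations; widen it to reduce noise.
            "Expression": "ANOMALY_DETECTION_BAND(m1, 2)",
            "Label": "ErrorRate (expected)",
        },
    ],
)
```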
Recently we faced an outage due to a 403 rateLimitExceeded error, and we are trying to set up an alert on a GCP metric for it. However, the metrics bigquery.googleapis.com/query/count and bigquery.googleapis.com/job/num_in_flight are not showing the number of running queries correctly. We believe we crossed the threshold of 100 concurrent queries several times over the past few days, but Metrics Explorer shows a maximum of only 5, and only on a few occasions. Do these metrics need any other configuration to show the right number, or should we use some other way to create an alert for when we cross 80% of the concurrent-query limit?
Background
I have a website deployed on multiple machines. I want to create a Google custom metric that reports its throughput - how many calls were served.
The idea was to collect information about served requests and, once per minute, write it to a custom metric. So on each machine this code runs at most once per minute, but the process happens on every machine in my cluster.
Running the code locally is working perfectly.
The problem
I'm getting this error:

```
Grpc.Core.RpcException: Status(StatusCode=InvalidArgument, Detail="One or more TimeSeries
could not be written: One or more points were written more frequently
than the maximum sampling period configured for the metric. {Metric:
custom.googleapis.com/web/2xx, Timestamps: {Youngest Existing:
'2019/09/28-23:58:59.000', New: '2019/09/28-23:59:02.000'}}:
timeSeries[0]; One or more points were written more frequently than
the maximum sampling period configured for the metric. {Metric:
custom.googleapis.com/web/4xx, Timestamps: {Youngest Existing:
'2019/09/28-23:58:59.000', New: '2019/09/28-23:59:02.000'}}:
timeSeries[1]")
```
Then, I was reading in the custom metric limitations that:
Rate at which data can be written to a single time series = one point per minute
I was thinking that Google Cloud custom metrics would handle the concurrency issues for me.
According to their limits, the only option for implementing real-time monitoring is to run another application that collects the information from all machines and writes the aggregate to a custom metric. It sounds to me like too much work for such a common use case.
What am I missing?
Now, add the machine name as a label on the metric and you get per-machine metrics; the one-point-per-minute limit then applies to each machine's own time series rather than to a single shared one.
To SUM these metrics, go to Stackdriver > Metrics Explorer, group your metrics by project-id or by a label, for example, and then SUM the metrics.
https://cloud.google.com/monitoring/charts/metrics-selector#alignment
You can save the chart in a custom dashboard.
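For reference, a minimal sketch of the per-machine write with the google-cloud-monitoring Python client - the metric type matches the question, while the label name, project ID, and counter value are illustrative:

```python
import socket
import time

from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/YOUR_PROJECT_ID"  # placeholder

series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/web/2xx"
# One label value per machine => one time series per machine, so the
# one-point-per-minute limit no longer collides across machines.
series.metric.labels["machine"] = socket.gethostname()
series.resource.type = "global"
series.resource.labels["project_id"] = "YOUR_PROJECT_ID"  # placeholder

now = time.time()
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": int(now), "nanos": int((now % 1) * 1e9)}}
)
point = monitoring_v3.Point(
    {"interval": interval, "value": {"int64_value": 42}}  # served-request count
)
series.points = [point]

client.create_time_series(name=project_name, time_series=[series])
```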
I have been trying to register an alert on spikes in some metrics using Stackdriver. Here's the query and details:
1. If there is a sudden spike and the 500s cross 20
2. If the total number of requests (200s or others) crosses 3000 over 5 mins
To achieve [1], I set the aggregation to mean and the aligner to mean (sum as the aligner doesn't seem to work - I don't understand why). This query fires if the average of requests over 5 mins is over 20 (which is the expected behavior), but I am not able to catch any single spike, which is the requirement.
Again, for [2], the average over a certain duration works, but the summation of requests doesn't seem to work.
Is there a way of achieving either or both of these requirements?
PS: Please let me know if you need more data or snippets of the dashboard to understand what I have done till now. I will go ahead and add some accordingly.
I do not believe there is aggregation when trying to set up an alert. As an example for [1], please go to
Stackdriver Monitoring
Alerting
Create a policy and add your conditions
Select your Resource Type
Select your metric, condition and threshold = 20
Response_code_class = 500
Save condition
The alerting UI has changed since the previous answer was written. You can now specify aggregations when creating alerting policies. That said, I don't think you want mean; that's going to smooth out your curve which will defeat your intended use case. A simple threshold alert with a short duration (even zero) ought to do it, I think.
For your second case, you ought to be able to compute a five-minute sum and alert on that. If you still can't get it to work, respond here or file a support ticket and we'll see how we can help you.
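For reference, a rough sketch of such a policy with the google-cloud-monitoring Python client - the filter is illustrative, so substitute your actual metric and resource type:

```python
from google.cloud import monitoring_v3

client = monitoring_v3.AlertPolicyServiceClient()
project_name = "projects/YOUR_PROJECT_ID"  # placeholder

# Sum the request count over 5-minute windows and alert above 3000.
condition = monitoring_v3.AlertPolicy.Condition(
    display_name="Requests > 3000 per 5 min",
    condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
        # Illustrative filter; use your actual metric and resource type.
        filter='metric.type="loadbalancing.googleapis.com/https/request_count"'
               ' AND resource.type="https_lb_rule"',
        comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
        threshold_value=3000,
        duration={"seconds": 0},  # fire on the first breaching window
        aggregations=[
            monitoring_v3.Aggregation(
                alignment_period={"seconds": 300},
                per_series_aligner=monitoring_v3.Aggregation.Aligner.ALIGN_SUM,
            )
        ],
    ),
)

policy = monitoring_v3.AlertPolicy(
    display_name="High request volume",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[condition],
)

client.create_alert_policy(name=project_name, alert_policy=policy)
```

The ALIGN_SUM aligner over a 300-second alignment period is what turns the per-minute counts into a five-minute sum; the zero duration makes the condition fire as soon as a single aligned window breaches.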
Aaron Sher, Stackdriver engineer
I've been scouring different sources (Boto3 docs, AWS docs, among others) and most list only a limited set of time units: Seconds, Milliseconds, and Microseconds. Say I want to measure a metric in Minutes. How would I go about publishing a custom metric that does this?
Seconds, Microseconds and Milliseconds are the only supported time units: https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_MetricDatum.html
If you want to graph your data using CloudWatch Dashboards, in Minutes, you could publish the data in Seconds and then use metric math to get the data in Minutes: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/using-metric-math.html
You give the metric id m1 and then your expression would be m1/60.
You can also use metric math with GetMetricData API, in case you need raw values instead of a graph: https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/get-metric-data.html
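Putting that together, a rough boto3 sketch - the namespace and metric name are hypothetical, and the metric is assumed to be published in Seconds:

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {
            "Id": "m1",
            "MetricStat": {
                "Metric": {
                    "Namespace": "MyApp",          # hypothetical
                    "MetricName": "TaskDuration",  # published in Seconds
                },
                "Period": 300,
                "Stat": "Average",
            },
            "ReturnData": False,  # only return the converted series below
        },
        {
            "Id": "e1",
            "Expression": "m1 / 60",  # metric math: seconds -> minutes
            "Label": "TaskDuration (minutes)",
        },
    ],
    StartTime=datetime.utcnow() - timedelta(hours=3),
    EndTime=datetime.utcnow(),
)
print(response["MetricDataResults"][0]["Values"])
```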
I'm trying to set up a custom dashboard in CloudWatch, and don't understand how "Period" is affecting how my data is being displayed.
Couldn't find any coverage of this variable in the AWS documentation, so any guidance would be appreciated!
Period is the width of the time range of each datapoint on a graph and it's used to define the granularity at which you want to view your data.
For example, if you're graphing the total number of visits to your site during a day, you could set the period to 1 hour, which would plot 24 datapoints and show how many visitors you had in each hour of that day. If you set the period to 1 minute, the graph will display 1440 datapoints and show how many visitors you had in each minute of that day.
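The same knob exists in the API. A small boto3 sketch of the idea - the namespace and metric name are illustrative, and real metrics usually need Dimensions as well:

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

# One day of data at two different granularities.
common = dict(
    Namespace="MyApp",        # illustrative
    MetricName="PageVisits",  # illustrative
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Statistics=["Sum"],
)

hourly = cloudwatch.get_metric_statistics(Period=3600, **common)  # ~24 datapoints
minutely = cloudwatch.get_metric_statistics(Period=60, **common)  # ~1440 datapoints

print(len(hourly["Datapoints"]), len(minutely["Datapoints"]))
```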
See the CloudWatch docs for more details:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html#CloudWatchPeriods
Here is a similar question that might be useful:
API Gateway Cloudwatch advanced logging