AWS CloudWatch - Creating a metric from different datapoints in time

Is it possible to define a CloudWatch metric as the difference between the same metric at two consecutive data points in time?
I need to measure how many objects have been put in an S3 bucket over a given time period, so I would take the difference of NumberOfObjects across this time window. PS: I couldn't find any "New objects" metric (which is not the same as PutRequests).

You can use the DIFF function.
It returns the difference between each value in the time series and the preceding value from that time series.
Expression:
DIFF(m1)
I do not have much data to test it with, but in this example I added 2 new objects, and using that expression shows the new objects added that day.
Reference:
Functions supported for metric math
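For scripted retrieval, here is a minimal sketch of the same idea using boto3's get_metric_data (the region and bucket name are placeholders; NumberOfObjects is a daily S3 storage metric reported with the StorageType: AllStorageTypes dimension):

import datetime
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region

response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {
            "Id": "m1",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/S3",
                    "MetricName": "NumberOfObjects",
                    "Dimensions": [
                        {"Name": "BucketName", "Value": "my-bucket"},  # placeholder bucket
                        {"Name": "StorageType", "Value": "AllStorageTypes"},
                    ],
                },
                "Period": 86400,  # NumberOfObjects is reported once per day
                "Stat": "Average",
            },
            "ReturnData": False,  # hide the raw series; only the DIFF result is returned
        },
        {"Id": "e1", "Expression": "DIFF(m1)", "Label": "New objects per day"},
    ],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(days=14),
    EndTime=datetime.datetime.utcnow(),
)
print(response["MetricDataResults"][0]["Values"])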

Related

How do I query Prometheus for the timeseries that was updated last?

I have 100 instances of a service that use one database. I want them to export a Prometheus metric with the number of rows in a specific table of this database.
To avoid hitting the database with 100 queries at the same time, I periodically elect one of the instances to do the measurement and set a Prometheus gauge to the number obtained. Different instances may be elected at different times. Thus, each of the 100 instances may have its own value of the gauge, but only one of them is “current” at any given time.
What is the best way to pick only this “current” value from the 100 gauges?
My first idea was to export two gauges from each instance: the actual measurement and its timestamp. Then perhaps I could take the max(timestamp) and combine it with the actual metric using the 'and' operator. But I can't figure out how to do this in PromQL, because max will erase the instance label that I would need to 'and' on.
My second idea was to reset the gauge to −1 (some sentinel value) at some time after the measurement. But this looks brittle, because if I don’t synchronize everything tightly, the “current” gauge could be reset before or after the “new” one is set, causing gaps or overlaps. Similar considerations go for explicitly deleting the metric and for exporting it with an explicit timestamp (to induce staleness).
I figured out the first idea (not tested yet):
avg(my_rows_count and on(instance) topk(1, my_rows_count_timestamp))
avg could just as well be max or min; it only serves to erase the instance label from the final result.
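For completeness, a minimal sketch of the exporter side of this idea with the Python prometheus_client library (the gauge names match the question; the election mechanism is left out, so record_measurement is a hypothetical hook called only on the elected instance):

import time
from prometheus_client import Gauge, start_http_server

my_rows_count = Gauge("my_rows_count", "Number of rows in the tracked table")
my_rows_count_timestamp = Gauge("my_rows_count_timestamp", "Unix time of the last measurement")

def record_measurement(count):
    # Called only on the instance currently elected to query the database.
    # The paired timestamp gauge is what topk(1, ...) selects on.
    my_rows_count.set(count)
    my_rows_count_timestamp.set(time.time())

start_http_server(8000)  # expose /metrics for Prometheus to scrape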
last_over_time should do the trick
last_over_time(my_rows_count[1m])
given that only one of them is “current” at any given time, like you said.

aws cloudwatch metrics - AVG over a range

I want to make an average graph of the CDCLatencySource and CDCLatencyTarget of a few ARNs.
CDCLatencySource are m1,m2,m3,m4
CDCLatencyTarget are m5,m6,m7,m8
So I made another row, AVG([m1,m4]), for the Source, and the same for the Target.
But it looks like it averages only m1 & m4 and not the whole range.
What am I missing?
You will need to include all metrics, so for your CDCLatencySource it would be AVG([m1,m2,m3,m4]).
Similarly, for CDCLatencyTarget the value would be AVG([m5,m6,m7,m8]).
The functions do not accept ranges; each metric ID must be listed individually in the array passed to the function.
More information is available in the CloudWatch metric math documentation for further reading.
From the docs:
AVG: The AVG of a single time series returns a scalar representing the average of all the data points in the metric. The AVG of an array of time series returns a single time series. Missing values are treated as 0.
Thus you need to provide the full array of time series:
AVG([m1,m2,m3,m4])
AVG([m5,m6,m7,m8])
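Outside the console, the same expressions can be passed to get_metric_data; a boto3 sketch (the task identifiers are placeholders, and the AWS/DMS dimension names are illustrative; check the DMS metrics documentation, which also lists ReplicationInstanceIdentifier):

import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

def dms_query(query_id, metric_name, task_id):
    # Build one raw CDC latency query; ReturnData=False keeps it off the graph.
    return {
        "Id": query_id,
        "MetricStat": {
            "Metric": {
                "Namespace": "AWS/DMS",
                "MetricName": metric_name,
                "Dimensions": [{"Name": "ReplicationTaskIdentifier", "Value": task_id}],
            },
            "Period": 300,
            "Stat": "Average",
        },
        "ReturnData": False,
    }

task_ids = ["task-1", "task-2", "task-3", "task-4"]  # placeholders
queries = [dms_query(f"m{i + 1}", "CDCLatencySource", t) for i, t in enumerate(task_ids)]
queries += [dms_query(f"m{i + 5}", "CDCLatencyTarget", t) for i, t in enumerate(task_ids)]
queries.append({"Id": "e1", "Expression": "AVG([m1,m2,m3,m4])", "Label": "Avg CDCLatencySource"})
queries.append({"Id": "e2", "Expression": "AVG([m5,m6,m7,m8])", "Label": "Avg CDCLatencyTarget"})

response = cloudwatch.get_metric_data(
    MetricDataQueries=queries,
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=3),
    EndTime=datetime.datetime.utcnow(),
)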

Stackdriver Logs-Based Metrics - need sum over alignment period

We have some stackdriver log entries that look something like this:
{
  insertId: "xyz"
  jsonPayload: {
    countOfApples: 100
    // other stuff
  }
  // other stuff
}
We would like to be able to set up a log-based metric that tells us the total number of apples seen in the past 10 mins (or any alignment period) but I have, thus far, been unable to find a means of doing so despite reading through the documentation.
Attempt 1:
Filter for those log-entries where countOfApples is specified and create a Counter metric with countOfApples as a label.
Having done this, I can filter based on countOfApples being above or below a certain value, but I cannot see a means of aggregating based on this value. All the aggregation options seem to apply to the number of log entries matching the filter over the alignment period.
Attempt 2:
Filter for those log-entries where countOfApples is specified and create a distribution metric with the Field Name set to jsonPayload.countOfApples.
This seems to get closer because I can now see the apple count in the metrics explorer, but I cannot find the correct combination of aligner/reducer to just give me the total number of apples over the period. Selecting Aligner: delta & Reducer: sum results in an error message:
"This aggregation does not produce a valid data type for a Line plot type. Click here to switch the aligner to sum and the reducer to 99th percentile."
Is it possible to just monitor the total sum of all these values over each alignment period?
As of 2019/05/03, it is not possible to create a counter metric based on the values stored in the logs. Putting the values into a label simply exposes them as strings, which lets you filter but not perform aggregations based on those values. According to the documentation, a counter metric counts log entries, not the values in those log entries. As you've noticed, there aren't enough operations available on distribution metrics to do what you want.
For now, your best bet is to write your own custom metric based on those log values. You can do this by exporting your logs to Cloud Pub/Sub and writing some code to process the logs from Pub/Sub and send custom metrics. Alternatively, you could try to configure the Stackdriver monitoring agent to extract the values using the tail plugin, and send them as custom metrics.
If you just need to graph and explore the values (rather than, e.g., use them for alerting), you could try Cloud Datalab.
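As an illustration of the custom-metric route, a minimal sketch with the current Python google-cloud-monitoring client (the metric type custom.googleapis.com/apple_count and the project ID are made up; the value would come from your Pub/Sub log-processing code):

import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-project"  # placeholder project ID

series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/apple_count"  # hypothetical metric type
series.resource.type = "global"

now = time.time()
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": int(now), "nanos": int((now % 1) * 1e9)}}
)
point = monitoring_v3.Point({"interval": interval, "value": {"int64_value": 100}})
series.points = [point]

# Each write appends one point; sum them per alignment period in Metrics Explorer.
client.create_time_series(name=project_name, time_series=[series])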
If anyone is still looking for how to solve this: it seems that it's now possible to do a sum aggregation on a distribution metric using the sum_from function. Example:
fetch k8s_container
| metric 'logging.googleapis.com/user/tracking-data-len'
| group_by [], sum(sum_from(value))

Cumulative sum of AWS Cloudwatch Metric

AWS Cloudwatch receives a count of 1 every time I start an image download. I am downloading 1,000s of images (on a cluster of EC2 instances) and would like to track the total progress.
I can't find any documentation on how to plot the cumulative sum of a metric. The AWS Cloudwatch Math Expressions looked promising, but they do not have an integrate function.
Currently, I can plot the sum of the started image downloads, but only per period. Ideally, I'd like to plot the integral (running total) of this plot.
You can get a cumulative sum over the current range by using the SUM() function together with a helper series over the original range containing only the number one (1). Remember, you're looking for a single number in the end, so it's not much of a graph, but you need to turn the single-value sum back into a time-series.
Define m1 as your metric. This is the metric you will want to use SUM() on.
Define an expression e1 as m1/m1. This results in a time-series with every value equal to 1. This is what will allow you to convert that SUM back to a time-series.
Define an expression e2 as SUM(m1) / e1. This is, effectively, the cumulative sum of m1 divided by one for every data-point in the original time-series. It will be a horizontal line on the graph, with every point on that line being the cumulative sum of metric m1. This is required because CloudWatch can only plot a time-series on the chart, not a single value.
Make m1 and e1 invisible. You need them, but you don't need to see them.
Finally, change the chart type from Line to Number, since you only wanted the cumulative sum anyway.
The reason you can't use SUM() directly is that it is a single value. By dividing by a time-series containing all 1s, the entire graph becomes the result of the SUM(). Then, changing the chart to a Number effectively hides all the math and presents only the "final result".
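The same three definitions, sketched as get_metric_data queries (the namespace and metric name are placeholders for your download counter):

import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {
            "Id": "m1",  # the download-count metric (placeholder names)
            "MetricStat": {
                "Metric": {"Namespace": "MyApp", "MetricName": "ImageDownloadStarted"},
                "Period": 60,
                "Stat": "Sum",
            },
            "ReturnData": False,
        },
        # e1: a series of all 1s, used to turn the scalar SUM back into a series
        {"Id": "e1", "Expression": "m1/m1", "ReturnData": False},
        # e2: the grand total, repeated at every datapoint
        {"Id": "e2", "Expression": "SUM(m1)/e1", "Label": "Total downloads"},
    ],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=6),
    EndTime=datetime.datetime.utcnow(),
)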
Looks like RUNNING_SUM() has been added, which does what you need:
Graph with RUNNING_SUM
You can find RUNNING_SUM() under "Add math"->"All functions"
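For example, with m1 defined as your download-count metric (Sum statistic), the whole thing reduces to:
Expression:
RUNNING_SUM(m1)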
You are correct. All Amazon CloudWatch metrics are for a defined period.
The maximum period for a metric is one day, so this is not suitable for a cumulative counter that you wish to continue beyond one day.
You would need to find an alternate method of storing the count, such as an Amazon DynamoDB table. Use an atomic counter via UpdateItem to increment the count.
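A minimal sketch of that atomic counter with boto3 (the table, key, and attribute names are placeholders):

import boto3

dynamodb = boto3.client("dynamodb")

# Atomically increment the counter; ADD creates the attribute on first use.
response = dynamodb.update_item(
    TableName="download-progress",  # placeholder table keyed on job_id
    Key={"job_id": {"S": "batch-001"}},
    UpdateExpression="ADD downloads :inc",
    ExpressionAttributeValues={":inc": {"N": "1"}},
    ReturnValues="UPDATED_NEW",
)
print(response["Attributes"]["downloads"]["N"])  # running total so far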
You can also use a very long period.
Change your stat to SUM, and set your metric's period to 7 days. You'll get a time series of 1 point with the cumulative sum of all the downloads.
If you give each download a unique dimension value, you can keep your queries separate.

Show CloudWatch metric with unit Seconds in Hours

I have a custom CloudWatch metric with unit Seconds (representing the age of a cache).
As usual values are around 125,000, I'd like to convert them into hours for better readability.
Is that possible?
This has changed with the addition of Metrics Math. You can do all sorts of transformations on your data, both manually (from the console) and from CloudFormation dashboard templates.
From the console: see the link above, which says:
To add a math expression to a graph:
1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
2. Create or edit a graph or line widget.
3. Choose Graphed metrics.
4. Choose Add a math expression. A new line appears for the expression.
5. For the Details column, type the math expression. The tables in the following section list the functions you can use in the expression.
6. To use a metric or the result of another expression as part of the formula for this expression, use the value shown in the Id column. For example, m1+m2 or e1-MIN(e1).
From a CloudFormation Template
You can add new metrics which are Metrics Math expressions, transforming existing metrics. You can add, subtract, multiply, etc. metrics and scalars. In your case, you probably just want to use divide, like in this example:
Say you have the following bucket request latency metrics object in your template:
"metrics":[
["AWS/S3","TotalRequestLatency","BucketName","MyBucketName"]
]
The latency default is in milliseconds. Let's plot it in seconds, just for fun. 1s = 1,000ms so we'll add the following:
"metrics":[
["AWS/S3","TotalRequestLatency","BucketName","MyBucketName",{"id": "timeInMillis"}],
[{"expression":"timeInMillis / 1000", "label":"LatencyInSeconds","id":"timeInSeconds"}]
]
Note that the expression has access to the ID of the other metrics. Helpful naming can be useful when things get more complicated, but the key thing is just to match the variables you put in the expression to the ID you assign to the corresponding metric.
This leaves us with a graph with two metrics on it: one milliseconds, the other seconds. If we want to lose the milliseconds, we can, but we need to keep the metric values around to compute the math expression, so we use the following work-around:
"metrics":[
["AWS/S3","TotalRequestLatency","BucketName","MyBucketName",{"id": "timeInMillis","visible":false}],
[{"expression":"timeInMillis / 1000", "label":"LatencyInSeconds","id":"timeInSeconds"}]
]
Making the metric invisible takes it off the graph while still allowing us to compute our expression off of it.
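Applying the same pattern to this question's cache-age metric (3,600 seconds per hour; the namespace and metric name are hypothetical stand-ins for your custom metric):
"metrics": [
  ["MyApp", "CacheAgeSeconds", {"id": "ageSeconds", "visible": false}],
  [{"expression": "ageSeconds / 3600", "label": "CacheAgeHours", "id": "ageHours"}]
]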
CloudWatch does not do any unit conversion (e.g. seconds into hours), so you cannot use the AWS console to display your 'Seconds' datapoint values converted to hours.
You could publish your metric values as hours instead (leaving the Unit field blank or setting it to 'None').
Otherwise, if you still want to publish the datapoints with unit 'Seconds', you could retrieve them (using the GetMetricStatistics API) and graph the values using some other dashboard/graphing solution.
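If you go the retrieval route, a minimal boto3 sketch that fetches the seconds-valued datapoints and converts them to hours client-side (the namespace and metric name are placeholders):

import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

stats = cloudwatch.get_metric_statistics(
    Namespace="MyApp",  # placeholder custom namespace
    MetricName="CacheAgeSeconds",
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(days=1),
    EndTime=datetime.datetime.utcnow(),
    Period=3600,
    Statistics=["Average"],
)
for dp in sorted(stats["Datapoints"], key=lambda d: d["Timestamp"]):
    print(dp["Timestamp"], dp["Average"] / 3600, "hours")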