I plotted a custom CloudWatch metric from a Lambda function, but sent the wrong dimensions: in the API call I accidentally swapped the dimension name and the value. Now I am getting a lot of metrics with dimension values like 0.896, 0.345, etc. How can I delete them? They are creating garbage in the metric list.
Dimensions are part of a metric's identity:
A dimension is a name/value pair that is part of the identity of a metric.
Since it's not possible to delete metrics, you can't remove or change the dimensions of metrics already in CloudWatch. You have to wait until they expire after 15 months:
CloudWatch does not support metric deletion. Metrics expire based on the retention schedules described above.
For your case, you have to publish new metrics with the correct dimensions and use those in your graphs and alarms.
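For illustration, a minimal boto3 sketch of a corrected call might look like this (the namespace, metric name, and dimension name are placeholders, not the ones from your Lambda):

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # The dimension "Name" is the label and "Value" is what is being labelled;
    # the measured number goes into the datapoint's "Value", not into the dimension.
    cloudwatch.put_metric_data(
        Namespace="MyApp/Quality",                      # placeholder namespace
        MetricData=[
            {
                "MetricName": "DriftScore",             # placeholder metric name
                "Dimensions": [
                    {"Name": "FeatureName", "Value": "age"},  # name/value the right way round
                ],
                "Value": 0.896,                         # the numeric measurement
                "Unit": "None",
            }
        ],
    )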
Related
I'm trying to combine a number of similar metrics into a single alarm in AWS CloudWatch. For example, for data quality monitoring in SageMaker, one of the metrics emitted by the data quality monitoring job is the feature baseline drift distance for each column; if I have 600 columns, each column will have this metric. Is there a way to compress these metrics into a single CloudWatch alarm?
If not, is there any way to send the violation report as a message via AWS SNS?
While I am not sure exactly what outcome you want when you refer to "compressing the metrics into a single alarm," you can look at using metric math.
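I can't say whether 600 columns fits within the alarm's metric-math limits, but as a rough sketch (all names, dimensions, and the threshold below are assumptions), one alarm can watch the maximum of several drift metrics combined with a math expression:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    def drift_query(query_id, feature):
        # One MetricDataQuery per feature column; ReturnData=False so only the
        # combined expression feeds the alarm.
        return {
            "Id": query_id,
            "MetricStat": {
                "Metric": {
                    "Namespace": "aws/sagemaker/Endpoints/data-metrics",     # assumed namespace
                    "MetricName": "feature_baseline_drift_" + feature,       # assumed metric name
                    "Dimensions": [
                        {"Name": "Endpoint", "Value": "my-endpoint"},            # assumed
                        {"Name": "MonitoringSchedule", "Value": "my-schedule"},  # assumed
                    ],
                },
                "Period": 3600,
                "Stat": "Maximum",
            },
            "ReturnData": False,
        }

    cloudwatch.put_metric_alarm(
        AlarmName="feature-drift-any-column",
        ComparisonOperator="GreaterThanThreshold",
        Threshold=0.1,                   # assumed drift threshold
        EvaluationPeriods=1,
        Metrics=[
            drift_query("m1", "age"),
            drift_query("m2", "income"),
            # ...one entry per column you want covered
            {"Id": "worst", "Expression": "MAX([m1, m2])", "ReturnData": True},
        ],
    )

If the per-alarm metric limit turns out to be too small for 600 columns, splitting the columns across a handful of such alarms may be necessary.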
I'm trying to identify the initial creation date of a metric on CloudWatch using the AWS CLI but don't see any way of doing so in the documentation. I can kind of identify the start date if there is a large block of missing data but that doesn't work for metrics that have large gaps in data.
CloudWatch metrics are "created" with the first PutMetricData call that includes the metric. I use quotes around "created" because the metric doesn't have an independent existence; it's simply an entry in the time-series database. If there's a gap in time with no entries, the metric effectively does not exist for that gap.
Another caveat to CloudWatch metrics is that they only have a lifetime of 455 days, and individual metric values are aggregated as they age (see specifics at link).
All of which begs the question: what's the real problem that you're trying to solve?
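That said, if an approximate first-datapoint time is good enough, one workaround (sketched here with boto3 rather than the raw CLI; the namespace and metric name are placeholders) is to query progressively older windows and keep the earliest timestamp that returns data:

    import boto3
    from datetime import datetime, timedelta, timezone

    cloudwatch = boto3.client("cloudwatch")
    now = datetime.now(timezone.utc)
    earliest = None

    # Walk backwards through the ~15 month retention window in 30-day slices.
    for days_back in range(0, 455, 30):
        end = now - timedelta(days=days_back)
        start = end - timedelta(days=30)
        resp = cloudwatch.get_metric_statistics(
            Namespace="MyApp",            # placeholder namespace
            MetricName="MyMetric",        # placeholder metric name
            StartTime=start,
            EndTime=end,
            Period=86400,                 # one datapoint per day is enough here
            Statistics=["SampleCount"],
        )
        if resp["Datapoints"]:
            window_min = min(dp["Timestamp"] for dp in resp["Datapoints"])
            earliest = window_min if earliest is None else min(earliest, window_min)

    print("Earliest datapoint found:", earliest)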
I currently have a bunch of custom metrics based in multiple regions across our AWS account.
I thought I was going crazy but have now confirmed that the metric I created a while ago is expiring when not used for a certain time period (could be 2 weeks).
Here's my setup.
I create a new metric on my log entry - which has no expiry date;
I then go to the main page on CloudWatch --> then to Metrics to view any metrics (I understand this will only display new metric hits when there are hits that match the metric rule).
About 2 weeks ago, I had 9 metrics logged under my "Custom Namespaces", and I now have 8 - as if it does not keep all the data.
As far as I'm aware, all my metrics should stay in place (unless I remove them); however, it seems that if these are not hit consistently, the data "expires". Is that correct? If so, how are you meant to track historical data?
Thanks
CloudWatch will remove metrics from search if there was no new data published for that metric in the last 2 weeks.
This is mentioned in passing in the FAQ for EC2 metrics, but I think it applies to all metrics.
From the 'Will I lose the metrics data if I disable monitoring for an Amazon EC2 instance?' question in the FAQ:
CloudWatch console limits the search of metrics to 2 weeks after a metric is last ingested to ensure that the most up to date instances are shown in your namespace.
Your data is still there, however; it simply adheres to a different retention policy.
You can still get your data if you know what the metric name is. If you added your metric to a dashboard, it will still be visible there. You can use the CloudWatch PutDashboard API to add the metric to a dashboard, or the CloudWatch GetMetricStatistics API to get the raw data.
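For example, a boto3 sketch along these lines still returns datapoints even when the metric has dropped out of the console search (the namespace, metric name, and dimension here are examples, not yours):

    import boto3
    from datetime import datetime, timedelta, timezone

    cloudwatch = boto3.client("cloudwatch")
    end = datetime.now(timezone.utc)

    # The metric may no longer show up in the console's metric search, but its
    # data can still be queried by exact namespace, name, and dimensions.
    resp = cloudwatch.get_metric_statistics(
        Namespace="Custom/MyApp",                               # example namespace
        MetricName="LoginFailures",                             # example metric name
        Dimensions=[{"Name": "Environment", "Value": "prod"}],  # example dimension
        StartTime=end - timedelta(days=30),
        EndTime=end,
        Period=3600,
        Statistics=["Sum"],
    )
    for dp in sorted(resp["Datapoints"], key=lambda d: d["Timestamp"]):
        print(dp["Timestamp"], dp["Sum"])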
I have a custom metric with many dimensions. Can I setup an alarm against the value of a dimension instead of the metric?
You can't. A metric is identified by its namespace, metric name, and set of dimensions. If you change a dimension value, you've created a new metric. You can only alarm on metric datapoint values.
See CloudWatch Concepts for clarification.
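In practice that means the alarm must name the full metric, dimensions included. A rough boto3 example (all names and numbers below are illustrative):

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # The Dimensions list pins the alarm to exactly one metric; each distinct
    # dimension value needs its own alarm (or a metric math expression).
    cloudwatch.put_metric_alarm(
        AlarmName="population-high-room-a",              # illustrative name
        Namespace="Custom/Building",                     # illustrative namespace
        MetricName="Population",                         # illustrative metric
        Dimensions=[{"Name": "Room", "Value": "A"}],     # illustrative dimension
        Statistic="Maximum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=100,
        ComparisonOperator="GreaterThanThreshold",
    )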
I'm trying to push data into a custom metric on AWS CloudWatch, but wanted to find out more about Dimensions and how they are used. I've already read the AWS documentation, but it doesn't really explain what they are used for or how they affect the graphing UI in the AWS Management Console.
Are Dimensions a way to break down the Metric Value further?
To give a fictitious example, say I have a metric which counts the number of people in a room. The metric's name is called "Population". I report the count once a minute. The Metric Count is set to the number of people. The Dimension field is just a list of Name and Value pairs. Assuming I report a datapoint with a value of 90, can I add two Dimensions as follows:
1. Name: Male, Count: 50
2. Name: Female, Count: 40
Any help will be greatly appreciated.
Yes, you can add dimensions such as you described to your custom metrics.
However, CloudWatch is NOT able to aggregate across these dimensions, as it doesn't know the groups of these dimensions. Basically:
Amazon CloudWatch treats each unique combination of dimensions as a separate metric. For example, each call to mon-put-data in the following figure creates a separate metric because each call uses a different set of dimensions. This is true even though all four calls use the same metric name (ServerStats).
See more information about dimensions in CloudWatch here.
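Mapped onto your example, the dimension name would be something like "Gender" with values "Male" and "Female", and the count goes into the datapoint value. A sketch with boto3 (the namespace is made up):

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Two datapoints with different dimension values become two separate metrics;
    # CloudWatch will not add them up into the room total (90) for you.
    cloudwatch.put_metric_data(
        Namespace="Custom/Building",                     # made-up namespace
        MetricData=[
            {
                "MetricName": "Population",
                "Dimensions": [{"Name": "Gender", "Value": "Male"}],
                "Value": 50,
            },
            {
                "MetricName": "Population",
                "Dimensions": [{"Name": "Gender", "Value": "Female"}],
                "Value": 40,
            },
        ],
    )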
Do note that you can retrieve an aggregated value from the API, as well as plot a graph in CloudWatch, using a math expression. See Using metric math.
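For instance, a GetMetricData request can sum the two per-gender series from the sketch above back into a room total with a math expression (same made-up names):

    import boto3
    from datetime import datetime, timedelta, timezone

    cloudwatch = boto3.client("cloudwatch")
    end = datetime.now(timezone.utc)

    def population(query_id, gender):
        return {
            "Id": query_id,
            "MetricStat": {
                "Metric": {
                    "Namespace": "Custom/Building",      # made-up namespace
                    "MetricName": "Population",
                    "Dimensions": [{"Name": "Gender", "Value": gender}],
                },
                "Period": 60,
                "Stat": "Sum",
            },
            "ReturnData": False,
        }

    resp = cloudwatch.get_metric_data(
        MetricDataQueries=[
            population("males", "Male"),
            population("females", "Female"),
            # The math expression does the cross-dimension aggregation.
            {"Id": "total", "Expression": "males + females", "ReturnData": True},
        ],
        StartTime=end - timedelta(hours=1),
        EndTime=end,
    )
    print(resp["MetricDataResults"][0]["Values"])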
I should probably also add that you can NOT use metric math in alarms.
Update: as @Brooks said, see Amazon CloudWatch Launches Ability to Add Alarms on Metric Math Expressions.
All in all, pretty restricted and user-unfriendly compared to, e.g., DataDog.