Why are some metrics missing in the CloudWatch metrics view? - amazon-web-services

I am using the CloudWatch metrics view to look at DynamoDB metrics. When I search for ReadThrottleEvents, only a few tables and indexes are shown in the list. Why are the metrics not visible for all tables? Is there any configuration I need to set in order to view them?
Below is a screenshot of a search for this metric. I expect every table and index to be shown in the list, but I only got 2 results.

If there is no recent data, the metrics don't show:
Metrics that have not had any new data points in the past two weeks do not appear in the console. They also do not appear when you type their metric name or dimension names in the search box in the All metrics tab in the console, and they are not returned in the results of a list-metrics command. The best way to retrieve these metrics is with the get-metric-data or get-metric-statistics commands in the AWS CLI.
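For example, here is a minimal boto3 sketch of pulling one of these hidden metrics with GetMetricStatistics; the region, table name, and time window are placeholders:

    import boto3
    from datetime import datetime, timedelta, timezone

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region

    # ReadThrottleEvents for one table; this works even if the metric no
    # longer appears in the console search because it had no recent data.
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/DynamoDB",
        MetricName="ReadThrottleEvents",
        Dimensions=[{"Name": "TableName", "Value": "my-table"}],  # placeholder table name
        StartTime=datetime.now(timezone.utc) - timedelta(days=30),
        EndTime=datetime.now(timezone.utc),
        Period=3600,
        Statistics=["Sum"],
    )

    for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Sum"])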

Related

How to compress multiple metrics into a single cloud watch alarm using boto3 AWS

I'm trying to combine a certain number of similar metrics into a single alarm in AWS CloudWatch. For example, for data quality monitoring in SageMaker, one of the metrics emitted by the data quality monitoring job is the feature baseline drift distance for each column; if I have 600 columns, each column will have this metric. Is there a way to compress these metrics into a single CloudWatch alarm?
If not, is there any way to send the violation report as a message via AWS SNS?
I am not sure exactly what outcome you want when you refer to "compress the metrics into a single alarm," but you can look at using metric math.
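As a rough sketch of that idea, the boto3 example below creates one alarm over a metric math expression that takes the maximum of several per-column drift metrics. The namespace, metric names, dimensions, threshold, and SNS topic ARN are assumptions/placeholders, not something confirmed by your setup:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Columns being monitored; in practice this could be many more.
    columns = ["col_a", "col_b", "col_c"]

    # One MetricStat entry per per-column drift metric, plus one math
    # expression that reduces them to a single series for the alarm.
    metrics = [
        {
            "Id": f"m{i}",
            "MetricStat": {
                "Metric": {
                    "Namespace": "aws/sagemaker/Endpoints/data-metrics",   # assumed namespace
                    "MetricName": f"feature_baseline_drift_{col}",          # assumed metric name
                    "Dimensions": [
                        {"Name": "Endpoint", "Value": "my-endpoint"},            # placeholder
                        {"Name": "MonitoringSchedule", "Value": "my-schedule"},  # placeholder
                    ],
                },
                "Period": 3600,
                "Stat": "Maximum",
            },
            "ReturnData": False,
        }
        for i, col in enumerate(columns, start=1)
    ]

    metrics.append(
        {
            "Id": "worst_drift",
            "Expression": "MAX([" + ",".join(m["Id"] for m in metrics) + "])",
            "Label": "Worst baseline drift across columns",
            "ReturnData": True,
        }
    )

    cloudwatch.put_metric_alarm(
        AlarmName="data-quality-baseline-drift",   # placeholder
        EvaluationPeriods=1,
        Threshold=0.1,                             # placeholder threshold
        ComparisonOperator="GreaterThanThreshold",
        Metrics=metrics,
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:drift-alerts"],  # placeholder SNS topic
    )

Because the alarm action is an SNS topic, this also touches on your second question: when the alarm fires, SNS delivers the notification. Note that CloudWatch limits how many metrics a single math expression/alarm can reference, so with 600 columns you would likely need to group them across several alarms or pre-aggregate before publishing.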

AWS Grafana sees identical data for all custom metrics

I have created some custom metric filters for a CloudTrail log group, 11 in total.
Each metric filter is filtering for multiple related events (one is for IAM changes, another is for user logon activity, etc.).
I want to log each time one of these metric filters captures an event and show it on an AWS Grafana dashboard.
I have the CDK to deploy the metric filters; they show up in CloudWatch and I can see them graphing events in the AWS Console.
I can even add the datasource and correct permissions to access it from AWS Grafana.
It's only when I go to render the results onto the dashboard panel that I start to see a problem: all of the metrics have the same data.
I have tried adding all the metrics and they all show the same data. I have included some screenshots to demonstrate the issue.
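For context, here is a minimal sketch of how two such metric filters might be declared with the CDK in Python; the log group name, namespace, metric names, and filter patterns are assumptions. Each filter publishes to its own metric name, which is what lets the series be graphed independently in CloudWatch and Grafana:

    from aws_cdk import Stack, aws_logs as logs
    from constructs import Construct

    class CloudTrailMetricFiltersStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)

            # Existing CloudTrail log group (the name is a placeholder).
            log_group = logs.LogGroup.from_log_group_name(
                self, "TrailLogGroup", "my-cloudtrail-log-group"
            )

            logs.MetricFilter(
                self, "IamChangesFilter",
                log_group=log_group,
                metric_namespace="CloudTrailMetrics",   # assumed namespace
                metric_name="IamChangeEventCount",      # assumed metric name
                filter_pattern=logs.FilterPattern.any_term(
                    "CreatePolicy", "DeletePolicy", "AttachRolePolicy"
                ),
                metric_value="1",
            )

            logs.MetricFilter(
                self, "ConsoleLoginFilter",
                log_group=log_group,
                metric_namespace="CloudTrailMetrics",   # assumed namespace
                metric_name="ConsoleLoginEventCount",   # assumed metric name
                filter_pattern=logs.FilterPattern.any_term("ConsoleLogin"),
                metric_value="1",
            )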

How to see the throttled reads graph for a DynamoDb table in AWS CloudWatch?

I am reading the article https://aws.amazon.com/blogs/aws/new-auto-scaling-for-amazon-dynamodb/ and it mentions a throttled reads graph. I cannot find this graph in AWS. Please see the picture of what I see; it only shows capacity.
In the AWS Console, navigate to the DynamoDB dashboard.
Select the table you'd like to review.
Choose the Metrics tab.
Under "Capcity: table" you should be able to see "Throttled read requests". Click on it for added detail/options.

Cloudwatch - Metrics are expiring

I currently have a bunch of custom metrics based in multiple regions across our AWS account.
I thought I was going crazy, but I have now confirmed that metrics I created a while ago are expiring when not used for a certain time period (possibly 2 weeks).
Here's my setup.
I create a new metric on my log entries, which has no expiry date;
I then go to the main page on CloudWatch --> then to Metrics to view any metrics (I understand this will only display new metric hits when there are hits that match the metric rule).
About 2 weeks ago, I had 9 metrics logged under my "Custom Namespaces", and now I have 8, as if it does not keep all the data:
As far as I'm aware, all my metrics should stay in place (unless I remove them). However, it seems as though if they are not hit consistently, the data "expires". Is that correct? If so, how are you meant to track historical data?
Thanks
CloudWatch removes metrics from search if no new data has been published for that metric in the last 2 weeks.
This is mentioned in passing in the FAQ for EC2 metrics, but I think it applies to all metrics.
From the 'Will I lose the metrics data if I disable monitoring for an Amazon EC2 instance?' question in the FAQ:
CloudWatch console limits the search of metrics to 2 weeks after a metric is last ingested to ensure that the most up to date instances are shown in your namespace.
Your data is still there, however; the data points themselves follow a different retention policy.
You can still get your data if you know what the metric name is. If you added your metric to a dashboard, it will still be visible there. You can use the CloudWatch PutDashboard API to add the metric to a dashboard, or use the CloudWatch GetMetricStatistics API to get the raw data.
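As an illustration, here is a minimal boto3 sketch that pins a custom metric to a dashboard with PutDashboard so it stays visible even when the console search no longer lists it; the namespace, metric name, region, and dashboard name are placeholders:

    import json
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region

    # The dashboard body is a JSON document; this one contains a single
    # graph widget for the custom metric.
    dashboard_body = {
        "widgets": [
            {
                "type": "metric",
                "x": 0,
                "y": 0,
                "width": 12,
                "height": 6,
                "properties": {
                    "metrics": [["MyCustomNamespace", "MyExpiredMetric"]],  # placeholders
                    "stat": "Sum",
                    "period": 300,
                    "region": "us-east-1",
                    "title": "Pinned custom metric",
                },
            }
        ]
    }

    cloudwatch.put_dashboard(
        DashboardName="pinned-custom-metrics",  # placeholder
        DashboardBody=json.dumps(dashboard_body),
    )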

How does Datadog measure DAUs?

Where does the userstats.o1.daus metric take the data from?
I looked in the metrics list and in the app, but I can't find the source of the metric.
The application infrastructure relies on:
AWS
DynamoDB
New Relic
Amplitude
I found the answer thanks to #tqr_aupa_atleti and the support team from Datadog.
On the Datadog dashboard panel, I had to click Metrics -> Summary and look for my metric. I looked at the tags and could figure out that it was a custom metric from my company that uses data from Amplitude.