I can't seem to find this info anywhere.
Specifically, I'm looking at metrics like NumberOfMessagesPublished and NumberOfNotificationsDelivered. How far back does CloudWatch retain this data?
2 weeks, according to "Monitoring Your Instances Using CloudWatch":
These statistics are recorded for a period of two weeks, so that you
can access historical information and gain a better perspective on how
your web application or service is performing.
If you want to archive metrics beyond 2 weeks you can do so by calling
mon-get-stats command from the command line and storing the results in
Amazon S3 or Amazon SimpleDB.
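The same idea works with the current APIs: below is a minimal boto3 sketch that pulls the last two weeks of an SNS metric and drops the datapoints into S3. The topic name, bucket name and object key are placeholders, not anything prescribed by AWS.

```python
import json
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
s3 = boto3.client("s3")

# Pull the last two weeks of a metric (topic name below is a placeholder).
end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/SNS",
    MetricName="NumberOfMessagesPublished",
    Dimensions=[{"Name": "TopicName", "Value": "my-topic"}],
    StartTime=start,
    EndTime=end,
    Period=3600,            # one datapoint per hour
    Statistics=["Sum"],
)

# Store the datapoints in S3 for long-term keeping (bucket name is an assumption).
s3.put_object(
    Bucket="my-metrics-archive",
    Key=f"sns/NumberOfMessagesPublished/{end:%Y-%m-%d}.json",
    Body=json.dumps(stats["Datapoints"], default=str),
)
```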
Related
I'm trying to combine a number of similar metrics into a single alarm in AWS CloudWatch. For example, for data quality monitoring in SageMaker, one of the metrics emitted by the data quality monitoring job is the feature baseline drift distance for each column. Since I have 600 columns, each column has this metric. Is there a possible way to compress these metrics into a single CloudWatch alarm?
If not, is there any way to send the violation report as a message via AWS SNS?
I'm not sure exactly what outcome you want when you refer to "compress the metrics into a single alarm", but you can look at using metric math.
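As a rough illustration of the metric math route, the boto3 sketch below alarms on the maximum of a few per-column drift metrics. The namespace, metric names, dimension and threshold are all placeholders for whatever your monitoring job actually emits, and CloudWatch caps how many metrics a single alarm expression can reference, so 600 columns would likely need to be split across alarms or pre-aggregated first (check the quotas page).

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Per-column drift metrics pulled in without returning data individually.
# Namespace, metric names and the dimension are hypothetical placeholders.
queries = [
    {
        "Id": f"m{i}",
        "MetricStat": {
            "Metric": {
                "Namespace": "Custom/DataQualityMonitoring",
                "MetricName": f"feature_baseline_drift_col{i}",
                "Dimensions": [{"Name": "Endpoint", "Value": "my-endpoint"}],
            },
            "Period": 300,
            "Stat": "Maximum",
        },
        "ReturnData": False,
    }
    for i in range(3)
]

# Metric math: one time series that is the max across the listed metrics.
queries.append(
    {"Id": "worst", "Expression": "MAX([m0, m1, m2])", "ReturnData": True}
)

cloudwatch.put_metric_alarm(
    AlarmName="feature-drift-any-column",
    Metrics=queries,
    ComparisonOperator="GreaterThanThreshold",
    Threshold=0.1,           # example drift threshold, not a recommendation
    EvaluationPeriods=1,
)
```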
Is there any data limit on sending logs to AWS CloudWatch Logs? In my case, my application generates about 6 million log records every 3 days. Will CloudWatch Logs be able to handle that much data?
Check out the AWS service quotas page. I'm not sure what you mean by "60lac", but the CloudWatch limits are more than adequate for the majority of use cases.
There is no published limit on the overall data volume held. There'll be a practical limit somewhere, but it won't be hit by a single AWS customer. If you're using the PutLogEvents API you could be constrained by the limit of 5 requests per second per log stream, in which case consider using more streams or larger batches of events (up to 1 MB per batch).
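For illustration, a minimal boto3 sketch of batching records into fewer PutLogEvents calls rather than one call per record; the log group and stream names are placeholders:

```python
import time

import boto3

logs = boto3.client("logs")

LOG_GROUP = "my-app"        # placeholder names
LOG_STREAM = "ingest-1"

def send_in_batches(records, batch_size=1000):
    """Send records in larger batches instead of one call per record,
    which keeps you well under the per-stream request-rate limit."""
    for i in range(0, len(records), batch_size):
        batch = records[i : i + batch_size]
        # Recent versions of the API no longer require a sequenceToken.
        logs.put_log_events(
            logGroupName=LOG_GROUP,
            logStreamName=LOG_STREAM,
            logEvents=[
                {"timestamp": int(time.time() * 1000), "message": msg}
                for msg in batch
            ],
        )
```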
Is there a quick way to check how much data (volume-wise: GBs, TBs, etc.) a specific DMS task transferred, for example within the last month?
I can't find anything about this in the documentation. I could probably try with boto3, but I want to double-check first. Thanks for the help!
Even with Boto3, you can check the DescribeReplicationTasks API, but it most likely won't include information about your data transfer.
Reference: https://docs.aws.amazon.com/dms/latest/APIReference/API_DescribeReplicationTasks.html
If you have only 1 data replication task that is associated with only 1 replication instance, you can check that replication instance's network metrics via CloudWatch. In CloudWatch metrics, under the AWS DMS namespace, there are several network metrics such as NetworkTransmitThroughput or NetworkReceiveThroughput. You can choose one and try the following:
Statistic: Sum
Period: 30 days (or up to you)
That gives you a rough 30-day throughput figure.
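If you'd rather script it, a boto3 sketch along the same lines is below. It assumes the throughput metric is reported in bytes per second (as the DMS documentation describes) and approximates the monthly total from hourly averages; the replication instance identifier is a placeholder.

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=30)

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/DMS",
    MetricName="NetworkTransmitThroughput",
    Dimensions=[
        # placeholder identifier for your replication instance
        {"Name": "ReplicationInstanceIdentifier", "Value": "my-repl-instance"},
    ],
    StartTime=start,
    EndTime=end,
    Period=3600,            # hourly datapoints stay under the datapoint cap
    Statistics=["Average"],
)

# The metric is bytes/second, so approximate bytes moved per hour as
# average rate * 3600, then total it over the month.
total_bytes = sum(dp["Average"] * 3600 for dp in resp["Datapoints"])
print(f"~{total_bytes / 1e9:.1f} GB transmitted in the last 30 days")
```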
It looks like custom metrics will be kept for 15 months, if I understand it correctly, since they get aggregated to lower resolutions as they age, according to https://aws.amazon.com/cloudwatch/faqs. Does that mean we have to pay for at least 15 months once we create a custom metric?
I have the CloudWatch agent installed to collect various metrics using user_data. It creates new metrics for every new instance. After running many tests, I have more than 6,000 custom metrics, but most of them are unused. Since there is no way to delete custom metrics, do I get charged for those unused metrics until they expire (15 months)? I hope I'm wrong on this :]
Please clarify how we get charged for unused custom metrics.
You will not get charged for those. You are only charged for a metric for the duration you publish data to it. It's not very clear on the CloudWatch pricing page, but they hint at it in the [original pricing reduction blog post](https://aws.amazon.com/blogs/aws/aws-price-reduction-cloudwatch-custom-metrics/).
You will, however, get charged for retrieving those metrics (API costs).
We have the same issue to resolve; waiting for clarification from AWS.
I currently have a bunch of custom metrics based in multiple regions across our AWS account.
I thought I was going crazy but have now confirmed that the metric I created a while ago is expiring when not used for a certain time period (could be 2 weeks).
Here's my setup.
I create a new metric on my log entry - which has no expiry date;
I then go to the main page on CloudWatch --> then to Metrics to view any metrics (I understand this will only display new metric hits when there are hits that match the metric rule).
About 2 weeks ago, I had 9 Metrics logged under my "Custom Namespaces", and I now have 8 - as if it does not keep all the data.
As far as I'm aware, all my metrics should stay in place (unless I remove them). However, it seems as though if these are not hit consistently, the data "expires". Is that correct? If so, how are you meant to track historical data?
Thanks
CloudWatch will remove metrics from search if there was no new data published for that metric in the last 2 weeks.
This is mentioned in passing in the FAQ for EC2 metrics, but I think it applies to all metrics.
From the "Will I lose the metrics data if I disable monitoring for an Amazon EC2 instance?" question in the FAQ:
CloudWatch console limits the search of metrics to 2 weeks after a
metric is last ingested to ensure that the most up to date instances
are shown in your namespace.
Your data is still there, however; it adheres to a different retention policy.
You can still get your data if you know what the metric name is. If you added your metric to a dashboard, it will still be visible there. You can use the CloudWatch PutDashboard API to add the metric to a dashboard, or use the CloudWatch GetMetricStatistics API to get the raw data.
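For example, a boto3 sketch that pulls older data for a metric that no longer shows up in the console search; the namespace and metric name are placeholders for your own custom metric:

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

# The metric may no longer appear in the console search, but the data can
# still be queried if you know the exact namespace, name and dimensions.
resp = cloudwatch.get_metric_statistics(
    Namespace="MyApp/Custom",
    MetricName="LoginFailures",
    StartTime=datetime.now(timezone.utc) - timedelta(days=90),
    EndTime=datetime.now(timezone.utc),
    Period=86400,           # one datapoint per day
    Statistics=["Sum"],
)

for dp in sorted(resp["Datapoints"], key=lambda d: d["Timestamp"]):
    print(dp["Timestamp"], dp["Sum"])
```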