CloudWatch - Metrics are expiring

I currently have a bunch of custom metrics spread across multiple regions in our AWS account.
I thought I was going crazy, but I have now confirmed that a metric I created a while ago expires when no data is published to it for a certain time period (possibly 2 weeks).
Here's my setup.
I create a new metric on my log entry, which has no expiry date.
I then go to the main page in CloudWatch, then to Metrics, to view my metrics (I understand this will only display a metric when there are hits that match the metric rule).
About 2 weeks ago, I had 9 metrics logged under my "Custom Namespaces", and I now have 8, as if it does not keep all the data.
As far as I'm aware, all my metrics should stay in place (unless I remove them). However, it seems that if they are not hit consistently, the data "expires". Is that correct? If so, how are you meant to track historical data?
Thanks

CloudWatch removes metrics from search if no new data has been published for that metric in the last 2 weeks.
This is mentioned in passing in the FAQ for EC2 metrics, but I think it applies to all metrics.
From the 'Will I lose the metrics data if I disable monitoring for an Amazon EC2 instance?' question in the FAQ:
CloudWatch console limits the search of metrics to 2 weeks after a
metric is last ingested to ensure that the most up to date instances
are shown in your namespace.
Your data is still there, however; the data itself adheres to a different retention policy.
You can still get your data if you know the metric name. If you added your metric to a dashboard, it will still be visible there. You can use the CloudWatch PutDashboard API to add the metric to a dashboard, or the CloudWatch GetMetricStatistics API to get the raw data.
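For example, here is a rough sketch with the AWS CLI of pulling raw data for a metric that no longer shows up in the console search. The namespace, metric name, and time range are placeholders; substitute the values you originally published with:

    # Fetch daily sums for a custom metric that no longer appears in
    # the console search. Namespace, metric name, and dates are examples.
    aws cloudwatch get-metric-statistics \
        --namespace "MyApp/Custom" \
        --metric-name "ErrorCount" \
        --start-time 2023-01-01T00:00:00Z \
        --end-time 2023-03-01T00:00:00Z \
        --period 86400 \
        --statistics Sum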

Related

CloudWatch Cost - Data Processing

I'd like to know if it's possible to discover which resource is behind this cost in my Cost Explorer. Grouping by usage type, I can see it is Data Processing bytes, but I don't know which resource would be consuming this amount of data.
Does anyone have an idea how to discover it in CloudWatch?
This is almost certainly because something is writing more data to CloudWatch than in previous months.
As stated on this AWS Support page about unexpected CloudWatch Logs bill increases:
Sudden increases in CloudWatch Logs bills are often caused by an
increase in ingested or storage data in a particular log group. Check
data usage using CloudWatch Logs Metrics and review your Amazon Web
Services (AWS) bill to identify the log group responsible for bill
increases.
Your screenshot identifies the large usage type as APS2-DataProcessing-Bytes. I believe that the APS2 part is telling you it's about the ap-southeast-2 region, so start by looking in that region when following the instructions below.
Here's a brief summary of the steps you need to take to find out which log groups are ingesting the most data:
How to check how much data you're ingesting
The IncomingBytes metric shows you how much data is being ingested in your CloudWatch log groups in near-real time. This metric can help you to determine:
Which log group is the highest contributor towards your bill
Whether there's been a spike in the incoming data to your log groups or a gradual increase due to new applications
How much data was pushed in a particular period
To query a small set of log groups:
Open the Amazon CloudWatch console.
In the navigation pane, choose Metrics.
For each of your log groups, select the IncomingBytes metric, and then choose the Graphed metrics tab.
For Statistic, choose Sum.
For Period, choose 30 Days.
Choose the Graph options tab and choose Number.
At the top right of the graph, choose custom, and then choose Absolute. Select a start and end date that corresponds with the last 30 days.
For more details, and for instructions on how to query hundreds of log groups, read the full AWS support article linked above.
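If you prefer the CLI to the console, the same lookup can be sketched as below. IncomingBytes lives in the AWS/Logs namespace; the region comes from the APS2 usage type above, while the log group name and dates are placeholders:

    # Sum of bytes ingested by one log group over a 30-day window.
    aws cloudwatch get-metric-statistics \
        --region ap-southeast-2 \
        --namespace "AWS/Logs" \
        --metric-name "IncomingBytes" \
        --dimensions Name=LogGroupName,Value=/aws/lambda/my-function \
        --start-time 2023-01-01T00:00:00Z \
        --end-time 2023-01-31T00:00:00Z \
        --period 2592000 \
        --statistics Sum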
Apart from the steps Gabe mentioned, what helped me identify the resource that was creating a large number of logs was:
Heading over to CloudWatch
Selecting the region that showed in Cost Explorer
Selecting Log Groups
From the settings under Log Groups, enabling the Stored bytes column to be visible
This showed me which service was causing a lot of logs to be written to Cloudwatch.
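The same view is available from the command line; something like this should work (the region is taken from the APS2 example above, so adjust it to yours):

    # List log groups sorted by stored bytes, largest first.
    aws logs describe-log-groups \
        --region ap-southeast-2 \
        --query 'reverse(sort_by(logGroups, &storedBytes))[*].[logGroupName, storedBytes]' \
        --output table

Note that stored bytes reflects storage, not ingestion, so for a DataProcessing-Bytes charge the IncomingBytes metric above is the more direct signal.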

Delete custom metrics and custom namespaces from CloudWatch

Hi, is there any way to get custom metrics defined on a CloudWatch log group deleted along with their namespace? It's quite weird that we can create a custom metric/namespace using the API/Console but cannot delete it from CloudWatch using either the API or the Console.
That's correct, you can't delete them. You have to wait until the associated metrics expire. From the docs:
Metrics cannot be deleted, but they automatically expire after 15 months if no new data is published to them.
It could be worth noting that you are not charged for them. You are only charged when you put new data into them.
This has been an ongoing issue for years now, going back to 2011:
How could I remove custom metrics in CloudWatch?
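In the meantime, you can at least watch a namespace thin out as its metrics age away. A small sketch with the AWS CLI (the namespace is a placeholder):

    # List the metrics still visible in a custom namespace. Metrics drop
    # out of this listing once no data has been published to them for a
    # while, and expire entirely after 15 months.
    aws cloudwatch list-metrics --namespace "MyCustomNamespace"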

Is there a way to get the initial creation date of a CloudWatch metric through the AWS CLI?

I'm trying to identify the initial creation date of a metric on CloudWatch using the AWS CLI but don't see any way of doing so in the documentation. I can kind of identify the start date if there is a large block of missing data but that doesn't work for metrics that have large gaps in data.
CloudWatch metrics are "created" with the first PutMetricData call that includes the metric. I use quotes around created, because the metric doesn't have an independent existence, it's simply an entry in the time-series database. If there's a gap in time with no entries, the metric effectively does not exist for that gap.
Another caveat to CloudWatch metrics is that they only have a lifetime of 455 days, and individual metric values are aggregated as they age (see specifics at link).
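Within those constraints, the closest you can get from the CLI is to query the full 455-day retention window and take the oldest datapoint that is still retained. A rough sketch, assuming GNU date; the namespace and metric name are placeholders:

    # Report the oldest retained timestamp for a metric -- the closest
    # available approximation of its "creation date". Data older than
    # 455 days is gone, so the search is bounded by that window.
    aws cloudwatch get-metric-statistics \
        --namespace "MyApp/Custom" \
        --metric-name "ErrorCount" \
        --start-time "$(date -u -d '455 days ago' +%Y-%m-%dT%H:%M:%SZ)" \
        --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
        --period 86400 \
        --statistics Sum \
        --query 'sort_by(Datapoints, &Timestamp)[0].Timestamp' \
        --output text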
All of which begs the question: what's the real problem that you're trying to solve?

AWS CloudWatch unused custom metrics retention and pricing - 2018

It looks like custom metrics will be kept for 15 months, if I understand it correctly, since they get aggregated to lower resolution as they age, according to https://aws.amazon.com/cloudwatch/faqs. Does that mean we have to pay for at least 15 months once we create a custom metric?
I have the CloudWatch agent installed to collect various metrics using user_data. It creates new metrics for every new instance. After running many tests, I have more than 6,000 custom metrics, but most of them are unused. Since there is no way to delete custom metrics, do I get charged for those unused metrics until they expire (15 months)? I hope I'm wrong on this :]
Please clarify how we get charged for unused custom metrics.
You will not get charged for those. You get charged for a metric only for the duration you publish data to it. It's not very clear on the CloudWatch pricing page, but they hint at it in the original pricing reduction blog post: https://aws.amazon.com/blogs/aws/aws-price-reduction-cloudwatch-custom-metrics/
You will still get charged for retrieval of those metrics (API costs).
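To sanity-check how many metrics you are currently paying for, you can count what still shows up in the agent's namespace; only metrics with recent data are listed. CWAgent is the CloudWatch agent's default namespace, so adjust it if you customized yours:

    # Count the metrics currently listed in a namespace. Only metrics
    # with data in roughly the last 2 weeks appear, so this approximates
    # (and slightly overstates) what is currently billable.
    aws cloudwatch list-metrics \
        --namespace "CWAgent" \
        --query 'length(Metrics)' \
        --output text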
We have the same issue to resolve. Waiting for clarification from AWS.

How long are metrics retained for SNS in CloudWatch?

I can't seem to find this info anywhere.
Specifically, I'm looking at metrics like NumberOfMessagesPublished and NumberOfNotificationsDelivered. How far back does CloudWatch retain this data?
2 weeks, according to Monitoring Your Instances Using CloudWatch:
These statistics are recorded for a period of two weeks, so that you
can access historical information and gain a better perspective on how
your web application or service is performing.
If you want to archive metrics beyond 2 weeks
If you want to archive metrics beyond 2 weeks you can do so by calling
mon-get-stats command from the command line and storing the results in
Amazon S3 or Amazon SimpleDB.
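mon-get-stats comes from the long-deprecated standalone command line tools; the modern equivalent is get-metric-statistics in the AWS CLI. A sketch for one of the SNS metrics mentioned above, with placeholder topic and bucket names:

    # Pull two weeks of NumberOfMessagesPublished for one topic and
    # archive the JSON to S3. Topic and bucket names are examples.
    aws cloudwatch get-metric-statistics \
        --namespace "AWS/SNS" \
        --metric-name "NumberOfMessagesPublished" \
        --dimensions Name=TopicName,Value=my-topic \
        --start-time 2023-01-01T00:00:00Z \
        --end-time 2023-01-15T00:00:00Z \
        --period 3600 \
        --statistics Sum > messages-published.json
    aws s3 cp messages-published.json s3://my-archive-bucket/cloudwatch/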