Hi, is there any way to delete custom metrics defined on a CloudWatch log group along with their namespace? It seems odd that we can create a custom metric/namespace using the API or the Console, but cannot delete it from the CloudWatch custom metrics/namespaces using either one.
That's correct. You can't delete them. You have to wait until the associated metrics expire. From the docs:
Metrics cannot be deleted, but they automatically expire after 15 months if no new data is published to them.
It may be worth noting that you are not charged for them. You are only charged when you put new data into them.
This has been an ongoing issue for years, dating back to 2011:
How could I remove custom metrics in CloudWatch?
Is it possible to create alerts for configuration activities?
On the dashboard of my GCP project, I'm able to see the history of activities. However, for security reasons, I would like to receive notifications when certain activities happen, e.g. setting the IAM policy on the project, deleting an instance of the project, etc. Is this possible?
I have looked into "metric-based alerting policies", but I'm only able to create alerts for uptime checks. I'm not sure what else to look for.
You are on the right path. You need to create a log-based metric and then create an alert that fires when the counter crosses a threshold (1, for example).
A more straightforward solution is now available: in one step, you can use log-based alerts, which let you set alerts on any log type and content. This new feature is in preview and was announced a few days ago.
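For the log-based metric route, creation can be done from the CLI. A minimal sketch, assuming `gcloud` is configured for your project; the metric name and log filter are illustrative and should be adapted to the activities you want to watch:

```shell
# Sketch: create a log-based metric that counts IAM policy changes
# on the project. Attach an alerting policy to it afterwards in the
# Cloud Monitoring console (threshold of 1, as suggested above).
gcloud logging metrics create iam-policy-changes \
  --description="Counts SetIamPolicy calls on the project" \
  --log-filter='protoPayload.methodName="SetIamPolicy"'
```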
I have a use case where a process puts a file into an S3 bucket every 6 hours. The bucket already has thousands of files in it, and I want to generate an SNS alert (or something similar) if no new file is added within 7 hours. What would be a reasonable approach?
Thanks
There are a few potential approaches:
Check the bucket every few minutes
Keep track of the last new file
Use an Amazon CloudWatch Alarm
Check the bucket every few minutes
Configure Amazon CloudWatch Events to trigger an AWS Lambda function every few minutes (depending upon how quickly you want it reported), which obtains a listing of the bucket and checks the timestamp of the last object added. If it is more than 7 hours old, send the alert.
This approach is very simple, but it does a lot of work every few minutes, including during the 7 hours after an object was added. Plus, if you have lots of objects, it can consume a lot of Lambda time and API calls.
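The core of that Lambda function could look like the following sketch. The staleness check is pure logic; the bucket scan assumes `boto3` and AWS credentials are available, and the bucket name is whatever yours is:

```python
from datetime import datetime, timedelta, timezone

def is_stale(last_modified: datetime, now: datetime, threshold_hours: int = 7) -> bool:
    """Return True if the newest object is older than the threshold."""
    return now - last_modified > timedelta(hours=threshold_hours)

def newest_object_time(bucket: str) -> datetime:
    """Sketch: scan the bucket for the most recent LastModified timestamp.
    Requires boto3 and AWS credentials; this is the expensive part that
    runs on every invocation."""
    import boto3  # imported here so the pure helper above stays dependency-free
    s3 = boto3.client("s3")
    newest = datetime.min.replace(tzinfo=timezone.utc)
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            if obj["LastModified"] > newest:
                newest = obj["LastModified"]
    return newest
```

If `is_stale(newest_object_time(bucket), datetime.now(timezone.utc))` returns True, publish to your SNS topic.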
Keep track of the last new file
Configure an Event on the Amazon S3 bucket to trigger an AWS Lambda function whenever a new file is added to the bucket. Store the current time in a DynamoDB table (or, if you really want to save costs, store it in the Systems Manager Parameter Store or an S3 object in another bucket). This will update the date whenever a new file is added.
Configure Amazon CloudWatch Events to trigger an AWS Lambda function every few minutes (depending upon how quickly you want it reported) that checks the "last updated date" in DynamoDB (or wherever it was stored). If it is more than 7 hours old, trigger an alert.
While this approach has more components, it is actually a simpler solution because it never has to look through the list of objects in S3. Instead, it just remembers when the last object was added.
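The two handlers in this approach can be sketched as follows. An in-memory dict stands in for the DynamoDB table here so the logic is self-contained; in a real deployment each handler would call `put_item`/`get_item` on a table (the table name and handler names are illustrative):

```python
from datetime import datetime, timedelta, timezone

# Stand-in for the DynamoDB table; in a real deployment each handler
# would read/write an item in a table such as "s3-last-upload".
state = {}

def on_object_created(event, now=None):
    """S3-triggered handler (sketch): record when the last file arrived."""
    state["last_upload"] = now or datetime.now(timezone.utc)

def on_schedule(event, now=None, threshold_hours=7):
    """CloudWatch Events-triggered handler (sketch): returns True when an
    alert should be sent (e.g. via an SNS publish)."""
    now = now or datetime.now(timezone.utc)
    last = state.get("last_upload")
    return last is None or now - last > timedelta(hours=threshold_hours)
```

Note that the scheduled check never lists the bucket; it only reads one stored timestamp, which is why this scales better than the first approach.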
You could come up with an even smarter method that, instead of checking every few minutes, schedules an alert function in 7 hours time. Whenever a new file is added, it changes the schedule to put it 7 hours away again. It's like constantly delaying a dentist appointment. :)
Use an Amazon CloudWatch Alarm
This is a simpler method that uses a CloudWatch Alarm to trigger the notification.
Configure the S3 bucket to trigger a Lambda function whenever an object is added. The Lambda function sends a Custom Metric to Amazon CloudWatch.
Create a CloudWatch Alarm to trigger a notification whenever the SUM of the Custom Metric is zero for the past 6 hours. Also configure it to trigger if the Alarm enters the INSUFFICIENT_DATA state, so that it correctly triggers when no data is sent (which is more likely than a metric of zero since the Lambda function won't send data when no objects are created).
The only downside is that the alarm period only has a few options. It can be set for 6 hours, but I don't think it can be set for 7 hours.
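A sketch of this approach with the AWS CLI; the namespace, metric name, alarm name and SNS topic ARN are all illustrative. `--treat-missing-data breaching` is the CLI-level way to make the alarm fire when no data arrives at all, which covers the missing-data case described above:

```shell
# In the S3-triggered Lambda: publish a count of 1 for each new object.
aws cloudwatch put-metric-data \
  --namespace "Custom/S3Uploads" \
  --metric-name FilesAdded \
  --value 1

# Alarm when the 6-hour SUM drops below 1. Missing data is treated as
# breaching, since the Lambda sends nothing when no objects are created.
aws cloudwatch put-metric-alarm \
  --alarm-name s3-no-new-files \
  --namespace "Custom/S3Uploads" \
  --metric-name FilesAdded \
  --statistic Sum \
  --period 21600 \
  --evaluation-periods 1 \
  --threshold 1 \
  --comparison-operator LessThanThreshold \
  --treat-missing-data breaching \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alerts
```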
How to alert
As to how to alert somebody, sending a message to an Amazon SNS topic is a good idea. People could subscribe via Email, SMS and various other methods.
The Amazon CloudWatch Alarm method described by @John Rotenstein is definitely the simplest option for most use cases and works well. Just one thing to be aware of: CloudWatch Alarms have a 24-hour limit per metric (EvaluationPeriods * Period must be <= 86,400 s). Therefore, if you expect your bucket to receive files less often than once per day, you'll need to use a different method.
It looks like custom metrics are kept for 15 months, if I understand it correctly, since they get aggregated to lower resolutions over time, according to https://aws.amazon.com/cloudwatch/faqs. Does that mean we have to pay for at least 15 months once we create a custom metric?
I have the CloudWatch agent installed to collect various metrics using user_data. It creates new metrics for every new instance. After running many tests, I have more than 6,000 custom metrics, but most of them are unused. Since there is no way to delete custom metrics, do I get charged for those unused metrics until they expire (15 months)? I hope I'm wrong on this :]
Please clarify how we get charged for unused custom metrics.
You will not get charged for those. You are charged for a metric only for the duration you publish data to it. It's not very clear on the CloudWatch pricing page, but they hint at it in the [original price reduction blog post](https://aws.amazon.com/blogs/aws/aws-price-reduction-cloudwatch-custom-metrics/).
You will, however, get charged for retrieving them (API costs).
We have the same issue to resolve; waiting for clarification from AWS.
I currently have a bunch of custom metrics spread across multiple regions in our AWS account.
I thought I was going crazy, but I have now confirmed that a metric I created a while ago expires when no data is published to it for a certain time period (possibly 2 weeks).
Here's my setup.
I create a new metric on my log entry - which has no expiry date;
I then go to the main page on CloudWatch --> then to Metrics to view any metrics (I understand this will only display new metric hits when there are hits that match the metric rule).
About 2 weeks ago, I had 9 Metrics logged under my "Custom Namespaces", and I now have 8 - as if it does not keep all the data:
As far as I'm aware, all my metrics should stay in place (unless I remove them). However, it seems that if they are not hit consistently, the data "expires". Is that correct? If so, how are you meant to track historical data?
Thanks
CloudWatch will remove metrics from search if there was no new data published for that metric in the last 2 weeks.
This is mentioned in passing in the FAQ for EC2 metrics, but I think it applies to all metrics.
From the 'Will I lose the metrics data if I disable monitoring for an Amazon EC2 instance?' question in the FAQ:
CloudWatch console limits the search of metrics to 2 weeks after a metric is last ingested to ensure that the most up to date instances are shown in your namespace.
Your data is still there, however; it adheres to a different retention policy.
You can still get your data if you know the metric name. If you added your metric to a dashboard, it will still be visible there. You can use the CloudWatch PutDashboard API to add the metric to a dashboard, or use the CloudWatch GetMetricStatistics API to get the raw data.
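For example, a metric that no longer shows up in the console search can still be queried directly if you remember its namespace and name. A sketch with the CLI; the namespace, metric name and date range are illustrative:

```shell
# Retrieve hourly sums for a "hidden" metric over one week.
aws cloudwatch get-metric-statistics \
  --namespace "Custom/MyApp" \
  --metric-name ErrorCount \
  --start-time 2023-01-01T00:00:00Z \
  --end-time 2023-01-08T00:00:00Z \
  --period 3600 \
  --statistics Sum
```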
I am using the AWS CLI client to develop a custom monitoring system. The requirement is that data points need to be overridden or overwritten, but when using:
aws cloudwatch put-metric-data
I don't see any parameter to overwrite or override a data point that has already been published. I tested this and found that when a data point is pushed two or more times, it doesn't overwrite the previous value but adds to it (and then you can perform sums, averages, etc.). But for this specific requirement, instead of adding the data points we need to preserve just the last one. Is there any way to do that?
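The additive behavior I tested can be reproduced like this (the namespace, metric name and values are illustrative). After the second call, the timestamp holds two samples (SampleCount 2, Sum 15) rather than a single overwritten value of 10:

```shell
# Publish two values against the same timestamp and dimension.
aws cloudwatch put-metric-data --namespace "Custom/Test" \
  --metric-name Probe --timestamp 2023-06-01T00:00:00Z --value 5
aws cloudwatch put-metric-data --namespace "Custom/Test" \
  --metric-name Probe --timestamp 2023-06-01T00:00:00Z --value 10
```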
Sorry, no way to do that. There is no concept of overwriting metric data in CloudWatch.
One could argue for deleting the existing metric data and adding new data with the same timestamp and dimension, but CloudWatch metrics by design cannot be deleted once published. They disappear after 2 weeks (the default lifecycle policy for metrics is 2 weeks).
So there is no way to preserve only the last datapoint for a given timestamp. You have to do some kind of post-processing after fetching the data. But if you are using a CloudWatch alarm or dashboard, there is nothing you can do.
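That post-processing is only possible if you keep a client-side record of everything you publish, since CloudWatch itself returns aggregates rather than individual samples. A minimal sketch of "keep last per timestamp" under that assumption:

```python
def keep_last_per_timestamp(datapoints):
    """Post-processing sketch: given (timestamp, value) pairs in publish
    order, keep only the most recently published value per timestamp."""
    latest = {}
    for ts, value in datapoints:
        latest[ts] = value  # later publishes overwrite earlier ones
    return sorted(latest.items())
```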