aws cloudwatch metric overwrite/override

I am using the AWS CLI client to develop a custom monitoring system. The requirement is that data points need to be overridden or overwritten, but when using:
aws cloudwatch put-metric-data
I don't see any parameter to overwrite or override a data point that has already been published. I tested this and found that when a data point is pushed two or more times, CloudWatch doesn't overwrite it but adds it to the existing samples (so you can then compute sums, averages, etc.). But for this specific requirement, instead of adding the data points together we need to keep only the last one. Is there any way to do that?
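For reference, here is roughly how I verified the aggregation behavior from the CLI (the namespace and metric name are made up for the demo):

# Publish the "same" data point twice; CloudWatch keeps both samples
# instead of overwriting. (Custom/Demo and Orders are illustrative.)
TS=$(date -u +%Y-%m-%dT%H:%M:%SZ)
aws cloudwatch put-metric-data --namespace Custom/Demo --metric-name Orders --timestamp "$TS" --value 10
aws cloudwatch put-metric-data --namespace Custom/Demo --metric-name Orders --timestamp "$TS" --value 42

# After ingestion (can take a minute or two), the minute bucket reports
# SampleCount 2 and Sum 52, i.e. both values were kept:
aws cloudwatch get-metric-statistics --namespace Custom/Demo --metric-name Orders \
  --start-time "$TS" --end-time "$(date -u -d '+2 minutes' +%Y-%m-%dT%H:%M:%SZ)" \
  --period 60 --statistics Sum SampleCount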

Sorry, there is no way to do that. There is no concept of overwriting metric data in CloudWatch.
One could argue for deleting the existing data point and adding a new one with the same timestamp and dimensions. But CloudWatch metrics by design cannot be deleted once published; a metric simply disappears after 2 weeks (the default lifecycle for metrics is 2 weeks).
So there is no way to preserve only the last data point for a given timestamp. You have to do some post-processing after fetching the data; but if you are consuming the metric through a CloudWatch alarm or dashboard, there is nothing you can do.

Related

Creating an alert when no data was uploaded to a BigQuery table in GCP

I have a requirement to send an email notification whenever no data is inserted into my BigQuery table. For this, I am using the Logging and Alerting mechanism, but I am still not receiving any email. Here are the steps I followed:
I wrote a query in the Logs Explorer.
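The query itself was attached as a screenshot; a filter along these lines (the table name is a placeholder) is the typical shape for matching completed BigQuery insert/load jobs, and can be tested with gcloud:

# Placeholder table name; adjust the filter to your own job type and table.
gcloud logging read '
  resource.type="bigquery_resource"
  protoPayload.methodName="jobservice.jobcompleted"
  protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId="my_table"
' --limit=5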
Then I created a metric for those logs with metric type COUNTER, using the above query in the filter section.
Then I created a policy in Alerting under Monitoring. The alerting policy I selected is for the log-based metric I created before.
And then configured a trigger.
And in the Notification channel, added my Email ID.
Can someone please tell me if I am missing something? My requirement is to receive an alert when no data has been inserted into a BigQuery table for more than a day.
Also, I can see in Metrics Explorer that the metric I created is not ACTIVE. Why is that?
As mentioned in GCP docs:
Metric absence conditions require at least one successful measurement — one that retrieves data — within the maximum duration window after the policy was installed or modified.
For example, suppose you set the duration window in a metric-absence policy to 30 minutes. The condition isn't met if the subsystem that writes metric data has never written a data point. The subsystem needs to output at least one data point and then fail to output additional data points for 30 minutes.
Meaning, you need at least one data point (one insert job) before an incident can be created for the missing metric.
There are two options:
Create an artificial log entry to get the metric started, so there is at least one time series and data point.
Run an insert job that matches the log-based metric's filter to get the metric started (a sketch follows).
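A minimal sketch of the second option, assuming the log-based metric counts completed insert jobs (the dataset and table names are placeholders):

# Run one real insert job so the log-based metric gets its first data point;
# only after that can the metric-absence condition fire.
bq query --use_legacy_sql=false 'INSERT INTO mydataset.my_table (id) VALUES (1)'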
Regarding your last question: the metric you created is not active because no data points have been written to it within the previous 24 hours. As mentioned above, the metric must have at least one data point written to it.
Refer to custom metrics quota for more info.

Delete custom metrics and custom namespaces from CloudWatch

Hi, is there any way to delete custom metrics defined on a CloudWatch log group, along with their namespace? It is quite odd that we can create a custom metric/namespace via the API or Console but cannot delete it from CloudWatch custom metrics/namespaces by either means.
That's correct: you can't delete them. You have to wait until the associated metrics expire. From the docs:
Metrics cannot be deleted, but they automatically expire after 15 months if no new data is published to them.
It may be worth noting that you are not charged for them; you are only charged when you put new data into them.
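In the meantime you can only watch them age out; a quick way to check what is still visible (the namespace here is illustrative):

# Lists the metrics still visible in a given custom namespace.
aws cloudwatch list-metrics --namespace Custom/Demo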
This has been an ongoing issue for years, going back to 2011:
How could I remove custom metrics in CloudWatch?

Is there a way to get the initial creation date of a CloudWatch metric through the AWS CLI?

I'm trying to identify the initial creation date of a metric in CloudWatch using the AWS CLI, but I don't see any way to do so in the documentation. I can roughly identify the start date when there is a large block of missing data, but that doesn't work for metrics that have large gaps in their data.
CloudWatch metrics are "created" with the first PutMetricData call that includes the metric. I use quotes around "created" because the metric doesn't have an independent existence; it's simply an entry in the time-series database. If there's a gap in time with no entries, the metric effectively does not exist for that gap.
Another caveat of CloudWatch metrics is that they only have a lifetime of 455 days, and individual metric values are aggregated as they age (see specifics at link).
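If an approximate answer is enough, one option (a sketch, with made-up names) is to scan the whole 455-day retention window at daily granularity and take the earliest timestamp that has any samples:

# Earliest daily bucket with data; 455 daily buckets stays under the
# API's 1,440-data-points-per-call limit. (Custom/Demo and Orders are
# illustrative; requires GNU date.)
aws cloudwatch get-metric-statistics --namespace Custom/Demo --metric-name Orders \
  --start-time "$(date -u -d '-455 days' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 86400 --statistics SampleCount \
  --query 'sort_by(Datapoints,&Timestamp)[0].Timestamp'

Note this only tells you when data first appears within the retention window, not the true first publish if the metric is older than 455 days.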
All of which raises the question: what's the real problem you're trying to solve?

How can I find out what is consuming my DynamoDB table's Read Capacity?

We have a DynamoDB table that we thought we'd be able to turn off and delete. We shut down the callers to the web services that queried it (and can see on the web server metrics that the callers have dropped to zero), but the AWS console is still showing Read Capacity consumption greater than zero.
However, every other graph that concerns reads is showing no data: Get latency, Put latency, Query latency, Scan latency, Get records, Scan returned item count, and Query returned item count are all blank. On other tables that I know to be in use, these charts show some data > 0.
On other tables that I know not to be in use, the Read Capacity graph only shows the provisioned line, no consumed line.
This table is still being written to via a Lambda that filters and aggregates events from a Kinesis stream. I've reviewed the Lambda code and it doesn't explicitly read anything from the table – does read capacity get consumed when the Lambda updates or overwrites the value for an existing key?
I opened a ticket with AWS Support and they were able to find the IP that was consuming the read capacity. They used an internal tool to query logs that are not available to customers. They also confirmed that these events do not get emitted to CloudTrail logs, which only contain events related to the table itself, such as re-provisioning, queries about metrics, etc.
They also shared this nugget that's relevant to the question:
Q: Does read capacity get consumed when the lambda updates or overwrites the value for an existing key?
A: Yes, when you issue an UpdateItem operation, DynamoDB does a Read/Get operation first and then a PutItem to insert or overwrite the existing item. This is expensive, as it consumes both RCU and WCU. I also verified that there are no UpdateItem operations being made on this table.
They also pointed me at more CloudWatch metrics that shed light on what's going on with the table behind the scenes. To find these through console navigation, go to:
Cloudwatch service
Metrics in the left bar
All Metrics tab
Scroll down to AWS Namespaces section (Custom Namespaces section is on top, if you have defined any custom metrics)
Select DynamoDB
Select Table Operation Metrics
Metrics will be organized by table name. The one that was most helpful was Operation=Query, Metric Name=Returned Item Count.
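The same per-operation metrics can also be pulled from the CLI (the table name is a placeholder; requires GNU date):

# Items returned by Query operations against the table over the last hour.
aws cloudwatch get-metric-statistics --namespace AWS/DynamoDB --metric-name ReturnedItemCount \
  --dimensions Name=TableName,Value=my-table Name=Operation,Value=Query \
  --start-time "$(date -u -d '-1 hour' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 300 --statistics Sum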
So the only answer to my question is: Open an AWS Support ticket.

CloudWatch - Metrics are expiring

I currently have a bunch of custom metrics across multiple regions in our AWS account.
I thought I was going crazy, but have now confirmed that a metric I created a while ago expires when no data is published to it for a certain time period (possibly 2 weeks).
Here's my setup.
I create a new metric from my log entries, which has no expiry date;
I then go to the main page in CloudWatch, then to Metrics, to view my metrics (I understand this will only display a metric when there are hits that match the metric rule).
About 2 weeks ago, I had 9 metrics listed under my "Custom Namespaces", and I now have 8, as if it does not keep all the data.
As far as I'm aware, all my metrics should stay in place (unless I remove them). However, it seems that if they are not hit consistently, the data "expires". Is that correct? If so, how are you meant to track historical data?
Thanks
CloudWatch removes metrics from search if no new data has been published for that metric in the last 2 weeks.
This is mentioned in passing in the FAQ for EC2 metrics, but I think it applies to all metrics.
From the 'Will I lose the metrics data if I disable monitoring for an Amazon EC2 instance?' question in the FAQ:
CloudWatch console limits the search of metrics to 2 weeks after a metric is last ingested to ensure that the most up to date instances are shown in your namespace.
Your data is still there, however; it adheres to a different retention policy.
You can still get your data if you know the metric name. If you added the metric to a dashboard, it will still be visible there. You can use the CloudWatch PutDashboard API to add the metric to a dashboard, or the CloudWatch GetMetricStatistics API to get the raw data.
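For example (the names here are illustrative; requires GNU date), a metric that has dropped out of console search can still be queried by its exact namespace and name:

# The metric no longer shows up in search, but its data is still retrievable.
aws cloudwatch get-metric-statistics --namespace Custom/MyApp --metric-name my-old-metric \
  --start-time "$(date -u -d '-30 days' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 3600 --statistics Sum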