AWS CloudWatch - List Custom Metrics

I am trying to figure out how to simply view all of our custom metrics in CloudWatch.
The AWS Console is far from helpful here, or at least it's not well signposted. I want to relate our CloudWatch bill to the actual metrics we have, to work out where I can make some cuts.
For example:
Our bill shows 1,600 metrics charged at $0.30 apiece per month, but I see over 17,000 metrics listed under custom namespaces in the CloudWatch console.
Does anyone know how I can best find this information, or have a handy CLI command to view all custom metrics for a region?
I can see the Custom Namespaces section in CloudWatch, but it doesn't really marry up with the billing page; it's out by roughly a factor of ten.
Thank you.
UPDATE:
I think I may have identified why there is a discrepancy between the billing and the list of metrics:
We have short-lived builds, each creating metrics under its own namespace before being torn down, sometimes within hours.
The metrics they create linger for 15 days, according to the AWS FAQ on CloudWatch metrics.
The overall monthly figure therefore seems to be what it is because of the total number of metrics that were live at some point during the month.
However, this still doesn't make the billing breakdown any easier to understand when you're trying to highlight possible outliers in costs.
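For anyone after the CLI-style listing asked about above, here is a minimal sketch using boto3 (the same data is available from `aws cloudwatch list-metrics`): it paginates ListMetrics for one region and counts metrics per non-AWS namespace. The region name is a placeholder. Bear in mind that ListMetrics also returns metrics that have received no data for up to roughly two weeks, so the count will not line up exactly with the prorated figure on the bill.

```python
# Sketch: count custom (non-AWS) metrics per namespace in a single region.
# Assumes credentials are already configured; the region is a placeholder.
import boto3
from collections import Counter

def count_custom_metrics(region="eu-west-1"):
    cloudwatch = boto3.client("cloudwatch", region_name=region)
    counts = Counter()
    # ListMetrics is paginated, so walk every page.
    for page in cloudwatch.get_paginator("list_metrics").paginate():
        for metric in page["Metrics"]:
            # AWS service namespaces start with "AWS/"; everything else is custom.
            if not metric["Namespace"].startswith("AWS/"):
                counts[metric["Namespace"]] += 1
    return counts

if __name__ == "__main__":
    counts = count_custom_metrics()
    for namespace, n in counts.most_common():
        print(f"{namespace}: {n}")
    print(f"Total custom metrics: {sum(counts.values())}")
```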

Related

AWS CloudWatch is charging me without my using it

Recently I discovered my bill rising even though I'm not using anything above the free tier, or only with very minor charges.
On the billing management page it was clear that the charges are coming from CloudWatch Alarms.
My question is why, and how can I stop them? I can see that the alarms are being created by DynamoDB auto scaling, but I can't keep being charged for such a simple thing. I'm sure there is an option to disable it, but I can't figure it out.
Edit: I checked the "Hide auto scaling alarms" box, but I don't think that's the fix; fingers crossed it is. :P
This is part of DynamoDB auto scaling. For a small project, you should consider using DynamoDB without provisioned throughput (on-demand capacity).
The AWS Free Tier includes 10 alarm metrics (not applicable to high-resolution alarms).
See How can I determine why I was charged for CloudWatch usage, and then how can I reduce future charges?
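If you want to confirm which alarms DynamoDB auto scaling created before changing anything, a rough boto3 sketch like the one below can list them. The "TargetTracking-table/" name prefix is an assumption about how Application Auto Scaling typically names these alarms, so verify the actual names in your account; the region is a placeholder.

```python
# Sketch: list alarms that appear to belong to DynamoDB auto scaling.
# The "TargetTracking-table/" prefix is an assumption; check your own alarm names.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region

paginator = cloudwatch.get_paginator("describe_alarms")
for page in paginator.paginate(AlarmNamePrefix="TargetTracking-table/"):
    for alarm in page["MetricAlarms"]:
        print(alarm["AlarmName"], alarm["StateValue"])
```

Note that deleting these alarms by hand generally doesn't help, because the scaling policy recreates them; disabling auto scaling on the table or switching it to on-demand capacity is the actual fix.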

CloudWatch Cost - Data Processing

I'd like to know if it's possible to discover which resource is behind this cost in my Cost Explorer. Grouping by usage type, I can see it is DataProcessing bytes, but I don't know which resource is consuming this amount of data.
Does anyone have an idea how to discover this in CloudWatch?
This is almost certainly because something is writing more data to CloudWatch than in previous months.
As stated on this AWS Support page about unexpected CloudWatch Logs bill increases:
Sudden increases in CloudWatch Logs bills are often caused by an increase in ingested or storage data in a particular log group. Check data usage using CloudWatch Logs Metrics and review your Amazon Web Services (AWS) bill to identify the log group responsible for bill increases.
Your screenshot identifies the large usage type as APS2-DataProcessing-Bytes. I believe that the APS2 part is telling you it's about the ap-southeast-2 region, so start by looking in that region when following the instructions below.
Here's a brief summary of the steps you need to take to find out which log groups are ingesting the most data:
How to check how much data you're ingesting
The IncomingBytes metric shows you how much data is being ingested in your CloudWatch log groups in near-real time. This metric can help you to determine:
Which log group is the highest contributor towards your bill
Whether there's been a spike in the incoming data to your log groups or a gradual increase due to new applications
How much data was pushed in a particular period
To query a small set of log groups:
Open the Amazon CloudWatch console.
In the navigation pane, choose Metrics.
For each of your log groups, select the IncomingBytes metric, and then choose the Graphed metrics tab.
For Statistic, choose Sum.
For Period, choose 30 Days.
Choose the Graph options tab and choose Number.
At the top right of the graph, choose custom, and then choose Absolute. Select a start and end date that corresponds with the last 30 days.
For more details, and for instructions on how to query hundreds of log groups, read the full AWS support article linked above.
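If you'd rather script this than click through the console, the following is a rough boto3 equivalent of the steps above: it sums the AWS/Logs IncomingBytes metric per log group over the last 30 days and prints the biggest contributors. The region is a placeholder (ap-southeast-2, since that's what APS2 suggests).

```python
# Sketch: sum IncomingBytes per log group over the last 30 days,
# roughly equivalent to the console steps described above.
import boto3
from datetime import datetime, timedelta, timezone

region = "ap-southeast-2"  # placeholder: the region suggested by the usage type
logs = boto3.client("logs", region_name=region)
cloudwatch = boto3.client("cloudwatch", region_name=region)

end = datetime.now(timezone.utc)
start = end - timedelta(days=30)

results = []
for page in logs.get_paginator("describe_log_groups").paginate():
    for group in page["logGroups"]:
        name = group["logGroupName"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/Logs",
            MetricName="IncomingBytes",
            Dimensions=[{"Name": "LogGroupName", "Value": name}],
            StartTime=start,
            EndTime=end,
            Period=86400,            # daily data points
            Statistics=["Sum"],
        )
        total = sum(point["Sum"] for point in stats["Datapoints"])
        results.append((total, name))

# Print the top 20 log groups by ingested bytes.
for total, name in sorted(results, reverse=True)[:20]:
    print(f"{total / 1e9:10.2f} GB  {name}")
```

This makes one GetMetricStatistics call per log group, so with hundreds of log groups it will take a while.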
Apart from the steps Gabe mentioned, what helped me identify the resource that was creating a large number of logs was:
heading over to CloudWatch,
selecting the region shown in Cost Explorer,
selecting Log Groups,
and, from the settings under Log Groups, enabling the Stored bytes column so it is visible.
This showed me which service was causing a lot of logs to be written to CloudWatch.
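The same Stored bytes view can also be pulled via the API; here is a minimal sketch (the region is a placeholder):

```python
# Sketch: list log groups in a region, sorted by stored bytes (largest first).
import boto3

logs = boto3.client("logs", region_name="ap-southeast-2")  # placeholder region

groups = []
for page in logs.get_paginator("describe_log_groups").paginate():
    for group in page["logGroups"]:
        groups.append((group.get("storedBytes", 0), group["logGroupName"]))

for stored, name in sorted(groups, reverse=True)[:20]:
    print(f"{stored / 1e9:10.2f} GB  {name}")
```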

How to find out which log group a CloudWatch GetMetricData cost is for?

We recently had a huge cost increase (8x) on the CloudWatch GetMetricData operation. We have a lot of log groups and different teams on the same AWS account.
Do you know how we could find out which log group the GetMetricData calls are for?
Thanks.
Unfortunately, there's no easy answer to your question. We had the same issue, where a line on the bill called "GetMetricData API" was getting completely out of control. It's a shame AWS CloudTrail does not log such requests. To discover the root cause, we had to disable the external monitoring tools we had plugged into this account one by one and watch for a dent in the bill. See this article.
AWS does not tie GetMetricData charges to specific CloudWatch log groups, so sadly this is not possible to see. The only things you can see on a per-log-group basis are "processing bytes" and storage. If you believe those could be close proxies, then you can query them directly via Cost and Usage Reports... but it may be that ingestion costs are not at all tied to the querying of metric data.
An alternative hosted solution for seeing all of this data aggregated together is https://www.vantage.sh/, which will query all CloudWatch log groups and show you all the costs it can on a per-log-group basis, but you'll need to enable "Advanced Analytics" from them.
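One thing that can at least narrow the timing down is watching when the GetMetricData call volume spikes and correlating that with whichever monitoring tool was connected at the time. CloudWatch publishes API usage counts in the AWS/Usage namespace; the dimension values used below are my assumption of how that metric is keyed, so confirm them against what appears under AWS/Usage in your console before relying on this sketch.

```python
# Sketch: daily GetMetricData call counts from the AWS/Usage namespace.
# The dimension names/values below are assumptions; confirm them in the
# CloudWatch console under the AWS/Usage namespace first.
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region
end = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Usage",
    MetricName="CallCount",
    Dimensions=[
        {"Name": "Service", "Value": "CloudWatch"},
        {"Name": "Type", "Value": "API"},
        {"Name": "Resource", "Value": "GetMetricData"},
        {"Name": "Class", "Value": "None"},
    ],
    StartTime=end - timedelta(days=30),
    EndTime=end,
    Period=86400,        # one data point per day
    Statistics=["Sum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"].date(), int(point["Sum"]))
```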

AWS CloudWatch unused custom metrics retention and pricing - 2018

It looks like custom metrics will be kept for 15 months, if I understand it correctly, since they get aggregated to lower resolutions over time, according to https://aws.amazon.com/cloudwatch/faqs. Does that mean we have to pay for at least 15 months once we create a custom metric?
I have the CloudWatch agent installed to collect various metrics using user_data. It creates new metrics for every new instance. After running many tests, I have more than 6,000 custom metrics, but most of them are unused. Since there is no way to delete custom metrics, do I get charged for those unused metrics until they expire (15 months)? I hope I'm wrong on this :]
Please clarify how we get charged for unused custom metrics.
You will not get charged for those. You are only charged for a metric for the duration during which you publish data to it. It's not very clear on the CloudWatch pricing page, but they hint at it in the original pricing reduction blog post (https://aws.amazon.com/blogs/aws/aws-price-reduction-cloudwatch-custom-metrics/).
You will, however, get charged for retrieving those metrics (API costs).
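As a rough back-of-the-envelope check (the pricing page states that custom metric charges are prorated by the hour and metered only while you send data), a metric that only receives data for a few hours costs a small fraction of the headline monthly rate. The $0.30 figure below is the standard first-tier price, used purely for illustration:

```python
# Sketch: rough prorated cost of short-lived custom metrics.
# Assumes the first-tier price of $0.30 per metric-month and ~730 hours per month.
PRICE_PER_METRIC_MONTH = 0.30
HOURS_PER_MONTH = 730

def metric_cost(hours_published, metric_count=1):
    """Approximate charge for metrics that only received data for `hours_published` hours."""
    return metric_count * PRICE_PER_METRIC_MONTH * (hours_published / HOURS_PER_MONTH)

# 6,000 metrics that each only received data for ~3 hours during the month:
print(f"${metric_cost(3, 6000):.2f}")  # roughly $7.40, not 6,000 * $0.30 = $1,800
```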
We have the same issue to resolve; waiting for clarification from AWS.

CloudWatch - Metrics are expiring

I currently have a bunch of custom metrics based in multiple regions across our AWS account.
I thought I was going crazy, but I have now confirmed that a metric I created a while ago expires when not used for a certain time period (possibly 2 weeks).
Here's my setup.
I create a new metric from my log entries - which has no expiry date.
I then go to the main page in CloudWatch --> then to Metrics to view any metrics (I understand this will only display new metric hits when there are hits that match the metric rule).
About 2 weeks ago, I had 9 metrics logged under my Custom Namespaces, and I now have 8 - as if it does not keep all the data.
As far as I'm aware, all my metrics should stay in place (unless I remove them); however, it seems that if these are not hit consistently, the data "expires". Is that correct? If so, how are you meant to track historical data?
Thanks
CloudWatch will remove metrics from search if there was no new data published for that metric in the last 2 weeks.
This is mentioned in passing in the FAQ for EC2 metrics, but I think it applies to all metrics.
From the 'Will I lose the metrics data if I disable monitoring for an Amazon EC2 instance?' question in the FAQ:
CloudWatch console limits the search of metrics to 2 weeks after a metric is last ingested to ensure that the most up to date instances are shown in your namespace.
Your data is still there, however; it just adheres to a different retention policy.
You can still get your data if you know what the metric name is. If you added your metric to a dashboard, it will still be visible there. You can use the CloudWatch PutDashboard API to add the metric to a dashboard, or use the CloudWatch GetMetricStatistics API to get the raw data.
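For example, here is a minimal boto3 sketch of that last option; the namespace and metric name are placeholders for whatever your metric filter publishes, as is the region:

```python
# Sketch: pull raw data for a metric that no longer appears in the console search.
# "MyApp" and "ErrorCount" are placeholder names; substitute your own namespace/metric.
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")  # placeholder region
end = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="MyApp",
    MetricName="ErrorCount",
    StartTime=end - timedelta(days=90),  # well beyond the 2-week search window
    EndTime=end,
    Period=86400,                        # daily data points
    Statistics=["Sum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"].date(), point["Sum"])
```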