I would like to know the actual size of my logs and how fast they grow.
Looking at CloudWatch > Metrics > Account > IncomingBytes and asking for the SUM over:
the last 3 months with a period of 30 days, I get 43 GB; but with a period of 7 days I get 17 GB, and with a period of 1 day, 45 MB;
the last 4 weeks with a period of 30 days, I get 63 GB; but with a period of 7 days, 784 KB, and with a period of 1 day, 785 KB.
I do not understand this. How can I get the size of my logs right now, and how can I find out how they grow over time (for example, per day)?
CloudWatch Logs doesn't publish a metric for "bytes stored right now." A sum of IncomingBytes just shows the bytes received in whatever period you look at; it doesn't account for bytes already stored, or for bytes that are removed by a retention policy or a deleted stream.
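That said, IncomingBytes with a one-day period is a reasonable way to see how fast the logs grow. Roughly something like this sketch (the log group name and the two-week window are just placeholders):

import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client('cloudwatch')
end = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/Logs',
    MetricName='IncomingBytes',
    Dimensions=[{'Name': 'LogGroupName', 'Value': '/my/example/log-group'}],
    StartTime=end - timedelta(days=14),
    EndTime=end,
    Period=86400,        # one datapoint per day
    Statistics=['Sum'],  # bytes ingested during that day
)
for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'].date(), int(point['Sum']), 'bytes ingested')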
However, you can get the current reported bytes from the log group description. Here's a Python program that iterates all log groups and prints the answer:
import boto3

client = boto3.client('logs')
paginator = client.get_paginator('describe_log_groups')
for page in paginator.paginate():
    for group in page['logGroups']:
        # storedBytes is the size CloudWatch Logs currently reports for the group
        print(f"{group['logGroupName']}: {group['storedBytes']}")
If it's important to track this over time, I'd wrap it in a Lambda that runs nightly (or however often you want) and reports the number as a custom metric.
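A rough sketch of what that Lambda could publish (the namespace and metric name here are just examples):

import boto3

logs = boto3.client('logs')
cloudwatch = boto3.client('cloudwatch')

def handler(event, context):
    # Add up storedBytes across all log groups...
    total = 0
    for page in logs.get_paginator('describe_log_groups').paginate():
        for group in page['logGroups']:
            total += group.get('storedBytes', 0)
    # ...and publish the total as a custom metric you can graph and alarm on.
    cloudwatch.put_metric_data(
        Namespace='Custom/Logs',
        MetricData=[{
            'MetricName': 'TotalStoredBytes',
            'Value': total,
            'Unit': 'Bytes',
        }],
    )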
The problem was related to the CloudWatch configuration (graph options): "latest value" was selected, and it should have been "time range value".
After changing it, CloudWatch showed me that I have some TB, and changing the period kept showing the same values.
I'm trying to monitor in New Relic whether my Lambda has been executed within the last 25 hours, and to alert if it hasn't.
I have the following NRQL which gives me the graph I want to see:
SELECT sum(`provider.invocations.Sum`) FROM ServerlessSample WHERE provider.resource = 'my_lambda_name'
I then just want to say that if it dips below 1 for 1500 minutes (25 hours) then alert, but NR only allows me to set an alarm for 120 minutes. Any tips on how to get around this?
Interesting question. As far as I have seen on the New Relic discussion page (the Explorers Hub), there might be a solution for your task.
Can you please review this link:
https://discuss.newrelic.com/t/relic-solution-extending-the-functionality-of-nrql-alert-conditions-beyond-a-single-minute/75441
If you think about this for a moment, you might see how NRQL queries using percentile or stddev are a lot less useful than they seem, when used in an alert condition. After all, if you calculate the standard deviation over an hour (or 24 hours), that can be meaningful. But stddev(duration), or percentile(duration,95) calculated over only 60 seconds is less meaningful.
I think that limit is 24 hours, but I haven't tested it yet.
Hope this helps. I will try to give it a go as well to see whether it works.
Introduction
We are trying to "measure" the cost of a specific use case on one of our Aurora DBs that is not used very often (we use it for staging).
Yesterday at 18:18 hrs. UTC we issued some representative queries to it and today we were examining the resulting graphs via Amazon CloudWatch Insights.
Since we are being billed USD 0.22 per million read/write IOs, we need to know how many of those there were during our little experiment yesterday.
A complicating factor is that in Cost Explorer it is not possible to group the final billed costs for read/write IOs per DB instance! Therefore, the only thing we can think of to estimate the cost is the read/write volume IO graphs in CloudWatch Insights.
So we went to CloudWatch Insights and selected the graphs for read/write IOs. Then we selected the period of time in which we did our experiment. Finally, we examined the graphs with different options: "Number" and "Lines".
Graph with "number"
This shows us the picture below, suggesting a total billable IO count of 266 + 510 = 776. Since we have chosen the "Sum" statistic, we assume this would indicate a cost of about USD 0.00017 in total.
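In other words, with the rate quoted above:

read_ios, write_ios = 266, 510            # the two "Number" values
price_per_million_ios = 0.22              # USD per million read/write IOs
cost = (read_ios + write_ios) / 1_000_000 * price_per_million_ios
print(f"{cost:.5f} USD")                  # ~0.00017 USD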
Graph with "lines"
However, if we choose the "Lines" option, we see another picture, with 5 points on the line: the first ones around 500 (for read IOs) and the last one at approx. 750. This suggests a total of 5,000 read/write IOs.
Our question
We are not really sure which interpretation to go with and the difference is significant.
So our question now is: how much did our little experiment cost us, and, equivalently, how should we interpret these graphs?
Edit:
Using 5-minute intervals (as suggested in the comments) we get (see below) a horizontal line with points at 255 (read IOs) for a whole hour around the time we did our experiment. But the experiment took less than 1 minute, at 19:18 (UTC).
Will the (read) billing be for 12 * 255 IOs, or 255 ... (or something else altogether)?
Note: This question triggered another follow-up question created here: AWS CloudWatch insights graph — read volume IOs are up much longer than actual reading
From the Aurora RDS documentation:
VolumeReadIOPs
The number of billed read I/O operations from a cluster volume within a 5-minute interval.
Billed read operations are calculated at the cluster volume level, aggregated from all instances in the Aurora DB cluster, and then reported at 5-minute intervals. The value is calculated by taking the value of the Read operations metric over a 5-minute period. You can determine the amount of billed read operations per second by taking the value of the Billed read operations metric and dividing by 300 seconds. For example, if the Billed read operations returns 13,686, then the billed read operations per second is 45 (13,686 / 300 = 45.62).
You accrue billed read operations for queries that request database pages that aren't in the buffer cache and must be loaded from storage. You might see spikes in billed read operations as query results are read from storage and then loaded into the buffer cache.
Imagine AWS reports these data points, one every 5 minutes:
[100, 150, 200, 70, 140, 10]
and you use the "Sum of 15 minutes" statistic, as in your image.
First, the "number" visualization represents only the last aggregated group; in your case of a 15-minute aggregation, that would be (70 + 140 + 10).
Edit: that was wrong. The "number" visualization represents the whole selected duration, aggregated, which would be the total of (100 + 150 + 200 + 70 + 140 + 10).
The "line" visualization will represent all the aggregated groups. which would in this case be 2 points (100+150+200) and (70+140+10)
It can be a little hard to understand at first if you are not used to data points and aggregations. So I suggest that you set your "line" chart to "Sum of 5 minutes"; then take the value of each point and divide it by 300, as suggested by the doc, and sum them all.
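As an illustration with the made-up samples from above:

samples = [100, 150, 200, 70, 140, 10]   # one value per 5-minute interval

# "Number" view over the whole selected duration (Sum statistic):
print(sum(samples))                                                  # 670

# "Line" view with "Sum of 15 minutes": one point per 3 samples
print([sum(samples[i:i + 3]) for i in range(0, len(samples), 3)])    # [450, 220]

# Per-second rate of a single 5-minute point, as the doc describes:
print(samples[0] / 300)                                              # about 0.33 IO/s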
Added images for easier visualization
We are in the process of identifying Stackdriver metrics.
I am specifically looking at the GCP predefined metric subscription/ack_message_count, whose description is "Cumulative count of messages acknowledged by Acknowledge requests, grouped by delivery type. Sampled every 60 seconds. After sampling, data is not visible for up to 240 seconds."
Can anyone help me understand the highlighted part: what does "Sampled every 60 seconds. After sampling, data is not visible for up to 240 seconds." mean?
Once I check this metric, will it not be available for the next 240 seconds?
Thanks
"Sampled every" refers to granularity. In this case, you'll get a data point for every minute.
"not visible" refers to freshness. In this case, the newest data point will describe the system as it was 4 minutes ago. Put another way, if you do something and watch the graphs you won't see the metric reflect the change for 4 minutes.
From my understanding, the data is polled every 60 seconds, but it can take up to 240 seconds from the metric's creation until the data becomes visible. The BigQuery section makes this a bit clearer, because there the numbers are such that no other interpretation would be feasible:
Example: Scanned bytes. Sampled every 60 seconds. After sampling, data is not visible for up to 21720 seconds.
In this question, DynamoDB read/write capacity explanation, someone answered that each query of DynamoDB would take 3 read capacity units.
However, after viewing the metrics I got this:
The latest point shows 0.3333333
However, I used 2 GetItem calls in a single script. So is there any explanation for this? Shouldn't it be 2 read capacity units?
Thanks! I'm new to DynamoDB and the read/write capacity can be confusing :(
What you are looking at is averaged over 1 minute, so 1 read unit corresponds to a capacity of 60 reads per minute.
If you only run one test of 2 reads, it will smear out to a small number. You need to run over a longer period to get a true measure of your read requirements.
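For example, you could look at the Sum of consumed read capacity per minute over a longer window instead of a single averaged point. A rough sketch (the table name and one-hour window are placeholders):

import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client('cloudwatch')
end = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/DynamoDB',
    MetricName='ConsumedReadCapacityUnits',
    Dimensions=[{'Name': 'TableName', 'Value': 'my-example-table'}],
    StartTime=end - timedelta(hours=1),
    EndTime=end,
    Period=60,
    Statistics=['Sum'],   # read units actually consumed in each minute
)
sums = [p['Sum'] for p in stats['Datapoints']]
print('total read units consumed:', sum(sums))
print('busiest minute:', max(sums) if sums else 0)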
How should I interpret the AWS EC2 CloudWatch NetworkIn and NetworkOut metrics?
What does the Statistic: Average in the chart refer to?
The docs state that "the units for the Amazon EC2 NetworkIn metric are Bytes because NetworkIn tracks the number of bytes that an instance receives on all network interfaces".
When viewing the chart below, Network In (Bytes), with Statistic: Average and a Period: 5 Minutes (note that the time window is zoomed in to around five hours, not one week), it is not immediately obvious how the average is calculated.
Instance i-aaaa1111 (orange) at 15.29: 2664263.8
If I change Statistic to “Sum”, I get this:
The same instance (i-aaaa1111), now at 15.31: 13321319
It turns out 13321319/5 = 2664263.8, suggesting that incoming network traffic during those five minutes was, on average, 2664263.8 Bytes/minute.
=> 2664263.8/60 ≈ 44404.4 Bytes/second
=> 44404.4/1024 ≈ 43.4 KB/s
=> 43.4 * 8 ≈ 350 Kbps
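The same arithmetic as a quick script:

average_bytes = 13321319 / 5              # the 5-minute Sum spread over 5 minutes
bytes_per_second = average_bytes / 60     # treat the Average as bytes/minute
print(bytes_per_second)                   # ~44404.4
print(bytes_per_second / 1024)            # ~43.4 KB/s
print(bytes_per_second / 1024 * 8)        # ~347, i.e. roughly 350 Kbps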
I tested this by repeatedly copying a large file from one instance to another, transferring at an average speed of 30.1MB/s. The CloudWatch metric was 1916943925 Bytes (Average) => around 30.5MB/s
The metric, "Network In (Bytes)", refers to bytes/minute.
It appears in my case that the average is computed over the period specified. In other words: for '15 Minutes', it divides the sum of bytes for the 15-minute period by 15, for '5 Minutes', it divides the sum for the 5-minute period by 5.
Here is why I believe this: I used this chart to debug an upload where rsync was reporting ~710kB/sec (~727,000 bytes / sec) when I expected a faster upload. After selecting lots of different sum values in the EC2 plot, I determined that the sums were correct numbers of bytes for the period specified (selecting a 15 minute period tripled the sum compared to a 5 minute period). Then viewing the average and selecting different periods shows that I get the same value of ~45,000,000 when I select a period of "5 Minutes", "15 Minutes", or "1 Hour".
45,000,000 (bytes/???) / 730,000 (bytes/sec) is approximately 60, so ??? is a minute (60 seconds). In fact, ~45,000,000 / 1024 / 60 = ~730 kB/sec and this is within 3% of what rsync was reporting.
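A quick sanity check on those numbers:

average_value = 45_000_000      # the Average datapoint, same for 5 min, 15 min, 1 hour
rsync_bytes_per_sec = 730_000   # roughly what rsync reported
print(average_value / rsync_bytes_per_sec)   # ~61.6, so the value covers ~60 seconds
print(average_value / 60 / 1024)             # ~732 kB/s, within a few % of rsync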
Incidentally, my 'bug' was user error - I had failed to pass the '-z' option to rsync and therefore was not getting the compression boost I expected.