I've successfully created a custom metric with the SDK, but I'm not able to remove it.
I can't find an option in the web console to remove it (and I can't find a method in the SDK to remove/cancel it either).
// the code is not important, I've pasted it just to show it works
IAmazonCloudWatch client = new AmazonCloudWatchClient(RegionEndpoint.EUWest1);

List<MetricDatum> data = new List<MetricDatum>();
data.Add(new MetricDatum()
{
    MetricName = "PagingFilePctUsage",
    Timestamp = DateTime.Now,
    Unit = StandardUnit.Percent,
    Value = percentPageFile.NextValue()
});
data.Add(new MetricDatum()
{
    MetricName = "PagingFilePctUsagePeak",
    Timestamp = DateTime.Now,
    Unit = StandardUnit.Percent,
    Value = peakPageFile.NextValue()
});

client.PutMetricData(new PutMetricDataRequest()
{
    MetricData = data,
    Namespace = "mycompany/myresources"
});
It created the metrics under a custom namespace called "mycompany/myresources", but I can't remove them.
Amazon CloudWatch retains metrics for 15 months.
From Amazon CloudWatch FAQs - Amazon Web Services (AWS):
CloudWatch retains metric data as follows:
Data points with a period of less than 60 seconds are available for 3 hours. These data points are high-resolution custom metrics.
Data points with a period of 60 seconds (1 minute) are available for 15 days.
Data points with a period of 300 seconds (5 minutes) are available for 63 days.
Data points with a period of 3600 seconds (1 hour) are available for 455 days (15 months).
So, just pretend that your old metrics don't exist. Most graphs and alarms only look back 24 hours, so old metrics typically won't be noticed, aside from appearing as a name in the list of metrics.
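For what it's worth, there is no call in the CloudWatch API to delete a metric or a custom namespace; the metric simply stops appearing once all of its data points have expired. If you want to see what CloudWatch still lists in the meantime, here is a minimal sketch using Python/boto3 (the namespace and region are taken from the question's code):

import boto3

# List whatever CloudWatch still reports under the custom namespace.
# There is no delete call; the entries disappear on their own as the data ages out.
cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

paginator = cloudwatch.get_paginator("list_metrics")
for page in paginator.paginate(Namespace="mycompany/myresources"):
    for metric in page["Metrics"]:
        print(metric["MetricName"])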
QuickSight only supports 24 refreshes per 24 hours for FULL REFRESH.
I want to refresh the data every 30 minutes.
Answer:
Scenario:
Let us say I want to fetch data from the source (Jira), push it to SPICE, and render it in QuickSight dashboards.
Requirement:
Push the data once every 30 minutes.
Quicksight supports the following:
Full refresh
Incremental refresh
Full refresh:
Process - Old data is replaced with new data.
Frequency - Once every hour
Refresh count - 24 / day
Incremental refresh:
Process - New data gets appended to the dataset.
Frequency - Once every 15 minutes
Refresh count - 96 / day
Issue:
We need to push the data once every 30 minutes.
It is going to be a FULL_REFRESH.
When it comes to full refresh, QuickSight only supports an hourly schedule.
Solution:
We can leverage API support from AWS.
Package - Python Boto3
Class - QuickSight.Client
Method - create_ingestion
Process - You can manually refresh datasets by starting a new SPICE ingestion.
Refresh cycle: Each 24-hour period is measured starting 24 hours before the current date and time.
Limitations:
Enterprise edition accounts: 32 times in a 24-hour period.
Standard edition accounts: 8 times in a 24-hour period.
Sample code:
Python - Boto3 for AWS:
import boto3

client = boto3.client('quicksight')
response = client.create_ingestion(
    DataSetId='dataSetId',
    IngestionId='jira_data_sample_ingestion',
    AwsAccountId='AwsAccountId',
    IngestionType='FULL_REFRESH'  # or 'INCREMENTAL_REFRESH'
)
awswrangler (note: this example cancels a running ingestion rather than starting one):
import awswrangler as wr
wr.quicksight.cancel_ingestion(ingestion_id="jira_data_sample_refresh", dataset_name="jira_db")
CLI:
aws quicksight create-ingestion --data-set-id dataSetId --ingestion-id jira_data_sample_ingestion --aws-account-id AwsAccountId --region us-east-1
API:
PUT /accounts/AwsAccountId/data-sets/DataSetId/ingestions/IngestionId HTTP/1.1
Content-type: application/json
{
"IngestionType": "string"
}
Conclusion:
Using this approach we can achieve 56 full refreshes per day for our dataset (24 scheduled hourly refreshes plus 32 API-triggered ingestions on an Enterprise edition account). We can also go one step further, find the peak hours of our source tool (Jira), and concentrate the API-triggered refreshes there; that way we can even achieve a refresh every 10 minutes during those hours.
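To illustrate what the API-triggered part could look like when run on a schedule (for example from a cron job or a scheduled Lambda), here is a rough Python/Boto3 sketch; the account ID and dataset ID are placeholders, and deriving the ingestion ID from the current timestamp is just one way to keep it unique per run:

import datetime

import boto3

client = boto3.client('quicksight')

# Ingestion IDs must be unique per dataset, so derive one from the current time.
ingestion_id = 'jira-refresh-' + datetime.datetime.utcnow().strftime('%Y%m%d%H%M')

response = client.create_ingestion(
    AwsAccountId='123456789012',   # placeholder account ID
    DataSetId='dataSetId',         # placeholder dataset ID
    IngestionId=ingestion_id,
    IngestionType='FULL_REFRESH'
)
print(response['IngestionStatus'])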
I'm having trouble wrapping my head around GCP Logs Based Metrics. I have the following messages being logged from a cloud function:
insertId: qwerty
jsonPayload:
accountId: 60da91d2-7391-4979-ba3b-4bfb31fa7777
message: Replay beginning. Event tally=1
metric: stashed-events-tally
tally: 5
labels:
execution_id: n2iwj3335agb
What I'd like to do is sum up the values in the tally field. I've looked into logs based metrics and most of the examples I've seen seem to concern themselves with COUNTing the number of log messages that match the given filter. What I need to do is SUM the tally value.
Here's what I have so far (I'm using terraform to deploy the logs based metric):
resource "google_logging_metric" "my_metric" {
name = "mymetric"
filter = "resource.type=cloud_function AND resource.labels.function_name=${google_cloudfunctions_function.function.name} AND jsonPayload.metric=stashed-events-tally"
metric_descriptor {
metric_kind = "DELTA"
value_type = "DISTRIBUTION"
display_name = "mymetric"
}
value_extractor = "EXTRACT(jsonPayload.tally)"
bucket_options {
linear_buckets {
num_finite_buckets = 10
width = 1
}
}
}
Do I have to do something specific to SUM those values up, or is that defined wherever the metric is consumed (e.g. on a monitoring dashboard)?
As I say, I'm having trouble wrapping my head around this.
When you instrument your code, there are two steps:
Get the metrics
Visualize/create alerts on the metrics
A log-based metric simply converts log entries into a metric.
Then, if you want to perform a sum (over a time window, of course), you have to ask your dashboarding or monitoring system to perform that aggregation, for instance with Cloud Monitoring.
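Concretely, once the Terraform above has been applied, the metric shows up in Cloud Monitoring as logging.googleapis.com/user/mymetric, and the summing happens at query time. Here is a rough sketch with the google-cloud-monitoring Python client; the project ID is a placeholder, and using ALIGN_SUM to collapse each 5-minute window of the distribution into a sum of the extracted tally values is my assumption, so double-check it against the aggregation options Monitoring offers for distribution-valued metrics:

import time

from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-gcp-project"  # placeholder project ID

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {
        "end_time": {"seconds": now},
        "start_time": {"seconds": now - 3600},  # look back one hour
    }
)

# Sum the extracted values within each 5-minute alignment window.
aggregation = monitoring_v3.Aggregation(
    {
        "alignment_period": {"seconds": 300},
        "per_series_aligner": monitoring_v3.Aggregation.Aligner.ALIGN_SUM,
    }
)

results = client.list_time_series(
    request={
        "name": project_name,
        "filter": 'metric.type = "logging.googleapis.com/user/mymetric"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
        "aggregation": aggregation,
    }
)
for series in results:
    for point in series.points:
        # Depending on the aligner, the value may come back as a double or a distribution.
        print(point.interval.end_time, point.value)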
I have a requirement to PUT about 20 records per second to an S3 bucket.
That comes to roughly 20 * 60 * 60 * 24 * 30 = 51,840,000 PUTs per month.
I do not need any transformations, but I would certainly want the PUTs to be GZIPped and partitioned by year/month/day/hour.
Option 1 - Just do PutObject calls to S3
Price comes to roughly $260 a month
I would have to do the GZIP/partitioning etc. on the client side
Option 2 - Introduce a Firehose and wire it to S3
Let's say I buffer only once every 10 minutes; that is about 6 * 24 * 30 = 4,320 PUTs, and the S3 price comes down to $21. With each record about 20 KB, Firehose pricing is about 1,000 GB * $0.029, which comes to about $30. So the total is roughly $51. Costs for data transfer / storage etc. are the same in both approaches, I believe (a quick sketch of this arithmetic is below).
Firehose provides GZIP/partitioning/buffering for me OOTB
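A quick sketch of the arithmetic above, just to make the comparison explicit; the per-unit prices ($0.005 per 1,000 PUT requests, $0.029 per GB ingested into Firehose) are assumptions based on the pricing quoted in the question and should be checked against the current price list:

# Rough cost comparison between direct S3 PUTs and buffering through Firehose.
records_per_second = 20
record_size_kb = 20
seconds_per_month = 60 * 60 * 24 * 30

puts_direct = records_per_second * seconds_per_month          # 51,840,000 PUTs
s3_put_price = 0.005 / 1000                                    # assumed $ per PUT request
print("Option 1 S3 request cost:", round(puts_direct * s3_put_price, 2))    # ~259.2

puts_buffered = 6 * 24 * 30                                    # one PUT every 10 minutes
print("Option 2 S3 PUT count:", puts_buffered)                 # 4,320

data_gb = records_per_second * record_size_kb * seconds_per_month / (1024 * 1024)
firehose_price_per_gb = 0.029                                  # assumed Firehose ingestion price
print("Option 2 Firehose cost:", round(data_gb * firehose_price_per_gb, 2))  # ~28.7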
It appears that Option 2 is the best for my use case. Am I missing something here?
Thanks for looking!
I've started to play with DynamoDB and I've created a "dynamo-test" table with a hash PK on userId and a couple more columns (age, name). Read and write capacity is set to 5. I use Lambda and API Gateway with Node.js. Then I manually performed several API calls through API Gateway using a payload similar to this:
{
    "userId": "222",
    "name": "Test",
    "age": 34
}
I've tried to insert the same item a couple of times (which didn't produce an error but silently succeeded). Also, I used the DynamoDB console and browsed the inserted items several times (currently there are only 2). I haven't tracked exactly how many times I did those actions, but it was all done manually. And then, after an hour, I noticed 2 alarms in CloudWatch:
INSUFFICIENT_DATA
dynamo-test-ReadCapacityUnitsLimit-BasicAlarm
ConsumedReadCapacityUnits >= 240 for 12 minutes
No notifications
And there is a similar alarm for "...WriteCapacityLimit...". The write capacity alarm became OK after 2 minutes, but then went back again after 10 minutes. Anyway, I'm still reading and learning how to plan and monitor these capacities, but this hello-world example scared me a bit, as if I had exceeded my table's capacity :) Please point me in the right direction if I'm missing some fundamental part!
It's just an "INSUFFICIENT_DATA" message. It means that your table hasn't had any reads or writes in a while, so there is insufficient data available for the CloudWatch metric. This happens with the CloudWatch alarms for any DynamoDB table that isn't used very often. Nothing to worry about.
EDIT: You can now change a setting in CloudWatch alarms to ignore missing data, which will leave the alarm at its previous state instead of changing it to the "INSUFFICIENT_DATA" state.
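For reference, that setting can also be applied programmatically when the alarm is created or updated. A minimal Python/boto3 sketch, reusing the alarm details from the question (the statistic and the dimensions are assumptions, so adjust them to match your existing alarm):

import boto3

cloudwatch = boto3.client('cloudwatch')

# Recreate the alarm with TreatMissingData so that quiet periods no longer
# flip it into the INSUFFICIENT_DATA state.
cloudwatch.put_metric_alarm(
    AlarmName='dynamo-test-ReadCapacityUnitsLimit-BasicAlarm',
    Namespace='AWS/DynamoDB',
    MetricName='ConsumedReadCapacityUnits',
    Dimensions=[{'Name': 'TableName', 'Value': 'dynamo-test'}],
    Statistic='Sum',                 # assumption; match your current alarm
    Period=60,
    EvaluationPeriods=12,            # ">= 240 for 12 minutes" from the question
    Threshold=240.0,
    ComparisonOperator='GreaterThanOrEqualToThreshold',
    TreatMissingData='ignore'        # keep the previous state; 'notBreaching' is another option
)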
We have a statistic coming from a third party tool that is running on our servers. We want to post this statistic to cloud watch every 5 minutes. The stat is an incrementing number. We have no control over getting this number or the fact that it is incrementing.
The stat is basically, "number of dropped messages".
We want to be able to alarm whenever, over a period of 15 minutes, the number of dropped messages is greater than a certain threshold.
In order to do this with CloudWatch, we have been maintaining state about what the previous stat was and subtracting it from the current stat to compute the difference (the number of dropped messages since the last time we posted the metric), and then posting that difference to CloudWatch.
Is there a way to post the raw numbers to CloudWatch and have CloudWatch figure out the difference?
So let's say these are our metrics:
12:00 - 2000 -> post to CloudWatch "0"
12:05 - 2225 -> post to CloudWatch "225"
12:10 - 3350 -> post to CloudWatch "1125"
12:15 - 7700 -> post to CloudWatch "4350"
Instead of computing the difference since the last metric, can we just post 2000, 2225, 3350 and 7700, and be able to place an alarm on the difference between two periods?
You can achieve this through CloudWatch Metric Math (released in April 2018). See the documentation.
In your particular case, you could use the RATE or STDDEV functions.
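As an illustration of the metric-math route, here is a rough Python/boto3 sketch of an alarm built on an expression. The namespace, metric name, and threshold are placeholders, and RATE(m1) * PERIOD(m1) is one way to approximate the change between consecutive datapoints of the raw, ever-increasing counter:

import boto3

cloudwatch = boto3.client('cloudwatch')

cloudwatch.put_metric_alarm(
    AlarmName='dropped-messages-delta-alarm',   # placeholder alarm name
    EvaluationPeriods=3,                        # 3 x 5-minute periods = 15 minutes
    Threshold=1000.0,                           # placeholder threshold
    ComparisonOperator='GreaterThanThreshold',
    Metrics=[
        {
            'Id': 'm1',
            'MetricStat': {
                'Metric': {
                    'Namespace': 'MyApp',                 # placeholder namespace
                    'MetricName': 'DroppedMessagesRaw'    # placeholder metric name
                },
                'Period': 300,
                'Stat': 'Maximum'
            },
            'ReturnData': False
        },
        {
            'Id': 'delta',
            # RATE() is per second, so multiplying by the period approximates the
            # change between consecutive datapoints of the raw counter.
            'Expression': 'RATE(m1) * PERIOD(m1)',
            'Label': 'Dropped messages per period',
            'ReturnData': True
        }
    ]
)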