I'm using CloudWatch to monitor the cpu_usage_system metric from the CWAgent namespace.
I'm plotting data that is more than 24 hours old.
When I view the data in the regular CloudWatch metrics browser I see data points, but when I run the same query with CloudWatch SQL I do not.
Answer from AWS support:
CloudWatch Metrics Insights currently supports the latest three hours of data only. When you graph with a period larger than one minute, for example five minutes or one hour, there could be cases where the oldest data point differs from the expected value. This is because the Metrics Insights queries return only the most recent 3 hours of data, so the oldest datapoint, being older than 3 hours, accounts only for observations that have been measured within the last three hours boundary.
In simple words: currently, we can query only the most recent 3 hours of data, nothing beyond that. The documentation link with more information on this is below.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-metrics-insights-limits.html
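To make the 3-hour limit concrete, here is a sketch of how a Metrics Insights query is submitted through GetMetricData via the Expression field (assuming boto3; the query against the CWAgent namespace is a hypothetical example). The helper clamps the requested window to the supported 3 hours so the limitation is explicit:

```python
from datetime import datetime, timedelta, timezone

INSIGHTS_WINDOW = timedelta(hours=3)  # Metrics Insights returns only the latest 3 hours

def build_insights_request(expression, period_seconds=60, hours_back=3):
    """Build kwargs for CloudWatch GetMetricData carrying a Metrics Insights query.

    Even if StartTime is older than 3 hours, Metrics Insights only returns
    data from the most recent 3 hours, so we clamp the window up front.
    """
    end = datetime.now(timezone.utc)
    requested = timedelta(hours=hours_back)
    start = end - min(requested, INSIGHTS_WINDOW)
    return {
        "MetricDataQueries": [
            {
                "Id": "q1",
                "Expression": expression,  # the Metrics Insights SQL goes here
                "Period": period_seconds,
            }
        ],
        "StartTime": start,
        "EndTime": end,
    }

# Ask for 24 hours; the window is clamped to 3 hours.
params = build_insights_request('SELECT AVG(cpu_usage_system) FROM "CWAgent"',
                                period_seconds=300, hours_back=24)
print(params["EndTime"] - params["StartTime"])  # 3:00:00
# To run it for real: boto3.client("cloudwatch").get_metric_data(**params)
```

For historical data beyond 3 hours, the regular metric-browsing path (plain GetMetricData with MetricStat queries) is the way to go, as the support answer implies.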
Related
Recently we faced an outage due to a 403 rateLimitExceeded error. We are trying to set up an alert using a GCP metric for this error. However, the metrics bigquery.googleapis.com/query/count and bigquery.googleapis.com/job/num_in_flight do not show the number of running queries correctly. We believe we crossed the threshold of 100 concurrent queries several times over the past few days, but Metrics Explorer shows a maximum of only 5, and only on a few occasions. Do these metrics need any other configuration to show the right numbers, or should we use some other way to create an alert that fires when we cross 80% of the concurrent-query limit?
I want to create a dashboard/chart in Google Cloud Monitoring where I can see the total number of rows of my BigQuery table at all times.
With resource type "bigquery_dataset" and metric "uploaded_row_count" I only see the number of new rows per second with aligner "rate".
If I choose "sum" as aligner it only shows the number of new rows added for the chosen alignment period.
I'm probably missing something, but how do I see the total number of rows of a table?
PubSub subscriptions have this option with metric "num_undelivered_messages" and also Dataflow jobs with "element_count".
Thanks in advance.
There's an ongoing feature request for BigQuery table attributes in GCP's Cloud Monitoring metrics, but there's no ETA for when this feature will be rolled out. Please star and comment on it if you want the feature to be implemented in the future.
Cloud Monitoring charts and monitors only the (numeric) metric data that your Google Cloud project collects; in this case, the system metrics generated for BigQuery. Looking at the documentation, only the metric for uploaded rows is available, which has the behaviour you're seeing in the chart. The total number of rows is currently not available.
Therefore, as of this writing, what you want is unfortunately not possible due to Cloud Monitoring's limitations for BigQuery; there are only workarounds you can try.
For other readers who are happy with Mikhail Berlyant's comment, here is a thread on querying metadata, including row counts.
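As a sketch of that metadata workaround (the project and dataset names here are hypothetical), you can read row counts per table from the dataset's __TABLES__ metadata view, which does not scan the tables themselves:

```python
def row_count_sql(project, dataset):
    """Build a BigQuery query returning the row count of every table in a
    dataset, read from the __TABLES__ metadata (no table scan needed)."""
    return (
        f"SELECT table_id, row_count "
        f"FROM `{project}.{dataset}.__TABLES__` "
        f"ORDER BY row_count DESC"
    )

# Hypothetical names; run the query with e.g.
#   google.cloud.bigquery.Client().query(row_count_sql("my-project", "my_dataset"))
print(row_count_sql("my-project", "my_dataset"))
```

Since this is a query rather than a metric, a dashboard built on it would need something (a scheduled query, a small job) to run it periodically and write the result somewhere Cloud Monitoring can chart, such as a custom metric.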
I want to set up a weekly Google Play transfer, but it cannot be saved.
At first I set up a daily Play transfer job, and it worked. Then I tried to change the transfer frequency to weekly (every Monday at 7:30) and got an error:
"This transfer config could not be saved. Please try again.
Invalid schedule [every mon 7:30]. Schedule has to be consistent with CustomScheduleGranularity [daily: true]."
I think this document shows that the transfer frequency can be changed:
https://cloud.google.com/bigquery-transfer/docs/play-transfer
Can Google Play transfer be set to weekly?
By default transfer is created as daily. From the same docs:
Daily, at the time the transfer is first created (default)
Try creating a brand new weekly transfer. If that works, I would suspect a web UI bug. Here are two other options for changing your existing transfer:
BigQuery command-line tool: bq update --transfer_config. Only a very limited set of options is available, and the schedule cannot be updated this way.
BigQuery Data Transfer API: transferConfigs.patch. Most transfer options are updatable. An easy way to try it is the API Explorer. See the details on the transferConfig object; the schedule field needs to be defined:
Data transfer schedule. If the data source does not support a custom schedule, this should be empty. If it is empty, the default value for the data source will be used. The specified times are in UTC. Examples of valid format: 1st,3rd monday of month 15:30, every wed,fri of jan,jun 13:15, and first sunday of quarter 00:00. See more explanation about the format here:
https://cloud.google.com/appengine/docs/flexible/python/scheduling-jobs-with-cron-yaml#the_schedule_format
NOTE: the granularity should be at least 8 hours, or less frequent.
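As a sketch of the transferConfigs.patch route (the config name below is hypothetical; list your configs to find the real one), the request only needs the fields named in updateMask, so a weekly schedule change is a small PATCH body:

```python
import json

# Hypothetical transfer config resource name; find yours via transferConfigs.list
config_name = "projects/1234567890/locations/us/transferConfigs/abcd1234"

# Only fields named in updateMask are changed by the PATCH.
patch_body = {"schedule": "every monday 07:30"}
update_mask = "schedule"

url = (f"https://bigquerydatatransfer.googleapis.com/v1/"
       f"{config_name}?updateMask={update_mask}")

print("PATCH", url)
print(json.dumps(patch_body))
```

The same request is what the API Explorer builds for you; the schedule string has to follow the cron-yaml format linked above, and per the note, its granularity must be 8 hours or less frequent (weekly qualifies).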
The Google API metrics view only shows data from 1 hour to 30 days back from now. It shows a total, but when you narrow the graph it won't update the total for that span of time.
How do I see the total number of requests for a specific period of time?
Besides, it only provides requests per second, which varies constantly for my app.
I have tried using the "Traffic by API" graph on Google Cloud Platform and narrowing it to a shorter time range.
I expected the totals at the bottom to update with the request count for the shorter period of time.
Screenshot: one day of metrics adding up to 34,238 requests.
Screenshot: the graph narrowed from 18:00 to 21:00, with the count still at 34,238.
Looking into it, it seems you can go into the desired API and then into Metrics, where you will find a drop-down.
In that drop-down you can choose "Traffic by API" and filter the graph to the period you are interested in. Then download/export that graph and process it in Excel.
In the output file you will find requests per second; multiply each value by the sample interval in seconds (60 for 1-minute samples) and then create a pivot table to add everything up.
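The rate-to-total arithmetic can be sketched in a few lines (the sample values here are made up, and the 60-second interval assumes 1-minute export granularity):

```python
# Exported samples: (timestamp, requests_per_second), one sample per minute.
SAMPLE_INTERVAL_S = 60  # seconds covered by each exported sample

samples = [
    ("18:00", 2.5),
    ("18:01", 3.0),
    ("18:02", 1.5),
]

# Requests in each interval = rate (req/s) * interval length (s);
# summing the intervals gives the total for the chosen period.
total_requests = sum(rate * SAMPLE_INTERVAL_S for _, rate in samples)
print(total_requests)  # 420.0
```

This is exactly what the Excel pivot table does: per-interval request counts summed over whatever window you filtered to.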
I'm trying to set up a custom dashboard in CloudWatch, and I don't understand how "Period" affects how my data is displayed.
I couldn't find any coverage of this setting in the AWS documentation, so any guidance would be appreciated!
Period is the width of the time range covered by each datapoint on a graph; it defines the granularity at which you want to view your data.
For example, if you're graphing the total number of visits to your site during a day, you could set the period to 1 hour, which would plot 24 datapoints and show how many visitors you had in each hour of that day. If you set the period to 1 minute, the graph will display 1440 datapoints and you will see how many visitors you had in each minute of that day.
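The relationship between period and datapoint count works out like this (a minimal sketch using the one-day range from the example above):

```python
RANGE_SECONDS = 24 * 3600  # one day of data, as in the example

def datapoints(period_seconds):
    """Number of datapoints a graph plots for the range at a given period."""
    return RANGE_SECONDS // period_seconds

print(datapoints(3600))  # 24 datapoints at a 1-hour period
print(datapoints(60))    # 1440 datapoints at a 1-minute period
```

Each datapoint is the chosen statistic (Sum, Average, etc.) aggregated over one period-wide bucket, which is why a larger period gives a smoother, coarser graph.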
See the CloudWatch docs for more details:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html#CloudWatchPeriods
Here is a similar question that might be useful:
API Gateway Cloudwatch advanced logging