Can I set the timezone for reports in Amazon Cloudwatch? - amazon-web-services

I have an AWS account, and I am using CloudWatch to monitor my website. But all the reports/charts in CloudWatch are shown in UTC, so I have to convert the times to my timezone (UTC+05:30, Indian Standard Time) myself.
Is there any way to set the timezone for my reports?

They've (very) recently added a drop-down that allows you to set your local timezone.

The latest version of the AWS CloudWatch console seems to have rearranged the UI elements. Currently you can find the timezone selector by clicking the custom period selector. There are only two options: by default, the chart is shown in "UTC"; the second option is "Local timezone".

Related

Item Duration in Cache

I am trying to create a metric that measures how long an item has been in an ElastiCache cache. There does not seem to be any built-in metric for this in CloudWatch, and I have struggled to write a Logs Insights query that obtains this information.
I have tried running a query in Logs Insights to create this metric, but it requires matching on an ID, and the query language used by AWS does not seem to support these kinds of conditional queries, so I am unsure how to solve this problem.

Dynamic setting the timestamp fields in superset dashboards

I'm building a few dashboards in Apache Superset. All my timestamp fields are in UTC (for example, class_start_time and class_end_time).
I want the timestamp fields to be converted automatically to the timezone in which the dashboard is opened.
For example, if I open the dashboard in Norway, the UTC data should be converted to CET.
I have tried adding a value in the Hours offset field, but it's not working.
Can you please guide me on how to achieve this?
Just for reference:
Kibana dashboards (ELK stack) have a feature that automatically converts timestamps to the timezone in which the dashboard is opened. I need the same thing in Superset.
Normally you would set this with environment variables when you start the program or container. In Apache Superset, this is not possible. There is an ongoing discussion on GitHub about this issue. One GitHub user describes the problem and the workaround, which is far from workable:
Daylight savings causes issues where users have to update datasource
timezone offset for each datasource twice per year.
So the only thing you can do is update the hours offset twice a year. To make matters worse, if you use PostgreSQL, even this may not be possible due to a bug as described here.

Get google cloud uptime history to a third party application

I am trying to get the uptime history of my application (which is hosted in Google Cloud) onto a page of my own. Is there any API or something on Google Cloud for this? I only need the date and the up/down percentage or time.
I have already configured the uptime checks in the Google console, but I need to integrate this into my application.
Yes, you can but it's not obvious and it may be easier to use something other than Cloud Monitoring to export uptime data to a non-GCP site :-)
If you do want to use Cloud Monitoring to source this data into an off-GCP page, one of the Cloud Monitoring SDKs may be best. You can create a URL too (see below) but you'll need to authenticate this URL and that may make it too complex.
By way of an example, here's an Uptime check I created against my blog:
I recommend Google APIs Explorer as it's an excellent way to understand Google's services (via the REST APIs) and to test an approach.
First: List|Get Uptime Check(s)
https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.uptimeCheckConfigs/list
Plug into the form on the right-hand side the parent, with the value projects/${PROJECT}.
If your Project ID is freddie-210224-66311747 then you'd type projects/freddie-210224-66311747.
https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.uptimeCheckConfigs/get
For this one, you need to provide name, the value of projects/${PROJECT}/uptimeCheckConfigs/${UPTIME_CHECK}
If your Uptime check is called test, then you'd type projects/freddie-210224-66311747/uptimeCheckConfigs/test
NOTE In my case, I used an Uptime check name that included periods (my.blog.com) and this was converted (to my-blog-com). So, you may want to list first to check the name.
Click "Execute" (you don't need "API key" checked; it makes no difference).
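Outside APIs Explorer, the same List and Get calls are plain REST requests. Here's a minimal stdlib-only sketch of those endpoints; actually sending the requests requires an OAuth access token (e.g. from `gcloud auth print-access-token`), which is assumed here:

```python
# Sketch of the Monitoring uptimeCheckConfigs List|Get REST endpoints.
# Obtaining the OAuth access token is assumed (e.g. `gcloud auth print-access-token`).
import json
import urllib.request

BASE = "https://monitoring.googleapis.com/v3"

def list_uptime_checks_url(project_id: str) -> str:
    # List: the `parent` is projects/${PROJECT}
    return f"{BASE}/projects/{project_id}/uptimeCheckConfigs"

def get_uptime_check_url(project_id: str, check_id: str) -> str:
    # Get: the `name` is projects/${PROJECT}/uptimeCheckConfigs/${UPTIME_CHECK}
    return f"{BASE}/projects/{project_id}/uptimeCheckConfigs/{check_id}"

def fetch(url: str, access_token: str) -> dict:
    # Authenticated GET; returns the decoded JSON response.
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {access_token}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For the example above, `fetch(get_uptime_check_url("freddie-210224-66311747", "test"), token)` returns the same JSON as the APIs Explorer form. Remember the note about names: list first if your check name contained periods.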
What I learned is that Uptime checks are Metrics like all others. I confirmed this by watching the Chrome Dev Tools while I was watching Uptime checks.
Ensure that you use the correct metric name. You can use Monitoring's Metrics Explorer to confirm this:
The Resource Type is Uptime Check URL (uptime_url)
One (!) of the Metrics you may use is Request Latency (monitoring.googleapis.com/uptime_check/request_latency)
If you populate the Metrics Explorer, you should see the same data plotted as with the Uptime Check page.
Click Query Editor to get your Uptime Metric represented as Cloud Monitoring Query Language (MQL), remove any line-feeds. You can use:
fetch uptime_url | metric 'monitoring.googleapis.com/uptime_check/request_latency' | group_by 1m, [value_request_latency_mean: mean(value.request_latency)] | every 1m
So, now we want to query the Monitoring metric time series:
https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.timeSeries/query
The value for name is projects/${PROJECT}
For query, paste in the MQL from above, retaining the quotes, i.e. "fetch uptime_url ..."
Hit EXECUTE
You should receive a snapshot of the time-series data underlying your Uptime URL. You can revise the MQL to return exactly the subset you need. At 2021-02-24T20:55:38 the latency was 20.869:
So, to get e.g. request latencies for your uptime checks, you can use the Monitoring API's TimeSeries Query method and, with a suitable Query, this will yield JSON data including an array of Point (values). These values could then be transformed and surfaced into your external page.
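The timeSeries.query step can be sketched in the same stdlib-only style: build the POST (the `name` is projects/${PROJECT}, the body carries the MQL) and send it with a bearer token. As above, getting the OAuth token is assumed:

```python
# Sketch: POSTing the MQL above to the Monitoring timeSeries.query endpoint.
# Obtaining the OAuth access token is assumed (e.g. `gcloud auth print-access-token`).
import json
import urllib.request

MQL = ("fetch uptime_url "
       "| metric 'monitoring.googleapis.com/uptime_check/request_latency' "
       "| group_by 1m, [value_request_latency_mean: mean(value.request_latency)] "
       "| every 1m")

def build_query_request(project_id: str, query: str):
    # name is projects/${PROJECT}; the MQL goes in the JSON body as "query".
    url = f"https://monitoring.googleapis.com/v3/projects/{project_id}/timeSeries:query"
    body = json.dumps({"query": query}).encode()
    return url, body

def run_query(project_id: str, query: str, access_token: str) -> dict:
    url, body = build_query_request(project_id, query)
    req = urllib.request.Request(url, data=body, headers={
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    })
    with urllib.request.urlopen(req) as resp:
        # Response JSON includes timeSeriesData with the Point values
        # you can transform and surface on your external page.
        return json.load(resp)
```

`run_query("freddie-210224-66311747", MQL, token)` returns the same snapshot you saw in APIs Explorer.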

Can AWS Quicksight remember control / filter setting per user?

I have an AWS QuickSight dashboard defined with a parameter having dynamic defaults per user. The dashboard contains a filter defined by the parameter with corresponding control.
Is there any possibility to remember the setting of the controls / filters for each user, so next time when they view the dashboard the previous setting will be the default?
Thanks.
Filter and parameter persistence was launched in late 2020. The default behavior for all users is that dashboards will look exactly the same as how they left it the last time.
For embedded dashboards you can turn persistence on or off when calling the API to get the embedded dashboard URL (there is a StatePersistenceEnabled value you can set to TRUE/FALSE in that command).
https://docs.aws.amazon.com/quicksight/latest/APIReference/API_GetDashboardEmbedUrl.html
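For embedded dashboards, a minimal boto3 sketch of that call might look like this (the account ID, dashboard ID, and UserArn below are hypothetical placeholders; AWS credentials are assumed to be configured):

```python
def embed_url_params(account_id: str, dashboard_id: str, persist: bool) -> dict:
    # Parameters for GetDashboardEmbedUrl; StatePersistenceEnabled controls
    # whether each user's filter/control state is remembered between visits.
    return {
        "AwsAccountId": account_id,
        "DashboardId": dashboard_id,
        "IdentityType": "QUICKSIGHT",
        # Hypothetical user ARN -- replace with a real QuickSight user.
        "UserArn": "arn:aws:quicksight:us-east-1:111122223333:user/default/alice",
        "StatePersistenceEnabled": persist,
    }

def get_embed_url(params: dict) -> str:
    import boto3  # requires boto3 and configured AWS credentials
    qs = boto3.client("quicksight")
    return qs.get_dashboard_embed_url(**params)["EmbedUrl"]
```

Passing `persist=False` disables the remember-my-filters behavior for that embed URL.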
As of today, according to AWS Support, this feature is not available in Amazon QuickSight.

One or more points were written more frequently than the maximum sampling period configured for the metric

Background
I have a website deployed on multiple machines. I want to create a Google custom metric that reports its throughput: how many calls were served.
The idea was to create a custom metric that collects information about served requests and is updated once per minute. So on each machine this code runs at most once per minute, but the process happens on every machine in my cluster.
Running the code locally is working perfectly.
The problem
I'm getting this error: Grpc.Core.RpcException:
Status(StatusCode=InvalidArgument, Detail="One or more TimeSeries
could not be written: One or more points were written more frequently
than the maximum sampling period configured for the metric. {Metric:
custom.googleapis.com/web/2xx, Timestamps: {Youngest Existing:
'2019/09/28-23:58:59.000', New: '2019/09/28-23:59:02.000'}}:
timeSeries[0]; One or more points were written more frequently than
the maximum sampling period configured for the metric. {Metric:
custom.googleapis.com/web/4xx, Timestamps: {Youngest Existing:
'2019/09/28-23:58:59.000', New: '2019/09/28-23:59:02.000'}}:
timeSeries[1]")
Then, I was reading in the custom metric limitations that:
Rate at which data can be written to a single time series = one point per minute
I was expecting Google Cloud custom metrics to handle the concurrency issues for me.
According to their limitations, the only option for me to implement realtime monitoring is to add another application that collects information from all machines and updates it into a custom metric. That sounds like too much work for such a common use case.
What am I missing?
Add the machine name as a label on the metric, and each machine writes to its own time series, so you get per-machine metrics without colliding writes.
To SUM these metrics, go to Stackdriver > Metrics Explorer, group your metrics by project ID or by a label, for example, and then SUM the metrics.
https://cloud.google.com/monitoring/charts/metrics-selector#alignment
You can save the chart in a custom dashboard.
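A sketch of the two pieces of that answer: an in-process aggregator that flushes at most once per minute (respecting the one-point-per-minute limit on a single time series), and the timeSeries.create request body with a "machine" label so each host is its own series. The metric names match the question; the project and machine names are placeholders:

```python
import time
from collections import Counter

class MinuteAggregator:
    """Count responses locally; flush at most one point per minute,
    which respects the one-point-per-minute limit per time series."""
    def __init__(self):
        self.counts = Counter()
        self.last_flush = 0.0

    def record(self, status_class: str):
        self.counts[status_class] += 1  # e.g. "2xx", "4xx"

    def flush(self, now=None):
        now = time.time() if now is None else now
        if now - self.last_flush < 60:
            return None  # too soon: writing now would hit the sampling error
        snapshot, self.counts = dict(self.counts), Counter()
        self.last_flush = now
        return snapshot

def time_series_body(project_id, machine, status_class, value, end_time_rfc3339):
    # A "machine" label makes each host a distinct time series, so concurrent
    # writers no longer collide; SUM across the label in Metrics Explorer.
    return {
        "timeSeries": [{
            "metric": {
                "type": f"custom.googleapis.com/web/{status_class}",
                "labels": {"machine": machine},
            },
            "resource": {"type": "global", "labels": {"project_id": project_id}},
            "points": [{
                "interval": {"endTime": end_time_rfc3339},
                "value": {"int64Value": str(value)},
            }],
        }]
    }
```

Each machine POSTs its own body once per minute; the dashboard then sums the `machine` label back together.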