GCS metrics in the Google Cloud Monitoring dashboard - google-cloud-platform

When it comes to Cloud Storage metrics, the only options provided are 'Object count', 'Total bytes', and 'Total byte seconds'. What if I need a few more metrics, such as objects whose retention period is about to expire, or the maximum size of an object in the bucket? How can I get such metrics in a Monitoring dashboard?

I tried to replicate this in my GCP environment, but unfortunately there are no metrics for retention period or for the maximum size of an object in a bucket, so I recommend you create a feature request for additional Cloud Storage metrics. Regarding your other inquiry, check the Metrics Explorer documentation: you can choose a specific metric and build a chart for it, either through the configuration pane in the console or by fetching the data with the query editor (MQL or PromQL), and then save the chart to a custom dashboard.
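For illustration, here is a minimal sketch of how you could list the Cloud Storage metrics that are actually available, using the Python Cloud Monitoring client; the project ID is a placeholder, and the filter simply matches everything under storage.googleapis.com/storage:

from google.cloud import monitoring_v3

# Placeholder project ID -- replace with your own.
project_name = "projects/my-project-id"

client = monitoring_v3.MetricServiceClient()

# List the metric descriptors Cloud Storage currently exposes
# (object_count, total_bytes, total_byte_seconds, ...).
descriptors = client.list_metric_descriptors(
    request={
        "name": project_name,
        "filter": 'metric.type = starts_with("storage.googleapis.com/storage")',
    }
)
for descriptor in descriptors:
    print(descriptor.type, "-", descriptor.description)

Any type printed here can be charted in Metrics Explorer and saved to a custom dashboard; anything not in this list (such as per-object retention expiry) simply is not published by Cloud Storage today.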

Related

BigQuery Data Transfer Service for app-specific Google Play data

Can the Google BigQuery Data Transfer Service transfer data for specific apps automatically?
For example, I have 10 apps in my Google Play Console, but I only want to transfer data for 3 of them to BigQuery. Is it possible to make this work, or is there another approach?
Also, I just read the pricing doc: the monthly charge is $25 per unique package name in the Installs_country table.
I don't quite understand how to calculate my cost from that example.
Thank you.
For your requirement, you can download the reports for a specific app to Cloud Storage by selecting that app in the Google Play Console, and then send them to BigQuery using the BigQuery Data Transfer Service. As for the Google Play cost calculation, it is $25 per month per unique package name stored in the Installs_country table in BigQuery.
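To illustrate the calculation: if only 3 of your apps end up with rows in the Installs_country table, the transfer charge would be 3 unique package names × $25 = $75 per month; BigQuery storage and query costs are billed separately on top of that.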
For selecting the specific app, follow the steps given below:
Go to the Play Console.
Click on Download Reports and select the type of report you want.
Under "Select an application," type and select the app for which you want to get the data.
Select the year and month for which you want to download the report.
If you are storing data in a Cloud Storage bucket, that will incur cost, and the pricing for data transfer from one storage bucket to another can be checked in this link. Since you are also storing and querying data in BigQuery, that will be chargeable as well; for BigQuery pricing details you can check this documentation. You can use the Billing Calculator to estimate your costs.

Creating an alert when no data is uploaded to a BigQuery table in GCP

I have a requirement to send an email notification whenever no data is being inserted into my BigQuery table. For this, I am using the Logging and Alerting mechanism, but I am still not receiving any email. Here are the steps I followed:
I wrote a query in Logs Explorer as below:
Then I created a log-based metric for those logs with metric type COUNTER, and in the filter section I used the above query.
Next I created a policy under Monitoring > Alerting (screenshot attached). The alerting policy I selected is based on the log-based metric I created before.
And then a trigger as below:
And in the Notification channel, added my Email ID.
Can someone please tell me if I am missing something? My requirement is to receive an alert when no data has been inserted into a BigQuery table for more than a day.
Also, I can see in Metrics Explorer that the metric I created is not ACTIVE. Why is that?
As mentioned in GCP docs:
Metric absence conditions require at least one successful measurement — one that retrieves data — within the maximum duration window after the policy was installed or modified.
For example, suppose you set the duration window in a metric-absence policy to 30 minutes. The condition isn't met if the subsystem that writes metric data has never written a data point. The subsystem needs to output at least one data point and then fail to output additional data points for 30 minutes.
Meaning, you will need at least one data point (from an insert job) before an incident can be created for the metric being absent.
There are two options:
Create an artificial log entry that matches the metric's filter, so the metric gets started and has at least one time series and data point (see the sketch after this list).
Run an insert job that would match the log-based metric that was created to get the metric started.
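As a rough sketch of the first option, you could write one matching entry with the Python Cloud Logging client; the log name "bq_insert_log" and the payload fields are hypothetical and should be adjusted to whatever your log-based metric's filter actually matches:

from google.cloud import logging

client = logging.Client()
# Hypothetical log name -- use the one referenced by your metric's filter.
logger = client.logger("bq_insert_log")

# Write a single artificial entry so the metric gets its first data point
# and the absence condition has a time series to monitor.
logger.log_struct({"table": "my_dataset.my_table", "note": "artificial seed entry"})

Once at least one data point exists, the metric should show up as active in Metrics Explorer and the absence condition can start creating incidents.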
As for your last question, the metric you created is not active because no data points have been written to it within the previous 24 hours. As mentioned above, the metric must have at least one data point written to it.
Refer to custom metrics quota for more info.

GCP BigQuery resource level costs aggregation

I am working on a solution to get resource-level costs from BigQuery.
We have recently moved to the BigQuery billing export for collecting billing information, since the CSV/JSON export has been deprecated by GCP.
However, BigQuery provides only SKU-level billing information, and our application needs to collect resource-level billing information as well.
Usage reports can be exported as CSV files to Cloud Storage, and they contain a measurement ID, resource ID, and usage units per resource. But in BigQuery we don't have a measurement ID to match against the usage reports and get the resource-level billing information.
The billing result from BigQuery is as follows.
I need information on how we can collect resource-level costs along with the BigQuery line items.
The following query gives you the user email and total bytes processed. Since we know the total bytes BigQuery has processed over the last N days, we can estimate the cost of BigQuery analysis accordingly:
SELECT
  user_email,
  SUM(total_bytes_processed) AS total_bytes_processed
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time BETWEEN TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 10 DAY)
      AND CURRENT_TIMESTAMP()
  AND job_type = "QUERY"
  AND end_time BETWEEN TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 10 DAY)
      AND CURRENT_TIMESTAMP()
GROUP BY user_email
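To turn that into an estimated cost, multiply the summed bytes by your region's on-demand analysis rate; purely for illustration, at a rate of $5 per TiB a user who processed 2 TiB over those 10 days would come to roughly 2 × $5 = $10 (check the BigQuery pricing page for the rate that actually applies to your project).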
The only way, for now, is to use labels to get resource-level costs in GCP, which is also what GCP recommends:
Use labels to gain visibility into GCP resource usage and spending
https://cloud.google.com/blog/topics/cost-management/use-labels-to-gain-visibility-into-gcp-resource-usage-and-spending
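As a minimal sketch of the label approach, this applies labels to a Cloud Storage bucket with the Python client; the bucket name and label keys are hypothetical:

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-app-assets")  # hypothetical bucket name

# Label the bucket; these key/value pairs then appear in the labels column
# of the billing export, so costs can be grouped per resource.
bucket.labels = {"cost-center": "analytics", "app": "my-app"}
bucket.patch()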

Google Stackdriver custom metrics - data retention period

I'm using GCP Stackdriver custom metrics and have created a few dashboard graphs to show the traffic on the system. The problem is that the graphing system keeps the data for only a few weeks - not forever.
From the Stackdriver documentation:
See Quotas and limits for limits on the number of custom metrics and the number of active time series, and for the data retention period. If you wish to keep your metric data beyond the retention period, you must manually copy the data to another location, such as Cloud Storage or BigQuery.
Let's say we decide to use Cloud Storage as the container to store the data long term.
Questions:
How does this "manual data copy" work? Do I just write the same data to two places (Cloud Storage and Stackdriver)?
How does Stackdriver connect to the storage and generate graphs from it?
You can use Stackdriver's Logs Export feature to export your logs to any of three sinks: Google Cloud Storage, BigQuery, or a Pub/Sub topic. Here are the instructions on how to export Stackdriver logs. You are not writing logs to two places in real time; you are exporting logs based on the filters you set.
One thing to keep in mind is that you will not be able to use Stackdriver graphs or alerting tools with the exported logs.
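For reference, here is a minimal sketch of creating a Cloud Storage export sink with the Python logging client; the sink name, filter, and bucket are placeholders, and you still need to grant the sink's writer identity permission to write to the bucket:

from google.cloud import logging

client = logging.Client()
sink = client.sink(
    "archive-to-gcs",                                        # placeholder sink name
    filter_="severity >= INFO",                              # export filter
    destination="storage.googleapis.com/my-metrics-archive", # placeholder bucket
)
sink.create()
print("Created sink:", sink.name)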
In addition, if you export the logs into BigQuery, you can plug a Data Studio graph on top of them to see your metrics.
You can also do this with the Cloud Storage export, but it's less immediate and less handy.
I'd suggest this guide on creating a pipeline to export metrics to BigQuery for long-term storage and analytics:
https://cloud.google.com/solutions/stackdriver-monitoring-metric-export

CloudWatch - Metrics are expiring

I currently have a bunch of custom metrics spread across multiple regions in our AWS account.
I thought I was going crazy, but I have now confirmed that a metric I created a while ago expires when it receives no data for a certain time period (possibly 2 weeks).
Here's my setup.
I create a new metric from my log entries - which has no expiry date;
I then go to the main CloudWatch page, then to Metrics, to view the metrics (I understand this will only display a metric when there have been hits that match the metric rule).
About 2 weeks ago I had 9 metrics listed under my "Custom Namespaces", and I now have 8 - as if it does not keep all the data:
As far as I'm aware, all my metrics should stay in place (unless I remove them). However, it seems that if they are not hit consistently, the data "expires". Is that correct? If so, how are you meant to track historical data?
Thanks
CloudWatch will remove a metric from search if no new data has been published for that metric in the last 2 weeks.
This is mentioned in passing in the FAQ for EC2 metrics, but I think it applies to all metrics.
From the 'Will I lose the metrics data if I disable monitoring for an Amazon EC2 instance?' question in the FAQ:
CloudWatch console limits the search of metrics to 2 weeks after a metric is last ingested to ensure that the most up to date instances are shown in your namespace.
Your data is still there, however; it just adheres to a different retention policy.
You can still get your data if you know what the metric name is. If you added your metric to a dashboard, it will still be visible there. You can use the CloudWatch PutDashboard API to add the metric to a dashboard, or use the CloudWatch GetMetricStatistics API to get the raw data.
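For example, here is a minimal boto3 sketch that pulls the raw data for a metric that no longer shows up in the console search; the namespace, metric name, and region are hypothetical:

import datetime
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Fetch hourly sums for the last 30 days, even if the metric has not
# received new data in over 2 weeks.
response = cloudwatch.get_metric_statistics(
    Namespace="MyApp",
    MetricName="ErrorCount",
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(days=30),
    EndTime=datetime.datetime.utcnow(),
    Period=3600,
    Statistics=["Sum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])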