Stackdriver log-based metrics do not display the values as reported by logging - google-cloud-platform

My goal is to base my metrics directly on log values. The problem is that when I display them as a graph, the values appear as a distribution rather than the raw values from the logs. How can I change this so that the chart displays the values from the logs?

Unfortunately, Stackdriver doesn't work that way; you shouldn't expect Stackdriver to show you "52" in this case. Have a look at the official documentation: logs-based metrics can be one of two metric types, counter or distribution. Counter metrics count the number of log entries matching a filter, and distribution metrics accumulate numeric data (such as latencies) extracted from matching entries; neither replays the raw logged values. You would have to choose another tool for this task.
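As a minimal sketch, creating such a counter metric with the Python logging client looks like this (the project ID, metric name, and filter are placeholders):

from google.cloud import logging

client = logging.Client(project="my-project-id")  # placeholder project

# A counter log-based metric counts matching entries; it does not
# store or replay the logged value itself.
metric = client.metric(
    "error_count",  # placeholder metric name
    filter_='severity>=ERROR AND resource.type="gce_instance"',
    description="Number of error log entries",
)
metric.create()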

Assuming you created this as a distribution metric, I would expect this to work. Please take a look at this blog post to make sure you're using aligners and aggregators correctly.
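For reference, here is a hedged sketch of how aligners and reducers are applied when reading a distribution log-based metric through the Monitoring API in Python. User-defined log-based metrics live under the logging.googleapis.com/user/ prefix; the metric name my_latency_metric and the project ID are placeholders:

import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-project-id"  # placeholder

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"start_time": {"seconds": now - 3600}, "end_time": {"seconds": now}}
)

# The aligner collapses each raw distribution into one value per period;
# the reducer then combines the aligned series across resources.
aggregation = monitoring_v3.Aggregation(
    {
        "alignment_period": {"seconds": 300},
        "per_series_aligner": monitoring_v3.Aggregation.Aligner.ALIGN_PERCENTILE_50,
        "cross_series_reducer": monitoring_v3.Aggregation.Reducer.REDUCE_MEAN,
    }
)

results = client.list_time_series(
    request={
        "name": project_name,
        "filter": 'metric.type = "logging.googleapis.com/user/my_latency_metric"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
        "aggregation": aggregation,
    }
)
for series in results:
    for point in series.points:
        print(point.interval.end_time, point.value.double_value)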


One or more points were written more frequently than the maximum sampling period configured for the metric

Background
I have a website deployed on multiple machines. I want to create a Google custom metric that reports its throughput: how many calls were served.
The idea was to create a custom metric that collects information about served requests and, once per minute, writes that information to the custom metric. So, on each machine, this code runs at most once per minute, but the process runs on every machine in my cluster.
Running the code locally works perfectly.
The problem
I'm getting this error: Grpc.Core.RpcException:
Status(StatusCode=InvalidArgument, Detail="One or more TimeSeries
could not be written: One or more points were written more frequently
than the maximum sampling period configured for the metric. {Metric:
custom.googleapis.com/web/2xx, Timestamps: {Youngest Existing:
'2019/09/28-23:58:59.000', New: '2019/09/28-23:59:02.000'}}:
timeSeries[0]; One or more points were written more frequently than
the maximum sampling period configured for the metric. {Metric:
custom.googleapis.com/web/4xx, Timestamps: {Youngest Existing:
'2019/09/28-23:58:59.000', New: '2019/09/28-23:59:02.000'}}:
timeSeries[1]")
Then, I was reading in the custom metric limitations that:
Rate at which data can be written to a single time series = one point per minute
I was expecting Google Cloud custom metrics to handle these concurrency issues for me.
Given their limitations, the only option I see for implementing real-time monitoring is to add another application that collects the information from all machines and writes it to a custom metric. That sounds like too much work for such a common use case.
What am I missing?
Add the machine name as a label on the metric, so that each machine writes to its own time series and you get per-machine metrics.
To sum these metrics, go to Stackdriver > Metrics Explorer, group the series by project ID or by a label, for example, and then apply a SUM aggregation.
https://cloud.google.com/monitoring/charts/metrics-selector#alignment
You can save the chart in a custom dashboard.
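To illustrate the label approach, here is a rough Python sketch (the question uses the C# client, but the pattern is the same) of each machine writing its own time series by including its hostname as a metric label, so the one-point-per-minute limit applies per machine rather than per metric. The label name "machine" is an arbitrary choice:

import socket
import time
from google.cloud import monitoring_v3

project_id = "my-project-id"  # placeholder
client = monitoring_v3.MetricServiceClient()

series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/web/2xx"
# A distinct label value per machine makes each series unique, so
# concurrent writers on different machines no longer collide.
series.metric.labels["machine"] = socket.gethostname()
series.resource.type = "global"
series.resource.labels["project_id"] = project_id

point = monitoring_v3.Point(
    {
        "interval": {"end_time": {"seconds": int(time.time())}},
        "value": {"int64_value": 42},  # requests served in the last minute
    }
)
series.points = [point]

client.create_time_series(name=f"projects/{project_id}", time_series=[series])

In Metrics Explorer the per-machine series can then be summed back together with a SUM aggregation, as described above.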

Stackdriver alert for % with label value?

I have a custom metric of type Count, which measures the count of a particular operation. It has a label called "success", which can be either "Success" or "Failure". I'd like to create an alert condition if the Failure % is above a certain threshold, perhaps 20%. Is that possible? If so, how would I do that? Or, do I need to change the metric itself to support this, and if so, how?
You can customize your Stackdriver alerting by targeting these labels in your condition triggers, where you can set the percentage of time series that must violate the condition to whatever you want, such as 20%. You can follow this guide to accomplish what you want.
I think what I may need is to create a "metric ratio":
https://cloud.google.com/monitoring/alerts/policies-in-json#json-ratio
With the API, you can create a policy that computes the ratio of two
related metrics and fires when that ratio crosses a threshold.
But somewhat unfortunately:
Note: You can't create ratio-based policies through the UI.
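As an untested sketch of that ratio approach through the Python client (the metric type custom.googleapis.com/my_op_count and the label name "success" are placeholders for your own):

from google.cloud import monitoring_v3

client = monitoring_v3.AlertPolicyServiceClient()

# Shared aggregation for numerator and denominator.
agg = {
    "alignment_period": {"seconds": 300},
    "per_series_aligner": monitoring_v3.Aggregation.Aligner.ALIGN_SUM,
    "cross_series_reducer": monitoring_v3.Aggregation.Reducer.REDUCE_SUM,
}

policy = monitoring_v3.AlertPolicy(
    {
        "display_name": "Failure ratio above 20%",
        "combiner": monitoring_v3.AlertPolicy.ConditionCombinerType.AND,
        "conditions": [
            {
                "display_name": "failures / total > 0.2",
                "condition_threshold": {
                    # Numerator: only the failing operations.
                    "filter": 'metric.type = "custom.googleapis.com/my_op_count" '
                              'metric.labels.success = "Failure"',
                    "aggregations": [agg],
                    # Denominator: all operations, success and failure.
                    "denominator_filter": 'metric.type = "custom.googleapis.com/my_op_count"',
                    "denominator_aggregations": [agg],
                    "comparison": monitoring_v3.ComparisonType.COMPARISON_GT,
                    "threshold_value": 0.2,
                    "duration": {"seconds": 0},
                },
            }
        ],
    }
)

client.create_alert_policy(name="projects/my-project-id", alert_policy=policy)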

AWS Pinpoint: How to view custom metrics

It is clear from the documentation that I can add custom metrics for a custom event.
How do I view these metrics in the Pinpoint console? From the Pinpoint console, it is obvious how to view attributes: I can go to Analytics > Events, select my custom event, and narrow down the events by whatever attributes I desire. I am asking how to view metrics. To be clear, metrics differ from attributes in that they are continuous values, whereas attributes are discrete. The documentation says that I can do this. See below how I can filter by attributes manually (attribute is circled):
See the docs on custom events here: https://docs.aws.amazon.com/pinpoint/latest/developerguide/integrate-events.html
Similarly, creating a funnel only allows filtering for attributes. How can I filter for metrics?
Thank you for your time!
When I first asked this question, AWS let you record metrics with the Swift SDK but gave you no way to view them in Pinpoint, which was absurd: you could record metrics but never see them. What was the point? I asked in the AWS forums, and a couple of months later they responded with something along the lines of "Please wait - coming soon."
This feature is now available, whereas before it simply wasn't.
Go to Pinpoint, your project, then click the Analytics drop-down menu, then click events. You can see that you can sort by metric. If you look at my outdated screenshot above, you'll see that this was not an option.

How to get Spike Alert on Stackdriver?

I have been trying to register an alert on spikes in some metrics using Stackdriver. Here are the conditions and details:
[1] If there is a sudden spike and the 500s cross 20
[2] If the total number of requests (200s or others) crosses 3000 over 5 minutes
To achieve [1], I set the aggregation to mean and the aligner to mean (a sum aligner doesn't seem to work, and I don't understand why). This query works if the average of requests over 5 minutes is over 20 (which is the expected behavior), but I am not able to catch a single spike, which is the requirement.
Again, for [2], the average over a certain duration works, but the summation of requests doesn't seem to work.
Is there a way of achieving either or both of these requirements?
PS: Please let me know if you need more data or snippets of the dashboard to understand what I have done till now. I will go ahead and add some accordingly.
I do not believe there is aggregation when trying to set up an alert. As an example for [1], please go to
Stackdriver Monitoring
Alerting
Create a policy and add your conditions
Select your Resource Type
Select your metric, condition and threshold = 20
Response_code_class = 500
Save condition
The alerting UI has changed since the previous answer was written. You can now specify aggregations when creating alerting policies. That said, I don't think you want mean; that's going to smooth out your curve which will defeat your intended use case. A simple threshold alert with a short duration (even zero) ought to do it, I think.
For your second case, you ought to be able to compute a five-minute sum and alert on that. If you still can't get it to work, respond here or file a support ticket and we'll see how we can help you.
Aaron Sher, Stackdriver engineer
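To make the second case concrete, here is a hedged Python sketch of such a condition: a five-minute ALIGN_SUM with zero duration, so a single window crossing 3000 fires the alert. The metric type in the filter is a placeholder; substitute your own request-count metric:

from google.cloud import monitoring_v3

client = monitoring_v3.AlertPolicyServiceClient()

policy = monitoring_v3.AlertPolicy(
    {
        "display_name": "Request volume spike",
        "combiner": monitoring_v3.AlertPolicy.ConditionCombinerType.AND,
        "conditions": [
            {
                "display_name": "Requests > 3000 over 5 min",
                "condition_threshold": {
                    "filter": 'metric.type = "custom.googleapis.com/http/request_count"',
                    "aggregations": [
                        {
                            # Sum each series over a 5-minute window instead of
                            # averaging it, so totals rather than means are compared.
                            "alignment_period": {"seconds": 300},
                            "per_series_aligner": monitoring_v3.Aggregation.Aligner.ALIGN_SUM,
                            "cross_series_reducer": monitoring_v3.Aggregation.Reducer.REDUCE_SUM,
                        }
                    ],
                    "comparison": monitoring_v3.ComparisonType.COMPARISON_GT,
                    "threshold_value": 3000,
                    # Zero duration: fire as soon as one window crosses the threshold.
                    "duration": {"seconds": 0},
                },
            }
        ],
    }
)

client.create_alert_policy(name="projects/my-project-id", alert_policy=policy)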

Count number of GCP log entries during a specified time

Is it possible to count the number of occurrences of a specific log message over a specific period of time in GCP Stackdriver Logging? I want to answer the question "How many times did this event occur during this time period?" Basically, I would like the integral of the curve in the chart below.
It doesn't have to be a moving window; this time it's more of a one-time task. A count aggregator or similar on the advanced log query would also work, if that were available.
The query looks like this:
(resource.type="container"
logName="projects/xyz-142842/logs/drs"
"Publish Message for updated entity"
) AND (timestamp>="2018-04-25T06:20:53Z" timestamp<="2018-04-26T06:20:53Z")
My log-based metric for the graph above looks like this:
My dashboard is set up like this:
I ended up building stacked bars.
With the correct zoom level I can sum up the number of occurrences easily enough. It would have been a nice feature to get the count (the integral) directly from a graph, but this works for now.
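Since this is a one-time task, one programmatic alternative is to run the same filter through the Cloud Logging API client and count the matching entries. A rough Python sketch, assuming the google-cloud-logging library (this pages through every matching entry, so it can be slow for large result sets):

from google.cloud import logging

client = logging.Client(project="xyz-142842")

log_filter = (
    'resource.type="container" '
    'logName="projects/xyz-142842/logs/drs" '
    '"Publish Message for updated entity" '
    'timestamp>="2018-04-25T06:20:53Z" timestamp<="2018-04-26T06:20:53Z"'
)

# Page through the matching entries and count them.
count = sum(1 for _ in client.list_entries(filter_=log_filter))
print(f"Occurrences: {count}")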
There are multiple ways to do this; the two that I have seen actually work and that apply to your situation are the following:
Making use of Logs-based Metrics. They can, for example, record the number of log entries containing particular error messages, or they can extract latency information reported in log entries.
Stackdriver Logging logs-based metrics can be one of two metric types: counter or distribution. [...] Counter metrics count the number of log entries matching an advanced logs filter. [...] Distribution metrics accumulate numeric data from log entries matching a filter.
I would advise you to go through the documentation to check that this feature completely covers your use case.
You can export your logs to BigQuery; once they are there, you can use GROUP BY, SELECT, and all the other tools that BigQuery offers.
Here you can find a very minimal step-by-step guide on exporting the logs and analyzing audit logs using BigQuery, but I am sure you can find many more resources online.
The two products and approaches are quite different; I would say BigQuery is more flexible, but also more complex to configure and to use properly. If you find a third, better way, please update your question with that information.
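As an illustration of the BigQuery route, assuming a log sink that exports into a dataset (the dataset and table names below are hypothetical; sinks typically create date-sharded tables named after the log):

from google.cloud import bigquery

client = bigquery.Client(project="xyz-142842")

# Dataset and table names are hypothetical; adjust to what your sink created.
query = """
    SELECT COUNT(*) AS occurrences
    FROM `xyz-142842.my_log_dataset.drs_*`
    WHERE textPayload LIKE '%Publish Message for updated entity%'
      AND timestamp BETWEEN TIMESTAMP('2018-04-25 06:20:53')
                        AND TIMESTAMP('2018-04-26 06:20:53')
"""

for row in client.query(query).result():
    print(row.occurrences)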
First you have to create a metric:
Go to the Logs Explorer.
Type your query.
Go to Actions >> Create Metric.
Then, in the Monitoring dashboard:
Create a chart.
Select the resource and metric.
Go to "Advanced" and provide the details given below:
Preprocessing step: Rate
Alignment function: count
Alignment period: 1
Alignment unit: minutes
Group by: log
Group by function: count
This will give you a bar-chart visualisation with the count of the desired events.
There is one more option.
You can read your custom metric using the Stackdriver Monitoring API (https://cloud.google.com/monitoring/api/v3/) and process it in a script with whatever aggregation you need.
If you are working with Python, you may look into the gcloud Python library: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/monitoring
It would be a very simple script, and you can stream the results of the calculation into a BigQuery table and use it in your dashboard.
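A minimal sketch of that script approach, assuming a log-based counter metric named my_log_metric and a hypothetical BigQuery table for the results:

import time
from google.cloud import bigquery, monitoring_v3

metric_client = monitoring_v3.MetricServiceClient()
bq_client = bigquery.Client()

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"start_time": {"seconds": now - 86400}, "end_time": {"seconds": now}}
)

# Read the raw points and aggregate client-side with whatever logic you need.
results = metric_client.list_time_series(
    request={
        "name": "projects/my-project-id",  # placeholder
        "filter": 'metric.type = "logging.googleapis.com/user/my_log_metric"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
total = sum(point.value.int64_value for series in results for point in series.points)

# Stream the computed count into a (hypothetical) BigQuery table for dashboards.
bq_client.insert_rows_json(
    "my-project-id.monitoring.daily_counts",
    [{"window_end": now, "occurrences": total}],
)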
With PacketAI, you can send logs of arbitrary formats, including from GCP. The logs dashboard will then automatically parse and group them into patterns, as shown in this video: https://streamable.com/n50kr8
Counts and trends of the different log patterns are also displayed.
Disclaimer: I work for PacketAI