Why can I choose severity as a filter, but not textPayload, when using log-based metrics in Metrics Explorer - google-cloud-platform

I am trying to extract some metrics using Metrics Explorer. I select the metric Log Entries and the resource GKE.
As far as I understand from the documentation, both the severity and textPayload fields are first-class citizens, yet I am able to select the severity field (and also log) but not textPayload (both are under the metric label group).
Is there a way to filter by textPayload?
Is there a reason why I can't filter by this field specifically? (I could not find any documentation explaining why some fields are accessible and others are not.)

Severity and logName are predefined labels for metrics, which is why you can find them in the list. You can find the reference for this in the documentation at https://cloud.google.com/logging/docs/logs-based-metrics/labels under "Default labels". Labels are available in the filter and group-by fields.
If you want to use textPayload, or part of it, as a filter or group-by, you can create a user-defined metric and define custom labels based on textPayload. Then, when you select the user-defined metric in Metrics Explorer, you will find your labels there.
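A minimal sketch of that approach, assuming the `gcloud logging metrics create --config-from-file` flow; the metric name, filter, and regex below are hypothetical placeholders, and the label-extractor syntax (`EXTRACT`/`REGEXP_EXTRACT`) follows the logs-based-metrics labels documentation:

```shell
# Hypothetical metric config: counts matching entries and extracts a
# custom label ("payload_value") out of textPayload with a regex.
cat > metric.yaml <<'EOF'
name: textpayload_status
description: Entries labeled by a value pulled out of textPayload
filter: resource.type="k8s_container"
labelExtractors:
  payload_value: REGEXP_EXTRACT(textPayload, "status=(\\w+)")
metricDescriptor:
  metricKind: DELTA
  valueType: INT64
  labels:
  - key: payload_value
    valueType: STRING
EOF
# Create it (requires gcloud auth; shown here rather than run):
#   gcloud logging metrics create textpayload_status --config-from-file=metric.yaml
grep -c 'payload_value' metric.yaml   # the label appears twice in the config -> prints 2
```

Once the metric exists, `payload_value` shows up alongside the default labels in Metrics Explorer's filter and group-by fields.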

Related

How do I build a query using MQL to fetch data from Log based metric?

I have a log-based metric where the resource type is not defined. How do I build an MQL query for the following log-based metric:
logging.googleapis.com/user/MY_METRIC
If I use the "configuration view" in Metrics Explorer, the data shows. However, when I switch to MQL, I can't seem to create the correct filter.
fetch logging.googleapis.com/user/MY_METRIC gives me syntax errors because the resource type is not defined.
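For what it's worth, MQL's fetch can take the monitored-resource type together with the metric using the `::` form. A sketch, assuming the metric's time series were written against `k8s_container` (substitute whatever resource type your log entries actually carry, e.g. `global` or `gce_instance`):

```
fetch k8s_container::logging.googleapis.com/user/MY_METRIC
| align rate(1m)
| every 1m
```

The "configuration view" can resolve the resource type for you, which is why it works there but a bare fetch of the metric alone does not.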

Google Cloud Billing - Filter By Label Not Showing

I added resource labels to a few VMs to be able to pull a more granular billing breakdown by label. However, when I go to the billing report, I don't see any option to filter by label. Is this a permissions issue, or am I missing something?
If I embed "label=" in the URL, the label option shows, but it still doesn't retrieve the matching key-value pair.
Based on my analysis, your issue could be due to the following reasons:
The official documentation says:
When filtering your billing breakdown by label keys, you are not able to select labels applied to a project. You can select other user-created labels that you set up and applied to Google Cloud services.
This might be why you are unable to filter by the label.
Google does not recommend creating large numbers of unique labels, such as timestamps or individual values for every API call. Refer to these common use cases for labels, and refer to this link for label requirements.
You need the "resourcemanager.projects.get" permission to view labels, and "resourcemanager.projects.update" to add or modify them.
Refer to this link to create the label.

GCP log explorer filter for list item count more than 1

I am trying to write a filter in GCP Logs Explorer that can look at the count of the values of an attribute.
Example:
I am trying to find logs like the one below, which has two items in the "referencedTables" attribute.
GCP Log Explorer Screenshot
I have tried the options below, which don't work:
protoPayload.metadata.jobChange.job.jobStats.queryStats.referencedTables.*.count>1
protoPayload.metadata.jobChange.job.jobStats.queryStats.referencedTables.count>1
I also tried a regex looking for the "tables" keyword occurring twice:
protoPayload.metadata.jobChange.job.jobStats.queryStats.referencedTable=~"(\tables+::\tables+))"
I also tried a regex querying the second item, which would imply there is more than one item:
protoPayload.metadata.jobChange.job.jobStats.queryStats.referencedTables1=~"^[A-Za-z0-9_.]+$"
Note that these are BigQuery audit logs, which are written to GCP's logging service when you run "insert into ... select" type queries in BigQuery.
I don't think you can use logging filters to filter across log entries, only within a single log entry.
One solution to your problem is log-based metrics, where you'd create a metric by extracting values from logs; you'd then have to use MQL to query (e.g. count) the metric.
A simpler (albeit ad hoc) solution is to use gcloud logging read to --filter the logs (possibly --format the results as JSON for easier processing) and then pipe the results into a tool like jq, where you can count them.
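A minimal sketch of that pipeline; the attribute path matches the one in the question, but the sample entries below are fabricated stand-ins for what `gcloud logging read ... --format=json` would return:

```shell
# In practice you'd fetch real entries (requires auth), e.g.:
#   gcloud logging read 'protoPayload.metadata.jobChange.job.jobStats.queryStats.referencedTables:*' --format=json
# Simulated output: two entries, only one of which references two tables.
entries='[
  {"protoPayload":{"metadata":{"jobChange":{"job":{"jobStats":{"queryStats":{"referencedTables":["p.d.t1","p.d.t2"]}}}}}}},
  {"protoPayload":{"metadata":{"jobChange":{"job":{"jobStats":{"queryStats":{"referencedTables":["p.d.t1"]}}}}}}}
]'
# Keep only entries whose referencedTables list has more than one item, then count them.
echo "$entries" | jq '[.[] | select(
  .protoPayload.metadata.jobChange.job.jobStats.queryStats.referencedTables | length > 1
)] | length'   # → 1
```

The `length > 1` check is exactly the "more than one list item" condition that the logging filter language cannot express.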

Stackdriver log-based metrics do not display the values as reported by logging

My goal is to base my metrics directly on log values. The problem is that when I display them as a graph, they appear as a distribution. How can I change this so that it displays the values from the logs?
Unfortunately, Stackdriver doesn't work that way; you shouldn't expect Stackdriver to show "52" in this case. Have a look at the official documentation: logs-based metrics can be one of two metric types, counter or distribution; counter metrics count the number of log entries matching a filter, and distribution metrics are for tracking values such as latencies. You would have to choose another tool for this task.
Assuming you created this as a distribution metric, I would expect this to work. Please take a look at this blog post to make sure you're using aligners and aggregators correctly.

Count number of GCP log entries during a specified time

Is it possible to count the number of occurrences of a specific log message over a specific period of time in GCP Stackdriver Logging? I want to answer the question, "How many times did this event occur during this time period?" Basically, I would like the integral of the curve in the chart below.
It doesn't have to be a moving window; this is more of a one-time task. A count aggregator or similar on the advanced log query would also work, if such a thing were available.
The query looks like this:
(resource.type="container"
logName="projects/xyz-142842/logs/drs"
"Publish Message for updated entity"
) AND (timestamp>="2018-04-25T06:20:53Z" timestamp<="2018-04-26T06:20:53Z")
My log-based metric for the graph above looks like this:
My dashboard is set up like this:
I ended up building stacked bars.
With the correct zoom level, I can sum up the number of occurrences easily enough. It would have been a nice feature to get the count (the integral) directly from a graph, but this works for now.
There are multiple ways to do this; the two that I have actually seen working and that apply to your situation are the following:
Make use of logs-based metrics. They can, for example, record the number of log entries containing particular error messages, or extract latency information reported in log entries.
Stackdriver Logging logs-based metrics can be one of two metric types: counter or distribution. [...] Counter metrics count the number of log entries matching an advanced logs filter. [...] Distribution metrics accumulate numeric data from log entries matching a filter.
I would advise you to go through the documentation to check that this feature completely covers your use case.
Alternatively, you can export your logs to BigQuery; once you have them there, you can use classical tools like GROUP BY and SELECT and everything else BigQuery offers.
Here you can find a minimal step-by-step guide on how to export the logs and analyze audit logs using BigQuery, but I am sure you can find many resources online.
The products and approaches are really different; I would say that BigQuery is more flexible, but also more complex to configure and use properly. If you find a third, better way, please update your question with that information.
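To illustrate the BigQuery route, a sketch of the counting query; the project, dataset, and table names are hypothetical placeholders for wherever your sink exports the entries, and the message predicate mirrors the text match from the logging filter above:

```sql
-- Count matching exported log entries per day (hypothetical export table)
SELECT
  DATE(timestamp) AS day,
  COUNT(*) AS occurrences
FROM `my-project.my_log_dataset.my_exported_logs`
WHERE textPayload LIKE '%Publish Message for updated entity%'
  AND timestamp BETWEEN TIMESTAMP('2018-04-25T06:20:53Z')
                    AND TIMESTAMP('2018-04-26T06:20:53Z')
GROUP BY day
ORDER BY day
```

Summing `occurrences` over the result gives exactly the "integral" the question asks for.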
First, you have to create a metric:
Go to Logs Explorer.
Type your query.
Go to Actions >> Create Metric.
Then, in the monitoring dashboard:
Create a chart.
Select the resource and metric.
Go to "Advanced" and provide the details as given below:
Preprocessing step: Rate
Alignment function: count
Alignment period: 1
Alignment unit: minutes
Group by: log
Group by function: count
This will give you a bar-chart visualisation with the count of the desired events.
There is one more option.
You can read your custom metric using the Stackdriver Monitoring API (https://cloud.google.com/monitoring/api/v3/) and process it in a script with whatever aggregation you need.
If you are working with Python, you may look into the gcloud Python library: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/monitoring
It would be a very simple script, and you could stream the results of the calculation into a BigQuery table and use it in your dashboard.
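A sketch of the raw REST call such a script would make, using the v3 timeSeries.list endpoint; the project ID, metric name, and interval are placeholders, and the request is only composed and echoed here, since executing it needs a real bearer token (and the filter would need URL-encoding):

```shell
PROJECT="my-project"                              # hypothetical project ID
METRIC="logging.googleapis.com/user/MY_METRIC"    # your custom log-based metric
FILTER="metric.type=\"${METRIC}\""                # URL-encode this in a real call
BASE="https://monitoring.googleapis.com/v3/projects/${PROJECT}/timeSeries"
URL="${BASE}?filter=${FILTER}&interval.startTime=2018-04-25T06:20:53Z&interval.endTime=2018-04-26T06:20:53Z"
# With credentials, you would fetch the points and aggregate (e.g. sum) them yourself:
#   TOKEN=$(gcloud auth print-access-token)
#   curl -s -H "Authorization: Bearer ${TOKEN}" "${URL}"
echo "${URL}"
```

The response contains the raw time-series points, so summing them over the interval gives the one-time count without any dashboard involvement.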
With PacketAI, you can send logs of arbitrary formats, including from GCP. The logs dashboard will then automatically parse and group them into patterns, as shown in this video: https://streamable.com/n50kr8
Counts and trends of different log patterns are also displayed.
Disclaimer: I work for PacketAI.