"Only numeric data" error on logs-base metric GCP - google-cloud-platform

I have used the Ops Agent to send logs to Cloud Logging.
[Image: Uploaded logs]
Then I used these logs to create a logs-based metric with the field name jsonPayload.data.
[Image: Create logs-based metric]
After that, I reviewed the logs of that metric to make sure the input data is correct.
[Image: Review input data]
But in the end, Cloud Monitoring shows the error "Only numeric data can be drawn as a line chart." I checked at the "review logs" step and made sure that the input data is numeric. Can anyone explain this?
[Image: Error]
Sorry, I'm new to Stack Overflow, so I can't upload images directly.

You can see the metric by changing the aligner to a percentile. A distribution-valued logs-based metric can't be drawn directly as a line chart; it needs an aligner (such as a percentile) that reduces each distribution to a single numeric value.

Related

GCP Alert email customization

I am trying to add more text to the 'alert' email that Google sends when something happens in the cloud. For example, I built a log-based alert, and when some threshold is reached, an 'alert' email is sent. I have read the document Using Markdown and variables in documentation templates multiple times, but whatever I put into the 'Documentation' field comes through as plain text, without the actual value of the field :( For example, I receive an email with:
Error Text: ${log.extracted_label.rawLogIndex}
For example, I have a log entry like this:
I want the value of labels\error_stack to be sent in the alert email - how can I do that? Could you add an example?
You can include log data in the email by using variables in the Documentation section of the alert policy. For this, a log-based alert needs to be created.
To answer your question: in order to use a variable in the documentation, you need to create a label for your logs. You can create a label using extractor expressions. These expressions tell Cloud Logging to extract the label's value from the logs you defined.
This document will help you understand labels for log-based metrics, along with examples. You can follow this tutorial to create log-based alerts.
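As a rough sketch of the label-extraction step (the metric name, filter, and the jsonPayload.error_stack field path are assumptions here, adjust them to your actual log structure), a counter metric with a label extractor can be described in YAML and created with `gcloud logging metrics create my-error-metric --config-from-file=metric.yaml`:

```yaml
# metric.yaml -- hypothetical example; adjust the filter and field path
filter: resource.type="gce_instance" AND severity>=ERROR
description: Counts error logs and extracts the stack as a label
labelExtractors:
  error_stack: EXTRACT(jsonPayload.error_stack)
metricDescriptor:
  metricKind: DELTA
  valueType: INT64
  labels:
  - key: error_stack
```

Once that label exists and the alert is based on this metric, `${log.extracted_label.error_stack}` in the Documentation field should be substituted with the extracted value in the email.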

AWS Cloud Watch: Metric Filter Value Extraction from Log

I have api.log logs being sent to CloudWatch, and I want to create a metric filter to extract the userId of the user who tried to access the application.
A sample log entry looks like:
2022-12-06T19:13:59.329Z 2a-b0bc-7a79c791f19c INFO Validated that user fakeId has access to the following gated roles: create, update and delete
And the value I would like to extract is: fakeId
I read through this guide and it seems pretty straightforward, because "user fakeId" is unique to just this line. This guide on metric filter syntax seems to only show examples for extracting values from JSON logs, and this official example list doesn't cover it.
Based on the documentation and a few other stackoverflow answers, I tried these things:
[validation="Validated", that="that", user="user", userId, ...]
[,,user="user",userId,...]
[,,user=user,userId,...]
but none of them worked. Any help would be really appreciated!
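One possible explanation (a sketch, not tested against your log group): in a space-delimited filter pattern, every field from the start of the line has to be accounted for. The sample line begins with a timestamp, a request id, and the level before "Validated", so the patterns above start matching at the wrong position. A pattern that includes those leading fields, attached with the AWS CLI (all names here are placeholders), would look like:

```shell
# Placeholder log group, filter, and metric names; the three leading
# fields (timestamp, request id, level) are matched before "Validated".
aws logs put-metric-filter \
  --log-group-name "/aws/lambda/api" \
  --filter-name "user-access" \
  --filter-pattern '[timestamp, request_id, level, w1="Validated", w2="that", w3="user", userId, ...]' \
  --metric-transformations metricName=UserAccess,metricNamespace=Api,metricValue=1
```

The `...` at the end absorbs the rest of the message, and `$userId` then refers to the seventh field.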

How can I trigger an alert based on log output?

I am using GCP and want to create an alert after not seeing a certain pattern in the output logs of a process.
As an example, my CLI process will output "YYYY-MM-DD HH:MM:SS Successfully checked X" every second.
I want to know when this fails (indicated by no log output). I am collecting logs using the normal GCP log collector.
Can this be done?
I am creating the alerts via the UI at:
https://console.cloud.google.com/monitoring/alerting/policies/create
You can create an alert based on a log-based metric. For that, create a log-based metric in Cloud Logging with the log filter that you want.
Then create an alert: aggregate the metric per minute and alert when the value is below 60.
You won't get an alert for each missing message, but with per-minute aggregation you will get an alert whenever the expected count isn't reached.
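As a sketch of the first step (the metric name is a placeholder, and the filter matches the example output above):

```shell
# Hypothetical metric name; adjust the filter to your process's log entries.
gcloud logging metrics create heartbeat_count \
  --description="Counts 'Successfully checked' heartbeat lines" \
  --log-filter='textPayload:"Successfully checked"'
```

Then, in the alert policy, align this metric with a 1-minute count and set a threshold condition that fires when the value falls below 60.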

Log Buckets from Google

Is it possible to download a Log Storage (log bucket) from Google Cloud Platform, specifically the one created by default? If anyone knows how, can you explain how to do it?
A possible solution is to choose the required logs, select a time period of 1 day, and download them in JSON or CSV format.
Step 1 - From the Logging console, go to advanced filtering mode.
Step 2 - Choose the log type using a filtering query, for example:
resource.type="audited_resource"
logName="projects/xxxxxxxx/logs/cloudaudit.googleapis.com%2Fdata_access"
resource.type="audited_resource"
logName="organizations/xxxxxxxx/logs/cloudaudit.googleapis.com%2Fpolicy"
Step 3 - You can download them in JSON or CSV format.
If you have a huge number of audit logs generated per day, the above will not work. In that case, you need to export the logs to Cloud Storage or BigQuery for further analysis. Please note that Cloud Logging doesn't charge to export logs, but destination charges might apply.
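For the export route, a log sink to a Cloud Storage bucket can be created along these lines (the sink name, bucket, and filter are placeholders):

```shell
# Hypothetical sink and bucket names; routes matching audit logs to Cloud Storage.
gcloud logging sinks create audit-log-sink \
  storage.googleapis.com/my-audit-log-bucket \
  --log-filter='logName:"cloudaudit.googleapis.com"'
```

Note that the sink's writer service account also needs write access to the bucket before entries start flowing.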
As another option, you can use the following gcloud command to download the logs:
gcloud logging read "logName : projects/Your_Project/logs/cloudaudit.googleapis.com%2Factivity" --project=Project_ID --freshness=1d >> test.txt

Filter AWS Cloudwatch Lambda's Log

I have a Lambda function and its logs in Cloudwatch (Log group and Log Stream). Is it possible to filter (in Cloudwatch Management Console) all logs that contain "error"? For example logs containing "Process exited before completing request".
In Log Groups there is a "Search Events" button. You must click on it first.
Then it changes to "Filter Streams".
Now just type your filter and select the start date and time.
So this is kind of a side issue, but it was relevant for us. (I posted this to another answer on StackOverflow but thought it would be relevant to this conversation too)
We've noticed that tailing and searching logs gets really slow after a log group has a lot of Log Streams in it, like when an AWS Lambda Function has had a lot of invocations. This is because "tail" type utilities and searching need to connect to each log stream to run. Log Events get expired and deleted due to the policy you set on the Log Group itself, but the Log Streams never get cleaned up. I made a few little utility scripts to help with that:
https://github.com/four43/aws-cloudwatch-log-clean
Hopefully that saves you some agony over waiting for those logs to get searched.
You can also use CloudWatch Logs Insights (https://aws.amazon.com/about-aws/whats-new/2018/11/announcing-amazon-cloudwatch-logs-insights-fast-interactive-log-analytics/), an AWS extension to CloudWatch Logs that provides a pretty powerful query and analytics tool. However, it can be slow; some of my queries take up to a minute. That's acceptable if you really need the data.
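For the "error" search from the question, an Insights query might look like this (`@timestamp` and `@message` are CloudWatch's built-in fields):

```
fields @timestamp, @message
| filter @message like /error/
| sort @timestamp desc
| limit 50
```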
You could also use a tool I created called SenseLogs. It downloads CloudWatch data to your browser, where you can run queries like the one you asked about. You can either do a full-text search for "error", or, if your log data is structured (JSON), use a JavaScript-like expression language to filter by field, e.g.:
error == 'critical'
Posting an update as CloudWatch has changed since 2016:
In Log Groups there is now a "Search all" button for a full-text search.
Then just type your search.