GCP Alert email customization - google-cloud-platform

I am trying to add more text to the 'alert' email that Google sends when something happens in the cloud. For example, I built a log-based alert, and when some threshold is reached an 'alert' email is sent. I have read the document Using Markdown and variables in documentation templates multiple times, but whatever I put into the 'Documentation' field comes through as plain text, without the actual value of the field :( For example I receive an email with:
Error Text: ${log.extracted_label.rawLogIndex}
For example, I have a log entry like this:
I want the value of labels\error_stack to be sent in the alert email. How can I do that? Could you add an example?

You can include log data in the alert email by using variables in the documentation section of the alert policy. For this, a log-based alert needs to be created.
To answer your question: in order to use a variable in the documentation, you need to create a label for your logs. You can create a label using extractor expressions. These expressions tell Cloud Logging to extract the label's value from the logs you defined.
This document will help you understand labels for log-based metrics, along with an example. You can follow this tutorial to create log-based alerts.
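As a rough sketch, assuming the log field you want is labels.error_stack and you name the extracted label error_stack (both names taken from the question, purely illustrative): you would add an extractor expression in the alert policy's label section and then reference the extracted label from the Documentation field, along these lines:

```
# Label extractor (alert policy labels section), label name: error_stack
EXTRACT(labels.error_stack)

# Documentation field of the alert policy
Error stack: ${log.extracted_label.error_stack}
```

Note the variable only resolves if the label name in the documentation template exactly matches the label you defined on the alert; otherwise it is rendered as literal text, which matches the symptom described above.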

Related

AWS Cloud Watch: Metric Filter Value Extraction from Log

I have api.log logs being sent to CloudWatch, and I want to create a metric filter to extract the userId of the user who tried to access the application.
A sample log entry looks like:
2022-12-06T19:13:59.329Z 2a-b0bc-7a79c791f19c INFO Validated that user fakeId has access to the following gated roles: create, update and delete
And the value I would like to extract is: fakeId
I read through this guide and it seems pretty straightforward, because the phrase around the user ID is unique to just this line. This guide on metric filter syntax seems to only show examples for extracting values from JSON logs, and this official example list doesn't cover it.
Based on the documentation and a few other stackoverflow answers, I tried these things:
[validation="Validated", that="that", user="user", userId, ...]
[,,user="user",userId,...]
[,,user=user,userId,...]
but none of them worked. Any help would be really appreciated!
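For what it's worth, a space-delimited pattern that might match the sample line above is sketched below. The field names are arbitrary placeholders; the key point is that the leading timestamp and request-id tokens of the line must be consumed by their own fields before the anchor words "Validated that user":

```
[timestamp, request_id, level=INFO, w1="Validated", w2="that", w3="user", userId, ...]
```

The attempts above likely failed because they start matching at "Validated"/"user" without accounting for the two tokens (timestamp and request id) and the log level that precede it.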

How do I check the information sent by the GTM event trigger to Recommendations AI project?

Could someone tell me the most correct way to verify what information is sent by the GTM event tag (see screenshot) to a Recommendations AI project?
Confirm your triggers are not misfiring by using the preview mode in GTM: https://support.google.com/tagmanager/answer/6107056?hl=en
As Eike mentioned, open the developer console's Network tab: https://developer.chrome.com/docs/devtools/network/
Clear the log in the Network tab and select All in the network filters.
Perform an action that triggers your tag.
In the Network tab, new activity is now shown. Select the entry that looks like your endpoint and that has a valid response code (it's almost always 200).
Scroll down to the Query String parameters and assess your payload:

How to write structured logs so that message field is used by GCP Log Viewer?

Question
Is there a way to write structured logs from Cloud Functions such that the message field is automatically displayed as the primary message in GCP Log Viewer?
Using this test, I've found that a field named message is sometimes used. For example, given these logs:
{"severity":"trace","time":"2020-03-09T12:21:13.660044125-07:00","message":"Trace Basic"}
{"severity":"debug","time":"2020-03-09T12:21:13.660047625-07:00","message":"Debug Basic"}
{"severity":"info","time":"2020-03-09T12:21:13.660049425-07:00","message":"Info Basic"}
{"severity":"warn","time":"2020-03-09T12:21:13.660051425-07:00","message":"Warn Basic"}
{"severity":"error","time":"2020-03-09T12:21:13.660053225-07:00","message":"Error Basic"}
{"severity":"info","time":"2020-03-09T12:21:13.660055125-07:00","message":"One line message"}
{"severity":"info","time":"2020-03-09T12:21:13.660057025-07:00","message":"Line 1 of 2\nLine 2 of 2 for zlog.Info"}
{"severity":"info","myIntField":532,"myStringField":"howdy","myMultilineStringField":"Line 1 of 2\nLine 2 of 2 for zlog.Info with fields","time":"2020-03-09T12:21:13.660059925-07:00","message":"With Fields Example"}
The GCP Log Viewer will display something like this:
Notice that the final entry, which should have the message With Fields Example, instead has a top-level message of { "fields": { ... } }.
Extra Detail
Cloud Run has a document describing special structured logging fields (i.e. severity and message) that Stackdriver logging will automatically pick up and use to populate the DEBUG/WARN/INFO/ERROR icon and top level message for the log entry in Stackdriver console.
Special JSON fields in messages
When you provide a structured log as a JSON dictionary, some special
fields are stripped from the jsonPayload and are written to the
corresponding field in the generated LogEntry as described in the
documentation for special fields.
For example, if your JSON includes a severity property, it is removed
from the jsonPayload and appears instead as the log entry's severity.
The message property is used as the main display text of the log entry
if present. For more on special properties read the Logging Resource
section below.
The corresponding document for Cloud Functions has no information about special fields.
This was a bug in Google Cloud Platform logging that was fixed.
https://issuetracker.google.com/issues/151316427
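For reference, emitting logs in this shape from Python can be sketched as below. This is a minimal, hedged example; the function name and extra fields are illustrative, not an official client API. Cloud Logging lifts the severity and message properties into the LogEntry, and any remaining keys stay in jsonPayload:

```python
import json
import sys

def log(severity, message, **fields):
    # Build the structured entry: `severity` and `message` are the special
    # fields Cloud Logging promotes; everything else lands in jsonPayload.
    entry = {"severity": severity, "message": message, **fields}
    line = json.dumps(entry)
    print(line, file=sys.stdout)
    return line

log("INFO", "With Fields Example", myIntField=532, myStringField="howdy")
```

Writing one JSON object per line to stdout like this is enough for the agent to parse it as a structured payload.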

Cloud watch logs prepending timestamp to each line

We have the CloudWatch Logs agent set up, and the streamed logs have a timestamp prepended to the beginning of each line, which we can see after export:
2017-05-23T04:36:02.473Z "message"
Is there any configuration in the CloudWatch Logs agent setup that prevents this timestamp from being prepended to each log entry?
Is there a way to export only the messages of the log events from CloudWatch Logs? We don't want the timestamp in our exported logs.
Thanks
Assuming you are able to retrieve those logs with a Lambda function (Python 3.x), you can use a regular expression to identify the timestamp and write a function to strip it from each event log:
^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z\t
The above will identify a timestamp such as 2019-10-10T22:11:00.123Z followed by a tab.
Here is a simple Python function:
import re

def strip(event_log):
    # Remove a leading CloudWatch timestamp (and the tab after it)
    timestamp = r'^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z\t'
    return re.sub(timestamp, "", event_log)
I don't think it's possible. I needed the same exact behavior you are asking for, and it looks like it isn't possible unless you implement a man-in-the-middle processor that removes the timestamp from every log message, as suggested in the other answer.
Checking the CloudWatch Logs client API in the first place, you are required to send a timestamp with every log message you send to CloudWatch Logs (API reference).
The export-logs-to-S3 task API also has no parameter to control this behavior (API reference).
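To illustrate why the timestamp can't be avoided at ingestion time, here is a sketch of the request body that PutLogEvents expects. The group and stream names are placeholders; with boto3 you would pass this to client.put_log_events(**payload):

```python
import time

def make_put_log_events_payload(group, stream, messages):
    # Every event MUST carry its own millisecond timestamp: the API
    # rejects events without one, so the service always stores a
    # timestamp alongside each message (there is no way to opt out).
    now_ms = int(time.time() * 1000)
    return {
        "logGroupName": group,
        "logStreamName": stream,
        "logEvents": [{"timestamp": now_ms, "message": m} for m in messages],
    }

payload = make_put_log_events_payload("api-logs", "stream-1", ["hello"])
```

Since the timestamp is part of the stored event, stripping it is only possible after export, as in the Lambda approach above.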

Filter AWS Cloudwatch Lambda's Log

I have a Lambda function and its logs in Cloudwatch (Log group and Log Stream). Is it possible to filter (in Cloudwatch Management Console) all logs that contain "error"? For example logs containing "Process exited before completing request".
In Log Groups there is a button "Search Events". You must click on it first.
Then it "changes" to "Filter Streams":
Now you should just type your filter and select the beginning date-time.
So this is kind of a side issue, but it was relevant for us. (I posted this to another answer on StackOverflow but thought it would be relevant to this conversation too)
We've noticed that tailing and searching logs gets really slow after a log group has a lot of Log Streams in it, like when an AWS Lambda Function has had a lot of invocations. This is because "tail" type utilities and searching need to connect to each log stream to run. Log Events get expired and deleted due to the policy you set on the Log Group itself, but the Log Streams never get cleaned up. I made a few little utility scripts to help with that:
https://github.com/four43/aws-cloudwatch-log-clean
Hopefully that saves you some agony waiting for those logs to be searched.
You can also use CloudWatch Logs Insights (https://aws.amazon.com/about-aws/whats-new/2018/11/announcing-amazon-cloudwatch-logs-insights-fast-interactive-log-analytics/), an AWS extension to CloudWatch Logs that provides a pretty powerful query and analytics tool. However, it can be slow; some of my queries take up to a minute, which is okay if you really need that data.
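If you go the Logs Insights route, a query for the case above might look like the following sketch (syntax per the Logs Insights query language; the limit is arbitrary):

```
fields @timestamp, @message
| filter @message like /error/
| sort @timestamp desc
| limit 50
```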
You could also use a tool I created called SenseLogs. It downloads CloudWatch data to your browser, where you can run the kind of queries you ask about. You can either use full-text search for "error", or, if your log data is structured (JSON), a JavaScript-like expression language to filter by field, e.g.:
error == 'critical'
Posting an update as CloudWatch has changed since 2016:
In the Log Groups there is a Search all button for a full-text search
Then just type your search: