Google Cloud Logging shows each log line wrapped in quotes - google-cloud-platform

This is a low-impact question, asked more out of curiosity than anything else. I am logging lines from a Java container on Kubernetes, which emits a JSON format that should be compatible with Google Cloud. The JSON format is as follows:
{"message":"s.s.DefaultSandboxService - Used final flags Flags(true,true,true,false,false,false,None) ","timestamp":{"seconds":1630049408,"nanos":159000000},"severity":"INFO","thread":"application-akka.actor.default-dispatcher-11639"}
The output is shown as follows in Logs Explorer:
Every log line in the output is surrounded by " quotes. Why is that? I don't see similar behavior in other containers.
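For comparison, here is a minimal Python sketch (field values are illustrative) that writes an equivalent structured line to stdout; emitting this from another container makes it easy to check whether Logs Explorer quotes it the same way:

import json
import sys
import time

now = time.time()
entry = {
    "message": "s.s.DefaultSandboxService - Used final flags ...",
    "timestamp": {"seconds": int(now), "nanos": int((now % 1) * 1e9)},
    "severity": "INFO",
    "thread": "application-akka.actor.default-dispatcher-11639",
}
# One JSON object per line; the logging agent should pick it up as a structured payload
sys.stdout.write(json.dumps(entry) + "\n")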

Related

Environment for print Capture on AWS GLUE

Where can I see the output of the print statements in my AWS Glue script? Something like a terminal screen that shows me the messages written by print. I need to print the schema being generated for my data output, check that it matches what I need, and understand where my script is breaking.
Print statements are captured in CloudWatch Logs.
You can view them in the console by clicking the Logs link in the History tab.
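For example, a minimal sketch of a Glue PySpark job (the database and table names are illustrative); the print and printSchema output lands in the job's CloudWatch log stream, which is what the Logs link opens:

from pyspark.context import SparkContext
from awsglue.context import GlueContext

glue_context = GlueContext(SparkContext.getOrCreate())
dyf = glue_context.create_dynamic_frame.from_catalog(database="my_db", table_name="my_table")

# Both lines below are written to the job's CloudWatch log stream
print("Row count:", dyf.count())
dyf.printSchema()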

Get GCP Logging Log Viewer to highlight custom labels

A great feature when debugging Google Cloud Functions calls is the highlighting of the functionName and the execution id (see photo). Is it possible to also get your own logs (generated by the Python logging client) to show up highlighted?
You can use summary fields to show these values
https://cloud.google.com/logging/docs/view/logs-viewer-interface#add_summary_fields
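To get the labels onto your entries in the first place, here is a minimal sketch with the Python logging client (the log name and label key are illustrative); once entries carry labels, the corresponding labels.* field can be added as a summary field:

from google.cloud import logging as gcloud_logging

client = gcloud_logging.Client()
logger = client.logger("my-function-log")  # illustrative log name

logger.log_struct(
    {"message": "processed request", "items": 42},
    severity="INFO",
    labels={"customer_id": "abc-123"},  # appears under labels.* in Logs Explorer
)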

Seeing either request body/parameters or response in the log

In the Google Cloud log, is there a way to have it log the request or the response from the API?
For example, I notice that using the text recognition API under different light settings for the same text will produce a range of very different results, so it would be useful to track these things.
Yes, by writing to the Stackdriver logs in your code. Stackdriver does not log request or response bodies; this is something your code will need to do. Depending on your programming language, this can be as simple as a print statement.
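A minimal sketch of that idea (the field names are illustrative): serialize whatever you want to keep from the request and response into your own log line and print it, so the logging agent picks it up:

import json
import time

def log_vision_call(image_uri, detected_text):
    # One structured line per call; adjust the fields to whatever you want to track
    print(json.dumps({
        "severity": "INFO",
        "message": "text recognition result",
        "image_uri": image_uri,
        "detected_text": detected_text,
        "logged_at": time.time(),
    }))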

CloudWatch Logs prepending timestamp to each line

We have the CloudWatch Logs agent set up, and the streamed logs have a timestamp prepended to the beginning of each line, which we can see after export.
2017-05-23T04:36:02.473Z "message"
Is there any configuration in the CloudWatch Logs agent setup that prevents this timestamp from being prepended to each log entry?
Is there a way to export only the messages of the log events from CloudWatch Logs? We don't want the timestamp on our exported logs.
Thanks
Assume that you are able to retrieve those logs using your Lambda function (Python 3.x).
Then you can use a regular expression to identify the timestamp and write a function to strip it from the event log.
^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z\t
The above will identify the following timestamp: 2019-10-10T22:11:00.123Z
Here is a simple Python function:
import re

def strip(eventLog):
    timestamp = r'^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z\t'
    # Remove the leading timestamp (and the tab after it) from the event message
    result = re.sub(timestamp, "", eventLog)
    return result
I don't think it's possible. I needed the exact same behavior you are asking for, and it looks like it's not possible unless you implement a man-in-the-middle processor to remove the timestamp from every log message, as suggested in the other answer.
Checking the CloudWatch Logs client API in the first place, a timestamp is required with every log message you send to CloudWatch Logs (API reference).
And the export-logs-to-S3 task API also has no parameters to control this behavior (API reference).
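For illustration, a sketch of the underlying call with boto3 (the group and stream names are illustrative); the timestamp field on each event is mandatory, which is why the agent always records one:

import time
import boto3

logs = boto3.client("logs")
logs.put_log_events(
    logGroupName="/my/app",
    logStreamName="my-stream",
    logEvents=[{
        "timestamp": int(time.time() * 1000),  # required: milliseconds since epoch
        "message": "hello from the app",
    }],
)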

Filter AWS CloudWatch Lambda's log

I have a Lambda function and its logs in CloudWatch (log group and log stream). Is it possible to filter (in the CloudWatch Management Console) all logs that contain "error"? For example, logs containing "Process exited before completing request".
In Log Groups there is a button "Search Events". You must click on it first.
Then it "changes" to "Filter Streams":
Now you should just type your filter and select the beginning date-time.
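If you prefer to do the same filter programmatically, here is a minimal sketch with boto3 (the log group name is illustrative):

import boto3

logs = boto3.client("logs")
response = logs.filter_log_events(
    logGroupName="/aws/lambda/my-function",
    filterPattern='"Process exited before completing request"',
)
for event in response["events"]:
    print(event["timestamp"], event["message"])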
So this is kind of a side issue, but it was relevant for us. (I posted this in another answer on Stack Overflow, but thought it would be relevant to this conversation too.)
We've noticed that tailing and searching logs gets really slow after a log group has a lot of Log Streams in it, like when an AWS Lambda Function has had a lot of invocations. This is because "tail" type utilities and searching need to connect to each log stream to run. Log Events get expired and deleted due to the policy you set on the Log Group itself, but the Log Streams never get cleaned up. I made a few little utility scripts to help with that:
https://github.com/four43/aws-cloudwatch-log-clean
Hopefully that saves you some agony waiting for those logs to get searched.
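For reference, a rough sketch of the kind of cleanup those scripts automate (the log group name is illustrative; deleting streams is destructive, so treat this only as a starting point):

import boto3

logs = boto3.client("logs")
group = "/aws/lambda/my-function"
paginator = logs.get_paginator("describe_log_streams")
for page in paginator.paginate(logGroupName=group):
    for stream in page["logStreams"]:
        # Streams whose events have all expired no longer report a lastEventTimestamp
        if "lastEventTimestamp" not in stream:
            logs.delete_log_stream(logGroupName=group, logStreamName=stream["logStreamName"])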
You can also use CloudWatch Logs Insights (https://aws.amazon.com/about-aws/whats-new/2018/11/announcing-amazon-cloudwatch-logs-insights-fast-interactive-log-analytics/), an AWS extension to CloudWatch Logs that gives you a pretty powerful query and analytics tool. However, it can be slow; some of my queries take up to a minute, which is okay if you really need that data.
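A minimal sketch of running such an Insights query through the API (the log group name and query are illustrative):

import time
import boto3

logs = boto3.client("logs")
query = logs.start_query(
    logGroupName="/aws/lambda/my-function",
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString="fields @timestamp, @message | filter @message like /error/ | limit 20",
)
# Poll until the query finishes; as noted above, this can take a while
while True:
    results = logs.get_query_results(queryId=query["queryId"])
    if results["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)
for row in results.get("results", []):
    print(row)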
You could also use a tool I created called SenseLogs. It downloads CloudWatch data to your browser, where you can do queries like the one you ask about. You can either use full text and search for "error", or, if your log data is structured (JSON), you can use a JavaScript-like expression language to filter by field, e.g.:
error == 'critical'
Posting an update as CloudWatch has changed since 2016:
In the Log Groups there is a Search all button for a full-text search
Then just type your search: