I'm running a container on a Container-Optimized OS instance on GCE with Cloud Logging wired up. The service is installed correctly and I'm getting logs; however, the structured logs aren't parsed.
How can I get Cloud Logging to parse the log entry correctly?
You can write structured logs to Cloud Logging in several ways by following this official documentation.
One option is the Logging agent, google-fluentd, which can parse the JSON message. It is a Cloud Logging-specific packaging of the Fluentd log data collector: it ships with a default Fluentd configuration and uses Fluentd input plugins to pull event logs from external sources such as files on disk, or to parse incoming log records. Refer to this logging agent configuration documentation for details on parsing the JSON message.
These similar questions (SO1, SO2) may also help you resolve the issue.
For anyone who runs into this issue: it appears the problem has to do with the timestamp format in the time field of the JSON. In particular, RFC 3339 timestamps are not accepted; use ISO 8601 timestamps instead.
This seems to contradict the documentation, but a Googler friend of mine confirmed it internally, and switching to ISO 8601 timestamps did fix the issue for me.
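For reference, here is a minimal sketch (not from the original question) of emitting a structured log line with the message, severity, and time fields discussed above, using an ISO 8601 timestamp; the helper name and field set are illustrative, so confirm the exact field names against your agent's configuration.
# Minimal sketch: emit one JSON object per line with the fields the
# agent is configured to parse (message, severity, time).
import json
import sys
from datetime import datetime, timezone

def log_structured(message, severity="INFO", **fields):
    entry = {
        "message": message,
        "severity": severity,
        # ISO 8601 timestamp with an explicit UTC offset
        "time": datetime.now(timezone.utc).isoformat(),
        **fields,
    }
    print(json.dumps(entry), file=sys.stdout, flush=True)

# Example: one parsed entry with severity INFO
log_structured("request handled", severity="INFO", path="/healthz", status=200)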
It's the first time I've used Google Cloud Platform, so please bear with me!
I've built a scheduled workflow that simply runs a Batch job. The job runs Python code and uses the standard logging library for logging. When the job is executed, I can correctly see all the entries in Cloud Logging, but all the entries have severity ERROR although they're all INFO.
One possible reason I've been thinking about is that I haven't used the setup_logging function as described in the documentation here. The thing is, I didn't want to run the Cloud Logging setup when I run the code locally.
The questions I have are:
why does logging "work" (in the sense that logs end up in Cloud Logging) even if I did not use the setup_logging function? What is its real role?
why do my INFO entries show up with ERROR severity?
if I include that snippet and that snippet solves this issue, should I include an if statement in my code that detects if I am running the code locally and skips that Cloud Logging setup step?
According to the documentation, you have to perform this setup to send logs to Cloud Logging correctly.
The setup then lets you use the Python standard logging library.
Once installed, this library includes logging handlers to connect
Python's standard logging module to Logging, as well as an API client
library to access Cloud Logging manually.
# Imports the Cloud Logging client library
import google.cloud.logging
# Instantiates a client
client = google.cloud.logging.Client()
# Retrieves a Cloud Logging handler based on the environment
# you're running in and integrates the handler with the
# Python logging module. By default this captures all logs
# at INFO level and higher
client.setup_logging()
Then you can use the Python standard library to add logs to Cloud Logging.
# Imports Python standard library logging
import logging
# The data to log
text = "Hello, world!"
# Emits the data using the standard logging module
logging.warning(text)
why does logging "work" (in the sense that logs end up in Cloud Logging) even if I did not use the setup_logging function? What is its real role?
Without the setup, the logs are still added to Cloud Logging, but not with the payload type and severity you would expect. It's better to use the setup.
why do my INFO entries show up with ERROR severity?
For the same reason explained above: without the setup, the severity is not mapped correctly.
if I include that snippet and that snippet solves this issue, should I include an if statement in my code that detects if I am running the
code locally and skips that Cloud Logging setup step?
I don't think you need to add an if statement for when you run the code locally. In that case, the logs should still be printed to the console even if the setup is present.
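That said, if you do prefer to skip the Cloud Logging setup locally, a minimal sketch could gate it on an environment variable you set yourself in the Batch job definition (RUNNING_ON_GCP here is purely hypothetical):
import logging
import os

def configure_logging():
    # Hypothetical flag: set RUNNING_ON_GCP=1 in the Batch job,
    # leave it unset on your workstation.
    if os.environ.get("RUNNING_ON_GCP") == "1":
        import google.cloud.logging

        client = google.cloud.logging.Client()
        client.setup_logging()  # attach the Cloud Logging handler
    else:
        # Plain console logging for local runs
        logging.basicConfig(level=logging.INFO)

configure_logging()
logging.info("shows up as INFO in both environments")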
I have a Lambda function that is integrated with CodeGuru. I want to do some profiling. I added the needed layer and environment variables. I changed the runtime to Java 8 (Corretto). I also added permissions to the role that is used to execute the Lambda.
In CloudWatch I see logs from the profiler. The last collected log is
INFO: Attempting to report profile data: start=2020-12-15T13:14:14.639Z end=2020-12-15T13:20:05.534Z force=false memoryRefresh=false numberOfTimesSampled=28
I've seen in other examples that I should get more logs with information about success or failure, like this:
PM software.amazon.codeguruprofilerjavaagent.ProfilingCommand submitProfilingData
INFO: Successfully reported profile
but that didn't happen. There is no such information. What can be the reason for this?
All I know is that we can fetch logs using the Stackdriver Logging or Monitoring services. But where are these logs fetched from in the first place?
If I knew where these logs were fetched from, there would be no need to make API calls or use another service to see my logs. I could simply download them and use my own code to process them.
Is there any way to do this?
There is a capability of Stackdriver Logging called "exporting". Here is a link to the documentation. At a high level, exporting means that when a new log message is written to a log, a copy of that message is then exported. The targets of the export (called sinks) can be:
Cloud Storage
BigQuery
Pub/Sub
From your description, if you set up Cloud Storage as a sink, new files will be written to your Cloud Storage bucket that you can then retrieve and process.
The overview diagram in the exports documentation gives the best picture of how this fits together.
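As an illustration (the bucket name is hypothetical), files exported to a Cloud Storage sink can be read back with the storage client; each exported file typically holds log entries serialized as JSON, one per line:
from google.cloud import storage

BUCKET_NAME = "my-log-export-bucket"  # hypothetical sink destination

client = storage.Client()
for blob in client.list_blobs(BUCKET_NAME):
    # Process each exported file line by line
    for line in blob.download_as_text().splitlines():
        print(line)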
If you don't wish to export new log entries, you can use either the API or gcloud to read the current logs. Be aware that logs held in GCP (within Stackdriver) expire after a retention period (30 days). See gcloud logging read.
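For example, here is a minimal sketch of reading current entries with the Python client library; the filter string is just an illustration:
import google.cloud.logging

client = google.cloud.logging.Client()

# Same filter syntax as the Logs Viewer and gcloud logging read
log_filter = 'resource.type="gce_instance" AND severity>=WARNING'

for entry in client.list_entries(filter_=log_filter, page_size=100):
    # entry.payload is a dict for structured logs, a string for text logs
    print(entry.timestamp, entry.severity, entry.payload)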
We are starting to enable Stackdriver for our project. I have an Ubuntu instance with Stackdriver logging, and the logs are being shipped back (I can see them in Stackdriver Logging). But when I browse to Error Reporting, it's just a blank screen with a button to "Setup Error Reporting", which takes me to some API documentation that I think is tailored for new application coding. We are running nginx and the logging is working, but I can't for the life of me figure out how to get Error Reporting to work properly, if that's even doable.
"Setup Error Reporting" should guide you to the setup documentation (not API documentation). Depending on the platform you are using, you might need to perform some changes in your application's code or log format. Read more at https://cloud.google.com/error-reporting/docs/setup/
If you have Stackdriver Logging set up and are on Google Compute Engine, the requirement is for your exception stack traces to be logged in single log entries.
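For application code, a minimal sketch (illustration only, not specific to the nginx setup above) of what "stack trace in a single log entry" means in practice: with Python's logging module, logging.exception writes the message and the full traceback together as one record rather than one entry per line.
import logging

try:
    1 / 0
except ZeroDivisionError:
    # The message and the complete traceback go out as a single
    # log record, keeping the whole stack trace in one entry
    # for Error Reporting to pick up.
    logging.exception("division failed")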
Is it possible to filter out columns of App Engine logs from streaming into BigQuery when project sinks are used in the Google Log Exporter?
We do not currently support partial log entry content in general in Stackdriver Logging. You can see the full spec for the LogSink resource here.
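To illustrate what a sink does accept (a filter that selects whole entries plus a destination, with no column projection), here is a minimal sketch using the Python client; the sink name, filter, and destination values are hypothetical:
import google.cloud.logging

client = google.cloud.logging.Client()

# The filter chooses which whole LogEntry records are exported;
# the LogSink resource has no field for selecting individual columns.
sink = client.sink(
    "appengine-to-bq",  # hypothetical sink name
    filter_='resource.type="gae_app" AND severity>=INFO',
    destination="bigquery.googleapis.com/projects/my-project/datasets/app_logs",
)

if not sink.exists():
    sink.create()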