Is it possible to filter out columns of App Engine logs from streaming into BigQuery when Project Sinks are used in the Google Log Exporter?
We do not currently support exporting partial log entry content in Stackdriver Logging. You can see the full spec for the LogSink resource here.
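To illustrate the point: a sink's filter only decides which whole entries are exported, not which fields each entry contains. Below is a minimal sketch of creating such a sink with the google-cloud-logging Python client, where the sink name, filter, and dataset are placeholders:

from google.cloud import logging

client = logging.Client(project="<project_id>")

# The filter selects entire entries to export; it cannot drop
# individual fields (columns) from each entry.
sink = client.sink(
    "app-engine-to-bq",  # hypothetical sink name
    filter_='resource.type="gae_app" AND severity>=WARNING',
    destination="bigquery.googleapis.com/projects/<project_id>/datasets/<dataset>",
)
sink.create()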
I'm running a container on Container-Optimized OS on GCE with Cloud Logging wired up. The service is installed correctly and I'm getting logs; however, the structured logs aren't parsed.
How can I get Cloud Logging to parse the log entry correctly?
You can write structured logs to Logging in several ways by following this official documentation.
By using the Logging agent, google-fluentd, you can parse the JSON message. The agent is a Cloud Logging-specific packaging of the Fluentd log data collector. It comes with a default Fluentd configuration and uses Fluentd input plugins to pull event logs from external sources, such as files on disk, and to parse incoming log records. Refer to this logging agent configuration documentation for more information on parsing the JSON message.
Refer to these similar SO1 and SO2 questions, which give more information on resolving your issue.
For anyone who runs into this issue, it appears the problem has to do with the timestamp format in the time field of the JSON. In particular, RFC 3339 timestamps are not accepted. Use ISO 8601 timestamps instead.
This seems to contradict the documentation, but a Googler friend of mine confirmed this internally, and switching to ISO 8601 timestamps did fix the issue for me.
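For illustration, here is a minimal sketch of emitting a structured log line from the application, assuming the container's stdout is picked up by the Logging agent; the field names (severity, message, time) follow the structured-logging special fields, and the timestamp is produced in ISO 8601 form as suggested above:

import json
from datetime import datetime, timezone

def log_structured(message, severity="INFO"):
    # One JSON object per line; the agent parses it into jsonPayload.
    entry = {
        "severity": severity,
        "message": message,
        # ISO 8601 timestamp, e.g. 2023-08-03T12:37:37.677731+00:00
        "time": datetime.now(timezone.utc).isoformat(),
    }
    print(json.dumps(entry), flush=True)

log_structured("payment processed", severity="NOTICE")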
I am able to see the compliance state for VMs (to which I have applied a custom OS policy via OS Configuration Management in VM Manager) in a given project and zone in the Google Cloud console, as well as via the API like below:
GET https://osconfig.googleapis.com/v1alpha/projects/PROJECT_ID/locations/ZONE/instanceOSPoliciesCompliances
Is there a way I can view compliance state via Google Cloud Logs Explorer?
If I click on View in the Logs tab above, I am directed to the Logs Explorer with the query framed as:
resource.type="gce_instance"
resource.labels.instance_id="<instance_id>"
labels.os_policy_assignment="projects/<project_id>/locations/<zone>/osPolicyAssignments/<assignment>#<some_alphanumeric_id>"
labels.os_policy_id="<custom-policy-id>"
labels.task_type="APPLY_CONFIG_TASK"
But this does not give me any information about the compliance state shown in the screenshot above.
How can I frame a query to get the Compliance State related logs?
To view the compliance state in the Logs Explorer, use the following query:
resource.type="gce_instance"
resource.labels.instance_id="<instance_id>"
labels.os_policy_assignment="projects/<project_id>/locations/<zone>/osPolicyAssignments/<assignment>#<some_alphanumeric_id>"
labels.os_policy_id="<custom-policy-id>"
labels.task_type="APPLY_CONFIG_TASK"
jsonPayload.message:"state: COMPLIANT"
The compliance state of the VM can be found in the jsonPayload.message field of the log entry.
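If you prefer to query this programmatically, here is a minimal sketch using the google-cloud-logging Python client with a trimmed version of the filter above; the project ID, instance ID, and filter values are placeholders:

from google.cloud import logging

client = logging.Client(project="<project_id>")

# Newlines act as implicit ANDs, as in the Logs Explorer.
log_filter = '''
resource.type="gce_instance"
resource.labels.instance_id="<instance_id>"
labels.task_type="APPLY_CONFIG_TASK"
jsonPayload.message:"state: COMPLIANT"
'''

# Print the message of each matching entry.
for entry in client.list_entries(filter_=log_filter):
    print(entry.payload.get("message"))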
I have a database on a Google Cloud SQL instance. I want to connect the database to pgBadger, which is used to analyse queries. I have tried various methods, but they all ask for the log file location.
I believe there are two major limitations preventing an easy setup that would allow you to use pgBadger with logs generated by a Cloud SQL instance.
The first is that Cloud SQL logs are processed by Stackdriver and can only be accessed through it. It is possible to export logs from Stackdriver; however, the resulting format and destination will still not meet pgBadger's requirements, which leads to the second major limitation.
Cloud SQL does not allow changes to all of the required configuration directives. The main one is log_line_prefix, which currently does not follow the format pgBadger expects, and it is not possible to change it. You can see which flags are supported in Cloud SQL in the Supported flags documentation.
In order to use pgBadger, you would need to reformat the log entries while exporting them to a location where pgBadger could do its job. Stackdriver can stream the logs through Pub/Sub, so you could develop an app to process and store them in the format you need, as sketched below.
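A rough sketch of such a processor, assuming a log sink already publishes the Cloud SQL entries to a Pub/Sub topic; the project ID, subscription name, output file, and rewritten line format are all placeholders you would adapt to whatever log_line_prefix you configure pgBadger for:

import json
from google.cloud import pubsub_v1

PROJECT_ID = "<project_id>"
SUBSCRIPTION_ID = "<log-sink-subscription>"  # subscription on the sink's topic

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)

def handle_message(message):
    # Each Pub/Sub message carries one LogEntry serialised as JSON.
    entry = json.loads(message.data.decode("utf-8"))
    timestamp = entry.get("timestamp", "")
    text = entry.get("textPayload", "")
    # Rewrite into a postgres-style line pgBadger can parse.
    with open("postgres-rewritten.log", "a") as out:
        out.write(f"{timestamp} [0]: {text}\n")
    message.ack()

streaming_pull = subscriber.subscribe(subscription_path, callback=handle_message)
streaming_pull.result()  # block and keep consuming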
I hope this helps.
I am using a Google Cloud virtual machine to run several Python scripts scheduled on a cron. I am looking for some way to check that they ran.
When I look in my logs I see nothing, so I guess simply running a .py file is not logged? Is there a way to turn on logging at this level? What are the usual approaches for such things?
The technology for recording log information in GCP is called Stackdriver. You have a couple of choices for how to log within your application. The first is to instrument your code with the Stackdriver APIs, which explicitly write data to the Stackdriver subsystem. Here are the docs for that and here is a further recipe.
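As an illustration of the first option, here is a minimal sketch of what a cron-driven script could do with the google-cloud-logging Python client; setup_logging() routes the standard logging module to Stackdriver, and the script name and messages are examples:

import logging
import google.cloud.logging

# Attach the Cloud Logging handler to the root Python logger.
client = google.cloud.logging.Client()
client.setup_logging()

logging.info("nightly_report.py started")
# ... the actual work of the script ...
logging.info("nightly_report.py finished successfully")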
A second option is to install the Stackdriver Logging agent on your Compute Engine instance. This will then allow you to tap into other sources of logging output, such as the local syslog.
How do I change the log level of the Java profiler? I am running the profiler outside GCP.
Although the profiler is working fine, it is repeatedly logging the following errors:
E0803 12:37:37.677731 22 cloud_env.cc:61] Request to the GCE metadata server failed, status code: 404
E0803 12:37:37.677788 22 cloud_env.cc:148] Failed to read the zone name
How can I disable these logs?
For Stackdriver Logging, you can use log exclusion filters to filter out the log entries you do not want to keep.
In the Logs Viewer panel, you can enter a filter expression that matches the log entries you want to exclude. This documentation explains the various interfaces for creating filters.
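For example, an exclusion filter for the profiler messages above might look something like the following; the severity and payload match are assumptions that depend on how these entries arrive in your project:

severity=ERROR
textPayload:"Failed to read the zone name"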
You may also want to export the log entries before excluding them, if you do not want to permanently lose the excluded logs.
With respect to this issue in general (i.e. for third-party logging), I went ahead and created a feature request on your behalf. Please star it so that you receive updates about this feature request, and do not hesitate to add comments providing details of the desired implementation. You can track the feature request by following this link.