I find myself toggling between different 'resources' to view my logs in Stackdriver.
Is there a way to view all of the logs combined in a single view? Most of them go to "Global", but many do not.
Just convert the filter to the advanced view and select the resource types you need:
resource.type="bigquery_resource" OR resource.type="gce_instance" OR resource.type="audited_resource"
That will show you the BigQuery, audit, and GCE logs in a single view.
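If you prefer the CLI, a rough equivalent of that combined filter (just a sketch; adjust the resource types and the limit to taste) would be:

# read entries matching any of the three resource types
gcloud logging read \
  'resource.type="bigquery_resource" OR resource.type="gce_instance" OR resource.type="audited_resource"' \
  --limit 20 --format json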
I believe you can do a textual search across all logs, but it will not be as granular as a resource-type-based search. You can try this with the Advanced filter feature.
In the top right corner of the search box, click the drop-down menu and select "Convert to advanced filter". Then delete the "resource.type=xyz" line and type the string you want to search for.
As an alternative, you can use the gcloud SDK CLI to search the logs without specifying a resource type; for instance, gcloud logging read error --limit 1 would look for the string "error" across all your logs and show only one matching entry.
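For example, to limit the search to the last day and return a few more entries (a sketch; the quoted term does a plain-text match across all resource types):

# full-text search for "error" across every resource type in the project
gcloud logging read '"error"' --limit 10 --freshness=1d --format json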
How can I see the logs for the VMs that have been created on Google Compute Engine?
Cloud Logging (formerly Stackdriver Logging) provides this information.
To see the details of the Google Compute Engine instances that were created in a project, filter on the API operation v1.compute.instances.insert and the resource type resource.type=gce_instance.
Note: the resource type is not always necessary, but it is best to be explicit about what Logging should search for.
Example using the CLI:
gcloud logging read "resource.type=gce_instance protoPayload.methodName=v1.compute.instances.insert" --format json
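If you mainly want to see who created which instance and when, a trimmed-down variant (a sketch; the field paths assume the standard Admin Activity audit log layout) could be:

gcloud logging read \
  'resource.type="gce_instance" AND protoPayload.methodName="v1.compute.instances.insert"' \
  --freshness=7d \
  --format="table(timestamp, protoPayload.authenticationInfo.principalEmail, protoPayload.resourceName)"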
Example using the Google Cloud Console - Legacy Logs Viewer
Go to https://console.cloud.google.com/logs/viewer
Verify that the desired Google Cloud project is displayed in the title area.
In the box showing the text "Filter by label or text search" click the dropdown and select "Convert to advanced filter".
In the box showing the text "Filter by label or text search" enter the following (as two lines in the filter box):
resource.type="gce_instance"
protoPayload.methodName="v1.compute.instances.insert"
Select the desired date search range in the box "Last hour"
Click the "Submit Filter" button
Maybe I missed something: under https://console.cloud.google.com/apis/api/sqladmin.googleapis.com/overview
I saw there are a lot of errors, but when I go to the Logs Viewer, I can't find anything. Is there any way I can obtain the error logs?
Basically, you should create a query in the Logs Viewer UI to obtain the data you need: specify the type of resource and the name of the instance whose logs you want to view.
GCP Console => Operations => Logging => Logs Viewer
=> Query builder => Resource
Cloud SQL Database = my-project:my-sql-instance
The query builder will show a query preview like below:
resource.type="cloudsql_database"
resource.labels.database_id="my-project:my-sql-instance"
Once you click the "Run Query" button, the log entries will appear. By default, log entries for the last hour are shown; you can use the "Edit time" option to change this.
Please see Cloud Logging > Doc > Basic logs queries for more details.
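The same query can also be run from the CLI (a sketch; replace the database_id with your own project and instance name):

gcloud logging read \
  'resource.type="cloudsql_database" AND resource.labels.database_id="my-project:my-sql-instance"' \
  --freshness=1h --limit 50 --format json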
In the AWS console, can I search for a string across all log streams of a log group? Right now I have to go inside each log stream and then search, which takes a lot of time when I want to search across the log streams.
Once you click the log group in the CloudWatch Logs console, but before you click into an individual log stream, there is a button at the top right of the page labeled "Search Log Group". Click that, and it will take you to a page where you can search across all logs in the log group in a given time frame.
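The same search can be done from the AWS CLI with filter-log-events, which scans every stream in the group (a sketch; the log group name is just an example):

# search all streams in the log group for the term "error"
aws logs filter-log-events \
  --log-group-name /aws/lambda/my-function \
  --filter-pattern '"error"' \
  --max-items 50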
What you need is CloudWatch Logs Insights.
It costs some money to scan data this way, though.
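For reference, the query language looks roughly like this; the query string below can be pasted into the Logs Insights console or run via the CLI (a sketch; the log group name and the epoch-second timestamps are placeholders):

aws logs start-query \
  --log-group-name /aws/lambda/my-function \
  --start-time 1609459200 --end-time 1609462800 \
  --query-string 'fields @timestamp, @message | filter @message like /error/ | sort @timestamp desc | limit 20'
# then fetch the results with the queryId returned above
aws logs get-query-results --query-id <queryId>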
I just created a model on ML Engine with:
gcloud ml-engine models create test_model --enable-logging
I went into the GUI and created a version. I'm hitting this model for predictions but where do I go in the GUI to find the logs for online predictions?
Thanks!
The logs can be found in Stackdriver Logging:
Go to https://console.cloud.google.com
Click on the "hamburger" icon in the top left
Find the "Logging" option under "STACKDRIVER"
Click on "Logs" (you can get directly here with a link similar to: https://console.cloud.google.com/logs/viewer?project=my_project, just subsituate your actual project name)
Locate the drop down menu that allows you to select your logs.
Hover over "Cloud ML Model Version"
Either click the model you're interested in, or hover over it if you want to select a specific version
(Optional) If selecting a specific version, click on it.
That said, I'll file a feature request to have a link in a more convenient place alongside the model and/or version.
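If you'd rather use a filter or the CLI than click through the menus, something like the following should pull the same logs (a sketch; I'm assuming the logging resource type is ml_model with model_id/version_id labels, and "v1" is a placeholder version name):

# online prediction logs for one model/version (resource type assumed to be ml_model)
gcloud logging read \
  'resource.type="ml_model" AND resource.labels.model_id="test_model" AND resource.labels.version_id="v1"' \
  --limit 20 --format json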
I have a Lambda function and its logs are in CloudWatch (Log Group and Log Stream). Is it possible to filter (in the CloudWatch Management Console) for all logs that contain "error"? For example, logs containing "Process exited before completing request".
In Log Groups there is a "Search Events" button. You must click on it first.
Then it "changes" to "Filter Streams".
Now you should just type your filter and select the beginning date-time.
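From the CLI the equivalent is again filter-log-events; note that a multi-word phrase has to be wrapped in double quotes inside the filter pattern, and --start-time takes epoch milliseconds (a sketch; the group name and timestamp are placeholders):

aws logs filter-log-events \
  --log-group-name /aws/lambda/my-function \
  --filter-pattern '"Process exited before completing request"' \
  --start-time 1609459200000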
So this is kind of a side issue, but it was relevant for us. (I posted this in another answer on Stack Overflow but thought it would be relevant to this conversation too.)
We've noticed that tailing and searching logs gets really slow after a log group has a lot of Log Streams in it, such as when an AWS Lambda function has had a lot of invocations. This is because "tail"-type utilities and searches need to connect to each log stream to run. Log Events expire and get deleted according to the retention policy you set on the Log Group itself, but the Log Streams themselves never get cleaned up. I made a few little utility scripts to help with that:
https://github.com/four43/aws-cloudwatch-log-clean
Hopefully that saves you some agony waiting for those logs to get searched.
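If you only want the gist of the cleanup without the scripts, it boils down to listing the streams in a group and deleting the stale ones (a minimal sketch, not the linked tool itself; the group name and cutoff are placeholders, and delete-log-stream is destructive, so review what it matches first):

# delete streams whose last event is older than the cutoff (epoch milliseconds);
# streams that never received an event have no lastEventTimestamp and are skipped here
GROUP=/aws/lambda/my-function
CUTOFF=1609459200000
aws logs describe-log-streams \
  --log-group-name "$GROUP" \
  --query "logStreams[?lastEventTimestamp < \`$CUTOFF\`].logStreamName" \
  --output text |
tr '\t' '\n' |
while read -r STREAM; do
  [ -n "$STREAM" ] && aws logs delete-log-stream --log-group-name "$GROUP" --log-stream-name "$STREAM"
done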
You can also use CloudWatch Logs Insights (https://aws.amazon.com/about-aws/whats-new/2018/11/announcing-amazon-cloudwatch-logs-insights-fast-interactive-log-analytics/), which is an AWS extension to CloudWatch Logs that gives you a pretty powerful query and analytics tool. However, it can be slow; some of my queries take up to a minute. That's okay if you really need the data.
You could also use a tool I created called SenseLogs. It downloads CloudWatch data to your browser, where you can run queries like the one you ask about. You can either do a full-text search for "error" or, if your log data is structured (JSON), use a JavaScript-like expression language to filter by field, e.g.:
error == 'critical'
Posting an update, as CloudWatch has changed since 2016:
In Log Groups there is now a "Search all" button for a full-text search.
Then just type your search term.