I have a Lambda function that is integrated with CodeGuru Profiler and I want to do some profiling. I added the required layer and environment variables, changed the runtime to Java 8 (Corretto), and added the necessary permissions to the role that is used to execute the Lambda.
In CloudWatch I can see logs from the profiler. The last collected log entry is:
INFO: Attempting to report profile data: start=2020-12-15T13:14:14.639Z end=2020-12-15T13:20:05.534Z force=false memoryRefresh=false numberOfTimesSampled=28
I've seen in other examples that there should be further log entries reporting success or failure, like this:
PM software.amazon.codeguruprofilerjavaagent.ProfilingCommand submitProfilingData
INFO: Successfully reported profile
but that never happens; there is no further information. What could be the reason for this?
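For anyone trying to reproduce this, here is a minimal diagnostic sketch, not from the original post, that uses boto3 to check whether the profiling group exists and whether the agent has ever reported a profile (the profilingStatus field); the profiling group name is a placeholder assumption, and the agent typically also needs the codeguru-profiler:ConfigureAgent and codeguru-profiler:PostAgentProfile permissions on the execution role in order to submit profiles.
# Hedged diagnostic sketch: inspect the profiling group from outside the Lambda.
# "myLambdaProfilingGroup" is a placeholder assumption.
import boto3

codeguru = boto3.client("codeguruprofiler")

# List the profiling groups visible to the caller.
groups = codeguru.list_profiling_groups(includeDescription=True)
for group in groups.get("profilingGroups", []):
    print(group["name"])

# Inspect the group the Lambda agent should be reporting into;
# profilingStatus shows when a profile was last reported, if ever.
described = codeguru.describe_profiling_group(profilingGroupName="myLambdaProfilingGroup")
print(described["profilingGroup"]["profilingStatus"])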
It's the first time I'm using Google Cloud Platform, so please bear with me!
I've built a scheduled workflow that simply runs a Batch job. The job runs Python code and uses the standard logging library for logging. When the job is executed, I can correctly see all the entries in Cloud Logging, but all the entries have severity ERROR although they're all INFO.
One possible reason I've been thinking about is that I haven't used the setup_logging function as described in the documentation here. The thing is, I didn't want to run the Cloud Logging setup when I run the code locally.
The questions I have are:
why does logging "work" (in the sense that logs end up in Cloud Logging) even if I did not use the setup_logging function? What is its real role?
why do my INFO entries show up with ERROR severity?
if including that snippet solves this issue, should I add an if statement to my code that detects whether I am running the code locally and skips the Cloud Logging setup step?
According to the documentation, you have to perform a setup step to send logs to Cloud Logging correctly.
This setup then lets you use the Python standard logging library.
Once installed, this library includes logging handlers to connect
Python's standard logging module to Logging, as well as an API client
library to access Cloud Logging manually.
# Imports the Cloud Logging client library
import google.cloud.logging
# Instantiates a client
client = google.cloud.logging.Client()
# Retrieves a Cloud Logging handler based on the environment
# you're running in and integrates the handler with the
# Python logging module. By default this captures all logs
# at INFO level and higher
client.setup_logging()
Then you can use the Python standard library to add logs to Cloud Logging.
# Imports Python standard library logging
import logging
# The data to log
text = "Hello, world!"
# Emits the data using the standard logging module
logging.warning(text)
why does logging "work" (in the sense that logs end up in Cloud Logging) even if I did not use the setup_logging function? What is its real role?
Without the setup, the logs are still added to Cloud Logging, but not with the correct severity or the metadata you would expect. It's better to use the setup.
why do my INFO entries show up with ERROR severity?
For the same reason explained above.
if including that snippet solves this issue, should I add an if statement to my code that detects whether I am running the code locally and skips the Cloud Logging setup step?
I don't think you need to add an if statement for when you run the code locally. In that case, the logs should still be printed to the console even if the setup is present.
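As a minimal sketch tying the two snippets above together (nothing beyond what the documentation excerpts show): once setup_logging() has attached the Cloud Logging handler, entries emitted through the standard logging module should show up with the matching severity instead of ERROR.
# Combined sketch of the two snippets above.
import logging

import google.cloud.logging

client = google.cloud.logging.Client()
client.setup_logging()  # attaches the Cloud Logging handler to the root logger

logging.info("Batch job started")     # expected severity in Cloud Logging: INFO
logging.warning("Low on disk space")  # expected severity: WARNING
logging.error("Batch job failed")     # expected severity: ERROR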
When I'm in AWS --> Lambda > Functions > myFunctionName, whenever I edit the code and then run the test (clicking the Test button to the left of Deploy), it runs the test against the old (deployed) version of the code. This happens whether or not I File > Save first.
This is easy to confirm by just adding a console.log("blah"); and seeing that it does not appear in the Test output.
The Test > Execution Results also confirm the test is running on $LATEST (see bolded section):
Response
null
Function Logs
START RequestId: <snip> Version: $LATEST
<snip>
Of course I can test my version by deploying it, but isn't there any way to test BEFORE I deploy? (Sorry if this is an ignorant question - I feel I'm missing something dead obvious...)
Assuming you are doing this via the AWS Console:
In the console, only the unpublished $LATEST version is available for editing.
And for testing the function, you need to deploy a version.
Lambda function versions
You can change the function code and settings only on the unpublished version of a function. When you publish a version, the code and most of the settings are locked to maintain a consistent experience for users of that version
You can test separately by publishing versions (see Lambda function versions).
You can use Lambda function aliases to better manage these versions.
See also: Configuring functions in the console
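As a rough sketch of how the version flow looks outside the console, using boto3 (the function name and payload are placeholder assumptions), you can publish a version and then invoke either the published version or $LATEST explicitly via the Qualifier parameter:
# Hedged sketch: publish and invoke specific Lambda versions with boto3.
import json

import boto3

lambda_client = boto3.client("lambda")

# Publish the currently deployed $LATEST code as an immutable, numbered version.
version = lambda_client.publish_version(FunctionName="myFunctionName")["Version"]

# Invoke the published version explicitly ...
response = lambda_client.invoke(
    FunctionName="myFunctionName",
    Qualifier=version,                     # e.g. "1"
    Payload=json.dumps({"key": "value"}),
)
print(response["Payload"].read())

# ... or invoke the unpublished $LATEST code.
lambda_client.invoke(FunctionName="myFunctionName", Qualifier="$LATEST")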
All I know is that we can fetch logs using the Stackdriver Logging or Monitoring services. But where are these logs being fetched from?
If I knew where these logs are fetched from, there would be no need to make API calls or use another service to see my logs. I could simply download them and use my own code to process them.
Is there any way to do this?
Stackdriver Logging has a capability called "exporting". Here is a link to the documentation. At a high level, exporting means that when a new log message is written to a log, a copy of that message is also exported. The targets of the export (called sinks) can be:
Cloud Storage
BigQuery
Pub/Sub
From your description, if you set up Cloud Storage as a sink, then you will have new files written to your Cloud Storage bucket that you can then retrieve and process.
The overview diagram in the documentation gives the best picture of this flow.
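As a rough sketch of creating such a Cloud Storage sink with the Python client library (the sink name, filter, and bucket are placeholder assumptions):
# Hedged sketch: create a sink that copies matching entries to Cloud Storage.
import google.cloud.logging

client = google.cloud.logging.Client()

sink = client.sink(
    "my-export-sink",                                      # placeholder name
    filter_="severity>=INFO",                              # which entries to export
    destination="storage.googleapis.com/my-export-bucket", # placeholder bucket
)
if not sink.exists():
    sink.create()
Note that the sink's writer identity also needs write access to the destination bucket before exported files start appearing.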
If you don't wish to export new log entries, you can use either the API or gcloud to read the current logs. Be aware that logs held by GCP (within Stackdriver) expire after a period of time (30 days). See gcloud logging read.
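And if you only want to read the current logs programmatically, here is a minimal sketch with the same Python client library (the filter is a placeholder assumption):
# Hedged sketch: read recent entries through the API, newest first.
import google.cloud.logging

client = google.cloud.logging.Client()

entries = client.list_entries(
    filter_='resource.type="gce_instance" AND severity>=WARNING',
    order_by=google.cloud.logging.DESCENDING,
)
for entry in entries:
    print(entry.timestamp, entry.severity, entry.payload)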
How do I change the log level of the Java profiler? I am running the profiler outside GCP.
The profiler is working fine, but it is repeatedly logging the following errors:
E0803 12:37:37.677731 22 cloud_env.cc:61] Request to the GCE metadata server failed, status code: 404
E0803 12:37:37.677788 22 cloud_env.cc:148] Failed to read the zone name
How can I disable these logs?
For Stackdriver Logging, you can use log exclusion filters to customise which log entries you want to exclude.
In the Logs Viewer panel, you can enter a filter expression that matches the log entries you want to exclude. This documentation explains the various interfaces for creating filters.
You may also want to export the log entries before excluding them, if you do not want to permanently lose the excluded logs.
With respect to this issue in general (i.e. for third-party logging), I went ahead and created a feature request on your behalf. Please star it so that you receive updates about it, and do not hesitate to add comments with details of the desired implementation. You can track the feature request by following this link.
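For completeness, here is a rough sketch of creating such an exclusion programmatically with the logging_v2 config client (the project ID, exclusion name, and exact filter string are placeholder assumptions; the same filter can simply be pasted into the Logs Viewer exclusion UI instead):
# Hedged sketch: exclude the profiler's metadata-server errors from ingestion.
from google.cloud.logging_v2.services.config_service_v2 import ConfigServiceV2Client
from google.cloud.logging_v2.types import LogExclusion

config_client = ConfigServiceV2Client()

exclusion = LogExclusion(
    name="exclude-profiler-metadata-errors",  # placeholder name
    filter='textPayload:"Request to the GCE metadata server failed"'
           ' OR textPayload:"Failed to read the zone name"',
)
config_client.create_exclusion(parent="projects/my-project-id", exclusion=exclusion)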
We are starting to enable Stackdriver for our project. I have an Ubuntu instance with Stackdriver Logging and the logs are being shipped back (I can see them in Stackdriver Logging), but when I browse to Error Reporting it's just a blank screen with a "Setup Error Reporting" button, which takes me to some API documentation that seems tailored to writing new application code. We are running nginx and the logging is working, but I can't for the life of me figure out how to get Error Reporting to work properly, if that's even doable.
"Setup Error Reporting" should guide you to the setup documentation (not API documentation). Depending on the platform you are using, you might need to perform some changes in your application's code or log format. Read more at https://cloud.google.com/error-reporting/docs/setup/
If you have Stackdriver Logging set up and are on Google Compute Engine, the requirement is that your exception stack traces are logged in single log entries.
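As a minimal sketch of what that looks like from application code (rather than from nginx itself): with the Cloud Logging handler installed, logging.exception() writes the whole traceback as a single ERROR-severity entry, which is the kind of entry Error Reporting can pick up; the division by zero is just a placeholder failure.
# Hedged sketch: log an exception stack trace as one entry.
import logging

import google.cloud.logging

client = google.cloud.logging.Client()
client.setup_logging()

try:
    1 / 0  # placeholder failure
except ZeroDivisionError:
    # Emits one ERROR-severity entry containing the full stack trace.
    logging.exception("Something went wrong handling the request")
Alternatively, the google-cloud-error-reporting client library exposes a report_exception() call that sends the current exception to Error Reporting directly.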