Why are INFO level logs not showing in Google Cloud Stackdriver? (Cloud Endpoints Java) - java.util.logging

I do not have any exclusions or anything else set, but INFO level logs are not showing up in Stackdriver from my Cloud Endpoints Java code.
NOTE: INFO level logs generated by GAE itself are showing up, just not the ones in my code.
I'm using Cloud Endpoints Java.
Here's a snippet of my Java code:
import java.util.logging.Logger;
import java.util.logging.Level;
...
LOGGER.log(Level.WARNING, "testing warning log 1");
LOGGER.log(Level.INFO, "testing info log");
LOGGER.log(Level.WARNING, "testing warning log 2");

Related

"Error: unknown shorthand flag: 'n' in -nstances" when trying to connect Google Cloud Proxy to Postgresql (Django)

I'm following a Google tutorial to set up Django on Cloud Run with PostgreSQL connected via the Cloud SQL Proxy. However, I keep hitting an error on this command in the Google Cloud Shell.
Cloud Shell input:
xyz#cloudshell:~ (project-xyz)$ ./cloud-sql-proxy -instances="amz-reporting-files-21:us-west1-c:api-20230212"=tcp:5432
returns:
Error: unknown shorthand flag: 'n' in -nstances=amz-reporting-files-21:us-west1-c:Iamz-ads-api-20230212=tcp:5432
Usage:
cloud-sql-proxy INSTANCE_CONNECTION_NAME... [flags]
Flags:
-a, --address string (*) Address to bind Cloud SQL instance listeners. (default "127.0.0.1")
--admin-port string Port for localhost-only admin server (default "9091")
-i, --auto-iam-authn (*) Enables Automatic IAM Authentication for all instances
-c, --credentials-file string Use service account key file as a source of IAM credentials.
--debug Enable the admin server on localhost
--disable-metrics Disable Cloud Monitoring integration (used with --telemetry-project)
--disable-traces Disable Cloud Trace integration (used with --telemetry-project)
--fuse string Mount a directory at the path using FUSE to access Cloud SQL instances.
--fuse-tmp-dir string Temp dir for Unix sockets created with FUSE (default "/tmp/csql-tmp")
-g, --gcloud-auth Use gcloud's user credentials as a source of IAM credentials.
--health-check Enables health check endpoints /startup, /liveness, and /readiness on localhost.
-h, --help Display help information for cloud-sql-proxy
--http-address string Address for Prometheus and health check server (default "localhost")
--http-port string Port for Prometheus and health check server (default "9090")
--impersonate-service-account string Comma separated list of service accounts to impersonate. Last value
is the target account.
-j, --json-credentials string Use service account key JSON as a source of IAM credentials.
--max-connections uint Limit the number of connections. Default is no limit.
--max-sigterm-delay duration Maximum number of seconds to wait for connections to close after receiving a TERM signal.
-p, --port int (*) Initial port for listeners. Subsequent listeners increment from this value.
--private-ip (*) Connect to the private ip address for all instances
--prometheus Enable Prometheus HTTP endpoint /metrics on localhost
--prometheus-namespace string Use the provided Prometheus namespace for metrics
--quiet Log error messages only
--quota-project string Specifies the project to use for Cloud SQL Admin API quota tracking.
The IAM principal must have the "serviceusage.services.use" permission
for the given project. See https://cloud.google.com/service-usage/docs/overview and
https://cloud.google.com/storage/docs/requester-pays
--sqladmin-api-endpoint string API endpoint for all Cloud SQL Admin API requests. (default: https://sqladmin.googleapis.com)
-l, --structured-logs Enable structured logging with LogEntry format
--telemetry-prefix string Prefix for Cloud Monitoring metrics.
--telemetry-project string Enable Cloud Monitoring and Cloud Trace with the provided project ID.
--telemetry-sample-rate int Set the Cloud Trace sample rate. A smaller number means more traces. (default 10000)
-t, --token string Use bearer token as a source of IAM credentials.
-u, --unix-socket string (*) Enables Unix sockets for all listeners with the provided directory.
--user-agent string Space separated list of additional user agents, e.g. cloud-sql-proxy-operator/0.0.1
-v, --version Print the cloud-sql-proxy version
While my input is "-instances", the error message shows "-nstances", as if it's either being truncated somehow or being matched to the "-i" flag inadvertently.
I've tried shortening my project name to rule out truncation, and tried putting the command inside a YAML file instead of running it in Google Cloud Shell.
It looks like -instances is not a valid flag for version 2 of the Cloud SQL Proxy tool, hence the error.
Remove that flag; something like the below should work.
./cloud-sql-proxy amz-reporting-files-21:us-west1-c:api-20230212 -p 5432
Please refer to the supported flags here.
This is using the latest cloud-sql-proxy version 2.0.0.

How to use logger in DAG callbacks with Airflow running on Google Composer?

We are running Apache Airflow in a Google Cloud Composer environment. This runs a pre-built Airflow on Kubernetes, our image version is composer-2.0.32-airflow-2.3.4.
In my_dag.py, we can use the logging module to log something, and the output is visible under "Logs" in Cloud Composer.
import logging
log = logging.getLogger("airflow")
log.setLevel(logging.INFO)
log.info("Hello Airflow logging!")
However, when using the same logger in a callback (e.g. on_failure_callback of a DAG), the log lines do not appear anywhere - not in the Airflow workers, nor the airflow-scheduler, nor the dag-processor-manager. I am triggering a DAG failure by setting a short (e.g. 5 minute) timeout, and I confirmed that the callback is indeed running by making an HTTP request to a webhook inside the callback. The webhook is called but the logs are nowhere to be found.
Is there a way to log something in a callback, and find the logs somewhere in Airflow?
Unfortunately, in the on_failure_callback method the logs don't appear in the DAG task logs (webserver), but they are normally written to Cloud Logging.
In Cloud Logging, select the Cloud Composer Environment resource, then the location (europe-west1) and, finally, the name of the Composer environment: composer-log-error-example.
Then select the airflow-worker logs.
You can check this link.
Also, for logging in Airflow DAGs and in methods called by on_failure_callback, I usually use Python logging directly without any other initialisation, and it works well:
import logging
def task_failure_alert(context):
logging.info("Hello Airflow logging!")

Node.js application logs on Cloud Logging

My Node.js application is deployed on Cloud Run, and I want to write its application logs separately to Cloud Logging, so I created a log bucket called log-test-bucket and gave the Cloud Run service account the "logging.bucketWriter" permission.
I am using this code in my Node.js application for logging:
const winston = require('winston');
// Imports the Google Cloud client library for Winston
const {LoggingWinston} = require('@google-cloud/logging-winston');
const loggingWinston = new LoggingWinston();
// Create a Winston logger that streams to Stackdriver Logging
// Logs will be written to: "projects/YOUR_PROJECT_ID/logs/winston_log"
const logger = winston.createLogger({
  level: 'info',
  transports: [
    new winston.transports.Console(),
    // Add Stackdriver Logging
    loggingWinston,
  ],
});
// Writes some log entries
logger.error('warp nacelles offline');
logger.info('shields at 99%');
In this code I don't know what the correct path is that I have to mention, or whether I have to do something else, so please help me with this.
I am following this link https://cloud.google.com/logging/docs/setup/nodejs
Thanks

Google Cloud Functions missing logs issue

I have a small Python CF connected to a Pub/Sub topic that should send out some emails using the SendGrid API.
The CF can dynamically load & run functions based on an env var (CF_FUNCTION_NAME) provided (monorepo architecture):
# main.py
import logging
import os
from importlib import import_module

def get_function(function_name):
    return getattr(import_module(f"functions.{function_name}"), function_name)

def do_nothing(*args):
    return "no function"

cf_function_name = os.getenv("CF_FUNCTION_NAME", False)
disable_logging = os.getenv("CF_DISABLE_LOGGING", False)

def run(*args):
    if not disable_logging and cf_function_name:
        import google.cloud.logging
        client = google.cloud.logging.Client()
        client.get_default_handler()
        client.setup_logging()
        print("Logging enabled")
    cf = get_function(cf_function_name) if cf_function_name else do_nothing
    return cf(*args)
This works fine, except for some issues related to Stackdriver logging:
The print statement "Logging enabled" should be printed on every invocation, but only happens once?
Exceptions raised in the dynamically loaded function are missing from the logs; instead the logs just show 'finished with status crash', which is not very useful.
Screenshot of the Stackdriver logs of multiple subsequent executions: [stackdriver screenshot]
Is there something I'm missing here?
Is my dynamic loading of functions somehow messing with the logging?
Thanks.
I don't see any issue here. When you load your function for the first time, one instance is created and the logging is enabled (your logging trace). Then the instance stays up until it is evicted (which is unpredictable!).
If you want to see the trace several times, make two calls at the same time. A Cloud Functions instance can handle only one request at a time, so two calls in parallel force the creation of another instance and thus a new logging initialisation.
About the exceptions, it's the same thing: if you don't catch and log them, nothing will be logged. Simply catch them!
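As a minimal sketch of that, reusing the run / get_function pieces from the question (the logging setup inside run is omitted here for brevity):
import logging

def run(*args):
    cf = get_function(cf_function_name) if cf_function_name else do_nothing
    try:
        return cf(*args)
    except Exception:
        # logging.exception logs at ERROR level and attaches the traceback, so the stack
        # trace shows up in Stackdriver instead of only "finished with status crash".
        logging.exception("Unhandled exception in %s", cf_function_name)
        # Re-raise so the invocation is still reported as failed.
        raise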
It seems there has been an issue with Cloud Functions and Python for a month now, where errors do not get logged automatically with tracebacks and categorized correctly as "Error": GCP Cloud Functions no longer categorizes errors correctly with tracebacks

Presto server on AWS - Cannot connect to discovery server

Trying to run a Presto coordinator server with an embedded discovery server on an AWS CDH4 cluster.
config.properties:
coordinator=true
datasources=jmx
http-server.http.port=8000
presto-metastore.db.type=h2
presto-metastore.db.filename=var/db/MetaStore
task.max-memory=1GB
discovery-server.enabled=true
discovery.uri=http://ip-10-0-0-11:8000
When the server starts, it can't register itself with discovery (relevant logs):
2013-11-08T19:38:38.193+0000 WARN main Bootstrap Warning: Configuration property 'discovery.uri' is deprecated and should not be used
2013-11-08T19:38:38.968+0000 INFO main Bootstrap discovery-server.enabled false true
2013-11-08T19:38:38.975+0000 INFO main Bootstrap discovery.uri null http://ip-10-0-0-11:8000 Discovery service base URI
2013-11-08T19:38:40.916+0000 ERROR Discovery-0 io.airlift.discovery.client.CachingServiceSelector Cannot connect to discovery server for refresh (collector/general): Lookup of collector failed for http://ip-10-0-0-11:8000/v1/service/collector/general
2013-11-08T19:38:42.556+0000 ERROR Discovery-1 io.airlift.discovery.client.CachingServiceSelector Cannot connect to discovery server for refresh (presto/general): Lookup of presto failed for http://ip-10-0-0-11:8000/v1/service/presto/general
2013-11-08T19:38:43.854+0000 INFO main org.eclipse.jetty.server.AbstractConnector Started SelectChannelConnector#0.0.0.0:8000
I also tried running a standalone discovery server, with the same effect. It looks like the listener is started after the registration attempt is made.
I was wondering if someone would notice this in the logs :) It's actually not a problem. The error appears because the discovery client starts before the discovery server is ready. You'll see "succeeded for refresh" shortly after in the logs which shows that it's working. We will fix the log message eventually but it's purely a cosmetic issue.