How to enable DEBUG level logging with Jetty embedded?

I'm trying to set the logging level to DEBUG in an embedded Jetty instance.
The documentation at http://docs.codehaus.org/display/JETTY/Debugging says to call SystemProperty.set("DEBUG", "true") before calling new org.mortbay.jetty.Server().
I'm not sure what the SystemProperty class is; it doesn't seem to be documented anywhere. I tried System.setProperty(), but that didn't do the trick.

My question was answered on the Jetty mailing list by Joakim Erdfelt:
You are looking at the old Jetty 6.x docs at docs.codehaus.org.
DEBUG logging is just a logging level determined by the logging
implementation you choose to use.
If you use slf4j, then use slf4j's docs for configuring logging level. http://slf4j.org/manual.html
If you use java.util.logging, use the JVM docs. http://docs.oracle.com/javase/6/docs/technotes/guides/logging/overview.html
If you use the built-in StdErrLog, then there is a pattern to follow:
-D{classref}.LEVEL={level}
where {classref} is the class or package reference you want to set the level on (it applies to all sub-class refs as well), and {level} is one of the values ALL, DEBUG, INFO, WARN.
Example:
-Dorg.eclipse.jetty.LEVEL=INFO - this will enable INFO level logging for all Jetty packages / classes.
-Dorg.eclipse.jetty.io.LEVEL=DEBUG - this will enable DEBUG level logging for the IO classes only.
-Dorg.eclipse.jetty.servlet.LEVEL=ALL - this will enable ALL logging (trace events, internally ignored exceptions, etc.) for the servlet packages.
-Dorg.eclipse.jetty.util.thread.QueuedThreadPool.LEVEL=ALL - this will enable level ALL on that specific class only.

Add this to the JVM arguments:
-Dorg.eclipse.jetty.util.log.class=org.eclipse.jetty.util.log.StdErrLog
-Dorg.eclipse.jetty.LEVEL=DEBUG

If you just want to quickly get log messages to stderr, add something like this to the java command line:
-Dorg.eclipse.jetty.util.log.class=org.eclipse.jetty.util.log.StdErrLog -D{classref}.LEVEL=DEBUG
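
For an embedded server you can also set the same properties programmatically, as long as it happens before any Jetty class initializes its logging. A minimal sketch, assuming a Jetty 7/8/9-style server (the class name and port are illustrative):

import org.eclipse.jetty.server.Server;

public class EmbeddedJettyDebug {
    public static void main(String[] args) throws Exception {
        // Must run before the first Jetty class loads, so the Log facade picks it up
        System.setProperty("org.eclipse.jetty.util.log.class",
                "org.eclipse.jetty.util.log.StdErrLog");
        System.setProperty("org.eclipse.jetty.LEVEL", "DEBUG");

        Server server = new Server(8080); // illustrative port
        server.start();
        server.join();
    }
}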

You can use this snippet to enable DEBUG logging programmatically (this uses Jetty's org.eclipse.jetty.util.log API; run it before the Server is constructed):
import org.eclipse.jetty.util.log.Log;
import org.eclipse.jetty.util.log.StdErrLog;
// ...
// Install a StdErrLog instance with debug enabled as Jetty's logger
StdErrLog logger = new StdErrLog();
logger.setDebugEnabled(true);
Log.setLog(logger);

Related

How to configure OpenTelemetry agent for an Akka application

I am trying to export metrics and traces from my Akka app, written in Scala, using the OpenTelemetry agent, with the purpose of consuming the data in OpenSearch.
Technology stack for my application:
Akka - 2.6.*
RabbitMQ (amqp client 5.12.*)
PostgreSQL (jdbc 42.2.*)
I've added the OpenTelemetry instrumentation runtime dependency to build.sbt:
val runtimeDependencies: Seq[ModuleID] = Seq(
  "io.opentelemetry.instrumentation" % "opentelemetry-instrumentation-api" % otelInstrumentationVersion % "runtime"
)
...
libraryDependencies ++= compileDependencies ++ testDependencies ++ runtimeDependencies,
I am passing the OpenTelemetry configuration in a properties file, referenced from JAVA_OPTS:
export JAVA_OPTS="... \
-javaagent:lib/opentelemetry/opentelemetry-javaagent-all-v1.6.0.jar \
-Dotel.javaagent.configuration-file=lib/opentelemetry/otel.properties"
The only other related piece in my code is the properties file:
otel.service.name=my-app
otel.traces.exporter=jaeger
otel.propagators=jaeger
I do receive some traces in OpenSearch, but they are disparate and unrelated, whereas I would expect them to be linked. For example, a message is received on a RabbitMQ topic, makes its way into an actor, and that actor eventually issues a SQL query; I would expect to see, for each execution, how much time each step took.
This is an approximate view of what I get in OpenSearch (screenshot omitted).
I would love to be able to follow the documentation, but I find that OpenTelemetry's configuration guide is scarce at this point.
Update:
Not sure whether this is relevant, but I get a warning from Data Prepper:
2021-09-29T16:50:50,861 [raw-pipeline-prepper-worker-5-thread-1] WARN com.amazon.dataprepper.plugins.prepper.oteltrace.OTelTraceRawPrepper - Missing trace group for SpanId: 922097e31cf96c72
OK, so I got around this by running across this issue and then reading about how to suppress specific instrumentations.
To reduce clutter in the tracing dashboard, one would add something like the following to the properties file (or the equivalent via environment variables):
otel.instrumentation.rabbitmq.enabled=false
otel.instrumentation.grpc.enabled=false
Note that I disabled the two instrumentation libraries that were cluttering traces in my particular use case; for another application one would choose other libraries from link #2 above. This way, the spans that you declare as the application developer become root spans.
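
If you prefer environment variables, the OpenTelemetry agent maps each property name to an env var by uppercasing it and replacing dots with underscores, so (as an assumption based on that convention, not part of the original answer) the equivalent would be:

export OTEL_INSTRUMENTATION_RABBITMQ_ENABLED=false
export OTEL_INSTRUMENTATION_GRPC_ENABLED=false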

Python logging adds additional handlers

I'm trying to get to grips with Python's logging module which frankly so far has not been approachable. Currently I have one 'main' logger in my main script:
logger = logging.getLogger(__name__)
handler = logging.FileHandler('debug.log')
handler.setFormatter(logging.Formatter('%(levelname)s: %(asctime)s: %(name)s: %(message)s'))
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
logger.debug(
    '{} run for {} using {} values.'.format(
        skill, str(datetime.now()), key, mode
    )
)
and I have a secondary logger in an imported module:
logger = logging.getLogger(__name__)
handler = logging.FileHandler('debug.log')
handler.setFormatter(logging.Formatter('%(levelname)s: %(asctime)s: %(name)s: %(message)s'))
logger.addHandler(handler)
However, although I tell both loggers to log to a file only (both only have the handlers I've set), I still get output printed to stdout from the root logger. Inspecting logging.root.handlers shows that the root logger has a StreamHandler, which only appears when I import the module containing the second logger.
My hacky way of solving this is to simply delete the extra handler from the root logger's handlers. However, this feels like a non-canonical solution. I'm assuming I've used the module incorrectly in some way, rather than this being its intended behaviour. How are you meant to set up loggers in this hierarchical fashion correctly?
A proper minimal reproducible example would certainly help here - I can't reproduce this root logger handler suddenly appearing out of the blue.
That being said, you're doing it wrong anyway: one of the main goals of the logging module (one that is not clearly and explicitly documented) is to separate logger usage (the .getLogger() and logger.log() calls) from logging configuration.
The point is that library code cannot know in which context it will be used (only the application does), so library code should NOT try to configure its loggers in any way: just get a logger and use it, period. It is then up to the application (here your main script) to configure the loggers for the libs (hint: the dictConfig() function is by far the most usable way to configure everything at once, as sketched below).
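
A minimal sketch of that split using logging.config.dictConfig(), reusing the file handler and format from the question (file and logger names are illustrative):

# main.py (application side): configure everything once, at startup
import logging
import logging.config

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'default': {'format': '%(levelname)s: %(asctime)s: %(name)s: %(message)s'},
    },
    'handlers': {
        'file': {
            'class': 'logging.FileHandler',
            'filename': 'debug.log',
            'formatter': 'default',
        },
    },
    'root': {'level': 'DEBUG', 'handlers': ['file']},
})

# any imported module (library side): just get a logger and use it
logger = logging.getLogger(__name__)
logger.debug('configured by the application via dictConfig')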

How do I subclass/override Ember.Logger?

I am implementing remote logging ability in my Ember app, where I want to push everything that gets sent to the log console to a remote logging service (e.g. Loggly).
I believe that what I need to do is override Ember.Logger's methods to redirect log output to the remote logging service, but I can't figure out how to do that.
The documentation for Ember.Logger simply states:
Override this to provide more robust logging functionality.
How do I "override this"? I've tried doing Ember.Logger.reopenClass() and it complains with Ember.Logger.reopenClass is not a function.
Where would I do this? In an initializer? In a service? Other?
Ember.Logger is not an Ember class. It's just an object with some methods on it.
You can override it by assigning your own functions, something like:
Ember.Logger.log = function(...args) { /* forward args to your remote logging service */ };
You can put this wherever you want. I might put it at the top of app.js.
Expanding upon and updating @user663031's response...
As of Nov 2017, the status of Ember.Logger is up in the air. It was not included in Ember's module API, and there isn't yet an RFC for its future.
It is possible to use a debug utility directly, e.g. ember-debug-logger, and extend those prototypes separately from Ember.Logger.
However, I opted to overwrite Ember.Logger directly because it allows me to plug in any logging tool I like (as opposed to a debug util) without having to modify the log statements scattered throughout the code.
As I use bunyan on the backend, I opted to log with browser-bunyan, which incidentally has the same info, warn, and error methods as Ember.Logger.
YMMV, but this is the minimal example that worked for me...
// app/app.js
import LOG from './logger-bunyan';

if (config.APP.LOG_BUNYAN) {
  Ember.Logger = LOG;
}

// app/logger-bunyan.js
import bunyan from 'npm:browser-bunyan';

const LOG = bunyan.createLogger({
  name: 'emberApplication',
});

export default LOG;

// config/environment.js
if (environment === 'development') {
  ENV.APP.LOG_BUNYAN = true;
}

// app/component/WhereIWantToLog.js
Ember.Logger.warn('bunyan logged warning message');

apple logger (ASL) ignoring rule in /etc/asl.conf for specific facility

I've got a C/C++/Objective-C project that sends ASL logging messages.
The default configuration in asl.conf routes all log messages with a level above notice to the system log (see the rule below), and I'd like to cancel this rule for my specific facility only.
That means all log messages under my facility would be routed to my log file only, and not to system.log.
Here's the configuration; my facility is defined as com.bla.bla:
asl.conf
? [<= Level notice] file system.log
my_asl.conf
? [<= Level notice] [=Facility com.bla.bla] skip / ignore
I've tried both skip and ignore, but neither made any change. The only thing that works is to erase the rule from asl.conf, but I don't want to change the behavior of other processes/facilities or modify the default rules.
Is there any rule I can add to keep only my messages out of system.log?
Thanks
After re-reading the asl.conf man page over and over again, I found out that I can use the 'claim' action to keep the base configuration file /etc/asl.conf from processing the messages matched by my specific rule:
claim - Messages that match the query associated with a 'claim' action are not processed by the main ASL configuration file /etc/asl.conf. While claimed messages are not processed by /etc/asl.conf, they are not completely private. Other modules may also claim messages, and in some cases two or more modules may have claim actions that match the same messages. This action only blocks processing by /etc/asl.conf.
The 'claim' action may be followed by the keyword 'only'. In this case, only those messages that match the 'claim only' query will be processed by subsequent rules in the module.
I followed the description of the 'claim' action and added the following configuration to my config file:
? [= Facility com.bla.bla] file /var/log/my-log
? [= Facility com.bla.bla] claim

Why do I get a missing handler for logger "sentry.errors"?

I installed django-sentry in an integrated fashion.
I then ran python manage.py shell and tried to log like this:
>> import logging
>> mylog = logging.getLogger('sentrylogger')
>> mylog.handlers
[<logging.StreamHandler instance at 0x9f6130c>,
<sentry.client.handlers.SentryHandler instance at 0x9ffed4c>]
>> mylog.debug('this is a test 1')
DEBUG 2011-09-28 11:10:33,178 <ipython console> 4607 -1217300800 this is a test 1
No handlers could be found for logger "sentry.errors"
Currently, nothing is written to Sentry. I believe the missing logger 'sentry.errors' is the root cause of my inability to log to Sentry. Am I on the right track?
Yes, there's a handler missing. I cannot explain why logging to one log should affect another log instance, but maybe it will work if you run this before doing mylog.debug(...):
sentry_errors_log = logging.getLogger("sentry.errors")
sentry_errors_log.addHandler(logging.StreamHandler())
Furthermore, refer to the documentation about older versions, which seem to require adding the sentry.errors log handler manually:
http://readthedocs.org/docs/sentry/en/latest/config/index.html#older-versions
If you are running Sentry on your own domain with HTTPS, there is a bug in Sentry's SNI support; check https://github.com/getsentry/raven-python/issues/523 for more details. A quick workaround is to replace the DSN scheme with threaded+requests+https:
RAVEN_CONFIG = {
    'dsn': 'threaded+requests+https://xxxxxxxxxxxx@sentry.example.com/1',
}