Java Util Logging File Configuration Conundrum - java.util.logging

My log.properties contains the following configuration:
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
java.util.logging.FileHandler.level = INFO
java.util.logging.FileHandler.pattern = logs/startup.%g.log
java.util.logging.FileHandler.limit = 10000000
java.util.logging.FileHandler.count = 3
GtfsOperatorManager.level=INFO
TripUpdate.level=FINER
VehiclePosition.level=INFO
Alert.level=INFO
where the root logger logs to a file called startup, and other loggers such as TripUpdate are set up programmatically to log to their own files.
The problem is that, as configured above, log entries only go into the TripUpdate log at level INFO. However, if I comment out the line
#java.util.logging.FileHandler.level = INFO
then TripUpdate logs at FINER as configured, but log entries also go into the startup log at FINER rather than INFO.
What am I doing wrong, and how do I get startup logging at INFO and TripUpdate logging at FINER?

Try setting the root logger level in your config file by adding the following:
.level=INFO
All child loggers will inherit this level.
However, if I comment out the line [snip] then TripUpdate logs at FINER as configured, but log entries also go into the startup log at FINER rather than INFO.
This is because the default java.util.logging.FileHandler.level is ALL.
Since you are also doing programmatic configuration, you need to make sure you keep your logger from being garbage collected.
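A minimal sketch of what that looks like, using the TripUpdate logger from the question (the class name is illustrative):
import java.util.logging.Logger;

public final class TripUpdateLogging {
    // On modern JVMs the LogManager holds loggers only weakly, so a logger
    // that is configured programmatically and then dropped can be collected
    // and lose its handlers and level. A static reference pins it.
    static final Logger TRIP_UPDATE = Logger.getLogger("TripUpdate");
}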
No, logging is not done directly to startup above; rather, it inherits log entries, but I want to filter out everything more detailed than INFO.
You have limited options there. You can use setUseParentHandlers on the loggers that you don't want to see output from. This can be done from the properties file by setting <loggername>.useParentHandlers=false. You can then attach an additional file handler to the logger that is no longer publishing records to the parent handlers.
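For example, with the logger names from the question, the properties file gains (a sketch; the FINER-level file handler is still attached to TripUpdate programmatically, as before):
TripUpdate.level = FINER
TripUpdate.useParentHandlers = false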
Otherwise, you have to just write a log filter to check the logger name and level and install it on the file handler.
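A minimal sketch of that filter approach, assuming it is installed on the startup FileHandler (class name is illustrative):
import java.util.logging.Filter;
import java.util.logging.Level;
import java.util.logging.LogRecord;

public class InfoAndAboveFilter implements Filter {
    @Override
    public boolean isLoggable(LogRecord record) {
        // INFO has intValue 800 and FINER 400, so FINER records inherited
        // from loggers such as TripUpdate are rejected here.
        return record.getLevel().intValue() >= Level.INFO.intValue();
    }
}
Install it programmatically with startupHandler.setFilter(new InfoAndAboveFilter()). Note that setting it via the class-wide java.util.logging.FileHandler.filter property would also apply it to the FileHandler attached to TripUpdate, the same trap as with FileHandler.level.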

Related

How to change the default log level in WSO2 ESB

I have downloaded WSO2 ESB 5.0.0 locally, but when I start it, it starts with log level DEBUG. I would like to change the log level to ERROR by default whenever the server starts.
It looks like the log level in the registry overrides the log level in the property files, so is there a way to edit the log level in the registry or in any other configuration file?
All WSO2 products use a log4j-based logging mechanism. It can be controlled through the log4j.properties file in the <ESB_HOME>/repository/conf directory.
WSO2 recommends not modifying log4j.properties directly but rather changing the settings through the management console. The settings in the management console override the settings in log4j.properties.
For the respective loggers, check the log level and set it appropriately.
TRACE - Designates finer-grained informational events than DEBUG.
DEBUG - Designates fine-grained informational events that are most useful to debug an application.
INFO - Designates informational messages that highlight the progress of the application at a coarse-grained level.
WARN - Designates potentially harmful situations.
ERROR - Designates error events that might still allow the application to continue running.
FATAL - Designates very severe error events that will presumably lead the application to abort.
For the following loggers, change the log levels to ERROR.
log4j.category.org.apache.synapse=ERROR
log4j.category.org.apache.synapse.transport=ERROR
log4j.category.org.apache.axis2=ERROR
log4j.category.org.apache.axis2.transport=ERROR
log4j.logger.org.wso2=ERROR
log4j.logger.org.wso2.carbon=ERROR
log4j.appender.CARBON_LOGFILE.threshold=ERROR
log4j.appender.CARBON_MEMORY.threshold=ERROR
log4j.appender.CARBON_SYS_LOG.threshold=ERROR
log4j.appender.AUDIT_LOGFILE.threshold=ERROR
Note: even important INFO statements such as server startup messages will not be printed due to this change.
Refer to this post for insights into tracking messages across different WSO2 components: http://muralitechblog.com/wso2-esb-how-to-track-messages/
https://docs.wso2.com/display/ESB490/Setting+Up+Logging

Wrong event time in CloudWatch log events

Found the solution after searching, but I'm leaving this here in case somebody runs into a similar kind of confusion. See the resolution at the end.
I'm trying to figure out why AWS CloudWatch log service fails to understand the right timestamp for my log events. Currently all my events are being saved under Time 2017-01-01 no matter what the actual timestamp in the event is.
I'm feeding the log from syslog, where Docker saves the logged events, and I configured Docker to write the timestamp in the format:
170105/103242 (%y%m%d/%H%M%S)
I configured awslogs service with parameters:
datetime_format = %y%m%d/%H%M%S
I restarted the service and hit the server, but when I go to CloudWatch and look at the log entries, even entries that do start with a timestamp like 170105/103242 are saved as events belonging to the date 2017-01-01, which contains all events between 01-01 and 01-05.
When I look at the awslogs.log I can see following lines:
2017-01-05 11:05:28,633 - cwlogs.push - INFO - 29223 - MainThread - Missing or invalid value for use_gzip_http_content_encoding config. Defaulting to using gzip encoding.
2017-01-05 11:05:28,633 - cwlogs.push - INFO - 29223 - MainThread - Using default logging configuration.
This makes me think that the service probably isn't actually reading/using the datetime_format, but I don't understand why it falls back to the default. I tried putting
use_gzip_http_content_encoding = true
under general settings, but it doesn't change the errors.
I am running out of ideas - has anyone managed to configure awslogger in a way where the datetime_format is actually used correctly?
Edit:
I'm currently adding more console logging to the local python2.7 push.py to see what is going on :)
RESOLVED:
OK, the problem was that I came into this project after the initial setup had been done, and I had the impression that the logger was configured to use the .conf file at:
/etc/awslogs/awslogs.conf
which was dynamically populated.
The environment had a script that passed this location to awslogs-agent-setup.py, which was supposed to tell the agent to read its configuration from there.
However, this script didn't actually do what it was supposed to do, and when the service started it actually read the config from
/var/awslogs/etc/awslogs.conf
which contained the default values.
So the actual resolution was to change the datetime_format parameter in the default config and forget about the config I thought the service was using.
Add logging to /var/awslogs/lib/python2.7/site-packages/cwlogs/push.py and see how the actual config parameters are interpreted.
You will probably find out that the service is actually using configuration file at default location:
/var/awslogs/etc/awslogs.conf
and hence you have to edit the configuration values there for them to actually be read.
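For reference, a sketch of the kind of stanza involved in /var/awslogs/etc/awslogs.conf; the file path, group, and stream names are placeholders, and the datetime_format matches the Docker/syslog format from the question:
[/var/log/syslog]
file = /var/log/syslog
log_group_name = my-log-group
log_stream_name = {instance_id}
datetime_format = %y%m%d/%H%M%S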

Python - logging requests summary stats using locust

I'm using locust
http://docs.locust.io/en/latest/index.html
to simulate a bunch of web users doing random site visits and file downloads. The logging option is set by specifying
locust ... --logfile </path/to/log/file>...
But this only logs a subset of internal events and print statements in the code; it does not log the request stats, which are printed to the console (if you use the --no-web option) or shown in the UI (if you don't).
How can you capture the request stats in the log file?
Try setting the log level. From what I just read in the source, it defaults to INFO.
In your case I would type
locust ... --logfile </path/to/log/file> --loglevel DEBUG
Information from source:
help="Choose between DEBUG/INFO/WARNING/ERROR/CRITICAL. Default is INFO."
The stats you see on the console are a result of logging through the console_logger. See https://github.com/locustio/locust/blob/master/locust/log.py#L50
You can add your custom FileHandler to the console_logger and get those stats in a file.
import logging

# Attach a FileHandler to locust's console_logger so the periodic
# stats lines are written to stats.log as well as the console.
console_logger = logging.getLogger("console_logger")
fh = logging.FileHandler(filename="stats.log")
fh.setFormatter(logging.Formatter('%(message)s'))
console_logger.addHandler(fh)
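Putting these lines in your locustfile should be enough, as long as the handler is attached before the stats printer starts writing.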

WSO2 BAM 2.5 - Default Logger Output Event Adaptor

Which directory and file does the default logger output event adaptor write the logs to? I am not able to see the logs in the repository/logs/wso2carbon.log file. The log4j properties file in repository/conf has the default configuration, and I can see wso2carbon.log mentioned there. Are there any additional configurations necessary? Please help.
It is stored in the wso2carbon.log file, and the logger output adapter should print event logs to the terminal console as well.
You can download the log file via System Logs, located under Carbon Management Console > Monitor > System Logs > Show archived logs.

How to configure wso2 servers logging the same level of detail as console output in wso2carbon.log file

When we run the bin/wso2server.sh file in a terminal, we get nice verbose logging output in the same terminal, which is very useful for debugging. But the output in the repository/logs/wso2carbon.log file is minimal. I have checked all the other files in the repository/logs/ directory and none have the same level of verbosity as the console output.
I tried the settings under Home > Configure > Logging after logging in to the management console of the WSO2 Application Server. Specifically, I set the settings in "Configure Log4J Appenders" for CARBON_LOGFILE to be the same as for CARBON_CONSOLE, but this did not have the desired effect. The web application level info and debug messages are shown on the terminal from which we started the WSO2 Application Server, but they are not shown in the wso2carbon.log file.
How do we get the same level of detail i.e. verbose output like we get in the terminal into the repository/log/wso2carbon.log file?
I tried a lot of changes via "Home > Configure > Logging" in the WSO2 web-based management console to get the same level of detail in the log file as on the console, but none had the desired effect. In fact, I observed that even though I changed the Log Pattern of CARBON_LOGFILE to [%d] %5p - %x %m {%c}%n, I still kept getting logs in the TID: [0] [AS] [2013-08-23 15:11:10,025] format in the repository/logs/wso2carbon.log file. There is definitely some problem with setting the log file detail level and pattern via the web-based management console, at least on WSO2 AS 5.0.1.
So I ended up hacking the bin/wso2server.sh file.
I changed the line
nohup bash $CARBON_HOME/bin/wso2server.sh > /dev/null 2>&1 &
under both start and restart sections to
nohup bash $CARBON_HOME/bin/wso2server.sh > $CARBON_HOME/repository/logs/wso2carbon.log 2>&1 &
Now I am getting same logs as console in the file.
I know it's a hack, but at least I am able to get the detailed debug logs in a file for offline analysis.
I hope someone from WSO2 looks into the issue of setting the log level and pattern via the web-based management console and fixes it.
By default, the console output and the wso2carbon.log file should be the same. I checked, and both have the same output. In "Configure Log4J Appenders", check whether you have DEBUG as the Threshold for both CARBON_LOGFILE and CARBON_CONSOLE.
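Equivalently, the thresholds can be checked in repository/conf/log4j.properties; a sketch, using the stock Carbon appender names mentioned above:
log4j.appender.CARBON_CONSOLE.threshold=DEBUG
log4j.appender.CARBON_LOGFILE.threshold=DEBUG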