Python - logging requests summary stats using locust - python-2.7

I'm using locust (http://docs.locust.io/en/latest/index.html) to simulate a bunch of web users doing random site visits and file downloads. The logging option is set by specifying
locust ... --logfile </path/to/log/file>...
But this only logs a subset of internal events and print statements in the code; it does not log the request stats that are printed to the console (if you use the --no-web option) or shown in the UI (if you don't).
How can you capture the request stats in the log file?

Try setting the log level. From what I just read in the source, it defaults to INFO.
In your case I would use
locust ... --logfile </path/to/log/file> --loglevel DEBUG
From the source:
help="Choose between DEBUG/INFO/WARNING/ERROR/CRITICAL. Default is INFO."

The stats you see on the console are a result of logging through the console_logger. See https://github.com/locustio/locust/blob/master/locust/log.py#L50
You can add your custom FileHandler to the console_logger and get those stats in a file.
import logging

# Attach a FileHandler to locust's console_logger so that the stats
# printed to the console are also written to a file.
console_logger = logging.getLogger("console_logger")
fh = logging.FileHandler(filename="stats.log")
fh.setFormatter(logging.Formatter('%(message)s'))
console_logger.addHandler(fh)
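Note that this setup needs to run before the test starts (for example, at module level in your locustfile) so the handler is already attached when the first stats lines are printed.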

Related

How can I start Envoy from a dumped config generated by /config_dump?

While debugging Envoy, I tried to run it from a dumped config file, but couldn't figure it out.
I dumped the config using the Envoy admin API '/config_dump':
curl -X POST http://127.0.0.1:15000/config_dump -o envoy.config
But I can't start it up; there are errors:
envoy --config-path envoy.config
...
[2019-12-22 12:40:50.313][194][critical][main] [external/envoy/source/server/server.cc:98] error initializing configuration 'envoy.config': Protobuf message (type envoy.config.bootstrap.v2.Bootstrap reason INVALID_ARGUMENT:configs: Cannot find field.) has unknown fields
[2019-12-22 12:40:50.313][194][info][main] [external/envoy/source/server/server.cc:607] exiting Protobuf message (type envoy.config.bootstrap.v2.Bootstrap reason INVALID_ARGUMENT:configs: Cannot find field.) has unknown fields
The dumped config is not intended to be used to start up the server. You start a server with a Bootstrap Config, but if you look closely at the output of the /config_dump endpoint, it actually contains five or more separate config dumps. My local Envoy (1.12.2) shows config dumps for:
Bootstrap Config
Clusters
Listeners
ScopedRoutes
Routes
Secrets
You can read more about the output structure in the config dump docs, but in short it's a totally different structure.
If you do take the output of /config_dump and strip it down to just the bootstrap config field, you can indeed start the server with it.
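For example, here is a minimal sketch in Python (assuming the Envoy 1.12-era v2 admin JSON layout, where one entry in the top-level "configs" list is a BootstrapConfigDump containing a "bootstrap" field) that strips the dump down to just the bootstrap config:

import json

# Load the full /config_dump output and keep only the bootstrap
# section, which is what `envoy --config-path` actually expects.
with open("envoy.config") as f:
    dump = json.load(f)

bootstrap = next(
    c for c in dump["configs"]
    if c.get("@type", "").endswith("BootstrapConfigDump")
)["bootstrap"]

with open("bootstrap.json", "w") as f:
    json.dump(bootstrap, f, indent=2)

You can then try envoy --config-path bootstrap.json, though the dumped bootstrap may still reference dynamic state that won't exist on a fresh start.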

Java Util Logging File Configuration Conundrum

My log.properties contains the configuration
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
java.util.logging.FileHandler.level = INFO
java.util.logging.FileHandler.pattern = logs/startup.%g.log
java.util.logging.FileHandler.limit = 10000000
java.util.logging.FileHandler.count = 3
GtfsOperatorManager.level=INFO
TripUpdate.level=FINER
VehiclePosition.level=INFO
Alert.level=INFO
where the root logger logs to a file called startup and other loggers such as TripUpdate are set up programmatically to log to their own files.
The problem is that, as shown, log entries only go into TripUpdate at level INFO. However, if I comment out the line
#java.util.logging.FileHandler.level = INFO
then TripUpdate logs at FINER as configured, but log entries then go into the startup log at FINER too rather than INFO.
What am I doing wrong? How do I get startup logging at INFO and TripUpdate logging at FINER?
Try setting the root logger level in your config file by adding the following:
.level=INFO
All child loggers will inherit this level.
However if I comment out the line [snip] then TripUpdate logs at FINER as configured but log entries then go into the startup log at FINER too rather than INFO.
This is because the default java.util.logging.FileHandler.level is ALL.
Since you are doing programmatic configuration too, you need to make sure you keep your logger from being garbage collected.
No, logging is not done directly to startup above; rather, it inherits log entries, but I want to filter out everything above INFO.
You have limited options there. You can use setUseParentHandlers on the loggers that you don't want to see output from. This can be done from the properties file by setting <loggername>.useParentHandlers=false. You can then attach an additional file handler to the logger that is no longer publishing records to the parent handlers.
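Putting the two together, here is a minimal sketch of the relevant log.properties lines (assuming the TripUpdate file handler itself is still attached programmatically, as in the question):

.level = INFO
java.util.logging.FileHandler.level = INFO
TripUpdate.level = FINER
TripUpdate.useParentHandlers = false

With useParentHandlers set to false, TripUpdate's FINER records no longer propagate to the root FileHandler, so the startup log stays at INFO.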
Otherwise, you just have to write a log filter that checks the logger name and level and install it on the file handler.

Wrong event time in CloudWatch log events

Found the solution after searching, but leaving this here in case somebody runs into a similar kind of confusion. See the resolution at the end.
I'm trying to figure out why AWS CloudWatch log service fails to understand the right timestamp for my log events. Currently all my events are being saved under Time 2017-01-01 no matter what the actual timestamp in the event is.
I'm feeding the log from syslog, where docker saves the logged events, and I configured docker to put the timestamp in the format:
170105/103242 (%y%m%d/%H%M%S)
I configured awslogs service with parameters:
datetime_format = %y%m%d/%H%M%S
I restarted the service and hit the server, but when I go to CloudWatch and look at the log entries, even entries that do start with the timestamp 170105/103242 are saved as events belonging to the date 2017-01-01, containing all events between 01-01 and 01-05.
When I look at the awslogs.log I can see following lines:
2017-01-05 11:05:28,633 - cwlogs.push - INFO - 29223 - MainThread - Missing or invalid value for use_gzip_http_content_encoding config. Defaulting to using gzip encoding.
2017-01-05 11:05:28,633 - cwlogs.push - INFO - 29223 - MainThread - Using default logging configuration.
This makes me think that the configuration probably isn't actually reading/using the datetime_format, but I don't understand why it falls back to the default. I tried putting
use_gzip_http_content_encoding = true
under general settings, but it doesn't change the errors.
I am running out of ideas - has anyone managed to configure the awslogs agent in a way where the datetime_format is actually used correctly?
Edit:
I'm currently hacking more console logging into the local python2.7 push.py to see what is going on :)
RESOLVED:
Ok, the problem was that I came into this project after the initial setup had been created, and I had the impression that the logger was configured to use the .conf file at
/etc/awslogs/awslogs.conf
which was dynamically populated.
The environment had a script that passed this location to awslogs-agent-setup.py, which was supposed to make the agent read its configuration from there.
However, this script didn't actually do what it was supposed to do, and when the service started it actually read the config from
/var/awslogs/etc/awslogs.conf
which contained the default values.
So the actual resolution was to change the datetime_format parameter in the default config and forget about the config I thought the service was using.
Add logging to /var/awslogs/lib/python2.7/site-packages/cwlogs/push.py and see how the actual config parameters are interpreted.
You will probably find out that the service is actually using configuration file at default location:
/var/awslogs/etc/awslogs.conf
and hence you have to edit configuration values there for them to be actually read.
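For reference, here is a minimal sketch of the relevant stanza in /var/awslogs/etc/awslogs.conf (the group and stream names are placeholders; datetime_format is the line that matters for this problem):

[general]
state_file = /var/awslogs/state/agent-state

[/var/log/syslog]
file = /var/log/syslog
log_group_name = my-log-group
log_stream_name = {instance_id}
datetime_format = %y%m%d/%H%M%S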

WSO2 BAM 2.5 - Default Logger Output Event Adaptor

Which directory and file does the default logger output event adaptor write the logs to? I am not able to see the logs in the repository/logs/wso2carbon.log file. The log4j properties file in repository/conf has the default configuration, and I can see a mention of wso2carbon.log there. Are there any additional configurations necessary? Please help.
It is stored in the wso2carbon.log file, and the logger output event adapter should print event logs to the terminal console as well.
You can download the log file via System Logs, located at Carbon Management Console > Monitor > System Logs > Show archived logs.

How to configure wso2 servers logging the same level of detail as console output in wso2carbon.log file

When we run the bin/wso2server.sh file in a terminal, we get nice verbose logging output in the same terminal, which is very useful for debugging. But the output in the repository/logs/wso2carbon.log file is minimal. I have checked all the other files in the repository/logs/ directory and none have the same level of verbosity as the console output.
I tried the settings under Home > Configure > Logging after logging in to the management console of the WSO2 application server. Specifically, I set the "Configure Log4J Appenders" settings for CARBON_LOGFILE to be the same as for CARBON_CONSOLE, but this did not have the desired effect. The web-application-level info and debug messages are shown on the terminal from which we started the WSO2 application server, but they are not shown in the wso2carbon.log file.
How do we get the same level of detail i.e. verbose output like we get in the terminal into the repository/log/wso2carbon.log file?
I tried a lot of changes via "Home > Configure > Logging" in the WSO2 web-based management console to get the same level of detail as the console into the logfile, but none had the desired effect. In fact, I observed that even though I changed the Log Pattern of CARBON_LOGFILE to [%d] %5p - %x %m {%c}%n, I still kept getting logs in the TID: [0] [AS] [2013-08-23 15:11:10,025] format in the repository/logs/wso2carbon.log file. There is definitely some problem with setting the log file detail level and pattern via the web-based management console, at least on version wso2as 5.0.1.
So I ended up hacking the bin/wso2server.sh file.
I changed the line
nohup bash $CARBON_HOME/bin/wso2server.sh > /dev/null 2>&1 &
under both the start and restart sections to
nohup bash $CARBON_HOME/bin/wso2server.sh > $CARBON_HOME/repository/logs/wso2carbon.log 2>&1 &
Now I am getting same logs as console in the file.
I know it's a hack, but at least I am able to get the detailed debug logs in a file for offline analysis.
I hope someone from WSO2 looks into the issue of setting the log level & pattern via the web-based management console and solves it.
By default the console output and the wso2carbon.log file should be the same. I checked, and both have the same output. In "Configure Log4J Appenders", check whether you have DEBUG as the Threshold for both CARBON_LOGFILE and CARBON_CONSOLE.
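For comparison, the same thresholds can be checked directly in repository/conf/log4j.properties; a hedged sketch (appender names as in a typical Carbon distribution, so verify against your version):

log4j.rootLogger=INFO, CARBON_CONSOLE, CARBON_LOGFILE
log4j.appender.CARBON_CONSOLE.threshold=DEBUG
log4j.appender.CARBON_LOGFILE.threshold=DEBUG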