How to disable logging while running unit tests?

When I am testing my class, the application writes a lot of output to the console because of logging (I am using SLF4J). How do I disable logging in this case, without introducing a special flag inside the tested class?

SLF4J is only a facade, meaning that it does not provide a complete logging solution, so if you are using it with Log4j, just change the logging level to OFF. For more info see https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html#OFF
So you can have two log4j.xml configuration files, one for production and one for testing, with the logging level set to OFF (or, better, WARN).
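As a sketch, a minimal log4j.xml for the test classpath might look like this (the appender name and layout pattern are just illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
  <appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
    <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d %-5p %c - %m%n" />
    </layout>
  </appender>
  <root>
    <!-- OFF silences everything; WARN keeps warnings and errors visible -->
    <level value="OFF" />
    <appender-ref ref="CONSOLE" />
  </root>
</log4j:configuration>
```

Put this file on the test classpath (e.g. src/test/resources in a Maven layout) so it shadows the production configuration during test runs.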

Related

Is it possible to configure spdlog from a file like log4j or log4cxx?

I have experience with log4j, and have used a port of it called log4net in C#. In both cases I find it very useful to configure loggers at run time by means of logger config files. For example, you can increase the log level of a particular subsystem without a recompile.
I am searching for a logging framework for C++, and am currently evaluating log4cxx and spdlog.
I see that log4cxx can read its configuration from an XML file.
Does this ability to configure at run time exist for spdlog?
There is https://github.com/guangie88/spdlog_setup, which configures spdlog from a TOML file.
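For illustration, a spdlog_setup TOML file might look roughly like this (the sink and logger names here are made up; check the project's README for the exact schema and supported sink types):

```toml
# One console sink shared by a single logger
[[sink]]
name = "console"
type = "stdout_sink_st"

[[logger]]
name = "root"
sinks = ["console"]
level = "info"
```

Since the file is read at startup, you can raise the level of a particular logger without recompiling, which is the log4j-style workflow the question asks about.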

Determine whether I'm running on Cloudhub or Locally

I am building a Mulesoft/Anypoint app for deployment on Cloudhub, and for diagnostic purposes want to be able to determine (from within the app) whether it is running on Cloudhub or in Anypoint Studio on my local development machine:
On Cloudhub, I want to use the Cloudhub connector to create notifications for exceptional situations - but using that connector locally causes an exception.
On my local machine, I want to use very verbose logs with full dumping of the payload (#[message.payloadAs(java.lang.String)]) but want to use much more concise logging when on Cloudhub.
What's the best way to distinguish the current runtime? I can't figure out any automatic system properties that expose this information.
(I realize that I could set my own property called something like system.env=LOCAL and override it with system.env=CLOUDHUB for deployment, but I'm curious if the platform already provides this information in some way.)
As far as I can tell, the best approach is to use properties. The specific name and values you use don't matter as long as you're consistent. Here's an example:
In your local dev environment, set the following property in mule-app.properties:
system.environment=DEV
When you deploy to Cloudhub, use the deployment tool to change that property to:
system.environment=CLOUDHUB
Then in your message processors, you can reference this property:
<logger
message="#['${system.environment}' == 'DEV' ? 'verbose log message' : 'concise log message']"
level="ERROR"
doc:name="Exception Logger"
/>

Where can I find request and response logs for Spark?

I have just started using the Spark framework, and am experimenting with a local server on macOS.
The documentation says that to enable debug logs I simply need to add a dependency.
I've added the dependency and can see the logs in the console.
The question is: where are the log files located?
If you are following the Spark example here, you are only enabling slf4j-simple logging. By default, this only logs items to the console. You can change this programmatically (Class information here) or by adding a properties file to the classpath, as seen in this discussion. Beyond this you will likely want to implement a logging framework like log4j or logback, as slf4j is designed to act as a facade over an existing logging implementation.
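For example, slf4j-simple can be redirected from the console to a file via a simplelogger.properties file on the classpath (the file path below is just an example):

```properties
# Write slf4j-simple output to a file instead of System.err
org.slf4j.simpleLogger.logFile=/tmp/spark-app.log
# Default log level for all loggers
org.slf4j.simpleLogger.defaultLogLevel=debug
```

This keeps slf4j-simple as the backend; if you need rolling files or per-logger appenders, that is where switching to log4j or logback pays off.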

How to specify custom database-connection parameters for testing purposes in Play Framework v2?

I want to run my tests against a distinct PostgreSQL database, as opposed to the in-memory database option or the default database configured for the local application setup (via the db.default.url configuration variable). I tried using the %test.db and related configuration variables (as seen here), but that didn't seem to work; I think those instructions are intended for Play Framework v1.
FYI, the test database will have its schema pre-defined and will not need to be created and destroyed with each test run. (Though I don't mind if it is re-created and destroyed with each test run, I don't want to use "evolutions" to do so; I have a single SQL schema file I'm using at this point.)
Use alternative configuration files during local development to override DB credentials (and other settings), e.g. as described in the other answer (Update 1).
Tip: using different kinds of databases in development and production quickly leads to errors and bugs, so it's better to install the same DB locally for development and testing.
We were able to implement Play 1.x style configs on top of Play 2.x - though I bet the creators of Play will cringe when they hear this.
The code is not quite shareable, but basically, you just have to override the "configuration" method in your GlobalSettings: http://www.playframework.org/documentation/api/2.0.3/scala/index.html#play.api.GlobalSettings
You can check for some conf setting like "environment.tag=%test" and then rewrite every config entry of the form "%test.foo=bar" into "foo=bar".
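As a sketch of that scheme, the configuration file could carry both the plain keys and the %test-prefixed overrides (the key names and URLs below are illustrative; note that in HOCON syntax, keys beginning with % may need to be quoted):

```properties
# Tag checked by the GlobalSettings override to select the test profile
environment.tag=%test

# Production value
db.default.url=jdbc:postgresql://prod-host/app

# Test override, rewritten to db.default.url when the tag matches
%test.db.default.url=jdbc:postgresql://localhost/app_test
```

The overridden configuration method would then strip the "%test." prefix from matching keys before the application reads them, mimicking Play 1.x profiles.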

One log file and one error handler for set of web services - possible?

We have a set of web services which is also our internal API.
They all share one common web.config file without problems.
Is there a way to make log4net create one log for the whole site, for all of them, and to have a common error handler? The problem, I think, is that they are all separate virtual directories, i.e. separate applications.
But then again, they share the same web.config file and that works fine. Can they somehow share a Global.asax or something similar? I can't get it to work.
Thanks for any help.
From log4net's FAQ:
How do I get multiple processes to log to the same file?
By default the FileAppender holds an exclusive write lock on the log file while it is logging. This prevents other processes from writing to the file. The FileAppender can be configured to use a different locking model, MinimalLock, that only acquires the write lock while a log is being written. This allows multiple processes to interleave writes to the same file, albeit with a loss in performance. See the FileAppender config examples for an example MinimalLock configuration.
While the MinimalLock model may be used to interleave writes to a single file, it may not be the optimal solution, especially when logging from multiple machines. Alternatively you may have one or more processes log to RemotingAppenders. Using the RemoteLoggingServerPlugin (or IRemoteLoggingSink), a process can receive all the events and log them to a single log file.
Without additional configuration, log4net puts an exclusive lock on the file. Using their MinimalLock setting, you can "hopefully" get shared logging. In other words, your mileage may vary.
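Switching to MinimalLock is a one-element change to the appender configuration (the appender name, file path, and layout below are illustrative):

```xml
<appender name="SharedFileAppender" type="log4net.Appender.FileAppender">
  <file value="logs\site.log" />
  <appendToFile value="true" />
  <!-- Acquire the write lock only while each message is written,
       so multiple applications can interleave writes to one file -->
  <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
  </layout>
</appender>
```

Each virtual directory's log4net section can point at the same file this way, at the cost of some per-write locking overhead.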
As a suggested alternative strategy, since you've got an internal API with web services, consider a web service method that implements a single private static logger in the background and adds entries to the log. You could call the logging web method (asynchronously if performance is critical) from your other web methods where you want to implement logging.
Are they all inside the same project? Are they ASMX services? If so, and you can put them in the same virtual directory, there shouldn't be a problem.