Should I change the log level of my unit test if it's supposed to produce an error? - unit-testing

I have a unit test that creates an error condition. Normally, the class under test writes this error to the logs (using log4j in this case, but I don't think that matters). I can change the log level temporarily, using
Logger targetLogger = Logger.getLogger(ClassUnderTest.class);
Level oldLvl = targetLogger.getLevel();
targetLogger.setLevel(Level.FATAL);
theTestObject.doABadThing();
assertTrue(theTestObject.hadAnError());
targetLogger.setLevel(oldLvl);
but that also means that if an unrelated / unintended error occurs during testing, I won't see that information in the logs either.
Is there a best practice or common pattern I'm supposed to use here? I don't like prodding the log levels if I can help it, but I also don't like having a bunch of ERROR noise in the test output, which could scare future developers.

If your logging layer permits, it is good practice to make an assertion on the error message. You can do this either by implementing your own logger that just asserts on the message (without producing output), or by using an in-memory logger and then checking the contents of the log buffer (see the sketch after the list below).
Under no circumstances should the error message end up in the unit-test execution log. This will cause people to get used to errors in the log and mask other errors. In short, your options are:
Most preferred: Catch the message in the harness and assert on it.
Somewhat OK: Raise the level and ignore the message.
Not OK: Don't do anything and let the log message reach stderr/syslog.
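As a rough illustration of the "catch it in the harness" option with log4j 1.x, a minimal capturing appender could look like the sketch below. The CapturingAppender class and the "expected error text" string are placeholders of my own, not part of the original question:
import static org.junit.Assert.assertTrue;
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.Logger;
import org.apache.log4j.spi.LoggingEvent;
import java.util.ArrayList;
import java.util.List;

// Collects logging events in memory instead of writing them anywhere.
class CapturingAppender extends AppenderSkeleton {
    final List<LoggingEvent> events = new ArrayList<>();
    @Override protected void append(LoggingEvent event) { events.add(event); }
    @Override public void close() { }
    @Override public boolean requiresLayout() { return false; }
}

// In the test:
CapturingAppender capture = new CapturingAppender();
Logger targetLogger = Logger.getLogger(ClassUnderTest.class);
targetLogger.addAppender(capture);
targetLogger.setAdditivity(false); // keep the expected ERROR out of the console/root appenders
try {
    theTestObject.doABadThing();
    assertTrue(theTestObject.hadAnError());
    // Assert that the expected error was actually logged ("expected error text" is a placeholder).
    assertTrue(capture.events.stream()
            .anyMatch(e -> e.getRenderedMessage().contains("expected error text")));
} finally {
    targetLogger.setAdditivity(true);
    targetLogger.removeAppender(capture);
}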

The way I approach this, assuming an xUnit style of unit testing (JUnit, PyUnit, etc.):
@Test(expected = MyException.class)
public void foo_1() throws Exception
{
    theTestObject.doABadThing(); // MyException expected here
}
The issue with relying on logging is that someone needs to go and actually parse the log file, which is time-consuming and error-prone. The test above, however, passes if MyException is thrown and fails if it isn't. This in turn lets you fail the build automatically instead of hoping the tester read the logs correctly.
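If you are on JUnit 4.13+ or JUnit 5, assertThrows is another way to express the same idea; it also returns the thrown exception, so you can assert on its message as well. A sketch (the test name and message text are placeholders):
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

@Test
void doABadThingReportsTheFailure() {
    // assertThrows fails the test if nothing is thrown, and returns the exception otherwise.
    MyException ex = assertThrows(MyException.class, () -> theTestObject.doABadThing());
    assertTrue(ex.getMessage().contains("bad thing")); // placeholder message text
}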

Related

How to make order-independent assertions on Flux output?

I have a test case for a Flux from Project Reactor roughly like this:
void testMultipleChunks(StepVerifier.FirstStep<Chunk> verifier, Chunk chunk1, Chunk chunk2) {
    verifier.then(() -> {
        worker.add(chunk1);
        worker.add(chunk2);
        worker.update();
    })
    .expectNext(chunk1, chunk2)
    .verifyTimeout(Duration.ofSeconds(5));
}
Thing is, my worker is encouraged to parallelize the work, which means the order of the output is undefined. chunk2, chunk1 would be equally valid.
How can I make assertions on the output in an order-independent way?
Properties I care about:
every element in the expected set is present
there are no unexpected elements
there are no extra (duplicate) events
I tried this:
void testMultipleChunks(StepVerifier.FirstStep<Chunk> verifier, Chunk chunk1, Chunk chunk2) {
    Set<Chunk> expectedOutput = Set.of(chunk1, chunk2);
    verifier.then(() -> {
        worker.add(chunk1);
        worker.add(chunk2);
        worker.update();
    })
    .recordWith(HashSet::new)
    .expectNextCount(expectedOutput.size())
    .expectRecordedMatches(expectedOutput::equals)
    .verifyTimeout(Duration.ofSeconds(5));
}
While I think that makes the assertions I want, it took a terrible dive in readability. A clear one-line, one-method assertion was replaced with four lines with a lot of extra punctuation.
expectRecordedMatches is also horribly uninformative when it fails, saying only “expected collection predicate match” without giving any information about what the expectation was or how close the result came.
What's a clearer way to write this test?
StepVerifier is not a good fit for that, because it verifies each signal as it gets emitted and materializes an expected order for asynchronous signals, by design.
It is especially tricky because (it seems) your publisher under test doesn't clearly complete.
If it were completing after N elements (N being the expected count here), I'd change the publisher passed to StepVerifier.create from flux to flux.collectList(). That way, you get a List view of the onNext signals and you can assert on the list as you see fit (e.g. using AssertJ, which I recommend); a sketch follows below.
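For instance, assuming the worker exposes a Flux<Chunk> (called workerFlux here purely as a placeholder) that completes after emitting the two chunks, this might look roughly like:
import static org.assertj.core.api.Assertions.assertThat;
import java.time.Duration;
import reactor.test.StepVerifier;

// collectList() turns the Flux<Chunk> into a Mono<List<Chunk>> that emits once, on completion.
StepVerifier.create(workerFlux.collectList())
        .assertNext(list -> assertThat(list).containsExactlyInAnyOrder(chunk1, chunk2))
        .expectComplete()
        .verify(Duration.ofSeconds(5));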
One alternative in recent versions of Reactor is the TestSubscriber, which can be used to drive request and cancel() without any particular opinion on blocking or on when to perform the assertions. Instead, it internally stores the events it sees (onNext signals go into a List, onComplete and onError are stored as a terminal Signal...) and you can access these for arbitrary assertions.

AWS Translate: Get DetectedLanguageCode from DetectedLanguageLowConfidenceException

I'm playing around with AWS Translate a bit. I want AWS Translate to auto-detect the source language when I send a TranslateTextAsync request. Apparently there can be a DetectedLanguageLowConfidenceException, which I want to handle by getting the DetectedLanguageCode from the exception and retrying the translation. I have not been able to get this exception to occur, so I don't know the structure of that exception.
For the Java SDK, I found that there is a "getDetectedLanguageCode" function, but this one doesn't exist in the .NET SDK. I'm using AWSSDK.Translate v3.3.101.12.
How do I get the language code from the DetectedLanguageLowConfidenceException?
I contacted AWS Support and they reached out to their AWS Translate team. They write that
C#/.Net does not support member variables in exceptions the way Java does. However, supplementary information about exceptions is stored in the Data dictionary of the exception
They also mention that AWS Translate will usually go ahead with even a low-confidence guess rather than throwing a DetectedLanguageLowConfidenceException, so it seems we don't really have to worry about it.
I still went ahead and implemented the exception handling, with the following code to extract the detected language code. This code is untested, though:
catch (DetectedLanguageLowConfidenceException ex)
{
    // Exception.Data is an IDictionary, so index it directly rather than casting to Dictionary<object, object>.
    var detectedLanguageCode = ex.Data["DetectedLanguageCode"] as string;
    // Retry here with the detected low-confidence language code.
}

Poco - failure to open application log causes subsystem shutdown failure

I'm using Poco 1.6.0 and the Util::ServerApplication structure.
At the start of int main(const ArgVec& args) in my main class, I redirect all the logging to a file:
Poco::AutoPtr<Poco::FileChannel> chanFile = new Poco::FileChannel;
chanFile->setProperty("path", "C:\\doesnotexist\\file.log");
Poco::Util::Application::instance().logger().setChannel(chanFile);
If the log file cannot be opened, this causes an exception to be thrown, which I catch, and return an error code from main(). The Application::run() code in Poco's Application.cpp then calls Application::uninitialize().
The implementation of Application::uninitialize() iterates through each SubSystem, executing that subsystem's uninitialize().
But one of those is LogFile::uninitialize(), which causes the following message to be logged: Uninitializing subsystem: Logging Subsystem.
When it attempts to log that message, an exception is thrown since the log file could not be opened (for the same reason as before). That exception is caught somewhere in Poco's code and it attempts to log an error, which causes an exception, and that one finally terminates the program.
How should I deal with this issue? E.g. is it possible to tell the logging subsystem to not throw any exceptions?
There seems to be a greater issue too: if any subsystem's uninitialize() throws, execution leaves the subsystem shutdown loop in Application.cpp, so other subsystems will not get a chance to shut down either.
You should make sure that the path exists before setting up the file channel, e.g.:
if (Poco::File("C:\\doesnotexist").exists())
{
Poco::AutoPtr<Poco::FileChannel> chanFile = new Poco::FileChannel;
chanFile->setProperty("path", "C:\\doesnotexist\\file.log");
Poco::Util::Application::instance().logger().setChannel(chanFile);
}
Application::uninitialize() will loop through the subsystems and log each iteration as a debug message - the idea is to catch problems before release.
UPDATE: as pointed out in the comments, the directory may exist at the time of the check but may not exist (or may not be accessible) later, when logging actually happens. There is nothing in Poco that shields the user from that, so you will have to make sure the directory exists and is accessible throughout the lifetime of the FileChannel using it. I have not found this to be an obstacle in practice. I did find the initial non-existence of a directory to be an annoying problem, and there is a proposal to add such an (optional/configurable) feature, but it has not yet been scheduled for inclusion in upcoming releases.

Rails - run pusher in background

I use Pusher in my Rails-4 application.
The problem is that sometimes the connection is slow, so the execution of the code becomes slower.
I also get from time to time the following error:
Pusher::HTTPError: execution expired (HTTPClient::ConnectTimeoutError)
I send signals via Pusher with this code:
Pusher[channel].trigger!(event, msg)
I would like to execute it in the background, so that if an exception is thrown it will not break the flow of my app, nor slow it down.
I tried wrapping the call with begin ... rescue, but it didn't solve the exception problem. And even if it did, it wouldn't solve the slowdown problem I want to avoid.
Information on performing asynchronous triggers can be found here:
https://github.com/pusher/pusher-gem#asynchronous-requests
This also provides information on catching/handling errors.
Finally I implemented this solution:
Thread.new do
  begin
    Pusher[channel].trigger!(event, msg)
  rescue Pusher::Error => e
    Rails.logger.error "Pusher error: #{e.message}"
  ensure
    # Release the ActiveRecord connection checked out by this thread back to the pool
    ActiveRecord::Base.connection.close
  end
end

Elmah is not logging NullReference exception

Good afternoon,
My project uses the ELMAH framework to log exceptions. On localhost it works fine, but when I deploy it to production it stops logging null reference exceptions. All other exceptions are logged (or at least I haven't found another one that isn't).
I have set up logging to SQL Server.
I can't figure out what is wrong; can someone give me advice please? (As I said, it logs all the exceptions I fired, but this one is never caught.)
Thank you
Well, Thomas Ardal answered correctly.
The problem was in the FilterConfig.cs file. Because the default settings wouldn't log 500 errors, dangerous requests, null reference exceptions etc., I added these lines:
public class ElmahHandleErrorAttribute : HandleErrorAttribute
{
    public override void OnException(ExceptionContext filterContext)
    {
        if (filterContext.Exception is HttpRequestValidationException)
        {
            ErrorLog.GetDefault(HttpContext.Current).Log(new Error(filterContext.Exception));
        }
    }
}
and added this line to the RegisterGlobalFilters method, in first place:
filters.Add(new ElmahHandleErrorAttribute());
After that it started to log some exceptions, but not all. The solution was to remove the if condition and log everything. So if anyone has a similar problem, be sure that the problem is somewhere in the filters...