How to suppress runtime errors caused by assert() using google test? - c++

I am using google test in a C++ project. Some functions use assert() in order to check for invalid input parameters. I already read about Death-Tests (What are Google Test, Death Tests) and started using them in my test cases.
However, I wonder if there is a way to suppress the runtime errors caused by failing assertions. At the moment, each failing assertion creates a pop-up window that I have to close every time I run the tests. As my project grows, this behaviour increasingly disrupts the workflow, to the point where I am tempted to stop testing assert()-assertions altogether.
I know there are ways to disable assertions in general, but it seems more convenient to suppress the OS-generated warnings from inside the testing framework.

OK, I found the solution myself: you have to select the death-test style "threadsafe". Just add the following line to your test code:
::testing::FLAGS_gtest_death_test_style = "threadsafe";
You can either do this for all tests in the test binary or only for the affected tests; the latter is faster. I got this from the updated FAQ: Googletest AdvancedGuide
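For example, a minimal sketch (Divide is a made-up function; assertions must be enabled, i.e. NDEBUG not defined, and the exact text that assert() prints to stderr varies by platform):

#include <cassert>
#include <gtest/gtest.h>

// Made-up function that guards its input with assert().
int Divide(int a, int b) {
    assert(b != 0 && "divisor must be non-zero");
    return a / b;
}

TEST(DivideDeathTest, AssertsOnZeroDivisor) {
    // Setting the flag here affects only this test: googletest saves its
    // flags before each test and restores them afterwards, so the slower
    // threadsafe style is limited to the death tests that need it.
    ::testing::FLAGS_gtest_death_test_style = "threadsafe";
    EXPECT_DEATH(Divide(1, 0), "divisor must be non-zero");
}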

Related

Any way to manually trigger a Test Discovery pass in VS2019 from a VSPackage?

We're currently building an internal apparatus to run unit tests on a large C++ codebase, using Catch2 for the framework and our in-house VS test adapter (using ITestDiscoverer and ITestExecutor) to adapt them to our code practices. However, we've encountered issues with unit tests not always being discovered after a build.
There are a couple of things we're doing out of the norm that may be contributing. While we're using VS2019 for coding, we use FASTBuild and Sharpmake to build our solutions (which can contain countless projects). When we realised that VS would try to build the tests again using MSBuild before running them (even after a full rebuild), we disabled that behaviour in the VS options. Everything else seems to be running as expected, except that sometimes tests aren't picked up.
After doing some digging (namely outputting a verification message to VS's Tests Output the moment our TestDiscoverer is entered), it seems that a test discovery pass isn't always invoked when we would expect it, sometimes even after a full solution rebuild. Beyond the usual expectation that building a project with new changes (or rebuilding outright) would cause a pass to start, the methodology VS uses to determine when to invoke the installed test adapters seems to be fairly black-box in terms of what exact parameters/conditions trigger it.
An alternative seems to be to allow the user to manually trigger a TD pass via some means that could be wrapped in a VSPackage. However, initial looks through the VSSDK API for anything that would do the job have come up short.
Using the VSSDK, are there any means to invoke a Test Discovery pass independently from VS's normal means of detecting whether a pass is required?
You would want to use the ITestContainerDiscoverer.TestContainersUpdated event. The platform should then call into your container discoverer to get the latest set of containers (ITestContainerDiscoverer.TestContainers). As long as the containers returned from the discoverer are different (based on ITestContainer.CompareTo()), the platform should trigger a discovery pass for the changed containers. This blog post has been quite helpful in the past: https://matthewmanela.com/blog/anatomy-of-the-chutzpah-test-adapter-for-vs-2012-rc/

Tell Google Test to resume executing the rest of the tests after a crashed test

I have a large unit test suite written in C++ using Google Test.
I have recently made a change to the codebase which may affect different parts of the system, so various tests will now probably fail or even crash. I would like to run the entire suite once (which unfortunately takes a long time to complete), summarize the list of failed tests, and fix them one by one.
However, whenever a test crashes (e.g. with a segmentation fault), as opposed to simply logically failing, GTest seems to stop and execute no more tests.
I can then fix the crashed test, but rerunning the entire suite will again take a long time.
Is there a way to tell GTest to resume executing the rest of the tests after a test has crashed?
Or, alternatively, at least a way to launch GTest starting from a particular test (assuming the order of the tests is always the same)?
If you need to test whether an assertion is triggered when an API is used incorrectly, then gtest provides something called a death test.
If your test crashed because of a segmentation fault, you should fix this ASAP! You can disable a test temporarily by adding the DISABLED_ prefix to its name, or by adding GTEST_SKIP() to the test body. Alternatively, there is also the command line argument --gtest_filter=<colon separated positive patterns>[:-<colon separated negative patterns>]. There is no way to recover from a segmentation fault, so the test suite can't continue in the same process.
If you use gcc or clang (msvc has this feature experimentally), you can enable AddressSanitizer to quickly detect memory issues in the code under test, which will help you fix those issues faster.
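For instance, here is a made-up one-line bug that would otherwise show up as a bare crash (or silently read garbage), but that ASan pinpoints with a full report naming the allocation and the offending access:

// Build with: g++ -fsanitize=address -g overflow.cpp   (gcc or clang)
#include <vector>

int main() {
    std::vector<int> v(3);
    return v.data()[3];  // heap-buffer-overflow: reads one past the end
}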
There are useful IDE plugins for gtest that can help you track which tests were run, which failed, and which crashed.
Google Test is not able to do what you need. I'd suggest you write a simple test runner that:
Runs the test executable with --gtest_list_tests to get a list of all tests.
Runs a loop through all tests that prints out the test number and runs the test executable with --gtest_filter=FooTest.Bar to invoke only one test in each loop iteration.
The loop can skip the required number of iterations and resume from number N once the test with number N has been fixed.
You only need to write such a runner script once, and it shouldn't be hard; a minimal sketch follows.
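Here is one possible sketch of such a runner, assuming a POSIX-like environment (popen; on Windows you'd use _popen). It lists the tests, then runs the binary once per test so that a crash only kills that single child process; an optional second argument gives the test number to resume from:

#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <string>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: runner <test-binary> [start]\n"; return 2; }
    const std::string binary = argv[1];

    // Parse --gtest_list_tests output: suite lines end in '.', test names
    // are indented by two spaces underneath their suite.
    std::vector<std::string> tests;
    FILE* pipe = popen((binary + " --gtest_list_tests").c_str(), "r");
    if (!pipe) return 2;
    char buf[1024];
    std::string suite;
    while (fgets(buf, sizeof buf, pipe)) {
        std::string s(buf);
        while (!s.empty() && (s.back() == '\n' || s.back() == '\r')) s.pop_back();
        size_t comment = s.find("  #");             // parameterized-test comments
        if (comment != std::string::npos) s.resize(comment);
        if (s.empty()) continue;
        if (s[0] != ' ') suite = s;                 // e.g. "FooTest." (keeps the '.')
        else tests.push_back(suite + s.substr(2));  // e.g. "FooTest.Bar"
    }
    pclose(pipe);

    // Run each test in its own process and count the failures.
    size_t start = argc > 2 ? std::strtoul(argv[2], nullptr, 10) : 0;
    if (start > tests.size()) start = tests.size();
    size_t failures = 0;
    for (size_t i = start; i < tests.size(); ++i) {
        std::cout << "[" << i << "] " << tests[i] << std::endl;
        int rc = std::system((binary + " --gtest_filter=" + tests[i]).c_str());
        if (rc != 0) { ++failures; std::cout << "    -> FAILED or CRASHED\n"; }
    }
    std::cout << failures << " of " << (tests.size() - start) << " tests failed\n";
    return failures == 0 ? 0 : 1;
}

For example, runner ./my_tests 42 resumes from test number 42 once the tests before it have been fixed. Running one process per test is much slower than a single run, but it survives any number of crashes.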

Can I get log output only for failures with boost unit tests

I have some logging in my application (it happens to be log4cxx but I am flexible on that), and I have some unit tests using the boost unit test framework. When my unit tests run, I get lots of log output, from both the passing and failing tests (not just boost assertions logged, but my own application code's debug logging too). I would like to get the unit test framework to throw away logs during tests that pass, and output logs from tests that fail (I grew to appreciate this behaviour while using python/nose).
Is there some standard way of doing this with the boost unit test framework? If not, are there some start of test/end of test hooks that I could use to buffer my logs and conditionally output them to implement this behaviour myself?
There are start of test and end of test hooks that you can use for this purpose. To set these hooks up, you need to define a subclass of boost::unit_test::test_observer, create an instance of the class that will persist throughout the entire test run (either a static global object or a BOOST_TEST_GLOBAL_FIXTURE), and then pass the instance to boost::unit_test::framework::register_observer.
The method to override with a start of test hook is test_unit_start, and the method to override with an end of test hook is test_unit_finish. However, these hooks fire for test suites as well as individual test cases, which may be an issue depending on how the hooks are set up.
The test_unit_finish hook also doesn't explicitly tell you whether a given test actually passed, and there doesn't seem to be one clear and obvious way to get that information. There is a boost::unit_test::results_collector singleton, which has a results() method, and if you pass it the test_unit_id of the test unit provided to test_unit_finish, you get a test_results object that has a passed() method. I can't really see a way to get the test_unit_id that is clearly part of the public API -- you can just directly access the p_id member, but that could always change in a future boost version.
You could also manually track whether each test is passing or failing using the assertion_result, exception_caught, test_unit_aborted, and test_unit_timed_out hooks from the test_observer subclass (assertion_result indicates a failure of the current test whenever its argument is false, and every other hook indicates a failure if it is called at all).
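For illustration, a minimal sketch of such an observer (assuming a reasonably recent Boost; BufferingObserver and its std::ostringstream are made-up stand-ins for a real log appender such as log4cxx):

#include <boost/test/unit_test.hpp>
#include <boost/test/results_collector.hpp>
#include <iostream>
#include <sstream>

namespace utf = boost::unit_test;

struct BufferingObserver : utf::test_observer {
    std::ostringstream buffer;  // stand-in for your real per-test log sink

    void test_unit_start(utf::test_unit const& tu) override {
        if (tu.p_type == utf::TUT_CASE)
            buffer.str("");  // clear the buffer at the start of each test case
    }

    void test_unit_finish(utf::test_unit const& tu, unsigned long /*elapsed*/) override {
        if (tu.p_type != utf::TUT_CASE)
            return;  // the hook also fires for suites; ignore those
        // p_id is a public member, though arguably not stable public API.
        utf::test_results const& r = utf::results_collector_t::instance().results(tu.p_id);
        if (!r.passed())
            std::cerr << "--- buffered log for " << tu.p_name.get() << " ---\n"
                      << buffer.str();
    }
};

// Register the observer for the whole run via a global fixture.
struct ObserverRegistrar {
    BufferingObserver observer;
    ObserverRegistrar() { utf::framework::register_observer(observer); }
    ~ObserverRegistrar() { utf::framework::deregister_observer(observer); }
};
BOOST_TEST_GLOBAL_FIXTURE(ObserverRegistrar);

Your application code would then write its log messages into the observer's buffer (e.g. by pointing a log4cxx appender at it) instead of straight to the console.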
According to the Boost.Test documentation, run your test executable with --log_level=error. This will log only failing test cases.
I checked that it works using a BOOST_CHECK(false) on an otherwise correctly running project with a few thousand unit tests.
Running with --log_level=all gives the result of all assertions. I checked, by piping the output to wc -l, that the number of lines in the log is exactly the same as the number of assertions in the tests (a number also reported by --report_level=detailed). You could of course also grep the log for the strings error or failed.

How to make VS Unit Test show the Error Message from exceptions other than UnitTestAssertException?

I'm using VS Unit Testing Framework and Moq.
When a Moq verification fails, I'll get a Moq.MockException. In the Test Results window, instead of showing the helpful message inside the exception, it just says "Test method XXX threw exception: ..."
Is there a way to tell the VS Unit Test framework always display the message of exceptions of a given type (like Moq.MockException)?
The short answer is: No. You have to open the Test Details window for that in MSTest (btw. that's one of many reasons why I think MSTest is not the best choice for doing Test-driven development...).
Anyway, there are two possible ways to achieve this (at least that I know of):
Use ReSharper to run your tests.
Use the free Gallio automation platform to run your tests.
HTH!
Thomas

Should you display what's happening in the unit test as it runs?

As I am coding my unit tests, I tend to find that I insert the following lines:
Console.WriteLine("Starting InteropApplication, with runInBackground set to true...");
try
{
    InteropApplication application = new InteropApplication(true);
    application.Start();
    Console.WriteLine("Application started correctly");
}
catch (Exception e)
{
    Assert.Fail(string.Format("InteropApplication failed to start: {0}", e.ToString()));
}
//test code continues ...
All of my tests are pretty much the same thing: they display information about why they failed, or about what they are doing. I haven't come across any formal guidance on how unit tests should be coded. Should they display information about what they are doing? Or should the tests be silent, displaying no information about what they are doing and only displaying failure messages?
NOTE: The language is C#, but I don't care about a language specific answer.
I'm not sure why you would do that - if your unit test is named well, you already know what it's doing. If it fails, you know which test failed (and which assert failed). If it didn't fail, you know that it succeeded.
This is subjective, but to me it seems like completely redundant information that just adds noise.
I personally would recommend that you output only errors and a summary of the number of tests run and how many passed. This is a completely subjective view though. Display what suits your needs.
I recommend against it - I think that the unit testing should work on the Unix tools philosophy - don't say anything when things are going well.
I find that constructing tests to give meaningful information when they fail is best - that way you get nice short output when things work and it's easy to see what went wrong when there are problems - errors aren't lost to scroll blindness.
I would actually suggest against it (though not militantly). It couples the user interface of your tests to the test implementation (what if the tests are run through a GUI viewer?). As an alternative, I would suggest one of the following:
I'm not familiar with NUnit, but PyUnit allows you to add a description of the test and when tests are run with the verbose option the description is printed. I would look into the NUnit documentation to see if this is something you can do.
Extend the TestCase class that you're inheriting from with a function you can call to log what the test is trying to do. That way different implementations can handle messages in different ways.
I'd say you should output whatever suits your needs, but showing too much can dilute the output from the test runner.
BTW, your example code hardly looks like a unit test; it's more of an integration/system test.
I like to buffer the verbose log (the last 20 lines or so), but I don't display it unless an error occurs. When the error happens, it's nice to have some context.
OTOH, unit tests should be small pieces of unrelated code with specific input and output requirements. In most cases, displaying input that caused the error (i.e. wrong output) is enough to trace the problem to its roots.
This might be a bit too language-specific, but when I'm writing NUnit tests I tend to do this too, only I use the System.Diagnostics.Trace library instead of the console; that way, the information is only shown if I decide to watch the tracing.
You don't need to: if the tests run silently, that means there was no error. There is usually no reason for tests to give any output other than when a test fails. If a test passes, the test runner indicates as much, i.e. it shows "green". If you run the test (together with many tests with console output) through a test runner in an IDE, you'll be spamming the console log with messages nobody cares about.
The test you've written is not a unit test, but looks more like an integration/system test because you seem to be running an application as a whole. A unit test will test a public method in a class, preferably keeping the class as isolated as possible.
Using console I/O kind of defies the whole purpose of a unit testing framework; you might as well code the whole test manually. If you are using a unit testing framework, your tests should be very malleable and tied to as few things as possible.
Displaying information can be useful; if you're trying to find out why a test failed, it can be useful to be able to see more than just a stack trace, and what happened before the program reached the point where it failed.
However, in the "normal" case where everything succeeds, these messages are unnecessary clutter that distract from what you're really trying to do - ie. looking at an overview of which tests succeeded and failed.
I'd suggest redirecting your debugging messages to a log file. You can either do this by writing all your log message code to call a special "log print" function, or if you're writing a console program, you should be able to redirect stdout to a different file (I know for a fact that you can do this in both Unix and Windows). This way, you get the high level overview but the details are there if you need them.
I would avoid putting extra try/catch statements in unit tests. First of all, an unexpected exception in a unit test will already cause the test to fail; that is the default behaviour of NUnit. Essentially, the test harness already wraps each call to your test functions with that code. Also, by just using e.ToString() to display what happened, I believe you are losing a lot of information. By default, NUnit will display not just the exception type but also the call stack, which I don't believe you're seeing with your method.
Secondly, there are times when it's necessary. For instance, you can use the [ExpectedException] attribute to state that an exception is expected. Just be sure that when you test non-exception-related asserts (for instance, asserting that a list count is > 0) you put a good description in as the argument to the assert. That is useful.
Everything else is generally not needed. If your unit tests are so large that you start putting in WriteLines with what "step" of the test you're on, that is generally a sign that your test should really be broken out into multiple smaller tests. In other words, that you're not doing a unit test, but rather an integration test.
Have you looked at the xUnit style of unit test frameworks?
See Ron Jeffries site for a rather large list.
One of the principles of these frameworks is that they produce little or no output during the test run, only an indicator of success at the end. In the case of failures, it's possible to get a more descriptive output of the reason for the failure.
The reason for this mode is that while everything is OK you don't want to be bothered by extra output, and certainly if there is a failure you don't want to miss it because of the noise of other output.
Well, you should only know when a test failed and why it failed. It's no use to know what's going on, unless, for example, you have a loop and you want to know exactly where in the loop the test died.
I think you're making far more work for yourself. The tests either pass or fail; failure should hopefully be the exception to the rule, and you should let the unit test runner handle and report the exception. What you're doing is adding cruft; the exception logged by the test runner will tell you the same thing.
The only time I would display what's happening is if there was some aspect of it that would be easier to test non-automatically. For example, if you've got code that takes a little while to run, and might get stuck in an infinite loop, you might want to print out a message every so often to indicate that it is still making progress.
Always make sure failure messages clearly stand out from other output, however.
You could have written the test method like this. It's up to your own taste which style of test you prefer; I prefer not writing extra try-catches and Console.WriteLines.
public void TestApplicationStart()
{
    InteropApplication application = new InteropApplication(true);
    application.Start();
}
Test frameworks that I have worked with would interpret any unhandled (and unexpected) exception as a failed test.
Think about the time you took to gold-plate this test and how many more meaningful tests you could have written with that time.