Repeat a subset of tests determined programmatically at runtime with the gtest framework - C++

That's orthogonal to the question, but for clarity: I created a TimeMonitor event listener that, at the end of each test, compares the elapsed time with a policy and fails the test if it takes longer.
It works great with one exception - from time to time the system gets into a weird state and some of the tests take longer because of that. Note my bar for unit tests is 15ms - it is not so hard for that to happen.
I had this before, and the way I solved it was to create a record and wait until the same test exceeded the limit several times before failing it. This has several flaws - the major one being the need to persist the data.
I think it will work better if I simply do two (or more) passes. In the first pass I collect the tests that exceeded their time, and in passes 2-N I repeat them to confirm or reject the problem.
My question is - how? What do I need to do (if possible) to programmatically collect a subset of tests and rerun them? Do I need to remove tests from testing::UnitTest::GetInstance(), or should I create another UnitTest?
A reference to something similar would be great - retrying failed tests, for example.

I know the following does not directly answer your question, but I believe that a suggestion of a different approach is justified. I would suggest doing the test execution time analysis from a separate process, to simplify things and avoid changing the program that runs the tests. This way, you can be certain that you have not influenced the execution time of your tests by inserting additional code that keeps track of tests whose execution time exceeds the threshold you have defined. Also, you won't need to modify the state of UnitTest objects and other internals of the googletest implementation, which are harder to understand and potentially dangerous to touch.
The output of the executable that runs your test suite already provides the execution time for each test. Write a script that runs your test suite executable once and parses that output to determine which tests take too long to execute (this can easily be achieved in a higher level language like Python). Then, if the script has found some suspicious tests, it re-runs the test suite executable 2-N times, passing the --gtest_filter command line parameter to it. For example:
tests.exe --gtest_filter=*test1*:*test2*:...:*testN*
This way, only the suspicious tests will be re-run and you will be able to determine if any of them are indeed problematic.
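The answer recommends a higher level language for the script; purely as an illustration, here is a rough sketch of the same idea in C++ (the binary name, the 15 ms threshold and the output parsing are assumptions, not something googletest prescribes):

#include <cstdio>
#include <cstdlib>
#include <regex>
#include <string>
#include <vector>

// Run a command and capture its stdout (POSIX popen; adapt for Windows).
std::string Run(const std::string& cmd) {
    std::string out;
    if (FILE* pipe = popen(cmd.c_str(), "r")) {
        char buf[4096];
        while (std::fgets(buf, sizeof buf, pipe)) out += buf;
        pclose(pipe);
    }
    return out;
}

int main() {
    const std::string exe = "./tests";   // assumed path to the test binary
    const long threshold_ms = 15;        // the 15 ms budget from the question
    // googletest prints lines such as: "[       OK ] FooTest.Bar (12 ms)"
    const std::regex ok_line(R"(\[\s+OK\s+\]\s+(\S+)\s+\((\d+) ms\))");
    const std::string output = Run(exe);
    std::vector<std::string> slow;
    for (std::sregex_iterator it(output.begin(), output.end(), ok_line), end; it != end; ++it)
        if (std::stol((*it)[2]) > threshold_ms) slow.push_back((*it)[1]);
    if (slow.empty()) return 0;
    std::string filter = slow.front();
    for (size_t i = 1; i < slow.size(); ++i) filter += ":" + slow[i];
    // Re-run only the suspicious tests (repeat this step 2-N times if desired).
    return std::system((exe + " --gtest_filter=" + filter).c_str()) == 0 ? 0 : 1;
}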
If you do not want to use the values provided by googletest, you can modify your TimeMonitor to output the test execution time and parse those values. However, maybe it would be best to remove it and be 100% sure you are not influencing the execution time of the tests.
Hope this helps!

The solution actually is simple (when you know it). Disclaimer: not tested with every possible corner case.
The idea, roughly in code: the time monitor first only observes and builds a filter for the too-long tests; if that pass is otherwise green, the filter is applied and the suspicious tests are run again, this time reporting errors.
// attach the time monitor (observe-only for the first pass)
auto* time_monitor = new TimeMonitor;
::testing::UnitTest::GetInstance()->listeners().Append(time_monitor);
::testing::InitGoogleTest(&argc, argv);
int result = RUN_ALL_TESTS();
if (result == 0 && time_monitor->has_too_long_tests()) {
    time_monitor->activate_reporting_errors();
    ::testing::GTEST_FLAG(filter) = time_monitor->the_filter();
    result = RUN_ALL_TESTS();
}
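For completeness, a minimal sketch of what such a TimeMonitor listener could look like (an assumption about the interface implied by the code above, not the author's actual implementation; it follows the pattern of googletest's sample10 event listener):

#include <string>
#include <vector>
#include "gtest/gtest.h"

class TimeMonitor : public ::testing::EmptyTestEventListener {
public:
    explicit TimeMonitor(long long limit_ms = 15) : limit_ms_(limit_ms) {}  // 15 ms budget from the question

    void OnTestEnd(const ::testing::TestInfo& info) override {
        const long long ms = info.result()->elapsed_time();
        const std::string full_name =
            std::string(info.test_case_name()) + "." + info.name();
        if (report_errors_) {
            // Second pass: a slow test is now a real failure.
            EXPECT_LE(ms, limit_ms_) << full_name << " is still too slow";
        } else if (ms > limit_ms_) {
            // First pass: just remember it for the filter.
            slow_.push_back(full_name);
        }
    }

    bool has_too_long_tests() const { return !slow_.empty(); }
    void activate_reporting_errors() { report_errors_ = true; }

    std::string the_filter() const {
        std::string filter;
        for (const std::string& name : slow_)
            filter += (filter.empty() ? "" : ":") + name;
        return filter;
    }

private:
    long long limit_ms_;
    bool report_errors_ = false;
    std::vector<std::string> slow_;
};

One caveat: the googletest documentation advises calling RUN_ALL_TESTS() only once, since repeated calls conflict with some advanced features (e.g. thread-safe death tests), so the two-pass trick deserves the same caution as the disclaimer above.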

Related

Tell Google Test to resume executing the rest of the tests after a crashed test

I have a large unit test suite written in C++ using Google Test.
I have recently made a change to the codebase which may affect different parts of the system, so various tests should now probably fail or even crash. I would like to run the entire suite once (which unfortunately takes a long time to complete), summarize the list of the failed tests, and fix them one by one.
However, whenever a test crashes (e.g. with a segmentation fault), as opposed to simply logically failing, it seems that GTest stops and executes no more tests.
I can then fix the crashed test, however rerunning the entire suite will take a long time.
Is there a way to tell GTest to resume executing the rest of the tests after a test has crashed?
Or, alternatively, at least a way to launch GTest starting from a particular test (assuming the order of the tests is always the same)?
If you need to test whether an assertion is triggered when the API is used incorrectly, then gtest provides something called a DEATH TEST.
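A minimal sketch of a death test (the function under test and the message are made up for illustration):

TEST(FooDeathTest, RejectsNullHandle) {
    // The statement must terminate the process, with stderr matching the regex.
    EXPECT_DEATH(UseHandle(nullptr), "handle must not be null");
}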
If your test crashed because of a segmentation fault you should fix this ASAP! You can disable the test temporarily by adding the DISABLED_ prefix to its name, or by adding GTEST_SKIP() in the test body. Alternatively there is also the command line argument --gtest_filter=<colon separated positive patterns>[:-<colon separated negative patterns>]. There is no way to recover from a segmentation fault, so the test suite can't continue.
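For example (illustrative test and method names; GTEST_SKIP() needs a reasonably recent googletest release):

// Compiled, but not run unless --gtest_also_run_disabled_tests is passed:
TEST(FooTest, DISABLED_CrashesForNow) {
    Foo().MethodThatStillSegfaults();   // stand-in for the code that currently crashes
}

// Started, but immediately reported as skipped:
TEST(FooTest, NotReadyYet) {
    GTEST_SKIP() << "blocked on the segfault fix";
}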
If you use gcc or clang (msvc has this feature experimentally) you can enable AddressSanitizer to quickly detect all memory issues in your tested code. You will be able to fix those issues faster.
There are cool IDE plugins for gtest; they should help you track which tests were run, which failed and which crashed.
Google Test is not able to do what you need on its own. I'd suggest you write a simple test runner that:
Runs the test executable with --gtest_list_tests to get a list of all tests.
Runs a loop through all tests that prints out the test number and runs the test executable with --gtest_filter=FooTest.Bar to invoke only one test in each loop iteration.
The loop skips the required number of iterations and resumes from number N after the test with number N is fixed.
You only need to write such a runner once, and it shouldn't be hard - a rough sketch follows.
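A hedged sketch of such a runner, kept in C++ to stay in the question's language (the binary path and the very simple parsing of the --gtest_list_tests output are assumptions; typed/parameterized test annotations are not handled):

#include <cstdio>
#include <cstdlib>
#include <string>
#include <vector>

// Collect "Suite.Test" names from `tests --gtest_list_tests` (POSIX popen).
std::vector<std::string> ListTests(const std::string& exe) {
    std::vector<std::string> tests;
    std::string suite;
    if (FILE* pipe = popen((exe + " --gtest_list_tests").c_str(), "r")) {
        char buf[1024];
        while (std::fgets(buf, sizeof buf, pipe)) {
            std::string line(buf);
            while (!line.empty() && (line.back() == '\n' || line.back() == '\r')) line.pop_back();
            if (line.empty()) continue;
            if (line[0] != ' ') suite = line;                  // "FooTest."
            else tests.push_back(suite + line.substr(2));      // "  Bar" -> "FooTest.Bar"
        }
        pclose(pipe);
    }
    return tests;
}

int main(int argc, char** argv) {
    const std::string exe = "./tests";                         // assumed test binary
    const std::size_t start = argc > 1 ? std::atoi(argv[1]) : 0;  // resume from test number N
    const std::vector<std::string> tests = ListTests(exe);
    for (std::size_t i = start; i < tests.size(); ++i) {
        std::printf("#%zu %s\n", i, tests[i].c_str());
        // Each test gets its own process, so a segfault only kills that child.
        std::system((exe + " --gtest_filter=" + tests[i]).c_str());
    }
}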

What settings need to be set to get Impacted Test results in Azure DevOps for MSTest

I want to get an Impacted test result in MSTEST but am not getting the expected result. I have followed all the instructions written here - https://learn.microsoft.com/en-us/azure/devops/pipelines/test/test-impact-analysis?view=azure-devops
This is the log file of VSTS where you can see all the configuration done for Impact Analysis.
This is the test result image where I cannot see Impacted results.
My main branch is "Build Development" and the child branch is "Mstest_UT". We have rebased it but I still did not get the impacted result as expected.
After doing some research I learned that the Impacted test result appears only if all test cases pass, so I did that too, but did not get such a result.
[TestMethod]
public void GetAboutTideContent_Passing_Valid_Data()
{
    iAboutTideEditorRepository.Setup(x => x.GetAboutTideContent(It.IsAny<ApplicationUser>())).Returns(new AboutTideEditor() { });
    ResponseData<AboutTideEditor> actual = aboutTideService.GetAboutTideContent(It.IsAny<ApplicationUser>());
    Assert.AreEqual(ProcessStatusEnum.Success, actual.Status);
}
I am writing a mock test in MSTEST.
I am expecting an Impacted test result.
From what I understand from the link you provided, you should use this type of test from the start of your project ("growth and maturation of the test" hints at some kind of learning ability in the software). If you're bringing the test in halfway, the program might already be locked into performing particular tests in a certain way (MS tooling sometimes remains a bit of a black box). If that is the case you should override/reset it and run from the start, without having the program or the user select (detailed) tests. This might of course set you back several hours of testing, but weigh that against spending and losing even more time searching for what goes wrong; that keeps consuming time when the essence is to minimize it. Also check the graph provided on the linked page; it is very informative about the order of actions (e.g. step 6).
In your first screenshot there is a difference in the parallel setup (consider also the bullets below). That screenshot states that some dll files were not found in the "test assembly". If there is a possibility to run a test log, you might want to check that too, to see what typos might have occurred.
From the page:
At present, TIA is not supported for:
Multi-machine topology (where the test is exercising an app deployed to a different machine)
Data driven tests
Test Adapter-specific parallel test execution
.NET Core
UWP
In short: reset the whole test and run "fresh" to see if the errors persist.

Execute custom method when the test execution is halted

We are in a situation where the database used as our test environment db must be kept clean. That means every test has a cleanup method which is run after each execution and deletes from the db all the data that was needed for the test.
We use SpecFlow, and keeping the db clean is achievable with this tool as long as the test execution is not halted. But while developing the test cases it happens that the test execution is halted, so the generated data in the db is not cleaned up.
The question came up: what happens when I press "Halt execution" in VS 2013? How does VS stop the execution? What method will be called? Is it possible to customize it?
SpecFlow uses the MSTest framework and there is no option to change that.
I don't know how practical this is going to be for you, but as I see it you have a couple of options:
Run the cleanup code at the start and end of the test
Create a new database for every test
The first is the simplest and will ensure that when you stop execution in VS it won't impact the next test run (of the same test) as any remaining data will be cleaned up when the test runs.
The second is more complicated to set up (and slower when it runs) but means that you can run your tests in parallel (so is good if you use tools like NCrunch), and they won't interfere with each other.
What I have done in the past is make the DB layer switchable, so you can run the tests against in-memory data most of the time, and then switch to the DB once in a while to check that the actual reading and writing isn't broken.
This isn't too onerous if you use EF6 and can switch the IDbSet<T> for some other implementation backed by an in-memory IQueryable<> implementation.

Automated Test case Execution - when to stop

We have around 100 test cases for our system. We are trying to build an automated test suite for it.
Say that while running the tests the 25th test fails. Should our automated test system bail out here and stop execution, or should it just mark this as failed and continue trying to execute test cases 26 onwards (that is, every test cycle will execute all 100 test cases irrespective of any failed test cases)?
Of course, after a failed test case (for example no. 25), if the system needs to be reset to execute test cases 26 onwards, that will be taken care of.
Thanks
James
If your tests are independent - you should finish all of them. This way you can monitor the system stability and see all of the problems at once without re-running tests countless times.
If this is running without human intervention, say as part of some automated build, I would want to attempt all tests.
However, there are scenarios where you're in the mode of fixing problems where it might save a human's time to just stop. If it's easy I'd like to offer a "Stop on first failure" option.

Should you display what's happening in the unit test as it runs?

As I am coding my unit tests, I tend to find that I insert the following lines:
Console.WriteLine("Starting InteropApplication, with runInBackground set to true...");
try
{
InteropApplication application = new InteropApplication(true);
application.Start();
Console.WriteLine("Application started correctly");
}
catch(Exception e)
{
Assert.Fail(string.Format("InteropApplication failed to start: {0}", e.ToString()));
}
//test code continues ...
All of my tests are pretty much the same: they display information about why they failed, or they display information about what they are doing. I haven't been taught any formal method for how unit tests should be coded. Should they display information about what they are doing? Or should the tests be silent, not display any information at all about what they are doing, and only display failure messages?
NOTE: The language is C#, but I don't care about a language specific answer.
I'm not sure why you would do that - if your unit test is named well, you already know what it's doing. If it fails, you know what test failed (and what assert failed). If it didn't fail you know that it succeeded.
This is completely subjective, but to me it seems like redundant information that just adds noise.
I personally would recommend that you output only errors and a summary of the number of tests run and how many passed. This is a completely subjective view though. Display what suits your needs.
I recommend against it - I think that the unit testing should work on the Unix tools philosophy - don't say anything when things are going well.
I find that constructing tests to give meaningful information when they fail is best - that way you get nice short output when things work and it's easy to see what went wrong when there are problems - errors aren't lost to scroll blindness.
I would actually suggest against it (though not militantly). It couples the user interface of your tests with the test implementation (what if the tests are run through a GUI viewer?). As an alternative I would suggest one of the following:
I'm not familiar with NUnit, but PyUnit allows you to add a description of the test and when tests are run with the verbose option the description is printed. I would look into the NUnit documentation to see if this is something you can do.
Extend the TestCase class that you're inheriting from to add a function you can call that logs what the test is trying to do. That way different implementations can handle messages in different ways.
I'd say you should output whatever suits your needs, but showing too much can dilute the output from the test runner.
BTW, your example code hardly looks like a unit test; it's more of an integration/system test.
I like to buffer the verbose log (about last 20 lines or so), but I don't display it until it gets to some error. When the error happens, it's nice to have some context.
OTOH, unit tests should be small pieces of unrelated code with specific input and output requirements. In most cases, displaying input that caused the error (i.e. wrong output) is enough to trace the problem to its roots.
This might be a bit too language specific, but when I'm writing NUnit tests I tend to do this, only I use the System.Diagnostics.Trace library instead of the console; that way the information is only shown if I decide to watch the tracing.
You don't need to; if the tests run silently, that means there was no error. There is usually no reason for tests to give any output other than when a test fails. If a test passes, the test runner indicates that, i.e. it is "green". If you run the test (together with many other tests that write console output) through a test runner in an IDE, you'll be spamming the console log with messages nobody will care about.
The test you've written is not a unit test, but looks more like an integration/system test because you seem to be running an application as a whole. A unit test will test a public method in a class, preferably keeping the class as isolated as possible.
Using console I/O kind of defeats the whole purpose of a unit testing framework; you might as well code the whole test manually. If you are using a unit testing framework, your tests should be very malleable and tied to as few things as possible.
Displaying information can be useful; if you're trying to find out why a test failed, it can be useful to be able to see more than just a stack trace, and what happened before the program reached the point where it failed.
However, in the "normal" case where everything succeeds, these messages are unnecessary clutter that distract from what you're really trying to do - ie. looking at an overview of which tests succeeded and failed.
I'd suggest redirecting your debugging messages to a log file. You can either do this by writing all your log message code to call a special "log print" function, or if you're writing a console program, you should be able to redirect stdout to a different file (I know for a fact that you can do this in both Unix and Windows). This way, you get the high level overview but the details are there if you need them.
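Purely as an illustration (sketched in C++ since the question says the answer need not be language specific; the helper and file names are made up): route diagnostics through one "log print" function that writes to a file, so the console stays clean.

#include <cstdarg>
#include <cstdio>

// Assumed helper: everything the tests want to "say" goes through here.
static FILE* g_test_log = nullptr;    // opened once at startup, e.g. fopen("test.log", "w")

void TestLog(const char* fmt, ...) {
    if (!g_test_log) return;          // stays silent unless a log file was configured
    va_list args;
    va_start(args, fmt);
    std::vfprintf(g_test_log, fmt, args);
    va_end(args);
    std::fputc('\n', g_test_log);
}

// Usage inside a test: TestLog("starting InteropApplication, runInBackground=%d", 1);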
I would avoid putting extra try/catch statements in unit tests. First of all, an unexpected exception in a unit test will already cause the test to fail; that is the default behavior of NUnit. Essentially, the test harness already wraps each call to your test functions with that code. Also, by just using e.ToString() to display what happened, I believe you are losing a lot of information. By default, I believe NUnit will display not just the exception type, but also the call stack, which I don't believe you're seeing with your method.
Secondly, there are times when it's necessary. For instance, you can use the [ExpectedException] attribute to actually declare when an exception should occur. Just be sure that when you test non-exception-related asserts (for instance asserting a list count > 0, etc.) you put in a good description as the argument to the assert. That is useful.
Everything else is generally not needed. If your unit tests are so large that you start putting in WriteLines with what "step" of the test you're on, that is generally a sign that your test should really be broken out into multiple smaller tests. In other words, that you're not doing a unit test, but rather an integration test.
Have you looked at the xUnit style of unit test frameworks?
See Ron Jeffries' site for a rather large list.
One of the principles of these frameworks is that they produce little or no output during the test run and only really an indicator of success at the end. In the case of failures it's possible to get a more descriptive output of the reason for failure.
The reason for this mode is that while everything is OK you don't want to be bothered by extra output, and certainly if there is a failure you don't want to miss it because of the noise of other output.
Well, you should only know when a test failed and why it failed. It's no use to know what's going on, unless, for example, you have a loop and you want to know exactly where in the loop the test died.
I think you're making far more work for yourself. The tests either pass or fail; the failure should hopefully be the exception to the rule, and you should let the unit test runner handle and throw the exception. What you're doing is adding cruft; the exception logged by the test runner will tell you the same thing.
The only time I would display what's happening is if there was some aspect of it that would be easier to test non-automatically. For example, if you've got code that takes a little while to run, and might get stuck in an infinite loop, you might want to print out a message every so often to indicate that it is still making progress.
Always make sure failure messages clearly stand out from other output, however.
You could have written the test method like this. It's up to your code-nose which style of test you prefer. I prefer not writing extra try-catches and Console.WriteLines.
public void TestApplicationStart()
{
    InteropApplication application = new InteropApplication(true);
    application.Start();
}
Test frameworks that I have worked with would interpret any unhandled (and unexpected) exception as a failed test.
Think about the time you took to gold-plate this test and how many more meaningful tests you could have written with that time.