Running NUnit tests multiple times - unit-testing

I have a suite of NUnit tests, some of which fail intermittently, probably because of timing problems. I'd like to find these flaky unit tests. Is there a way to repeat each test multiple times without having to put a Repeat() attribute on each test? We routinely use the ReSharper and NCrunch runners, but also have access to the NUnit GUI and console runners.

NUnit 3
In NUnit 3, you may use the Retry attribute:
RetryAttribute is used on a test method to specify that it should be rerun if it fails, up to a maximum number of times.
Notes:
It is not currently possible to use RetryAttribute on a TestFixture or any other type of test suite. Only single tests may be repeated.
If a test has an unexpected exception, an error result is returned and it is not retried. Only assertion failures can trigger a retry. To convert an unexpected exception into an assertion failure, see the ThrowsConstraint.
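For illustration, a minimal sketch that pairs Retry with such a throws-style assertion (the test name and CallThatSometimesTimesOut are made up for this example):

[Test]
[Retry(3)] // run the test up to 3 times if an assertion fails
public void FlakyOperation_ShouldEventuallyPass()
{
    // Assert.That(..., Throws.Nothing) turns an unexpected exception into an
    // assertion failure, so a timing-related exception can also trigger a retry.
    Assert.That(() => CallThatSometimesTimesOut(), Throws.Nothing);
}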
NUnit 2
NUnit 2 doesn't support retries, but you may use the NUnit-retry plug-in (NuGet, GitHub). An example of its use:
private static int run = 0;
...
[Test]
[Retry(Times = 3, RequiredPassCount = 2)]
public void One_Failure_On_Three_Should_Pass()
{
    run++;

    // Fails only on the first run; the plug-in reruns the test, so it passes overall.
    if (run == 1)
    {
        Assert.Fail();
    }

    Assert.Pass();
}
See also
Feature - Add 'Retry Attribute' to repeat test upon failure (discussion about the feature on Launchpad)

Related

Flag test as expected to fail in JUnit 5

I have a unit test, written with JUnit 5 (Jupiter), that is failing. I do not currently have time to fix the problem, so I would like to mark the test as an expected failure. Is there a way to do that?
I see @Disabled, which causes the test to not be run. I would like the test to still run (and ideally fail the build if it starts to work), so that I remember that the test is there.
Is there such an annotation in JUnit 5? I could use assertThrows to catch the error, but I would like the build output to indicate that this is not a totally normal test.
You can disable the failing test with the @Disabled annotation. You can then add another test that asserts the first one does indeed fail:
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;

@Test
@Disabled
void fixMe() {
    Assertions.fail();
}

@Test
void fixMeShouldFail() {
    // Starts failing the build as soon as fixMe() no longer fails.
    assertThrows(AssertionError.class, this::fixMe);
}

Timeboxing NUnit Unit Tests

I've inherited a codebase that has had some bad check-ins; some of the unit tests hang completely, so I can't run the entire unit test suite because it always gets stuck on specific tests. I would like to take an inventory of the tests that are now hanging.
What's the right way to set a global timeout on all of my tests so that each one is timeboxed to a specific amount of time? For example, if I set it to 1 minute and a test takes 61 seconds, that test should automatically be aborted and marked as failed, and the test runner should then move on to the next test immediately.
I'm using Visual Studio 2015 Update 1, NUnit 2.6.4, and the NUnit 2.x Test Adapter for Visual Studio.
I believe it is the Timeout attribute that you want to use here.
E.g.
[Test, Timeout(2000)]
public void PotentiallyLongRunningTest()
{
...
}
The NUnit documentation seems to indicate it can be set at an assembly level. In AssemblyInfo.cs:
First:
using NUnit.Framework;
Then:
[assembly: Timeout(1000)]
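Note that the value is in milliseconds, so for the one-minute cap from the question it would be:
[assembly: Timeout(60000)] // abort and fail any test that runs longer than 60 seconds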

Unit test fails with NUnit Test Adapter but not with ReSharper in VS2012

I have a strange problem when I run my unit tests in VS2012. I'm using NUnit and run them with ReSharper, and there all tests pass. But some of my colleagues don't have ReSharper, so they use the Test Explorer with the NUnit Test Adapter (Beta 3) v0.95.2 extension (http://visualstudiogallery.msdn.microsoft.com/6ab922d0-21c0-4f06-ab5f-4ecd1fe7175d). With that extension, however, some tests fail.
The specific code that fails is the following:
public void Clear()
{
    this.Items.ForEach(s => removeItem(s));
}

private bool removeItem(SequenceFlow item)
{
    int i = this.Items.IndexOf(item);
    if (i == -1)
        return false;

    this.Items.RemoveAt(i);
    return true;
}
The exception is:
System.InvalidOperationException : Collection was modified; enumeration operation may not execute.
Result StackTrace:
at System.ThrowHelper.ThrowInvalidOperationException(ExceptionResource resource)
at System.Collections.Generic.List`1.ForEach(Action`1 action)
Now, I'm not looking for an answer as to why I get this exception; I can understand why it fails. What I can't understand is why the tests fail with the Test Explorer but not when using ReSharper. Why do I get different behavior for the tests?
I used ildasm.exe to see if the code is compiled differently when testing for the two cases, but the IL-code is identical.
The tests also run during commit on our TeamCity server with no errors.
Furthermore, when debugging the test I get the same exception when debugging through NUnit test adapter, but when debugging and stepping through the code with ReSharper, no exception at all.
I found that in VS2012 similar code would fail at run-time with the same error. If you used this method in an application, would it succeed?
You're effectively iterating over a collection and removing items from it while you're still enumerating it - this changes the internal indexing of the collection and invalidates the enumeration. If you'd coded it as:
for (int i = 0; i < Items.Count; i++)
{
    removeItem(Items[i]);
}
you'd run into problems too - items get skipped because the collection's internal indexing shifts as you remove them.
I can't speak to ReSharper, but I'd guess that it has a more generous run-time engine than the MS nunit engine (or, for that matter, the MS runtime engine).
I was doing something similar in an application where I tried to iterate through the collection of dependent objects on my parent and remove them. It failed with the exact error you're receiving: ultimately I went with a linq query to remove all items attached to the specified parent - the equivalent of running the SQL query DELETE FROM table WHERE parentID = parentid.
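For what it's worth, one way around it (a sketch, assuming Items is a List<SequenceFlow>, as the ForEach call suggests) is to enumerate a snapshot so the removals can't invalidate the iteration:

public void Clear()
{
    // ToList() (from System.Linq) copies the items first, so RemoveAt on the
    // original list no longer breaks the enumeration.
    foreach (var item in this.Items.ToList())
    {
        removeItem(item);
    }
}

If the per-item bookkeeping in removeItem isn't actually needed, this.Items.Clear() would of course be simpler.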

Arbitrary unit tests fail in VS2012 after switching from Moles to Fakes

I have about 300 unit tests for an assembly which is part of a solution I originally started under VS2010. Numerous tests used the Moles framework provided by Microsoft, but after upgrading to VS2012 (Update 2) I wanted to change the tests to use the officially supplied Fakes framework.
I updated the corresponding tests accordingly, which usually only involved creating a ShimsContext and some minor changes to the code:
Before
[TestMethod]
[HostType( "Moles" )]
public void MyUnitTest_CalledWithXyz_ThrowsException()
{
// Arrange
...
MGroupPrincipal.FindByIdentityPrincipalContextIdentityTypeString =
( t1, t2, t3 ) => null;
...
try
{
// Act
...
}
catch( Exception ex )
{
// Assert
...
}
}
After
[TestMethod]
public void MyUnitTest_CalledWithXyz_ThrowsException()
{
using( ShimsContext.Create() )
{
// Arrange
...
ShimGroupPrincipal.FindByIdentityPrincipalContextIdentityTypeString =
( t1, t2, t3 ) => null;
try
{
// Act
...
}
catch( Exception ex )
{
// Assert
...
}
}
}
I've got different test classes in my test project, and when I run the tests I get arbitrary errors that I cannot explain, e.g.:
Run tests for one class in Release mode => 21 tests fail / 15 pass
Run tests for same class in Debug mode => 2 tests fail / 34 pass
Run tests for same class again in Release mode => 2 tests fail / 34 pass
Run all tests in the project => 21 tests fail / 15 pass (for the class mentioned above)
My colleague sees the same behaviour on his system. The error messages are always TypeLoadExceptions such as
Test method ... threw exception: System.TypeLoadException: Could not load type 'System.DirectoryServices.Fakes.ShimDirectorySearcher' in the assembly 'System.DirectoryServices.4.0.0.0.Fakes, Version=4.0.0.0, Culture=neutral, PublicKeyToken=...'
In VS2012 itself the source code editor doesn't show any errors, Intellisense works as expected, mouse tooltips over e.g. ShimDirectorySearcher show where it is located etc. Furthermore, when I open the Fakes assembly that's being generated (e.g. System.DirectoryServices.4.0.0.0.Fakes.dll) with .NET Reflector, the type shown in the error message exists.
All the tests worked fine (in Debug and Release mode as well) before we switched from VS2010 to VS2012, but now we don't have a clue what's wrong here. Why does the result change in the ways described above? Why do we get TypeLoadExceptions even though the types do exist?
Unfortunately there is hardly any help available from Microsoft or on the internet.
I don't quite understand why having the old .testsettings file from VS2010 is such a problem, but deleting it and adding a .runsettings file as suggested by MSDN did the job for me.
All problems were solved:
All unit tests run (again) without problems
Arbitrary combinations of tests run (again) without problems
I can debug tests using Fakes (before, I used to get test instrumentation errors)
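For reference, a minimal sketch of what such a .runsettings file can look like (the setting shown here is just an example; the MSDN page referred to above describes the available options):

<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <RunConfiguration>
    <!-- Example setting only; adjust or omit as needed. -->
    <ResultsDirectory>.\TestResults</ResultsDirectory>
  </RunConfiguration>
</RunSettings>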
Hope this helps others who run into problems, there doesn't seem to be too much information about Fakes yet.
One more thing regarding Code Coverage: this works (without having to configure any test settings) via the menu Test => Analyze Code Coverage. For TFS build definitions you can enable code coverage for the build under Process => Basic => Automated Tests => 1. Test Source: click into the corresponding text field, then on the ... button that is only shown while the field has focus, choose Visual Studio Test Runner in the Test runner combo box, and then select Enable Code Coverage from the options.

Unit tests with different severity

I'm testing a set of classes, and my unit tests so far are along these lines:
1. read in some data from file X
2. create new object Y
3. sanity assert some basic properties of Y
4. assert advanced properties of Y
There are about 30 of these tests, which differ in the input and in the properties of Y that are checked. However, at the current project state, it sometimes crashes at #2 or already fails at #3. It should never crash at #1. For the time being, I'm accepting all failures at #4.
I'd like to e.g. see a list of unit tests that fail at #3, but so far ignore all those that fail at #4. What's the standard approach/terminology to create this? I'm using JUnit for Java with Eclipse.
You need reporting/filtering on your unit test results.
JUnit itself wants your tests to pass, fail, or not run - nothing in between.
However, it doesn't care much about how those results are tied to passing/failing the build, or reported.
Using tools like Maven (the Surefire plugin) and some custom code, you can categorize your tests to distinguish between 'hard failures', 'bad, but let's go on', etc. But that's build validation or reporting based on test results rather than testing.
(Currently, our build process relies on annotations such as @Category(WorkInProgress.class) on each test method to decide what's critical and what's not.)
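A minimal sketch of such a category marker (the class and method names are made up; WorkInProgress mirrors the annotation mentioned above), using JUnit 4's @Category:

import org.junit.Test;
import org.junit.experimental.categories.Category;

public class PropertiesOfYTest {

    // Marker interface used purely as a category label.
    public interface WorkInProgress {}

    @Test
    @Category(WorkInProgress.class)
    public void advancedPropertiesOfY() {
        // step #4 assertions that are currently allowed to fail
    }
}

Surefire can then include or exclude such categories (e.g. via its groups / excludedGroups settings), which is where the "hard failure" versus "bad, but let's go on" distinction gets made.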
What I could think of would be to create assert methods that check a system property to decide whether to execute the assert:
public static void assertTrue(boolean assertion, int assertionLevel) {
    // Read the configured severity level from a system property.
    int pro = getSystemProperty(...);
    if (pro >= assertionLevel) {
        Assert.assertTrue(assertion);
    }
}
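A self-contained sketch of that idea (the property name assert.level, the class name, and the example checks are all made up; assumes JUnit 4):

import org.junit.Assert;
import org.junit.Test;

public class LeveledAssertExample {

    // Assert only when the configured level (e.g. -Dassert.level=4) reaches the assertion's level.
    static void assertAtLevel(boolean assertion, int assertionLevel) {
        int configured = Integer.getInteger("assert.level", 3); // default: enforce up to step #3
        if (configured >= assertionLevel) {
            Assert.assertTrue(assertion);
        }
    }

    @Test
    public void propertiesOfY() {
        String y = "Y";                      // stand-in for the real object Y
        assertAtLevel(!y.isEmpty(), 3);      // sanity check (step #3), enforced by default
        assertAtLevel(y.length() == 1, 4);   // advanced check (step #4), only with -Dassert.level=4 or higher
    }
}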