NUnit test interdependencies/hierarchies - unit-testing

Is there a way to give NUnit a hierarchy of tests, so that later tests aren't run if earlier tests fail?
Suppose I have a set of data that is expected to be processed and ordered, and I've got a UT that tests that the ordering is done, but also a bunch of other tests that test the processing and which assume that the ordering has been successful and will hence fail spuriously if that's false.
(And suppose that breaking this dependency is not feasible - I know that is the ideal solution!)
Then it would be nice to tell NUnit: "If the ordering test has failed, then don't even bother with these ones (throw inconclusive, maybe?) 'cos we know they ain't gonna work."
Obviously I could do it manually, but that's going to add a lot of essentially irrelevant cruft to the tests (we have nice short tests, so a "call the head test and catch Exceptions" block would double the length of some tests).
I'm hoping that NUnit might JustDoThis already, but I can't see it anywhere?

As mentioned in your question, the most 'proper' fix is to break the dependency between the tests. If you really want to do this though, NUnit does support assumptions (which are a bit like assertions, but at the start of your test), which could be used to check for some precondition that affects the validity of your test. Searching for "NUnit Assume" brought up the following StackOverflow question, which looks similar to this one: How do I ignore a test based on another test in NUnit?
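For illustration, here is a minimal sketch of a precondition guard using Assume (OrderingIsCorrect, data, and processor are hypothetical stand-ins for whatever your ordering test actually checks):
[Test]
public void ProcessingProducesExpectedResult()
{
    // If the precondition does not hold, NUnit marks this test as
    // Inconclusive rather than Failed, so it won't fail spuriously.
    Assume.That(OrderingIsCorrect(data), "Ordering precondition not met");

    var result = processor.Process(data);

    Assert.That(result, Is.Not.Null);
}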

I believe a hierarchy in unit tests is a bad idea. As you said, your unit tests are small, nice, and independent, and they should stay that way. Adding dependencies between them breaks that.
To offer a workaround, I would suggest using the Category attribute combined with the /stoponerror option. You could group tests by importance and run them by category, and the /stoponerror option will stop executing the remaining tests when the first one fails. I'm not sure whether this also works from within Visual Studio, but you can achieve it with the NUnit command-line tool. See this
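As a rough sketch (the category names and the GetProcessedItems helper are invented for illustration):
[TestFixture]
public class ProcessingTests
{
    [Test, Category("Ordering")]
    public void ItemsAreOrdered()
    {
        // The "gatekeeper" test lives in its own category.
        Assert.That(GetProcessedItems(), Is.Ordered);
    }

    [Test, Category("Processing")]
    public void ItemsAreProcessedCorrectly()
    {
        // Assumes the ordering already holds.
        Assert.That(GetProcessedItems(), Is.Not.Empty);
    }
}
You could then run the critical category first with something like nunit-console.exe YourTests.dll /include:Ordering /stoponerror (the exact switch names vary by NUnit version), and only run the remaining categories if that pass succeeds.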

Related

Can I force a unit test to fail if there are no Asserts?

Some programmers on my team sometimes write unit tests which call the method and get the result, but forget to call the proper Assert methods to actually check what's happening.
I was wondering if there is any configuration I can do to force MSTest to fail the test if no verification is done. I remember seeing something like this in DUnit, but could not find it in Visual Studio.
Check out Test-Lint from Roy Osherove and co. It's static code analysis for test code.
I tried it out once when it was out for public alpha/beta and it was pretty well-behaved, although I didn't try it for this specific need. I don't think MSTest or most unit testing frameworks will guard against this out-of-the-box.
Also, aquinas has a valid comment: education might work better than inspection-and-the-stick. You may even be able to create a custom rule to catch rogue asserts. Check the tool out.
From the home page of the tool:
What issues does it detect?
Currently Test Lint finds a set of common problems:
* Missing asserts in your tests
I have not heard of such a feature. Unit testing without Asserts simply tests that no errors are thrown, which would typically pass every time.
I am rather surprised that your programmers are actually writing tests without Asserts; this seems very unprofessional. I would suggest pointing them towards a few online courses on Test Driven Development, where typically you write a test that fails first and then make the code changes to make it pass (in which case Assert.IsTrue(true) wouldn't even begin to make sense).
Also provide the template:
[Test]
public void TestCase()
{
    // Setup
    // Run test
    // Process results
    // Assert
}
I would highly suggest purchasing this screencast:
http://tekpub.com/productions/ft_tdd_wilson
It provides a good idea on how to write unit tests and how to properly follow TDD.

Unit testing: Is it a good practice to have assertions in setup methods?

In unit testing, the setup method is used to create the objects needed for testing.
In those setup methods, I like using assertions: I know what values I want to see in those objects, and I like to document that knowledge via an assertion.
In a recent post on unit tests calling other unit tests here on stackoverflow, the general feeling seems to be that unit tests should not call other tests:
The answer to that question seems to be that you should refactor your setup, so that test cases do not depend on each other.
But there isn't much difference between a "setup-with-asserts" and a unit test calling other unit tests.
Hence my question: Is it good practice to have assertions in setup methods?
EDIT:
The answer turns out to be: this is not a good practice in general. If the setup results need to be tested, it is recommended to add a separate test method with the assertions (the answer I ticked); for documenting intent, consider using Java asserts.
Instead of assertions in the setup to check the result, I used a simple test (a test method alongside the others, but positioned as the first test method).
I have seen several advantages:
The setup stays short and focused, for readability.
The assertions are run only once, which is more efficient.
Usage and discussion:
For example, I name the method testSetup().
In practice, when I have several test errors in that class, I know that if testSetup() has an error, I don't need to bother with the other errors; I need to fix this one first.
If someone is bothered by this, and wants to make this dependency explicit, the testSetup() could be called in the setup() method. But I don't think it matters. My point is that, in JUnit, you can already have something similar in the rest of your tests:
some tests that test local code,
and some tests that call more global code, which indirectly calls the same code as the previous tests.
When you read the test results where both fail, you already have to take care of this dependency, which is not in the tests but in the code being called. You have to fix the simple test first, and then rerun the global test to see if it still fails.
This is the reason why I'm not bothered by the implicit dependency I explained before.
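The discussion above is in JUnit terms, but the idea carries over directly; a minimal NUnit-flavoured sketch (BuildFixtureData is a hypothetical helper shared by the other tests in the class):
[Test]
public void TestSetup()
{
    // Positioned first in the fixture; it exists purely to verify that the
    // shared data the other tests rely on was built as expected.
    var data = BuildFixtureData();
    Assert.That(data.Count, Is.EqualTo(3));
}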
Having assertions in the Setup/TearDown methods is not advisable. It makes the test less readable if the user needs to "understand" that some of the test logic is not in the test method.
There are times when you have no choice but to use the setup/teardown methods for something other than what they were intended for.
There is a bigger issue in this question: a test that calls another test is a smell for some problem in your tests.
Each test should test a specific aspect of your code and should only have one or two assertions in it, so if your test calls another test you might be testing too many things in that test.
For more information read: Unit Testing: One Test, One Assertion - Why It Works
They're different scenarios; I don't see the similarity.
Setup methods should contain code that is common to (ideally) all tests in a fixture. As such, there's nothing inherently wrong with putting asserts in a test setup method if certain things must be true before the rest of the test code executes. The setup is an extension of the test; it is part of the test as a whole. If the assert trips, people will discover which pre-requisite failed.
On the other hand, if the setup is complicated enough that you feel the need to assert it is correct, it may be a warning sign. Furthermore, if all tests do not require the setup's full output, then it is a sign that the fixture has poor cohesion and should be split up based on scenarios and/or refactored.
It's partly because of this that I tend to stay away from using Setup methods. Where possible, I use private factory methods or similar to set things up. It makes the test more readable and avoids confusion. Sometimes this is not practical (e.g. working with tightly coupled classes and/or when writing integration tests), but for the majority of my tests it does the job.
Follow your heart / Blink decisions. Asserts within a Setup method can document intent and improve readability. So personally I'd back you up on this.
It is different from a test calling other tests - which is bad. No test isolation. A test should not influence the outcome of another test.
Although it is not a frequent use case, I sometimes use Asserts inside a Setup method so that I can know if the test setup has not taken place as I intended it to; usually when I'm dealing with components that I didn't write myself. An assertion failure which reads 'Setup failed!' in the errors tab quickly helps me zone in on the setup code instead of having to look at a bunch of failed tests.
A Setup failure usually should cause all tests in that fixture to fail, which is a smell your nose should soon pick up: 'all tests failing usually implies the Setup broke.' So assertions are not always needed. That said, be pragmatic: look at your specific context and 'add to taste.'
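In NUnit terms, that might look something like the following sketch (ThirdPartyEnvironment is a made-up stand-in for a component I didn't write):
[SetUp]
public void SetUp()
{
    _environment = ThirdPartyEnvironment.Create();

    // If this trips, every test in the fixture reports the same message,
    // pointing straight at the setup code rather than at the tests themselves.
    Assert.That(_environment.IsReady, "Setup failed!");
}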
I use Java asserts, rather than JUnit ones, in cases where something like this is necessary, e.g. when you use some other utility class to set up test data:
// Sanity-check the packet built by the shared factory (plain Java asserts require the JVM's -ea flag).
byte[] pkt = pktFactory.makePacket(TIME, 12, "23, F2");
assert pkt.length == 15;
Failing has the implication 'system is not in a state to even try to run this test'.

Should failing tests make the continuous build fail?

If one has a project with tests that are executed as part of the build procedure on a build machine, and a set of those tests fails, should the entire build fail?
What are the things one should consider when answering that question? Does it matter which tests are failing?
Background information that prompted this question:
Currently I am working on a project that has NUnit tests that are run as part of the build procedure and are executed on our CruiseControl.NET build machine.
The project used to be set up so that if any tests fail, the build fails. The reasoning being: if the tests fail, the product is not working/not complete/a failure of the project, and hence the build should fail.
We have added some tests that, although they fail, are not crucial to the project (see below for more details). So if those tests fail, the project is not a complete failure, and we would still want it to build.
One of the tests that passes verifies that incorrect arguments result in an exception, but the test that does not pass is the one that checks that all the allowed arguments do not result in an exception. So the class rejects all invalid cases, but also some valid ones. This is not a problem for the project, since the rejected valid arguments are fringe cases, on which the application will not rely.
If it's in any way doable, then do it. It greatly reduces the broken-window-problem:
In a system with no (visible) flaws, introducing a small flaw is usually seen as a very bad idea. So if you've got a project with a green status (no unit test fails) and you introduce the first failing test, then you (and/or your peers) will be motivated to fix the problem.
If, on the other hand, there are known-failing tests, then adding just another broken test is seen as keeping the status quo.
Therefore you should always strive to keep all tests running (and not just "most of them"). And treating every single failing test as reason for failing the build goes a long way towards that goal.
If a unit test fails, some code is not behaving as expected. So the code shouldn't be released.
Although you can still make the build for testing/bug-fixing purposes.
If you felt that a case was important enough to write a test for, then if that test is failing, the software is failing. Based on that alone, yes, it should consider the build a failure and not continue. If you don't use that reasoning, then who decides what tests are not important? Where is the line between "if this fails it's ok, but if that fails it's not"? Failure is failure.
I think a nice setup like yours should always build successfully, with all unit tests passing.
Like Gamecat said, the build itself succeeded, but this code should never go to production.
Imagine one of your team members introducing a bug in the code which that one unit test (which always fails) covers. It won't be discovered by the test since you allow that one test to always fail.
In our team we have a simple policy: if all tests don't pass, we don't go to production with the code. This is also very simple for our project manager to understand.
In my opinion, it really depends on your unit tests...
if your unit tests are really UNIT tests (like they should be => "reference to endless books ;)" )
then the build should fail, because something is not behaving as it should...
but most often (unfortunately seen too often), in so many projects these unit tests only cover some 'edges' and/or are integration tests, and then the build should go on
(yes, this is a subjective answer ;)
in short:
do you know the unit tests to be fine: fail;
else: build on
The real problem is with your failing tests. You should not have a unit test where it's OK to fail because it's an edge case. Decide whether the edge case is important or not - if not then delete the unit test; if yes then fix the code.
Like some of the other answers implied, it's definitely a code smell when unit tests fail. If you live with the smell, you're less likely to spot the next problem.
All the answers have been great; here is what I decided to do:
Make the tests that are not crucial (or, if need be, split a failing test) be ignored by NUnit (I remembered this feature after asking the question). This allows:
The build can fail if any tests fail, hence reducing the smelliness
The tests that are ignored have to be defended to the project manager (whoever is in charge)
Any tests that are ignored are marked in a special way
I think that is the best compromise: forcing people to fix the tests, but not necessarily right away (they have to defend their decision not to fix it now, since everyone knows what they did).
What I actually did: fixed the broken tests.
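For reference, the ignore feature mentioned above is just an attribute; a minimal sketch (the test name and reason are placeholders):
[Test]
[Ignore("Fringe case: some valid arguments are rejected; deferred with the project manager's agreement")]
public void AllValidArgumentsAreAccepted()
{
    // Left in place so the gap stays visible: NUnit reports it as Ignored
    // (yellow) rather than Failed, so the build can stay green.
}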

What attributes should a good Unit-Test have?

A Unit-Test should
produce deterministic results
be independent
be valid
...
What other characteristics should a test also have?
Ah. My favorite subject :-) Where to start...
According to xUnit Test Patterns by Gerard Meszaros (THE book to read about unit testing):
Tests should reduce risk, not introduce it.
Tests should be easy to run.
Tests should be easy to maintain as the system evolves around them.
Some things to make this easier:
Tests should only fail for one reason. (Tests should only test one thing; avoid multiple asserts, for example.)
There should only be one test that fails for that reason. (This keeps your test base maintainable.)
Minimize test dependencies (no dependencies on databases, files, UI, etc.).
Other things to look at:
Naming
Have a descriptive name. Test names should read like specifications. If your names get too long, you're probably testing too much.
Structure
Use the AAA structure. This is the new fad for mocking frameworks, but I think it's a good way to structure all your tests.
Arrange your context
Act, do the things that need to be tested
Assert, assert what you want to check
I usually divide my tests into three blocks of code. Knowing this pattern makes tests more readable.
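For example, a tiny test laid out in the three AAA blocks (the Account class is invented for illustration):
[Test]
public void Withdraw_ReducesBalance()
{
    // Arrange
    var account = new Account(100);

    // Act
    account.Withdraw(30);

    // Assert
    Assert.That(account.Balance, Is.EqualTo(70));
}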
Mocks vs. Stubs
When using a mocking framework, always try to use stubs and state-based testing before resorting to mocking.
Stubs are objects that stand in for dependencies of the object you're trying to test. You can program behaviour into them, and they can get called in your tests. Mocks expand on that by letting you assert whether they were called and how. Mocking is very powerful, but it lets you test implementation instead of the pre- and post-conditions of your code. This tends to make tests more brittle.
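As a hand-rolled sketch of the difference, with no particular mocking framework assumed (IMailSender, RegistrationService and the rest are invented names): a stub would only feed canned data into the code under test, while the mock below also records the interaction so the test can assert on it.
public interface IMailSender
{
    void Send(string to, string body);
}

// Records how it was called so the test can verify the interaction afterwards.
public class MailSenderMock : IMailSender
{
    public int SendCount;
    public string LastRecipient;

    public void Send(string to, string body)
    {
        SendCount++;
        LastRecipient = to;
    }
}

[Test]
public void Register_SendsExactlyOneWelcomeMail()
{
    var mail = new MailSenderMock();
    var service = new RegistrationService(mail);

    service.Register("bob@example.com");

    Assert.That(mail.SendCount, Is.EqualTo(1));
    Assert.That(mail.LastRecipient, Is.EqualTo("bob@example.com"));
}
State-based testing with a stub would instead assert on the observable result of Register, which is usually the less brittle option.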
The Pragmatic Programmers' answer: good tests shall be A-TRIP
Automatic
Thorough
Repeatable
Independent
Professional
not access external resources
be readable
Automatable: no manual intervention should be required to run the tests (CI).
Complete: they must cover as much code as they can (Code Coverage).
Reusable: no need to create tests that will only be executed once.
Independent: the execution of one test should not affect another.
Professional: tests should be considered with the same value as the code, the same professionalism, documentation, etc.
One that I haven't seen anyone else mention is small. A unit test should test for one particular thing and that's all. I try to aim to have only one assert and minimize the amount of setup code by refactoring them out into their own methods. I'll also create my own custom asserts. A nice small unit test IMO is about 10 lines or less. When a test is small it is easy to get a quick understanding of what the test is trying to do. Large tests end up being unmaintainable in the long run.
Of course, small isn't the only thing I aim for... it's just one of the things I value in a unit test. :-)
Another factor to keep in mind is running time. If a test runs too long, it is likely to be skipped.
Must be fully automatic.
Must not assume any preconditions (product X is installed, a file is at location Y, etc.).
Must be person-independent as far as running the scripts is concerned. Results can, however, be analysed by subject experts only.
Must run on every beta build.
Must produce a verifiable report.
A unit test should be fast: hundreds of tests should be able to run in a few seconds.
A test is not a unit test if:
It talks to the database
It communicates across the network
It touches the file system
It can't run at the same time as any of your other unit tests
You have to do special things to your environment (such as editing config files) to run it.
Tests that do these things aren't bad. Often they are worth writing, and they can be written in a unit test harness. However, it is important to be able to separate them from true unit tests so that we can keep a set of tests that we can run fast whenever we make our changes.
source: A Set of Unit Testing Rules

Testing a test?

I primarily spend my time working on automated tests of win32 and .NET applications, which take about 30% of our time to write and 70% to maintain. We have been looking into methods of reducing the maintenance times, and have already moved to a reusable test library that covers most of the key components of our software. In addition we have some work in progress to get our library to a state where we can use keyword based testing.
I have been considering unit testing our test library, but I'm wondering if it would be worth the time. I'm a strong proponent of unit testing of software, but I'm not sure how to handle test code.
Do you think automated GUI testing libraries should be unit tested? Or is it just a waste of time?
First of all, I've found it very useful to look at unit tests as "executable specifications" instead of tests. I write down what I want my code to do and then implement it. Most of the benefit I get from writing unit tests is that they drive the implementation process and focus my thinking. The fact that they're reusable to test my code is almost a happy coincidence.
Testing tests seems just a way to move the problem instead of solving it. Who is going to test the tests that test the tests? The 'trick' that TDD uses to make sure tests are actually useful is by making them fail first. This might be something you can use here too. Write the test, see it fail, then fix the code.
I don't think you should unit test your unit tests.
But if you have written your own testing library, with custom assertions, keyboard controllers, button testers or whatever, then yes. You should write unit tests to verify that they all work as intended.
The NUnit library is unit tested, for example.
In theory, it is software and thus should be unit-tested. If you are rolling your own Unit Testing library, especially, you'll want to unit test it as you go.
However, the actual unit tests for your primary software system should never grow large enough to need unit testing. If they are so complex that they need unit testing, you need some serious refactoring of your software and some attention to simplifying your unit tests.
You might want to take a look at Who tests the tests.
The short answer is that the code tests the tests, and the tests test the code.
Huh?
Testing Atomic Clocks
Let me start with an analogy. Suppose you are travelling with an atomic clock. How would you know that the clock is calibrated correctly?
One way is to ask your neighbor with an atomic clock (because everyone carries one around) and compare the two. If they both report the same time, then you have a high degree of confidence they are both correct.
If they are different, then you know one or the other is wrong.
So in this situation, if the only question you are asking is, "Is my clock giving the correct time?", then do you really need a third clock to test the second clock and a fourth clock to test the third? Not at all. Stack overflow avoided!
IMPO: it's a tradeoff between how much time you have and how much quality you'd like to have.
If I were using a home-made test harness, I'd test it if time permitted.
If it's a third party tool I'm using, I'd expect the supplier to have tested it.
There really isn't a reason why you couldn't or shouldn't unit test your library. Some parts might be too hard to unit test properly, but most of it can probably be unit tested with no particular problem.
It's actually probably particularly beneficial to unit test this kind of code, since you expect it to be both reliable and reusable.
The tests test the code, and the code tests the tests. When you say the same intention in two different ways (once in tests and once in code), the probability of both of them being wrong is very low (unless already the requirements were wrong). This can be compared to the dual entry bookkeeping used by accountants. See http://butunclebob.com/ArticleS.UncleBob.TheSensitivityProblem
Recently there has been discussion about this same issue in the comments of http://blog.objectmentor.com/articles/2009/01/31/quality-doesnt-matter-that-much-jeff-and-joel
About your question of whether GUI testing libraries should be tested... If I understood right, you are making your own testing library, and you want to know if you should test it. Yes. To be able to rely on the library to report tests correctly, you should have tests which make sure that the library does not report any false positives or false negatives. Regardless of whether the tests are unit tests, integration tests or acceptance tests, there should be at least some tests.
Usually writing unit tests after the code has been written is too late, because then the code tends to be more coupled. The unit tests force the code to be more decoupled, because otherwise small units (a class or a closely related group of classes) can not be tested in isolation.
When the code has already been written, then usually you can add only integration tests and acceptance tests. They will be run with the whole system running, so you can make sure that the features work right, but covering every corner case and execution path is harder than with unit tests.
We generally use these rules of thumb:
1) All product code has both unit tests (arranged to correspond closely with product code classes and functions), and separate functional tests (arranged by user-visible features)
2) Do not write tests for 3rd party code, such as .NET controls, or third party libraries. The exception to this is if you know they contain a bug which you are working around. A regression test for this (which fails when the 3rd party bug disappears) will alert you when upgrades to your 3rd party libraries fix the bug, meaning you can then remove your workarounds.
3) Unit tests and functional tests are not, themselves, ever directly tested - APART from using the TDD procedure of writing the test before the product code, then running the test to watch it fail. If you don't do this, you will be amazed by how easy it is to accidentally write tests which always pass. Ideally, you would then implement your product code one step at a time, and run the tests after each change, in order to see every single assertion in your test fail, then get implemented and start passing. Then you will see the next assertion fail. In this way, your tests DO get tested, but only while the product code is being written.
4) If we factor out code from our unit or functional tests - creating a testing library which is used in many tests, then we do unit test all of this.
This has served us very well. We seem to have always stuck to these rules 100%, and we are very happy with our arrangement.
Kent Beck's book "Test-Driven Development: By Example" has an example of test-driven development of a unit test framework, so it's certainly possible to test your tests.
I haven't worked with GUIs or .NET, but what concerns do you have about your unit tests?
Are you worried that it may describe the target code as incorrect when it is functioning properly? I suppose this is a possibility, but you'd probably be able to detect that if this was happening.
Or are you concerned that it may describe the target code as functioning properly even if it isn't? If you're worried about that, then mutation testing may be what you're after. Mutation testing changes parts of the code being tested to see if those changes cause any tests to fail. If they don't, then either the code isn't being run, or the results of that code aren't being tested.
If mutation testing software isn't available on your system, then you could do the mutation manually, by sabotaging the target code yourself and seeing if it causes the unit tests to fail.
If you're building a suite of unit testing products that aren't tied to a particular application, then maybe you should build a trivial application that you can run your test software on and ensure it gets the failures and successes expected.
One problem with mutation testing is that it doesn't ensure that the tests cover all potential scenarios a program may encounter. Instead, it only ensures that the scenarios anticipated by the target code are all tested.
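A tiny illustration of doing the mutation by hand (IsEven and the test are made-up examples):
// Production code under test.
public static bool IsEven(int n)
{
    return n % 2 == 0;
    // Manual mutation: change the condition above to n % 2 != 0 and re-run
    // the suite. If nothing fails, IsEven is either not exercised by any
    // test or its result is never actually asserted on.
}

[Test]
public void IsEven_ClassifiesNumbersCorrectly()
{
    Assert.That(IsEven(4), Is.True);
    Assert.That(IsEven(5), Is.False);
}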
Answer
Yes, your GUI testing libraries should be tested.
For example, if your library provides a Check method to verify the contents of a grid against a 2-dimensional array, you want to be sure that it works as intended.
Otherwise, your more complex test cases that test business processes in which a grid must receive particular data may be unreliable. If an error in your Check method produces false negatives, you'll quickly find the problem. However, if it produces false positives, you're in for major headaches down the line.
To test your CheckGrid method:
Populate a grid with known values
Call the CheckGrid method with the values populated
If this case passes, at least one aspect of CheckGrid works.
For the second case, where CheckGrid is called with values that don't match the grid, you're expecting the CheckGrid method to report a test failure.
The particulars of how you indicate the expectation will depend on your xUnit framework (see an example later). But basically, if the Test Failure is not reported by CheckGrid, then the test case itself must fail.
Finally, you may want a few more test cases for special conditions, such as empty grids or a grid size that doesn't match the array size.
You should be able to modify the following DUnit example for most frameworks in order to test that CheckGrid correctly detects errors:
begin
  // Populate TheGrid
  try
    CheckGrid(<incorrect values>, TheGrid);
    LFlagTestFailure := False;
  except
    on E: ETestFailure do
      LFlagTestFailure := True;
  end;
  Check(LFlagTestFailure, 'CheckGrid method did not detect errors in grid content');
end;
Let me reiterate: your GUI testing libraries should be tested; and the trick is - how do you do so effectively?
The TDD process recommends that you first figure out how you intend to test a new piece of functionality before you actually implement it. The reason is that if you don't, you often find yourself scratching your head as to how you're going to verify it works. It is extremely difficult to retrofit test cases onto existing implementations.
Side Note
One thing you said bothers me a little... you said it takes "70% (of your time) to maintain (your tests)"
This sounds a little wrong to me, because ideally your tests should be simple, and should themselves only need to change if your interfaces or rules change.
I may have misunderstood you, but I got the impression that you don't write "production" code. Otherwise you should have more control over the cycle of switching between test code and production code so as to reduce your problem.
Some suggestions:
Watch out for non-deterministic values. For example, dates and artificial keys can play havoc with certain tests. You need a clear strategy of how you'll resolve this. (Another answer on its own.)
You'll need to work closely with the "production developers" to ensure that aspects of the interfaces you're testing can stabilise. I.e. They need to be cognisant of how your tests identify and interact with GUI components so they don't arbitrarily break your tests with changes that "don't affect them".
On the previous point, it would help if automated tests are run whenever they make changes.
You should also be wary of too many tests that simply boil down to arbitrary permutations. For example, if each customer has a category A, B, C, or D; then 4 "New Customer" tests (1 for each category) gives you 3 extra tests that don't really tell you much more than the first one, and are 'hard' to maintain.
Personally, I don't unit test my automation libraries; I run them against a modified version of the baseline to ensure all the checkpoints work. The principle here is that my automation is primarily for regression testing, i.e. that the results for the current run are the same as the expected results (typically this equates to the results of the last run). By running the tests against a suitably modified set of expected results, all the tests should fail. If they don't, you have a bug in your test suite. This is a concept borrowed from mutation testing that I find works well for checking GUI automation suites.
From your question, I understand that you are building a keyword-driven framework for performing automation testing. In this case, it is always recommended to do some white-box testing on the common and GUI utility functions. Since you are interested in unit testing each piece of GUI testing functionality in your libraries, please go for it. Testing is always good. It is not a waste of time; I would see it as a 'value-add' to your framework.
You have also mentioned handling test code; if you mean the test approach, please group the functions/modules performing similar work, e.g. GUI element validation (presence), GUI element input, GUI element read. Group by element type and apply a unit test approach for each group. It would be easier for you to track the testing. Cheers!
I would suggest that testing the tests is a good idea and something that must be done. Just make sure that what you're building to test your app is not more complex than the app itself. As was said before, TDD is a good approach even when building automated functional tests (I personally wouldn't do it like that, but it is a good approach anyway). Unit testing your test code is a good approach as well. IMHO, if you're automating GUI testing, just go ahead with whatever manual tests are available (you should have steps, raw scenarios, expected results and so on) and make sure they pass. Then, for other tests that you might create and that are not already manually scripted, unit test them and follow a TDD approach (then, if you have time, you could unit test the other ones).
Finally, keyword-driven is, IMO, the best approach you could follow, because it gives you the most flexibility.
You may want to explore a mutation testing framework (if you work with Java, check out PIT Mutation Testing). Another way to assess the quality of your unit testing is to look at reports provided by tools such as SonarQube; the reports include various coverage metrics.