Mark unit test as an expected failure in JUnit4

Is there an extension for JUnit4 which allows for marking some tests as "expected to fail"?
I would like to mark tests for features currently under development with some tag, for instance @wip. For these tests I would like to ensure that they are failing.
My acceptance criteria:
Scenario: A successful test tagged @wip is recorded as failure
Given a successful test marked @wip
When the test is executed
Then the test is recorded as failure.
Scenario: A failing test tagged @wip is recorded as fine
Given a failing test tagged @wip
When the test is executed
Then the test is recorded as fine.
Scenario: A successful test not tagged @wip is recorded as fine
Given a successful test not tagged @wip
When the test is executed
Then the test is recorded as successful.
Scenario: A failing test not tagged with @wip is recorded as failure
Given a failing test not tagged with @wip
When the test is executed
Then the test is recorded as failure.

Short answer: no, as far as I know no extension will do that, and in my opinion it would defeat the whole purpose of JUnit if it existed.
Longer answer: red/green is kind of sacred, and circumventing it shouldn't become a habit. What if you accidentally forgot to remove the circumvention and assumed that all tests passed?
You could make it expect an AssertionError or Exception.
@wip
@Test(expected = AssertionError.class)
public void wipTest() {
    fail("work in progress");
}
Making a shortcut in your IDE for that shouldn't be too hard. Of course I was assuming you tag the test with an annotation in the source code.
In my opinion what you are asking is against JUnit's purpose, but I do understand the use for it.
An alternative would be to implement a WIPRunner that recognizes a @WIP annotation and somehow accepts failures of tests carrying that annotation.
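A minimal sketch of that idea, using a JUnit 4 TestRule instead of a full Runner (the @WIP annotation and WipRule class are hypothetical names, and Description.getAnnotation requires a reasonably recent JUnit 4 version):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

// Hypothetical marker annotation for work-in-progress tests.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface WIP {
}

// Inverts the outcome of @WIP tests: a failing @WIP test is recorded as fine,
// a passing @WIP test is recorded as a failure. Untagged tests run normally.
class WipRule implements TestRule {
    @Override
    public Statement apply(final Statement base, Description description) {
        if (description.getAnnotation(WIP.class) == null) {
            return base;
        }
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                try {
                    base.evaluate();
                } catch (Throwable expected) {
                    return; // the @WIP test failed, which is what we want
                }
                throw new AssertionError("@WIP test passed but was expected to fail");
            }
        };
    }
}

Each test class would then declare the rule with @Rule public WipRule wip = new WipRule(); and tag unfinished tests with @WIP.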
If you are integrating with a BDD framework, I would suggest a way to let it run the unit tests you marked @wip separately and decide within your BDD methods whether the result is ok.

The @Ignore annotation simply tells JUnit to skip the test, so you never get a result to check.

Related

Is it possible to extract mutation testing results for every test method with Pit Mutation Test

I know that the PIT Mutation Testing framework can export mutation coverage information based on the test suite or the test class. However, I was wondering whether there is an option to extract or export mutation coverage information per test method (the test cases under the @Test annotation), so that I can see which test cases are written well and which are not. If that is not possible, the simplest solution that comes to my mind is commenting out all the test methods, uncommenting them one at a time, running the suite, and exporting the information, but I wanted to know if there is a more elegant solution.
Note: I know that MuJava provides such information.
This can be done with the (badly/un)documented matrix feature.
Assuming you're using Maven, you'll need to add
<fullMutationMatrix>true</fullMutationMatrix>
<outputFormats>
    <param>XML</param>
</outputFormats>
to your pom.
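For orientation, those elements go inside the pitest-maven plugin's <configuration> section; a minimal sketch, with an illustrative version number:

<plugin>
    <groupId>org.pitest</groupId>
    <artifactId>pitest-maven</artifactId>
    <version>1.15.0</version>
    <configuration>
        <fullMutationMatrix>true</fullMutationMatrix>
        <outputFormats>
            <param>XML</param>
        </outputFormats>
    </configuration>
</plugin>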
The XML output will then contain pipe-separated test names in the killing test nodes.
<killingTests>foo|foo2</killingTests>
<succeedingTests>bar</succeedingTests>

How to Force a Google Test Case to Run Last

Our team has a very mature suite of Google Test (GTest) test cases. The test cases, via a custom test environment, build up a test report in addition to the standard JUnit XML output that GTest produces on its own.
I would like to add one final test that ensures that the Google Test suite produced its test report after all other tests in the suite execute. In other words, I would like to force which test executes last so it can write the custom output and then verify that it was properly written, failing if it was not.
The solution should work even if Google Test is executing tests in random order. Can I force one test to run last? Or can I write a test that GTest won't automatically discover, call it myself from my "main", and have its results rolled into the rest of them?
I see no way to do this with the current GTest API, but thought it was worth asking.
This is probably closest to what you're looking for.
https://github.com/google/googletest/blob/master/docs/advanced.md#sharing-resources-between-tests-in-the-same-test-suite
Perhaps you can use the destruction of the static object to collect information about all tests that ran.
However, beware of forks.
I would really write your own main(), fork the test process, and wait for the child to finish so you can collect data from it.

Can someone give a concrete example of a unit test adding value when integration tests already exist?

Let's assume we are not doing TDD (for which unit tests are obviously part and parcel), and have integration tests for all the use cases.
The integration tests assume a certain input and validate that the output is as expected.
My thinking is that adding a unit test for a method that is traversed in an integration test, using the same data as would exist in the method in the integration test, would not expose any additional bugs.
That would lead to the conclusion that, provided you have sufficient integration tests, you do not then need to unit test the same code.
So, can someone give a concrete example where a unit test could expose a bug in the above scenario?
Integration tests can be seen as a form of Acceptance Testing. They ensure that the software is doing what it is supposed to be doing.
Unit tests, on the other hand, aren't particularly useful for customers. A customer is not concerned that InitializeServerConnection is failing, but they are concerned that they're unable to send internal messages to their co-workers as a result.
So what are unit tests good for? They are a development tool, full stop. A unit test verifies that a cog in the machine is working properly. And if it is not, it is very easy to see it failing.
Arialdo Martini offers a great explanation:
Oversimplifying, a software system can be seen as a network of cooperating modules. Since they cooperate, some of them depend on others.
[...]
With integration and end-to-end tests you would be able to find all the broken features.
Yet, this is not of any help in guessing where the bug is. The same system, with the same bug, would also produce unit test failures that point at the faulty module.
So, even though a unit test doesn't add any business value, it does add value in the form of reducing the amount of time spent manually testing, debugging, and sifting through code looking for the root cause of an issue.
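To make that concrete with a small, invented example (the classes and tests below are illustrative, not taken from the question): suppose a total price is computed by one module and formatted by another. The integration-style test only tells you the price is wrong somewhere; the unit tests point at the cog that broke.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Two trivial "cogs", invented for illustration.
class PriceCalculator {
    int totalCents(int unitCents, int quantity) {
        return unitCents * quantity;
    }
}

class PriceFormatter {
    String format(int cents) {
        return String.format("%d.%02d", cents / 100, cents % 100);
    }
}

public class PricingTest {

    // Integration-style test: exercises both cogs together.
    @Test
    public void formattedTotalIsCorrect() {
        int total = new PriceCalculator().totalCents(250, 3);
        assertEquals("7.50", new PriceFormatter().format(total));
    }

    // Unit tests: one per cog. A failure here points straight at the broken module.
    @Test
    public void calculatorMultipliesCorrectly() {
        assertEquals(750, new PriceCalculator().totalCents(250, 3));
    }

    @Test
    public void formatterPadsSingleDigitCents() {
        assertEquals("0.05", new PriceFormatter().format(5));
    }
}

If a bug slips into PriceFormatter, both the integration test and formatterPadsSingleDigitCents fail, but only the unit test names the faulty cog directly.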

Why does testing an individual JUnit test work, while testing them together doesn't?

The test that fails when tested together with mvn test (or through the IDE) is called EmpiricalTest.
If I test the file alone it passes, but not otherwise. Why could that be?
You can check out the Maven source code (to test) from here.
This is how I make sure the database is 'blank' before each test:
abstract public class PersistenceTest {
    @Before
    public void setUp() {
        db.destroy();
        assertIsEmpty(MUser.class);
        assertIsEmpty(Meaning.class);
        assertIsEmpty(Expression.class);
    }

    private <Entity> void assertIsEmpty(final Class<Entity> entityClass) {
        final List<Entity> all = db.getAll(entityClass);
        Assert.assertTrue(all.isEmpty());
    }
}
and the test that fails:
public class EmpiricalTest extends PersistenceTest {
It has to do with the automatically assigned id. The PU creates a SEQUENCE table, and although I empty the database of my entities, I don't actually drop that table. So when I test EmpiricalTest alone the sequence starts from 1 as expected, while when the tests run together EmpiricalTest is executed later and starts with a higher, unexpected number.
This leads to this question.
This very much sounds as if there are dependencies between the tests. As far as I understand from looking at your test, you're accessing your data storage in the tests. Is there a chance that one of the tests doesn't properly clean up its traces, therefore causing others to fail?
Testing against a DB is usually not considered a unit test, though it is very useful. Those kinds of tests (you may call them integration tests) are, however, more difficult and time-consuming to write, because you have to pay a lot of attention to leaving the environment in the exact state it was in before the test.
Your problem is very common. In an ideal TDD world, each test should be executed in perfect isolation from the other tests. You violated the isolation and that's the problem.
However, there is no simple solution for the test isolation problem. The main reason is that SQL DDL doesn't support database creation/deletion, while automatically dropping tables is complicated due to possibly complex foreign key constraints.
In my experience the best idea is to execute tests within a transaction and roll back the data at the end of the test (just like Pascal suggested). The Spring test module provides great support for that.
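A minimal sketch of that transaction-per-test approach with the Spring TestContext framework and JPA; PersistenceConfig is a hypothetical Spring configuration class, and MUser is one of the entities from the question, used here only as an example:

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = PersistenceConfig.class) // hypothetical config class
@Transactional // each test runs in a transaction that is rolled back afterwards
public class EmpiricalTransactionalTest {

    @PersistenceContext
    private EntityManager em;

    @Test
    public void leavesNoTraceInTheDatabase() {
        em.persist(new MUser()); // insert some data for the test
        // ... assertions against em.find(...) or queries go here ...
        // the rollback at the end of the test removes the inserted row again
    }
}

Note that a rollback does not reset database sequences, so even with this approach tests should not assert on specific autogenerated id values.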
In case you cannot execute the test within transaction boundaries (like yours), you must make sure that each of your tests doesn't leave anything in the database (including foreign keys, constraints, sequences, etc.) and also that the tests are designed to be independent of each other (for example, don't depend on autogenerated id values, because the sequence may already have been advanced by previous tests).
You must debug your Maven test session, in test execution order, to check what is wrong with the assertion (I guess that you cannot tell that from the Surefire logs). Then fix the tests (both the failing one and the one which leaves the rubbish in the DB) so that they are isolated from each other.

Grails Testing hiccups

I have two testing questions. Both are probably easily answered. The first is that I wrote this unit test in Grails:
void testCount() {
    mockDomain(UserAccount)
    new UserAccount(firstName: "Ken").save()
    new UserAccount(firstName: "Bob").save()
    new UserAccount(firstName: "Dave").save()
    assertEquals(3, UserAccount.count())
}
For some reason, I get 0 back. Did I forget to do something?
EDIT: Oh, I understand. The validation constraints were violated, so the objects weren't saved. Is there any way to get some feedback here? That's a really crappy thing to have happen silently.
The second question is for those who use IDEA. What should I be running - IDEA's JUnit tests, or Grails targets? I have two options.
Also, why does IDEA say that my tests pass and show a green light even though the test above actually fails? This will really drive me nuts if I have to check the HTML test reports every time I run my tests.
Help?
I always do object.save(failOnError: true) in tests to avoid silent failures like this. This causes an exception to be thrown if validation fails. Even without a real database in a unit test, most of the constraints will be checked, although I prefer to use integration tests if I want to test complex relationships between domain objects.
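As a sketch, the test from the question with that change applied (assuming the mocked save() honors failOnError in your Grails version):

void testCount() {
    mockDomain(UserAccount)
    // failOnError: true makes save() throw a validation exception instead of
    // silently returning null when a constraint is violated
    new UserAccount(firstName: "Ken").save(failOnError: true)
    new UserAccount(firstName: "Bob").save(failOnError: true)
    new UserAccount(firstName: "Dave").save(failOnError: true)
    assertEquals(3, UserAccount.count())
}

That way the test blows up at the first invalid save, with a message telling you which constraint was violated, instead of silently counting 0.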
I personally haven't found IDEA's JUnit test runner to be particularly useful when working with Grails. It is likely fine to use it for unit tests. For integration tests you might consider setting up an Ant target in "debug" mode to run your tests.
Over time, running the tests starts to take so long that I tend to run them exclusively from the command line to avoid the additional overhead IntelliJ adds.
Regarding your unit test, I am pretty sure you would need to run an integration test to get a count that is not zero.
I'm not sure what unit test you're using exactly, but since GORM is not bootstrapped in unit tests, I'm not sure the domain object mocking supports incrementing the count.
Your test would likely pass as an integration test provided that your domain objects validate.
Add flush: true to your save call:
new UserAccount(firstName: "Ken").save(flush: true)
...
Grails sets the flush mode of the Hibernate session to manual, so changes are not persisted when the action returns, but the session stays open until the view is rendered. This allows views to access lazy-loaded collections and relationships while preventing changes from being persisted automatically.