How to use JaCoCo to record which methods or lines are covered by each unit test case or API test case?

I want to determine which lines are covered by each individual unit test case or API test case.
For example, there are two test cases, test1 and test2. After I run these two test cases, I want to know which lines are covered by test1 and which lines are covered by test2.
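One way to get this kind of per-test data (a sketch, not an official JaCoCo recipe; the class name and file paths below are placeholders) is to run the tests with the JaCoCo agent attached and dump and reset the agent's execution data after every test, for example from a JUnit 4 rule. This assumes the JVM is started with -javaagent:jacocoagent.jar and that the org.jacoco.agent.rt API is on the test classpath; each resulting .exec file then contains only the coverage produced by that one test and can be turned into its own report.

import java.io.FileOutputStream;

import org.jacoco.agent.rt.IAgent;
import org.jacoco.agent.rt.RT;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestWatcher;
import org.junit.runner.Description;

public class PerTestCoverageTest {

    // Dumps and resets the agent's execution data after every test,
    // so each .exec file contains only the lines covered by that test.
    @Rule
    public TestWatcher coverageDump = new TestWatcher() {
        @Override
        protected void finished(Description description) {
            try {
                IAgent agent = RT.getAgent(); // requires the JVM to run with the JaCoCo agent
                byte[] execData = agent.getExecutionData(true); // true = reset counters afterwards
                // Assumes a Maven-style target/ directory exists.
                String file = "target/jacoco-" + description.getMethodName() + ".exec";
                try (FileOutputStream out = new FileOutputStream(file)) {
                    out.write(execData);
                }
            } catch (Exception e) {
                throw new RuntimeException("Could not dump per-test coverage", e);
            }
        }
    };

    @Test
    public void test1() { /* ... exercise production code ... */ }

    @Test
    public void test2() { /* ... exercise production code ... */ }
}

Alternatively, the agent's setSessionId() could be called before each test and everything dumped once at the end, but separate .exec files per test are easier to feed into separate reports.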

Related

Is it possible to extract mutation testing results for every test method with Pit Mutation Test

I know that the PIT Mutation Test framework can export mutation coverage information based on the test suite or the test class. However, I was wondering if there is an option to extract or export mutation coverage information based on the individual test methods (the test cases under the @Test annotation), so that I can see which test cases are written well and which are not. If it is not possible, the simplest solution that comes to my mind is commenting out all the test methods, uncommenting only one of them, running it and exporting the information. I wanted to know if there is a more elegant solution.
Note: I know that MuJava provides such information.
This can be done with the (badly/un)documented full mutation matrix feature.
Assuming you're using Maven, you'll need to add
<fullMutationMatrix>true</fullMutationMatrix>
<outputFormats>
<param>XML</param>
</outputFormats>
to the pitest plugin's configuration section in your pom.xml.
The XML output will then contain pipe-separated test names in the killingTests nodes.
<killingTests>foo|foo2</killingTests>
<succeedingTests>bar</succeedingTests>
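For orientation, a minimal sketch of the surrounding plugin declaration (the version element is left as a placeholder):

<plugin>
  <groupId>org.pitest</groupId>
  <artifactId>pitest-maven</artifactId>
  <version><!-- your pitest version --></version>
  <configuration>
    <fullMutationMatrix>true</fullMutationMatrix>
    <outputFormats>
      <param>XML</param>
    </outputFormats>
  </configuration>
</plugin>

As far as I know, the full matrix is only written to the XML report, which is why XML is listed in outputFormats.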

Alert if not equal to the true value num_steps

I am not that good with the "unittest" topic. I'd like to create a unit test in order to say "Hey buddy, this is the wrong (or right) answer because blabla!". I need to place a unit test because it took three long weeks to find out why the prediction in the machine learning model did not work! Hence I want to avoid that type of error in the future.
Questions:
How can I ask the code to alert me when len(X) - len(pred_values) is not equal to num_step?
Do I need to create a unit test file to gather all the unit tests, e.g. unittest.py?
Do we need to place the unit tests away from the main code?
1.
The test code can alert you by means of an assertion. In your test, you can use self.assertEqual()
self.assertEqual(len(X) - len(pred_values), num_step)
2.
Yes, you would normally gather your TestCase classes in a module prefixed with test_. So if the code under test lives in a module called foo.py, you would place your tests in test_foo.py.
Within test_foo.py you can create multiple TestCase classes that group related tests together.
3.
It is a good idea to separate the tests from the main code, although not mandatory. Reasons why you might want to separate the tests include (as quoted from the docs):
The test module can be run standalone from the command line.
The test code can more easily be separated from shipped code.
There is less temptation to change test code to fit the code it tests without a good reason.
Test code should be modified much less frequently than the code it tests.
Tested code can be refactored more easily.
Tests for modules written in C must be in separate modules anyway, so why not be consistent?
If the testing strategy changes, there is no need to change the source code.
Lots more info in the official docs.
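To make points 1 and 2 concrete, a minimal sketch of such a test module might look like this (foo.py and its make_predictions function are invented here purely for illustration):

# test_foo.py -- lives next to foo.py (or in a separate tests/ directory)
import unittest

from foo import make_predictions  # hypothetical function under test


class PredictionLengthTest(unittest.TestCase):
    def test_prediction_length_matches_num_step(self):
        num_step = 5
        X = list(range(20))                      # dummy input data
        pred_values = make_predictions(X, num_step)
        # Fails loudly, printing both values, if the invariant is broken.
        self.assertEqual(len(X) - len(pred_values), num_step)


if __name__ == "__main__":
    unittest.main()

You can then run it with python -m unittest test_foo, or let test discovery pick up all test_*.py modules.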

iOS: Code Coverage Confusion

I have gone through the http://www.cocoanetics.com/2013/10/xcode-coverage/ link. Being new to unit testing, I would like to know how code coverage identifies the source code being covered?
My theoretical question:
In a model class (a subclass of NSObject) containing three methods M1, M2 and M3, we create an XCTestCase subclass with three unit test methods testM1, testM2 and testM3. Suppose we are able to run all three test methods and to generate the .gcda/.gcno files from code coverage.
My question is: how can one say from this code coverage that the model has more than 80% coverage? Is it necessary to write a unit test for each and every method in the model(s), and only then can we conclude that more than 80-90% of the code is covered? In short, I would like to know the correlation between unit test methods and code coverage.
Unit test methods call your methods (the units under test) with different scenarios in mind and with the intention of exercising all (important) code paths.
To see what is being covered, the app you build is compiled with instrumentation of the program flow, which is what produces the coverage files. Using this instrumentation, the tooling knows which code was actually run during the test runs and how many times. The coverage percentage is then simply the proportion of executable lines (or branches) that were executed at least once; it does not depend on how many test methods you wrote, only on which code those tests actually executed.

Yellow color in TDD

I just started to use TDD and it seems to be quite helpful in my case. The only thing that bothers me is that I do not see a way to mark a test as "to be implemented". While I develop an application I sometimes come up with new tests that it should pass in the future, once I am done with the current changes, so I write these tests and of course they fail because the functionality does not exist yet. But since I am not going to "fix" them until I finish the current part, I would like to see something like a yellow state (between red and green): red only for genuinely broken tests, and another color to mark TODO tests. Is there any practice that can help me here? I could keep such tests in some kind of list, but that would be double work, since I would state the same thing first in words and then again in code.
EDIT: I just found that there are TODO tests in Perl's standard unit testing framework; maybe there is something similar in Java?
The point of TDD is to write tests first and then keep coding until all your tests pass. Having yellow tests might help you organize yourself, but TDD loses some clarity with that.
Both MSTest and NUnit, for example, support an Inconclusive state, but it depends on your test runner whether these appear as yellow in the UI. JUnit may also have support for inconclusive results.
In Kent Beck's TDD by Example, he suggests writing a list of tests to write on a notepad aka a "test list". You then work on only one test at a time and progress through the list in an order that makes sense to you. It's a good approach because you might realize that some tests on your list might be unnecessary after you finish working on a test. In effect, you only write the tests (and code) that you need.
To do what you're suggesting, you would write your test list in code, naming the test methods accordingly with all the normal attributes, but the body for each test would be "Assert.Inconclusive()"
I've done this in the past, but my test body would be "Assert.Fail()". It's a matter of how you work -- I wouldn't check in my changes until all the tests passed. As asserting inconclusive is different than asserting failure, it can be used as a means to check code in to share changes without breaking the build for your entire team (depending on your build server configuration and agreed upon team process).
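In JUnit 4 terms, arguably the closest analogues are a failed assumption for "inconclusive" (most runners report it as skipped rather than failed) and a plain fail() for the deliberately red placeholder. A sketch, with invented method names:

import static org.junit.Assert.fail;

import org.junit.Assume;
import org.junit.Test;

public class PendingTests {

    // Closest JUnit 4 analogue of Assert.Inconclusive(): a failed assumption
    // makes most runners report the test as skipped instead of failed.
    @Test
    public void behaviourStillToBeSpecified() {
        Assume.assumeTrue(false); // not implemented yet
    }

    // Analogue of Assert.Fail(): keeps the placeholder red until implemented.
    @Test
    public void behaviourThatMustEventuallyPass() {
        fail("not implemented yet");
    }
}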
In JUnit you can use the @Ignore annotation on a test to have the framework skip it during the run. Presuming you have a test list, you could just place the entries in the test class as follows:
@Ignore
@Test
public void somethingShouldHappenUnderThisCircumstance() {}

@Ignore
@Test
public void somethingShouldHappenUnderThatCircumstance() {}
Of course, if you don't mark them as a test in the first place you won't need the ignore. IDEs such as IntelliJ will flag ignored tests so that they stand out better.
Perfectly alright - as long as you do not break your train of thought to go and implement these new tests. You could make a note of this on your "test list" (a piece of paper) OR write empty test stubs with good names and mark them with an @Ignore("Impl pending") annotation.
The NUnit GUI shows ignored tests as yellow, so there's a good chance that JUnit runners do the same. You'd need the corresponding @Ignore annotation to decorate your tests.

Mark unit test as an expected failure in JUnit4

Is there an extension for JUnit4 which allows for marking some tests as "expected to fail"?
I would like to mark the tests for features currently under development with some tag, for instance @wip. For these tests I would like to ensure that they are failing.
My acceptance criteria:
Scenario: A successful test tagged @wip is recorded as failure
Given a successful test marked @wip
When the test is executed
Then the test is recorded as failure.
Scenario: A failing test tagged @wip is recorded as fine
Given a failing test tagged @wip
When the test is executed
Then the test is recorded as fine.
Scenario: A successful test not tagged @wip is recorded as fine
Given a successful test not tagged @wip
When the test is executed
Then the test is recorded as successful.
Scenario: A failing test not tagged with @wip is recorded as failure
Given a failing test not tagged with @wip
When the test is executed
Then the test is recorded as failure.
Short answer: no extension will do that as far as I know, and in my opinion it would defeat the whole purpose of JUnit if it existed.
Longer answer: red/green is kind of sacred, and circumventing it shouldn't become a habit. What if you accidentally forgot to remove the circumvention and assumed that all tests had passed?
You could make it expect an AssertionError or Exception.
@wip
@Test(expected = AssertionError.class)
public void wipTest() {
    fail("work in progress");
}
Making a shortcut in your IDE for that shouldn't be too hard. Of course I was assuming you tag the test with an annotation in the source code.
In my opinion what you are asking is against JUnit's purpose, but I do understand the use for it.
An alternative would be to implement a custom WIPRunner that recognizes a WIP annotation and accepts failures of the tests that carry it (see the sketch below for the same idea expressed as a rule).
If you are integrating with a BDD framework, I would suggest letting it run the unit tests you marked @wip separately and deciding within your BDD methods whether the result is okay.
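As a rough illustration of that alternative (the @WIP annotation and the rule below are invented for this sketch, and a JUnit 4 TestRule is used instead of a full Runner), the outcome of tagged tests can be inverted so that the acceptance criteria above hold:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class WipExampleTest {

    // Marker for "work in progress" tests whose failure is expected.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface WIP {}

    // Inverts the outcome of @WIP tests: a failure is swallowed,
    // an unexpected pass is reported as a failure.
    public static class WipRule implements TestRule {
        @Override
        public Statement apply(final Statement base, final Description description) {
            if (description.getAnnotation(WIP.class) == null) {
                return base; // untagged tests run normally
            }
            return new Statement() {
                @Override
                public void evaluate() throws Throwable {
                    try {
                        base.evaluate();
                    } catch (Throwable expected) {
                        return; // failing @WIP test is recorded as fine
                    }
                    throw new AssertionError("@WIP test passed unexpectedly");
                }
            };
        }
    }

    @Rule
    public WipRule wip = new WipRule();

    @WIP
    @Test
    public void featureStillUnderDevelopment() {
        org.junit.Assert.fail("work in progress");
    }
}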
The @Ignore annotation says not to bother with the result.