JaCoCo 0% code coverage, "all 2 branches missed"

Hi, I am trying to run JaCoCo code coverage and keep getting "all 2 branches missed"; my code coverage stays at 0.0% no matter how many test cases I write for the code below.
public void someSchedulerJob() {
    if (true) {
        log.info("Run Job");
        try {
            // few method calls which need to run every day
        } catch (Exception exp) {
            log.error("Exception doing scheduler job " + exp);
        }
    } else {
        log.info("Disabled Scheduler Job");
    }
}
How can I get a 100% code coverage report for this method using JUnit?
Regards
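One way to make this method fully coverable: the hard-coded if (true) leaves the else branch unreachable, so no test can ever hit both branches; extracting the condition into state the test can control fixes that. A minimal sketch, assuming the true stands in for a configuration flag - SchedulerJob, jobEnabled and runDailyTasks are illustrative names, not from the original code:
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SchedulerJobTest {

    // Production class, reduced to the shape shown in the question.
    static class SchedulerJob {
        private static final Logger log = LoggerFactory.getLogger(SchedulerJob.class);
        private boolean jobEnabled = true;

        void setJobEnabled(boolean enabled) {
            this.jobEnabled = enabled;
        }

        void runDailyTasks() {
            // stand-in for the "few method calls which need to run every day"
        }

        public void someSchedulerJob() {
            if (jobEnabled) {              // was: if (true) - now testable
                log.info("Run Job");
                try {
                    runDailyTasks();
                } catch (Exception exp) {
                    log.error("Exception doing scheduler job " + exp);
                }
            } else {
                log.info("Disabled Scheduler Job");
            }
        }
    }

    @Test
    public void coversBothBranches() {
        SchedulerJob job = new SchedulerJob();
        job.someSchedulerJob();        // hits the enabled branch
        job.setJobEnabled(false);
        job.someSchedulerJob();        // hits the disabled branch
    }
}
If coverage still reads 0.0% after that, the tests are most likely not running under the JaCoCo agent at all (e.g. the jacoco-maven-plugin prepare-agent goal is missing), since even one executed test would move the number off zero.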

Related

SonarQube partially covered tests

The SonarQube test coverage report says that my C++ statements are only partially covered. A very simplified example of a function containing such a statement is below:
std::string test(int num) {
    return "abc";
}
My test is as follows:
TEST(TestFunc, Equal) {
    std::string res = test(0);
    EXPECT_EQ(res, "abc");
}
The SonarQube coverage report says that the return statement is only partially covered by tests (1 of 2 conditions). I am wondering what the other condition is that I need to test for.
I also saw the following in the report:
Condition to cover: 2
Uncovered Condition: 1
Condition Coverage: 50%
It seems like I need a test to cover the other condition, but I can't figure out what that is.
After more research, this is not a SonarQube problem. The post below (and the workaround it describes) most likely explains the root cause: the hidden branch comes from compiler-generated exception-handling code (constructing the std::string can throw), which gcov counts as a separate branch.
Related post: LCOV/GCOV branch coverage with C++ producing branches all over the place

Manually create a JUnit Result object

To execute automated UI tests, I trigger tests on an external cloud service which requires the upload of our test suite (for the purpose of this question please consider their approach a given).
I still want this process to be encapsulated into a JUnit runner to be consistent with runs utilising different cloud services or local execution. I execute my tests with Maven
mvn clean install -Dtest=TestRunner -Dproperties=/path/to/settings.file
and I want this flow to be consistent no matter which test provider is used.
The workaround I came up with is to trigger the tests like this on my local machine:
@Override
public void run(RunNotifier notifier) {
    if (someCondition) {
        new DelegateRunner().run(notifier);
    } else {
        super.run(notifier);
    }
}
The DelegateRunner then calls the third-party service which triggers the tests on the cloud. How can I map the results I receive from this service (I can query their API) back to my local JUnit execution?
The class RunNotifier offers methods like fireTestFinished or fireTestFailure but I'm not sure how to build the objects (Result, Description, Failure) these methods take as parameters. I suspect I need to make use of test listeners but I can't figure out the details.
In a broader sense, what are my options for creating JUnit test results when the actual tests are running on a remote machine, or are not even executed as JUnit tests? Is this a use case someone has encountered before? It might be slightly exotic, but I don't think I'm the first either.
For a start, I just want to provide a binary result - tests passed or at least one test failed - in a way that doesn't break any JUnit integrations (like the Maven surefire plugin).
Right now, I get:
Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 501.287 sec
and
No tests were executed! (Set -DfailIfNoTests=false to ignore this error.)
How can I fail the build in case there is a test failure and pass it otherwise (with number of tests as 1)? I can think of a few hacky ways but I'm sure there is a proper one.
At its most basic, with a single test result, the DelegateRunner could be something like this:
public class DelegateRunner extends Runner {

    private Description testDescription = Description
            .createTestDescription("groupName", "testName");

    public DelegateRunner(Class<?> testClass) {
    }

    @Override
    public Description getDescription() {
        return testDescription;
    }

    @Override
    public void run(RunNotifier notifier) {
        notifier.fireTestStarted(testDescription);
        // ... trigger remote test ...
        if (passed)
            notifier.fireTestFinished(testDescription);
        else
            notifier.fireTestFailure(new Failure(testDescription,
                    new AssertionError("Details of the failure")));
    }
}
Then both getDescription() and run() would need to be wrapped:
public class FrontRunner extends Runner {

    private Runner runner;

    public FrontRunner(Class<?> testClass) throws InitializationError {
        if (someCondition)
            runner = new DelegateRunner(testClass);
        else
            runner = new JUnit4(testClass);
    }

    @Override
    public Description getDescription() {
        return runner.getDescription();
    }

    @Override
    public void run(RunNotifier notifier) {
        runner.run(notifier);
    }
}
(Assuming someCondition can be known up front, and that it's just the default JUnit4 runner that's needed normally).
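For completeness, the custom runner is attached to a test class with @RunWith; the class body can stay empty because the runner does all the work. FrontRunnerTest here matches the class name in the Maven output below:
import org.junit.runner.RunWith;

// The runner drives everything; the annotated class needs no methods.
@RunWith(FrontRunner.class)
public class FrontRunnerTest {
}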
This comes through to the Maven build as expected:
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running ...FrontRunnerTest
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.078 sec <<< FAILURE!
testName(groupName) Time elapsed: 0.015 sec <<< FAILURE!
java.lang.AssertionError: Details of the failure
at so.ownrunner.DelegateRunner.run(DelegateRunner.java:28)
at so.ownrunner.FrontRunner.run(FrontRunner.java:27)
at ...
Results :
Failed tests: testName(groupName): Details of the failure
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0
Then, if a more structured response is needed, Description.addChild() can be used to nest the suites and/or tests, e.g.:
public class NestedDelegateRunner extends Runner {

    private Description suiteDescription = Description
            .createSuiteDescription("groupName");
    private Description test1Description = Description
            .createTestDescription("groupName", "test1");
    private Description test2Description = Description
            .createTestDescription("groupName", "test2");

    public NestedDelegateRunner(Class<?> testClass) {
        suiteDescription.addChild(test1Description);
        suiteDescription.addChild(test2Description);
    }

    @Override
    public Description getDescription() {
        return suiteDescription;
    }

    @Override
    public void run(RunNotifier notifier) {
        notifier.fireTestStarted(test1Description);
        notifier.fireTestStarted(test2Description);
        notifier.fireTestFinished(test1Description);
        notifier.fireTestFailure(new Failure(test2Description,
                new AssertionError("Details of the failure")));
    }
}
In fact addChild() is not crucial, but without it the structure can be less obvious - e.g. Eclipse will just show "Unrooted Tests".
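As for the Result object mentioned in the question: there is usually no need to construct one by hand, because JUnit builds it from the fired notifier events. A sketch of driving the custom runner programmatically and reading the aggregated Result, reusing the class names from the examples above:
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

public class ProgrammaticRun {
    public static void main(String[] args) throws Exception {
        // JUnitCore registers its own RunListener on the RunNotifier and
        // aggregates the fired events into a Result.
        Result result = new JUnitCore().run(new FrontRunner(FrontRunnerTest.class));
        System.out.println("Ran " + result.getRunCount()
                + " test(s), failures: " + result.getFailureCount());
    }
}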

Groovy Spock BlockingVariable never released

I am fighting a losing battle against Spock unit tests in my Grails application. I want to test async behavior, and to get familiar with Spock's BlockingVariable I've written this simple sample test.
void "test a cool function of my app I will not tell you about"() {
given:
def waitCondition = new BlockingVariable(10000)
def runner = new Runnable() {
#Override
void run() {
Thread.sleep(5000)
waitCondition.set(true)
}
}
when:
new Thread(runner)
then:
true == waitCondition.get()
}
Unfortunately the test does not work: it never comes to an end, because the BlockingVariable is never released. When I set a breakpoint at Thread.sleep() and debug the test, that breakpoint is never hit. What am I missing?
Your test is broken because you never actually run the thread you create. Instead of:
when:
new Thread(runner)
you should do:
when:
new Thread(runner).start()
and then your test succeeds after approximately 5 seconds. (start() spawns the new thread; calling run() directly would execute the Runnable synchronously on the calling thread, which would also let the test pass, but without any actual concurrency.)
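For comparison, the same wait-for-a-background-thread pattern in plain Java/JUnit 4, with a CountDownLatch playing the role BlockingVariable plays in Spock (a sketch, not taken from the question):
import static org.junit.Assert.assertTrue;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import org.junit.Test;

public class AsyncWaitTest {

    @Test
    public void waitsForBackgroundThread() throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);

        new Thread(() -> {
            try {
                Thread.sleep(5000);    // simulated async work
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            done.countDown();          // the equivalent of waitCondition.set(true)
        }).start();                    // start(), not run()

        // Blocks for up to 10 seconds, like BlockingVariable.get() with a timeout.
        assertTrue("async work did not finish in time", done.await(10, TimeUnit.SECONDS));
    }
}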

Unexplained Castle and MTM errors

I have a suite that runs a little over 30 tests through MTM. They're Selenium tests and take a bit over 20 minutes to run. Six of these tests are new to the project (2 tests, 3 iterations each), and they error out for the same reason every time they run.
Here's the catch:
1. They pass locally
2. They pass when run individually
The specific error is from Castle ActiveRecord telling me to initialize a class I have most definitely initialized in the code.
[TestMethod]
public void Test() {
    Initialize();
    // do test
}

public void Initialize() {
    if (!ActiveRecordStarter.IsInitialized) {
        Type[] types = // typeof each Castle class;
        InPlaceConfigurationSource source = new InPlaceConfigurationSource();
        // set up source
        ...
        ActiveRecordStarter.Initialize(source, types);
    }
}
MTM runs all the tests without restarting the assembly. If a Castle test runs before my failing tests, it will initialize ActiveRecordStarter and keep it initialized through my tests. For some reason my tests did not like this (no answer yet on why), but calling ActiveRecordStarter.ResetInitializationFlag(); before the IsInitialized check fixed the errors.

Why do my test functions appear in code coverage? (or how to make them 100%?)

I'm using xUnit to test my C# code and I'm using Visual Studio Premium 2012.
In my solution I have my main project that I'm testing and a 2nd project that contains all of my tests. I'm supposed to be at 100% code coverage, but there are some functions in my test project that I cannot get to 100%. Can I just exclude that project from appearing in the code coverage results?
Or... does anyone know how to get a test function to 100% when you have a test where you are expecting an exception to be thrown? Here are some of the ways I've tried to write a test for a method that should throw an exception, and what isn't being covered. MyBusinessLogic has a function named GenerateNameLine that accepts an object of type MyViewModel. If the Name property of MyViewModel is an empty string, it should throw an exception of type RequiredInformationMissingException.
[Fact]
public void TestMethod1()
{
    var vm = new MyViewModel();
    vm.Name = string.Empty;
    Assert.Throws<RequiredInformationMissingException>(
        delegate { MyBusinessLogic.GenerateNameLine(vm); });
}
This test passes, but code coverage (with color highlighting) is showing me that MyBusinessLogic.GenerateNameLine(vm); is not getting hit.
I've also tried:
[Fact]
public void TestMethod1()
{
    bool fRequiredInfoExceptionThrown = false;
    var vm = new MyViewModel();
    vm.Name = string.Empty;
    try
    {
        MyBusinessLogic.GenerateNameLine(vm);
    }
    catch (Exception ex)
    {
        if (ex.GetType() == typeof(RequiredInformationMissingException))
            fRequiredInfoExceptionThrown = true;
    }
    Assert.True(fRequiredInfoExceptionThrown, "RequiredInformationMissingException was not thrown.");
}
This test also passes, but code coverage says the } right before my catch is never hit. (Which makes sense: the closing brace of the try block is only reached when no exception is thrown, and this test always throws.)
I don't know how to write a test for an exception that gets 100%. I know it doesn't even really matter, but at work 100% code coverage is part of our definition of done, so I don't know what to do here.
The answer is Yes
We provide filters to customize what you want to include/exclude via the .runsettings file. You can filter out pretty much anything that you do not find useful.
The [ExcludeFromCodeCoverage] attribute can also be used in code.
See: http://blogs.msdn.com/b/sudhakan/archive/2012/05/11/customizing-code-coverage-in-visual-studio-11.aspx
Are you seeing the second issue in VS2012RTM+Update1 as well?
I would exclude the tests, but still keep an eye on their coverage rate, because coverage below 99% would suggest some of them did not run at all.
BTW: 100% is an ideal that can rarely be achieved in real-life projects; the effort to actually reach 100% as opposed to something like 90% is disproportionately high. Exact coverage rates also depend on how hit lines are counted.