We are using SonarQube 5.1 with the Jacoco Maven plugin 0.7.4, and all of our slf4j logging statements such as log.debug('Something happened') show that only 1 of 2 branches is covered. I understand that this is because slf4j internally does an "if debug enabled" check, and that's great, but we don't want this to throw off our numbers. We aren't interested in testing slf4j, and we'd rather not run every test multiple times for different logging levels.
So, how can we tell Sonar and/or Jacoco to exclude these lines from coverage? Both of them have configurable file exclusions, but from what I can tell those are only for excluding your own classes from coverage (using the target dir), not the imported libraries. I tried adding 'groovy.util.logging.*' to the exclusion list anyway, but it didn't do anything.
The similar question "logger.isDebugEnabled() is killing my code coverage. I'm planning to exclude it while running cobertura" suggested that for Cobertura the 'ignore' property should be used instead of 'exclude'. I don't see anything like that for Jacoco or Sonar in the settings or documentation.
EDIT:
Example image from Eclipse attached, after running Jacoco coverage (Sonar shows the same thing in their GUI). This is actual code from one of our classes.
EDIT 2:
We are using the Slf4j annotation. Docs here:
http://docs.groovy-lang.org/next/html/gapi/groovy/util/logging/Slf4j.html
This local transform adds a logging ability to your program using LogBack logging. Every method call on an unbound variable named log will be mapped to a call to the logger. For this a log field will be inserted in the class. If the field already exists the usage of this transform will cause a compilation error. The method name will be used to determine what to call on the logger.
log.name(exp)
is mapped to
if (log.isNameLoggable()) {
log.name(exp)
}
Here name is a placeholder for info, debug, warning, error, etc. If the expression exp is a constant or only a variable access, the method call will not be transformed. But this will still cause a call on the injected logger.
Hopefully this clarifies what's going on. Our log statements become two-branch ifs to avoid expensive string building for log levels that aren't enabled (a common practice, as far as I know). But that means that to guarantee coverage of all these branches, we would have to run every test repeatedly for every logging level.
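For illustration, here is a hand-written Java equivalent of the pattern the transform generates (the class name, method, and message are made up for the example):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class Example {
    private static final Logger log = LoggerFactory.getLogger(Example.class);

    void doWork(String name) {
        // The guard below is the second branch Jacoco counts; it is only
        // taken when debug logging is enabled at runtime, so a test run
        // at a single log level can never cover both branches.
        if (log.isDebugEnabled()) {
            log.debug("called doWork for " + name);
        }
    }
}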
I did not find a general solution for excluding it, but if your codebase allows you to do so, you could wrap your logging statements in a method with an annotation containing "Generated" in its name.
A simple example:
package org.example.logging

import groovy.transform.Generated
import groovy.util.logging.Slf4j

@Slf4j
class Greeter {
    void greet(name) {
        logDebug("called greet for ${name}")
        println "Hello, ${name}!"
    }

    @Generated
    private logDebug(message) {
        log.debug message
    }
}
Unfortunately javax.annotation.Generated is not suitable, because it only has a retention of SOURCE; therefore I (ab)used groovy.transform.Generated here, but you can easily create your own annotation for that purpose.
I found that solution here: How would I add an annotation to exclude a method from a jacoco code coverage report?
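As a sketch, such a custom annotation might look like the following in Java (the annotation name is made up; as far as I know, what matters is that the name contains "Generated" and that the retention reaches the bytecode, which newer Jacoco versions use to filter out annotated methods):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotation; CLASS retention keeps it visible in the
// bytecode, which is exactly what javax.annotation.Generated (SOURCE
// retention) lacks for this purpose.
@Retention(RetentionPolicy.CLASS)
@Target({ElementType.METHOD, ElementType.TYPE})
public @interface ExcludeFromCoverageGenerated {
}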
UPDATE: In Groovy you can solve it most elegantly with a trait:
package org.example.logging

import groovy.transform.Generated
import groovy.util.logging.Slf4j

@Slf4j
trait LoggingTrait {
    @Generated
    void logDebug(String message) {
        log.debug message
    }
}
...and then...
package org.example.logging

import groovy.util.logging.Slf4j

@Slf4j
class Greeter implements LoggingTrait {
    void greet(name) {
        logDebug "called greet for ${name}"
        println "Hello, ${name}!"
    }
}
Unfortunately the property log is interpreted as a property of Greeter, not of LoggingTrait, so you must attach @Slf4j to both the trait and the class implementing it.
Nevertheless, doing so gives you the expected logger, namely that of the implementing class:
14:25:09.932 [main] DEBUG org.example.logging.Greeter - called greet for world
I have the below lines of code which are indicated as not "executed" by Jacoco. But when I debug the test case, it does execute those lines. Below are the test cases I wrote.
@PrepareForTest({MessagingAdapterFactory.class, MessagingConfigReaderFactory.class, UpdaterServiceExecutor.class, Files.class})
@Test
public void should_shutDown_the_scheduledExecutor_and_close_the_messagingAdapter() throws Exception {
    PowerMockito.mockStatic(Files.class);
    PowerMockito.when(Files.exists(any())).thenReturn(true);
    PowerMockito.mockStatic(MessagingAdapterFactory.class);
    PowerMockito.when(MessagingAdapterFactory.getMessagingAdapter("edgeNode")).thenReturn(messagingAdapterMock);
    PowerMockito.mockStatic(MessagingConfigReaderFactory.class);
    PowerMockito.when(MessagingConfigReaderFactory.getConfigurationReader()).thenReturn(readerMock);
    ScheduledExecutorService scheduledExecutorServiceMock = Mockito.mock(ScheduledExecutorService.class);
    PowerMockito.mockStatic(Executors.class);
    PowerMockito.when(Executors.newSingleThreadScheduledExecutor()).thenReturn(scheduledExecutorServiceMock);
    when(readerMock.getConfigParams()).thenReturn("somePath,somePath,somePath");
    when(decompressUtilMock.decompressZip(Matchers.anyString(), Matchers.anyString())).thenReturn(true);
    when(checkSumUtilMock.check(anyString(), anyString())).thenReturn(true);
    when(commandExecutorMock.executeCommand("somePath verify /pa somePathKubeUpdates/KubePlatformSetup.exe")).thenReturn(false);
    updaterServiceExecutor.execute();
    Thread.sleep(10000);
    updaterServiceExecutor.close();
    verify(scheduledExecutorServiceMock, timeout(10000).times(1)).shutdownNow();
    verify(messagingAdapterMock, timeout(10000).times(1)).close();
}
@PrepareForTest({MessagingAdapterFactory.class, MessagingConfigReaderFactory.class, UpdaterServiceExecutor.class, Files.class})
@Test
public void should_not_throw_ServiceSDKException_when_occurred_while_closing_the_messagingAdapter() throws Exception {
    PowerMockito.mockStatic(Files.class);
    PowerMockito.when(Files.exists(any())).thenReturn(true);
    PowerMockito.mockStatic(MessagingAdapterFactory.class);
    PowerMockito.when(MessagingAdapterFactory.getMessagingAdapter("edgeNode")).thenReturn(messagingAdapterMock);
    PowerMockito.mockStatic(MessagingConfigReaderFactory.class);
    PowerMockito.when(MessagingConfigReaderFactory.getConfigurationReader()).thenReturn(readerMock);
    ScheduledExecutorService scheduledExecutorServiceMock = Mockito.mock(ScheduledExecutorService.class);
    PowerMockito.mockStatic(Executors.class);
    PowerMockito.when(Executors.newSingleThreadScheduledExecutor()).thenReturn(scheduledExecutorServiceMock);
    when(readerMock.getConfigParams()).thenReturn("somePath,somePath,somePath");
    when(decompressUtilMock.decompressZip(Matchers.anyString(), Matchers.anyString())).thenReturn(true);
    when(checkSumUtilMock.check(anyString(), anyString())).thenReturn(true);
    when(commandExecutorMock.executeCommand("somePath verify /pa somePathKubeUpdates/KubePlatformSetup.exe")).thenReturn(false);
    doThrow(new ServiceSDKException()).when(messagingAdapterMock).close();
    updaterServiceExecutor.execute();
    Thread.sleep(10000);
    updaterServiceExecutor.close();
    verify(scheduledExecutorServiceMock, timeout(10000).times(1)).shutdownNow();
    verify(messagingAdapterMock, timeout(10000).times(1)).close();
}
What is wrong here? Why is Jacoco showing the lines as not executed? Please advise.
Jacoco and PowerMockito don't work together.
Jacoco instruments the bytecode, collects the coverage data, and afterwards associates the coverage information with the source code based on an identifier of the class.
PowerMockito instruments the bytecode as well. This leads to different class identifiers, so coverage calculated by Jacoco cannot be associated with the source code because the identifier information does not match.
This is a known issue.
Gerald's answer is the reason. This only occurs when you have put the class being tested inside @PrepareForTest. So I removed that from certain methods and now it's working fine. Having PowerMockito itself doesn't cause any issues; issues arise only if you have the class under test in @PrepareForTest. Find ways to manage it with only the classes that own the mocked static methods, not the class for which you are writing the test cases.
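For example, a sketch of the first test's annotation with the class under test taken out of @PrepareForTest. One caveat: PowerMock requires preparing the calling class when mocking statics of JDK classes such as Files and Executors, so those particular mocks may need restructuring after this change:

// Only the classes that own the mocked static methods remain listed;
// UpdaterServiceExecutor.class is removed, so PowerMockito leaves its
// bytecode alone and Jacoco's class identifier still matches.
@PrepareForTest({MessagingAdapterFactory.class, MessagingConfigReaderFactory.class, Files.class})
@Test
public void should_shutDown_the_scheduledExecutor_and_close_the_messagingAdapter() throws Exception {
    // ... test body unchanged ...
}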
I'm using log4net, trying to get logging in my unit tests. If I manually call
log4net.Config.XmlConfigurator.Configure();
it works, but there are a large number of test classes, so that is not good. Since the manual call works, that seems to eliminate all of the "bad config, config location" issues.
I added
[assembly: log4net.Config.XmlConfigurator(Watch=true)]
to the AssemblyInfo of my test project, but when I run (either via native MSTest or the ReSharper test runner) I get no logging.
Help?
Source
[AssemblyInitialize()]
public static void MyTestInitialize(TestContext testContext)
{
    // Take care that the log4net.config file is added to the deployment files of the testconfig
    string fullPath = Path.Combine(System.Environment.CurrentDirectory, "log4net.config");
    FileInfo fileInfo = new FileInfo(fullPath);
    log4net.Config.XmlConfigurator.ConfigureAndWatch(fileInfo);
}
As it says in the documentation for assembly attributes:
Therefore if you use configuration attributes you must invoke log4net to allow it to read the attributes. A simple call to LogManager.GetLogger will cause the attributes on the calling assembly to be read and processed. Therefore it is imperative to make a logging call as early as possible during the application start-up, and certainly before any external assemblies have been loaded and invoked.
Because the unit test runners load the test assembly in order to find and run the tests, it isn't possible to initialise log4net using an assembly attribute in unit test projects, and you will have to use the XmlConfigurator.
Edit: as linked in a comment by the OP, this can be done in one place for the whole test project by using the AssemblyInitializeAttribute, as shown in the Source snippet above.
In my daily unit test coding with Xcode, I only use XCTestCase. There are also these other classes that don't seem to get used much such as: XCTestSuite, XCTest, XCTestRun.
What are XCTestSuite, XCTest, XCTestRun for? When do you use them?
Also, the XCTestCase header has a few methods such as:
defaultTestSuite
invokeTest
testCaseWithInvocation:
testCaseWithSelector:
How and when to use the above?
I am having trouble finding documentation on the above XCTest-classes and methods.
Well, this question is pretty good and I wonder why it has been ignored. As the documentation says:
XCTestCase is a concrete subclass of XCTest that should be the override point for most developers creating tests for their projects. A test case subclass can have multiple test methods and supports setup and tear down that executes for every test method as well as class level setup and tear down.
On the other hand, this is how XCTestSuite is defined:
A concrete subclass of XCTest, XCTestSuite is a collection of test cases. Alternatively, a test suite can extract the tests to be run automatically.
Well, with XCTestSuite you can construct your own test suite for a specific subset of test cases, instead of the default suite ([XCTestCase defaultTestSuite]), which has all test cases.
Actually, the default XCTestSuite is composed of every test case found in the runtime environment: all methods with no parameters, returning no value, and prefixed with 'test' in all subclasses of XCTestCase.
What about the XCTestRun class?
A test run collects information about the execution of a test. Failures in explicit test assertions are classified as "expected", while failures from unrelated or uncaught exceptions are classified as "unexpected".
With XCTestRun, you can record information like startDate, totalDuration, and failureCount while a test is running, or things like hasSucceeded when it is done, and thereby you get the result of running a test. XCTestRun gives you the controllability to focus on what is happening, or happened, during the test.
Back to XCTestCase: you will notice that there are methods named testCaseWithInvocation: and testCaseWithSelector: if you read the headers, and I recommend you do so for more digging.
How do they work together?
I've found that there is an awesome explanation in Quick's QuickSpec source file.
XCTest automatically compiles a list of XCTestCase subclasses included in the test target. It iterates over each class in that list, and creates a new instance of that class for each test method. It then creates an "invocation" to execute that test method. The invocation is an instance of NSInvocation, which represents a single message send in Objective-C. The invocation is set on the XCTestCase instance, and the test is run.
Some links:
http://modocache.io/probing-sentestingkit
https://github.com/Quick/Quick/blob/master/Sources/Quick/QuickSpec.swift
https://developer.apple.com/reference/xctest/xctest?language=objc
Launch your Xcode and use cmd + shift + O to open the Open Quickly dialog, then type 'XCTest' and you will find some related files, such as XCTest.h and XCTestCase.h. You need to go inside these files to check out the interfaces they offer.
There is a good website about XCTest: http://iosunittesting.com/xctest-assertions/
In order to get a code coverage report, I instrument the @Decorator bean with the Cobertura Maven plugin. When running my unit tests in an OpenEJB container, the container reports an error during startup (new InitialContext):
Caused by: org.apache.webbeans.exception.WebBeansConfigurationException: Decorator : MyDecorator, Name:null, WebBeans Type:DECORATOR, API Types:[org.apache.commons.configuration.Configuration,net.sourceforge.cobertura.coveragedata.HasBeenInstrumented,org.apache.commons.configuration.AbstractConfiguration,MyDecorator,org.apache.commons.configuration.event.EventSource,java.lang.Object], Qualifiers:[javax.enterprise.inject.Any,javax.enterprise.inject.Default] delegate attribute must implement all of the decorator decorated types, but decorator type interface net.sourceforge.cobertura.coveragedata.HasBeenInstrumented is not assignable from delegate type of interface org.apache.commons.configuration.Configuration
Details:
I have one Decorator to be unit tested, something like:
import org.apache.commons.configuration.AbstractConfiguration;
import org.apache.commons.configuration.Configuration;

@Decorator
public class MyDecorator extends AbstractConfiguration {

    @Inject
    @Delegate
    private Configuration conf;

    // ...
}
After Cobertura instrumented it, the code looks like this (decompiled):
import net.sourceforge.cobertura.coveragedata.HasBeenInstrumented;

@Decorator
public class MyDecorator extends AbstractConfiguration
        implements HasBeenInstrumented {

    @Inject
    @Delegate
    private Configuration conf;

    // ...
}
As you can see, Cobertura adds one more interface to my decorator. When OpenEJB loads and deploys this instrumented class, it reports the WebBeansConfigurationException quoted above.
The error log says that the @Decorator and the @Delegate should implement the same types, but after instrumentation the class under test has one more interface.
I then tried instrumenting org.apache.commons.configuration.AbstractConfiguration and org.apache.commons.configuration.Configuration as well (by instrumenting commons-configuration-1.9.jar with the Cobertura command line) and modified my code like this:
@Decorator
public class MyDecorator extends AbstractConfiguration {

    @Inject
    @Delegate
    // AbstractConfiguration is used instead of Configuration here, because
    // Configuration is an interface, which cannot be instrumented.
    private AbstractConfiguration conf;

    // ...
}
After all of this, the problem was solved, but it is not a good way to do it.
The root cause is that the Cobertura Maven plugin marks a class file as instrumented by adding an interface to the original class. That works for most cases, but not for a @Decorator bean running in a container.
Should I file an issue with the maven-cobertura-plugin project?
Does anyone have a suggestion on how to unit test @Decorators and still get a coverage report easily? Maybe my unit test is not implemented in a good way, or maybe OpenEJB is not a good fit for this.
How do you normally unit test your @Decorators?
Cobertura does not instrument interfaces. For that reason, it is recommended that the non-instrumented classes go in the classpath after the instrumented classes.
So when instrumenting, first compile normally with Maven, then place yourself in the directory containing the source code of the classes you want to instrument, and run: mvn cobertura:instrument.
This makes Cobertura instrument all the classes, and Maven automatically adds the files that were not instrumented. The instrumented code will be at .\target\generated-classes\cobertura.
You'll then need to run jar -cvf [name-of-jar].jar * to get your instrumented jar.
We have a folder full of JSON text files that need to be sent to a single URI. Currently it's all done with a single xUnit [Fact], as below:
[Fact]
public void TestAllCases()
{
    PileOfTests pot = new PileOfTests();
    pot.RunAll();
}
pot.RunAll() then parses the folder and loads the JSON files (say 50 files). Each is then hammered against the URI to see if it returns HTTP 200 ("ok"). If any fail, we currently print the failure with
System.Console.WriteLine("\n >> FAILED ! << " + testname + "\n");
This does ensure that failures catch our eye, but xUnit thinks all tests failed (understandably). Most importantly, we can't tell xUnit "here, run only this specific test"; it's all or nothing the way it's currently built.
How can I programmatically add test cases? I'd like to add them when I read the number and names of the *.json files.
The simple answer is: no, not directly. But there is a workaround, albeit a bit hacky, which is presented below.
Current situation (as of xUnit 1.9.1)
By specifying [RunWith(typeof(CustomRunner))] on a class, one can instruct xUnit to use the CustomRunner class, which must implement Xunit.Sdk.ITestClassCommand, to enumerate the tests available on the test class decorated with this attribute.
But unfortunately, while the invocation of test methods has been decoupled from System.Reflection and the actual methods, the way of passing the tests to run to the test runner hasn't.
Somewhere down in the xUnit framework code for invoking a specific test method, there is a call to typeof(YourTestClass).GetMethod(testName).
This means that if the class implementing the test discovery returns a test name that doesn't refer to a real method on the test class, the test is shown in the xUnit GUI, but any attempt to run / invoke it ends up in a TargetInvocationException.
Workaround
If one thinks about it, the workaround itself is relatively straightforward.
A working implementation of it can be found here.
The presented solution first reads in the names of the files which should appear as different tests in the xUnit GUI.
It then uses System.Reflection.Emit to dynamically generate an assembly with a test class containing a dedicated test method for each of the input files.
The only thing each of the generated methods does is invoke the RunTest(string fileName) method on the class that specified the [EnumerateFilesFixture(...)] attribute. See the linked gist for further explanation.
Hope this helps; feel free to use the example implementation if you like.