How do I make JaCoCo take into account the coverage of code run by tests via Runtime.exec(...) or ProcessBuilder(...).start()? - jacoco

I have unit tests that generate runnable classes with a main class, compile them, and execute them as part of the test with Runtime.getRuntime().exec(new String[] { "java", "-cp", ..., "Main", ... }) or new ProcessBuilder("java", "-cp", ..., "Main", ...).start().
Although these main classes cover a lot of the project, the coverage they represent is invisible to JaCoCo, and the covered code doesn't appear in the report.
I saw that the JaCoCo report run uses a Java agent:
-javaagent:C:\\Users\\myusername\\.m2\\repository\\org\\jacoco\\org.jacoco.agent\\0.8.8\\org.jacoco.agent-0.8.8-runtime.jar=destfile=C:\\Users\\myusername\\git\\project\\target\\jacoco.exec
What happens if I propagate that Java agent to the JVM started by Runtime.exec(...) or ProcessBuilder(...).start(), i.e.
Runtime.getRuntime().exec(new String[] { "java", "-javaagent:...", "-cp", ..., "Main", ... }) or new ProcessBuilder("java", "-javaagent:...", "-cp", ..., "Main", ...).start()?
Does it override the running JaCoCo report? What I want instead is for the report to take into account the parts of the code covered by the main classes I execute.
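Propagating the agent does work; a common approach is to give the forked JVM its own destfile and merge the .exec files afterwards, so the child never competes with the parent for the same file. A minimal sketch (the agent path, classpath, and file names here are assumptions to adapt to your build):

```java
import java.util.ArrayList;
import java.util.List;

public class ForkedJvmCoverage {

    // Hypothetical agent path -- copy it from your own build's -javaagent argument.
    static final String AGENT =
        "/home/user/.m2/repository/org/jacoco/org.jacoco.agent/0.8.8/org.jacoco.agent-0.8.8-runtime.jar";

    // Build the command line for the forked JVM: same JaCoCo agent, but a
    // separate destfile so the parent and child never write the same file.
    // append=true keeps data from earlier forked runs in that file.
    static List<String> buildCommand(String classpath, String mainClass, String destFile) {
        List<String> cmd = new ArrayList<>();
        cmd.add("java");
        cmd.add("-javaagent:" + AGENT + "=destfile=" + destFile + ",append=true");
        cmd.add("-cp");
        cmd.add(classpath);
        cmd.add(mainClass);
        return cmd;
    }

    public static void main(String[] args) {
        List<String> cmd = buildCommand("target/classes", "Main", "target/jacoco-child.exec");
        // To actually launch it, as in the question:
        // new ProcessBuilder(cmd).inheritIO().start().waitFor();
        System.out.println(String.join(" ", cmd));
    }
}
```

Because the destfiles differ, the forked JVM does not override the parent's jacoco.exec; you then merge the two files (for example with the jacoco:merge Maven goal) before generating the report. The agent's append option defaults to true, so even a shared destfile would be appended to rather than overwritten, but having two live JVMs write one file is best avoided.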

Related

Test Result Report in Jenkins Pipeline using Groovy | Pipeline Execution | Report Types

I am setting up test result reporting in Jenkins for test projects written in various test frameworks (NUnit, MSTest, etc.) and would like to improve my understanding with regard to report types and the difference between stages and post in the pipeline execution.
Pipeline Execution
Stages are executed in the order in which they appear; if a stage fails, any stages after it are not executed.
post, by contrast, runs after the stages, regardless of whether they completed successfully.
Report Types
Provided I have a stage (produces test result):
stage('MSTest') {
    steps {
        bat(script: 'dotnet test "..\\TestsProject.csproj" --logger "trx;LogFileName=TestResult.xml"')
    }
}
And a post that runs always (consume test result to produce test result report):
post {
    always {
        xunit testTimeMargin: '5000', thresholdMode: 1, thresholds: [],
            tools: [ReportType(deleteOutputFiles: true, failIfNotNew: false, pattern: '..\\TestResult.xml', skipNoTestFiles: false, stopProcessingIfError: false)]
    }
}
Project variations:
Provided my test project is written in NUnit, the 'ReportType' placeholder in 'tools:' needs to be replaced with NUnit3 for the post to execute successfully.
Provided my test project is written in MSTest, it needs to be replaced with MSTest.
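For example, the NUnit variant of the post block above would look like this (a sketch; only the tool name changes, and the pattern path is the same assumption as above):

```groovy
post {
    always {
        xunit testTimeMargin: '5000', thresholdMode: 1, thresholds: [],
            tools: [NUnit3(deleteOutputFiles: true, failIfNotNew: false,
                           pattern: '..\\TestResult.xml', skipNoTestFiles: false,
                           stopProcessingIfError: false)]
    }
}
```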

Get Coverage stats when tests are in another package

My tests aren't in the same package as my code. I find this a less cluttered way of organising a codebase with a lot of test files, and I've read that it's a good idea to limit tests to interacting with the package via its public API.
So it looks something like this:
api_client:
Client.go
ArtistService.go
...
api_client_tests:
ArtistService.Events_test.go
ArtistService.Info_test.go
UtilityFunction.go
...
I can type go test bandsintown-api/api_client_tests -cover
and see 0.181s coverage: 100.0% of statements. But that's actually just coverage of my UtilityFunction.go (as I saw when I ran go test bandsintown-api/api_client_tests -coverprofile=cover.out and
go tool cover -html=cover.out).
Is there any way to get the coverage for the actual api_client package under test, without bringing it all into the same package?
As mentioned in the comments, you can run
go test -cover -coverpkg "api_client" "api_client_tests"
to run the tests with coverage.
But splitting code files and test files into different directories isn't the Go way.
I suppose you want black-box testing (no package-private identifiers accessible outside the package, even for tests).
To accomplish this, Go allows tests to live in a separate package without moving the files. Example:
api_client.go
package api_client
// will not be accessible outside of the package
var privateVar = 10
func Method() {
}
api_client_test.go
package api_client_test

import (
	"testing"

	"bandsintown-api/api_client"
)

func TestClient(t *testing.T) {
	api_client.Method()
}

TFS Build servers and critical Unit Tests

When you build on a TFS build server, failed unit tests cause the build to show an orange alert state but they still "succeed". Is there any way to tag a unit test as critical such that if it fails, the whole build will fail?
I've Googled for it and didn't find anything, and I don't see any attribute in the framework, so I'm guessing the answer is no. But maybe I'm just looking in the wrong place.
There is a way to do this, but you need to create multiple test runs and then filter your tests. On your tests, set a TestCategory attribute:
[TestCategory("Critical")]
[TestMethod]
public void MyCriticalTest() {}
For NUnit you should be able to use [Category("Critical")]. There are multiple attributes of a test you can filter on, including the name.
Name = TestMethodDisplayName
FullyQualifiedName = FullyQualifiedTestMethodName
Priority = PriorityAttributeValue
TestCategory = TestCategoryAttributeValue
ClassName = ClassName
And these operators:
= (equals)
!= (not equals)
~ (contains or substring; only for string values)
& (and)
| (or)
( ) (parentheses for grouping)
XUnit .NET currently does not support TestCaseFilters.
Then in your build definition you can create two test runs, one that runs Critical tests, one that runs everything else. You can use the Filter option of the Test Run.
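Combining the attributes and operators above, illustrative filter strings for the two runs could be:

```
First run  : TestCategory=Critical
Second run : TestCategory!=Critical
Grouping   : (TestCategory=Critical|Priority=1)&Name~Smoke
```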
Open the Test Runs window using this hard to find button:
Create 2 test runs:
On your first run set the options as follows:
On your second run set the options as follows:
This way Team Build will run any test with the "Critical" category in the first run and fail the build if one of them fails. If the first run succeeds, it will kick off the non-critical tests and only Partially Succeed when one of those fails.
Update
The same process explained for Azure DevOps Pipelines.
Yes.
Using the TFS2013 Default Template:
Under the "Process" tab, go to section 2, "Basic".
Expand the Automated Tests section.
For "Test Source", click the ellipsis ("...").
This will open a new window that has a "Fail build when tests fail" check box.

How does TeamCity know when an xUnit.net test is run?

I have always wondered how TeamCity recognizes that it is running xUnit.net tests and how it knows to put a separate "Test" tab in the build overview after a build step runs. Is the xUnit console runner somehow responsible for that?
I finally found out what is actually going on. TeamCity has its own service-message API. I dug this code snippet out of the xUnit source code, and it becomes clear:
https://github.com/xunit/xunit/blob/v1/src/xunit.console/RunnerCallbacks/TeamCityRunnerCallback.cs
public override void AssemblyStart(TestAssembly testAssembly)
{
    Console.WriteLine(
        "##teamcity[testSuiteStarted name='{0}']",
        Escape(Path.GetFileName(testAssembly.AssemblyFilename))
    );
}
...code omitted for clarity
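In other words, the console runner detects it is running under TeamCity and writes TeamCity "service messages" to stdout, which TeamCity parses to populate the Tests tab. A typical sequence looks like this (test names are illustrative):

```
##teamcity[testSuiteStarted name='MyTests.dll']
##teamcity[testStarted name='MyNamespace.MyTest']
##teamcity[testFinished name='MyNamespace.MyTest' duration='12']
##teamcity[testSuiteFinished name='MyTests.dll']
```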

Arbitrary unit tests fail in VS2012 after switching from Moles to Fakes

I have about 300 unit tests for an assembly which is part of a solution I originally started under VS2010. Numerous tests used the Moles framework provided by Microsoft, but after upgrading to VS2012 (Update 2) I wanted to change the tests to use the officially supplied Fakes framework.
I updated the corresponding tests accordingly, which usually only involved creating a ShimsContext and some minor changes to the code:
Before
[TestMethod]
[HostType( "Moles" )]
public void MyUnitTest_CalledWithXyz_ThrowsException()
{
    // Arrange
    ...
    MGroupPrincipal.FindByIdentityPrincipalContextIdentityTypeString =
        ( t1, t2, t3 ) => null;
    ...
    try
    {
        // Act
        ...
    }
    catch( Exception ex )
    {
        // Assert
        ...
    }
}
After
[TestMethod]
public void MyUnitTest_CalledWithXyz_ThrowsException()
{
    using( ShimsContext.Create() )
    {
        // Arrange
        ...
        ShimGroupPrincipal.FindByIdentityPrincipalContextIdentityTypeString =
            ( t1, t2, t3 ) => null;
        try
        {
            // Act
            ...
        }
        catch( Exception ex )
        {
            // Assert
            ...
        }
    }
}
I've got different test classes in my test project, and when I run the tests I get arbitrary errors which I cannot explain, e.g.:
Run tests for one class in Release mode => 21 tests fail / 15 pass
Run tests for same class in Debug mode => 2 tests fail / 34 pass
Run tests for same class again in Release mode => 2 tests fail / 34 pass
Run all tests in the project => 21 tests fail / 15 pass (for the class mentioned above)
Same behaviour for a colleague on his system. The error messages are always TypeLoadExceptions such as
Test method ... threw exception: System.TypeLoadException: Could not load type 'System.DirectoryServices.Fakes.ShimDirectorySearcher' in the assembly 'System.DirectoryServices.4.0.0.0.Fakes, Version=4.0.0.0, Culture=neutral, PublicKeyToken=..."
In VS2012 itself the source code editor doesn't show any errors, Intellisense works as expected, mouse tooltips over e.g. ShimDirectorySearcher show where it is located etc. Furthermore, when I open the Fakes assembly that's being generated (e.g. System.DirectoryServices.4.0.0.0.Fakes.dll) with .NET Reflector, the type shown in the error message exists.
All the tests worked fine (in Debug and Release mode as well) before we switched from VS2010 to VS2012, but now we don't have a clue what's wrong here. Why does the result change in the ways described above? Why do we get TypeLoadExceptions even though the types do exist?
Unfortunately there is hardly any help available from Microsoft or on the internet.
I don't quite understand why having the old .testsettings file from VS2010 is such a problem, but deleting it and adding a .runsettings file as suggested by MSDN did the job for me.
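For reference, the minimal .runsettings file I'd start from looks like this (a sketch following the MSDN suggestion; the results directory is an assumption to adjust):

```xml
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <RunConfiguration>
    <!-- hypothetical output location; adjust to your solution -->
    <ResultsDirectory>.\TestResults</ResultsDirectory>
  </RunConfiguration>
</RunSettings>
```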
All problems were solved:
All unit tests run (again) without problems
Arbitrary combinations of tests run (again) without problems
I can debug tests using Fakes (before, I used to get test instrumentation errors)
Hope this helps others who run into problems, there doesn't seem to be too much information about Fakes yet.
One more thing regarding code coverage: this works (without having to configure any test settings) via the menu Test => Analyze Code Coverage. For TFS build definitions you can enable code coverage for the build by choosing Process => Basic => Automated Tests => 1. Test Source, clicking into the corresponding text field, and then clicking the ... button that is only shown while the field has focus. Choose Visual Studio Test Runner in the Test runner combo box; then you can also choose Enable Code Coverage from the options.