How to make JUnit TestSuites run in parallel - unit-testing

I have created several JUnit test suites, each of which includes several test cases.
They look like this:
@RunWith(Suite.class)
@Suite.SuiteClasses({
    HttpAPICreationTest.class,
    HttpAPIVerifyTest.class,
    HttpAPIDeletionTest.class
})
public class HttpAPITestSuite {
}
@RunWith(Suite.class)
@Suite.SuiteClasses({
    HtmlSeleniumScriptBatchCreationTest.class,
    PagingVerificationTest.class,
    HtmlSeleniumScriptBatchDeletionTest.class
})
public class PagingTestSuite {
}
Now I want to execute my HttpAPITestSuite and PagingTestSuite in parallel. For now, HttpAPICreationTest.class, HttpAPIVerifyTest.class and HttpAPIDeletionTest.class are executed serially, not in parallel, and I don't want to break their order.
So how can I make my suites run in parallel while keeping the original order of their inner test cases: run HttpAPICreationTest.class and HtmlSeleniumScriptBatchCreationTest.class in parallel, but always run HttpAPICreationTest.class before HttpAPIVerifyTest.class and HttpAPIDeletionTest.class?

If you are using Maven, use Surefire forks. If you want behavior that is part parallel, part serial, you would need to execute the plugin multiple times with different settings. Generally speaking, though, the JUnit runners/plugins are not well suited for running tests as a graph; test ordering is handled much better in TestNG.
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.19.1</version>
    <configuration>
        <forkCount>3</forkCount>
        <reuseForks>true</reuseForks>
    </configuration>
</plugin>
More documentation: http://maven.apache.org/surefire/maven-surefire-plugin/examples/fork-options-and-parallel-execution.html
Edit: if you are running tests from IDEA, then IDEA (2016.3) has its own settings in the run configuration for tests (and does not pick up the Maven settings). It supports only one global setting per test execution (serial / fork per method / fork per class, plus fork count).
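If you are not running through Maven at all, one alternative worth sketching (not covered in the answer above) is JUnit 4's experimental ParallelComputer: pass the two suite classes to JUnitCore together with ParallelComputer.classes(), so the suites are scheduled on separate threads while each Suite runner still executes its @SuiteClasses in declaration order. A minimal sketch, using the suite names from the question (the ParallelSuites class name is made up):
import org.junit.experimental.ParallelComputer;
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

public class ParallelSuites {
    public static void main(String[] args) {
        // Top-level classes run in parallel; inside each suite the
        // @Suite.SuiteClasses still run one after another, in declared order.
        Result result = JUnitCore.runClasses(
                ParallelComputer.classes(),
                HttpAPITestSuite.class,
                PagingTestSuite.class);
        System.out.println("Tests run: " + result.getRunCount()
                + ", failures: " + result.getFailureCount());
    }
}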

Gradle : Multiple configurations for Test tasks

I have got two types of tests in my app as follows:
Unit tests (large number of tests and quick to execute)
Integration tests (small number of tests but each suite takes considerable time)
My project uses Gradle and I want both sets of tests to execute concurrently. As per Gradle's documentation, I can use the maxParallelForks config to parallelize the execution. However, as Gradle distributes tests to workers statically (see here), there is a chance that all my integration tests get allocated to the same worker.
So, what I really want is to have two sets of test blocks in my gradle file, e.g.:
test {
    include 'org/unit/**'
    exclude 'org/integration/**'
    maxParallelForks = Runtime.runtime.availableProcessors().intdiv(2) ?: 1
}
test {
    include 'org/integration/**'
    exclude 'org/unit/**'
    maxParallelForks = Runtime.runtime.availableProcessors().intdiv(2) ?: 1
}
Does Gradle support two different test profiles like the above? If yes, can I execute those two in parallel?
I am assuming you have them all under the same source set: src/test/java.
I'd suggest creating a separate source set and task specifically for integration tests. See Configuring integration tests.
And since you want to parallelize the execution, then you'll need to create a custom task that submits both your unit and integration test tasks to the Worker API: https://guides.gradle.org/using-the-worker-api/
Starting from Gradle 7.4, the built-in JVM Test Suite Plugin is the way to go:
testing {
    suites {
        test {
            useJUnitJupiter()
        }
        integrationTest(JvmTestSuite) {
            dependencies {
                implementation project
                // other integration-test-specific dependencies
            }
            targets {
                all {
                    testTask.configure {
                        shouldRunAfter(test)
                        maxParallelForks = Runtime.runtime.availableProcessors().intdiv(2) ?: 1
                    }
                }
            }
        }
    }
}

How can I skip running slow tests when running a particular test suite in PHPUnit, but still run all tests when I need full code coverage?

I have a single test suite that is marked as such in my PHPUnit configuration. The test suite contains many tests, and it also includes database-intensive live-database tests, which take a long time to complete.
In particular one of the tests takes over 2 seconds to complete (see below).
I want to separate running fast tests from slow tests, so that I can run the slow but complete version when I have more time, but in general I want to run the fast tests for my everyday needs, omitting the slow tests when running the test suite.
How can I do this?
For the record, my phpunit.xml config is like so:
<phpunit bootstrap="bootstrap.php">
    <testsuite name="Crating">
        <directory>../module/Crating/test/</directory>
    </testsuite>
</phpunit>
The command I use to run my test suite is:
phpunit -c phpunit.xml --testsuite CratingCalc
One of the files in my ../module/Crating/test/ directory is CrateRepositoryTest.php. It looks like so:
class CrateRepositoryTest extends TestCase
{
    function testCombine()
    {
        // mocked-up hardcoded data
        $fake = new FakeCratingDataModel();
        // connection to the real live database
        $real = new CratingDataModel();
        /*
         * Tests that verify the mocked-up data matches the live data.
         * Their purpose is to alert me when the live database data or schema changes.
         */
        $this->assertEquals($fake->getContentsBySalesOrderNumber(7777), $real->getContentsBySalesOrderNumber(7777));
        $this->assertEquals($fake->getContentsByShopJobNumber(17167), $real->getContentsByShopJobNumber(17167));
        $this->assertEquals($fake->getNearCrating(20, 20, 20), $real->getNearCrating(20, 20, 20));
        $this->assertEquals($fake->getContentsByInquiryNumber(640, 2), $real->getContentsByInquiryNumber(25640, 2));
    }
}
Groups.
Normally, you can add annotations such as @group small, or, as I do, @group ci (just for things I'll run in a full CI environment).
Having small, medium or large tests is in fact so common that there are dedicated group annotations - @small, @medium & @large - and there are settings in the phpunit.xml file that can also set a time limit for each (and will kill them, and fail them, if they take too long):
<?xml version="1.0" encoding="UTF-8"?>
<phpunit xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         ....
         timeoutForLargeTests="5"
         timeoutForMediumTests="2"
         timeoutForSmallTests="1"
         .... >
I have two ways to run my tests - the full version that does not exclude any groups (running over 1250 tests takes around 50 seconds, without coverage), and the faster version that adds --exclude-group ci,large,webtest to the phpunit command and can run 630 of the tests in less than 4 seconds.

NUnit parallel lifecycle and parallel tests failing sometimes

I am having an issue with tests sporadically failing when using NUnit 3 with parallel test running.
We have a number of tests that are currently structured as follows:
[TestFixture]
public class CalculateShipFromStoreShippingCost
{
    private IService _service;
    private IClient _client;

    [SetUp]
    public void SetUp()
    {
        _service = Substitute.For<IService>();
        _client = new Client(_service);
    }

    [Test]
    public async Task WhenScenario1()
    {
        _service.Apply(Arg.Any<int>()).Returns(1);
        var result = _client.DoTheThing();
        Assert.AreEqual(1, result);
    }

    [Test]
    public async Task WhenScenario2()
    {
        _service.Apply(Arg.Any<int>()).Returns(2);
        var result = _client.DoTheThing();
        Assert.AreEqual(2, result);
    }
}
Sometimes the tests fail because one of the substitutes returns the value set up for the other test.
How should these tests be structured so that NUnit will run them reliably in parallel?
You haven't shown any Parallelizable attributes in your example, so I assume you are using the attribute at a higher level, most likely on the assembly. Otherwise, no parallel execution would occur. Further, since you say the test cases are running in parallel, you have apparently specified ParallelScope.Children.
The two test cases shown in your fixture cannot run in parallel. You should bear in mind that the SetUp method runs for each of the tests. So each of your two tests sets the value of _service, which is part of the state of the single instance of CalculateShipFromStoreShippingCost, which is shared by both tests. That is why you are seeing the "wrong" substitute being returned at times.
It is not possible for two test cases to run reliably in parallel if they both change the state of the fixture. Note that it does not matter whether the assignment to _service takes place in the test method itself or in the SetUp method - both are executed as part of the test case. So, you have to either stop running these two cases in parallel or stop changing the state.
To stop running the tests in parallel, you simply add [NonParallelizable] to each test method. If you are not using the latest framework version, use [Parallelizable(ParallelScope.None)] instead. Your other tests will continue to run in parallel, but these two will not.
Alternatively, use ParallelScope.Fixtures at the assembly level. This will cause fixtures to run in parallel by default, while the individual test cases within them run sequentially. When using ParallelizableAttribute at the assembly level, it is sometimes best to take this more conservative approach, adding more parallelism within some fixtures where it is useful.
An entirely different approach is to make your tests stateless. Eliminate the _service member and use a local value within the test method itself. Each of your tests would add two lines like...
var service = Substitute.For<IService>();
var client = new Client(service);
As shown in your example, I would imagine you are getting very little performance gain from running the two methods in parallel, so I would not use that last approach unless I saw a specific performance reason to do so.
As a final note... If you make your fixtures run in parallel by default (either with an assembly-level attribute or with attributes on each fixture) and place no Parallelizable attribute on your test cases, NUnit uses an optimization whereby all the tests within the fixture run on the same thread. This saving in context changes will often make up for the loss of any performance improvement you hoped to get by running in parallel.

How to test EJB Beans in OpenEJB with JUnit5?

In JUnit 4, I use the following setup to test my EJB beans:
@RunWith(EJBContainerRunner.class)
public class MyEETestWithOneOpenEJB {

    @Inject
    private ACdiBean bean;

    @Test
    public void theTest() {
        // do test
    }
}
But in JUnit 5, there is no @RunWith(...) anymore.
Question: How to test with JUnit 5?
You will need to write your own EJBContainerExtension to replace the runner, or find an already existing one. The latter is unfortunately not very likely at the moment: JUnit 5 is still not GA and there are not many official extensions yet.
If you want to, read about the JUnit 5 extension model here.
TomEE 8 (since 8.0.7) supports testing with JUnit 5 only (without a transitive dependency on JUnit 4).
The Legacy Way
The legacy EJBContainerRunner was replaced by a related JUnit 5 extension.
If you are using Maven, you would need to add the following dependency to your pom file:
<dependency>
    <groupId>org.apache.tomee</groupId>
    <artifactId>openejb-junit5-backward</artifactId>
    <version>8.0.9</version>
    <scope>test</scope>
</dependency>
Subsequently, you can replace
@RunWith(EJBContainerRunner.class)
with
@RunWithEjbContainer
which is a pure JUnit 5 extension. There is no need to add any JUnit 4 dependency into your classpath. A usage example can be found in the module's test source at the TomEE GitHub repository.
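Put together, a sketch of the JUnit 4 test from the question migrated to this extension (only the runner annotation changes; @Test now comes from org.junit.jupiter.api, and ACdiBean is the bean from the question):
@RunWithEjbContainer
public class MyEETestWithOneOpenEJB {

    @Inject
    private ACdiBean bean;

    @Test // org.junit.jupiter.api.Test
    public void theTest() {
        // do test
    }
}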
The Modern Way
In the same release, the ApplicationComposer was enhanced to support JUnit 5 as an extension. To use it, add
<dependency>
    <groupId>org.apache.tomee</groupId>
    <artifactId>openejb-junit5</artifactId>
    <version>8.0.9</version>
    <scope>test</scope>
</dependency>
to your classpath. ApplicationComposer does not require classpath scanning and is faster than the alternative mentioned above.
Just add @RunWithApplicationComposer to your JUnit 5 test class. By default, the container lifecycle is bound to the lifecycle of the test instance. However, other modes are available as well:
PER_EACH: A container is started for each test method
PER_ALL: A container is started for each test class
PER_JVM: A container is started once per JVM
AUTO (default): A container is started based on the test instance lifecycle.
An example can be found in the examples section of the TomEE GitHub repository.
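For orientation, a rough sketch of what such a test can look like (this is not from the answer above: the class name is made up, and the @Module/@Classes configuration is the usual ApplicationComposer way of declaring what to deploy, since ApplicationComposer builds the application from these methods rather than scanning the classpath):
@RunWithApplicationComposer
public class ACdiBeanComposerTest {

    // Declare the classes to deploy instead of relying on classpath scanning
    @Module
    @Classes(cdi = true, value = ACdiBean.class)
    public EjbJar application() {
        return new EjbJar();
    }

    @Inject
    private ACdiBean bean;

    @Test
    public void theTest() {
        // exercise the injected bean here
    }
}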

Separating unit tests and integration tests in Go

Is there an established best practice for separating unit tests and integration tests in Golang (testify)? I have a mix of unit tests (which do not rely on any external resources and thus run really fast) and integration tests (which do rely on external resources and thus run slower). So, I want to be able to control whether or not to include the integration tests when I say go test.
The most straight-forward technique would seem to be to define a -integrate flag in main:
var runIntegrationTests = flag.Bool("integration", false,
    "Run the integration tests (in addition to the unit tests)")
And then to add an if-statement to the top of every integration test:
if !*runIntegrationTests {
    this.T().Skip("To run this test, use: go test -integration")
}
Is this the best I can do? I searched the testify documentation to see if there is perhaps a naming convention or something that accomplishes this for me, but didn't find anything. Am I missing something?
@Ainar-G suggests several great patterns to separate tests.
This set of Go practices from SoundCloud recommends using build tags (described in the "Build Constraints" section of the build package) to select which tests to run:
Write an integration_test.go, and give it a build tag of integration. Define (global) flags for things like service addresses and connect strings, and use them in your tests.
// +build integration

var fooAddr = flag.String(...)

func TestToo(t *testing.T) {
    f, err := foo.Connect(*fooAddr)
    // ...
}
go test takes build tags just like go build, so you can call go test -tags=integration. It also synthesizes a package main which calls flag.Parse, so any flags declared and visible will be processed and available to your tests.
As a similar option, you could also have integration tests run by default by using a build condition // +build !unit, and then disable them on demand by running go test -tags=unit.
@adamc comments:
For anyone else attempting to use build tags, it's important that the // +build test comment is the first line in your file, and that you include a blank line after the comment, otherwise the -tags command will ignore the directive.
Also, the tag used in the build comment cannot have a dash, although underscores are allowed. For example, // +build unit-tests will not work, whereas // +build unit_tests will.
To elaborate on my comment to @Ainar-G's excellent answer, over the past year I have been using the combination of -short with an Integration naming convention to achieve the best of both worlds.
Unit and Integration tests harmony, in the same file
Build flags previously forced me to have multiple files (services_test.go, services_integration_test.go, etc).
Instead, take this example below where the first two are unit tests and I have an integration test at the end:
package services

import "testing"

func TestServiceFunc(t *testing.T) {
    t.Parallel()
    ...
}

func TestInvalidServiceFunc3(t *testing.T) {
    t.Parallel()
    ...
}

func TestPostgresVersionIntegration(t *testing.T) {
    if testing.Short() {
        t.Skip("skipping integration test")
    }
    ...
}
Notice the last test has the convention of:
using Integration in the test name.
checking if running under -short flag directive.
Basically, the spec goes: "write all tests normally; if it is a long-running test or an integration test, follow this naming convention and check for -short to be nice to your peers."
Run only Unit tests:
go test -v -short
This provides you with a nice set of messages like:
=== RUN TestPostgresVersionIntegration
--- SKIP: TestPostgresVersionIntegration (0.00s)
service_test.go:138: skipping integration test
Run Integration Tests only:
go test -run Integration
This runs only the integration tests. Useful for smoke testing canaries in production.
Obviously the downside to this approach is that if anyone runs go test without the -short flag, it will default to running all tests - unit and integration.
In reality, if your project is large enough to have unit and integration tests, then you are most likely using a Makefile, where you can have simple directives to use go test -short. Or just put it in your README.md file and call it a day.
I see three possible solutions. The first is to use the short mode for unit tests. So you would use go test -short with unit tests and the same but without the -short flag to run your integration tests as well. The standard library uses the short mode to either skip long-running tests, or make them run faster by providing simpler data.
The second is to use a convention and call your tests either TestUnitFoo or TestIntegrationFoo and then use the -run testing flag to denote which tests to run. So you would use go test -run 'Unit' for unit tests and go test -run 'Integration' for integration tests.
The third option is to use an environment variable, and get it in your tests setup with os.Getenv. Then you would use simple go test for unit tests and FOO_TEST_INTEGRATION=true go test for integration tests.
I personally would prefer the -short solution since it's simpler and is used in the standard library, so it seems like it's a de facto way of separating/simplifying long-running tests. But the -run and os.Getenv solutions offer more flexibility (more caution is required as well, since regexps are involved with -run).
I was recently trying to find a solution for the same problem.
These were my criteria:
The solution must be universal
No separate package for integration tests
The separation should be complete (I should be able to run integration tests only)
No special naming convention for integration tests
It should work well without additional tooling
The aforementioned solutions (custom flag, custom build tag, environment variables) did not really satisfy all the above criteria, so after a little digging and playing I came up with this solution:
package main

import (
    "flag"
    "regexp"
    "testing"
)

func TestIntegration(t *testing.T) {
    if m := flag.Lookup("test.run").Value.String(); m == "" || !regexp.MustCompile(m).MatchString(t.Name()) {
        t.Skip("skipping as execution was not requested explicitly using go test -run")
    }

    t.Parallel()

    t.Run("HelloWorld", testHelloWorld)
    t.Run("SayHello", testSayHello)
}
The implementation is straightforward and minimal. Although it requires a simple convention for tests, it's less error prone. A further improvement could be extracting the code into a helper function.
Usage
Run integration tests only across all packages in a project:
go test -v ./... -run ^TestIntegration$
Run all tests (regular and integration):
go test -v ./... -run .\*
Run only regular tests:
go test -v ./...
This solution works well without tooling, but a Makefile or some aliases can make it easier to use. It can also be easily integrated into any IDE that supports running Go tests.
The full example can be found here: https://github.com/sagikazarmark/modern-go-application
I encourage you to look at Peter Bourgon's approach; it is simple and avoids some problems with the advice in the other answers: https://peter.bourgon.org/blog/2021/04/02/dont-use-build-tags-for-integration-tests.html
There are many downsides to using build tags, short mode or flags; see here.
I would recommend using environment variables with a test helper that can be imported into individual packages:
func IntegrationTest(t *testing.T) {
    t.Helper()
    if os.Getenv("INTEGRATION") == "" {
        t.Skip("skipping integration tests, set environment variable INTEGRATION")
    }
}
In your tests you can now easily call this at the start of your test function:
func TestPostgresQuery(t *testing.T) {
    IntegrationTest(t)
    // ...
}
Why I would not recommend using either -short or flags:
Someone who checks out your repository for the first time should be able to run go test ./... and have all tests pass, which is often not the case if they rely on external dependencies.
The problem with the flag package is that it will work until you have integration tests across different packages, and some will run flag.Parse() and some will not, which will lead to an error like this:
go test ./... -integration
flag provided but not defined: -integration
Usage of /tmp/go-build3903398677/b001/foo.test:
Environment variables appear to be the most flexible, robust and require the least amount of code with no visible downsides.