scalatest: run integration tests separately from unit tests

I am using the ScalaTest Maven plugin and I would like to run integration tests separately from unit tests. The test paths are src/it and src/test for integration tests and unit tests, respectively.
What is the best approach to achieve this goal?
Thanks

One option is to create a Tag object and then use it to tag each integration test:

import org.scalatest.Tag

object IntegrationTag extends Tag("Integration-Test")

test("Test for correct number of records", IntegrationTag) {
  // some stuff
}

Then, if you want to run only the unit tests, simply run the command:
mvn test -DtagsToExclude=Integration-Test
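If you would rather bake the exclusion into the build itself, the same tag can be excluded in the scalatest-maven-plugin configuration, roughly like this (a sketch; the plugin version and other settings are omitted):

<plugin>
  <groupId>org.scalatest</groupId>
  <artifactId>scalatest-maven-plugin</artifactId>
  <configuration>
    <!-- tests tagged Integration-Test are skipped by mvn test -->
    <tagsToExclude>Integration-Test</tagsToExclude>
  </configuration>
</plugin>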
This is one possible solution; there are surely others.

Related

phpunit skipping tests with dependencies between different test cases

For an integration test I want to reuse test results. A dependency is defined via annotations. For the dependent test to be executed, the result from the previous test needs to be available, so the tests need to be executed in a fixed order; otherwise, tests which depend on other tests are skipped. To make sure the tests are executed in a fixed order, a test suite has been defined. Still, the test with the dependency is skipped. Why is that?
ATest.php:
<?php
use PHPUnit\Framework\TestCase;

class ATest extends TestCase
{
    public function testA()
    {
        self::assertTrue(true);
        return $this;
    }
}
BTest.php:
<?php
use PHPUnit\Framework\TestCase;

class BTest extends TestCase
{
    /**
     * @depends ATest::testA()
     */
    public function testB($a)
    {
        self::assertTrue(true);
    }
}
phpunit.xml:
<?xml version="1.0" encoding="UTF-8"?>
<phpunit verbose="true">
    <testsuites>
        <testsuite name="dependency">
            <file>ATest.php</file>
            <file>BTest.php</file>
        </testsuite>
    </testsuites>
</phpunit>
phpunit --testsuite dependency
PHPUnit 5.5.7 by Sebastian Bergmann and contributors.

Runtime:       PHP 7.1.5 with Xdebug 2.5.4
Configuration: /phpunit.xml

.S                                                    2 / 2 (100%)

Time: 49 ms, Memory: 4.00MB

There was 1 skipped test:

1) BTest::testB
This test depends on "ATest::testA()" to pass.

OK, but incomplete, skipped, or risky tests!
Tests: 1, Assertions: 1, Skipped: 1.
You can't have a test depend on a test in a different TestCase; the tests need to be contained in the same test case. Since the test it depends on is not in the same test case, the dependency is treated like a failed test, and the dependent test is skipped when you run the tests.
Your tests need to be combined into one test case in order for the dependency to work.
https://phpunit.de/manual/current/en/writing-tests-for-phpunit.html#writing-tests-for-phpunit.test-dependencies
Part of the reason for this is that each of your tests should be isolated and able to run in any order. Having a test depend on a test in a separate test case means that the test files would need to run in a specific order, which can easily get complicated, for example through circular test dependencies.
In addition, you would then have things affecting tests that are not contained within your test case, which can make maintaining the tests a nightmare.

Separating unit tests and integration tests in Go

Is there an established best practice for separating unit tests and integration tests in Go (with testify)? I have a mix of unit tests (which do not rely on any external resources and thus run really fast) and integration tests (which do rely on external resources and thus run slower). So, I want to be able to control whether or not to include the integration tests when I run go test.
The most straightforward technique would seem to be to define an -integration flag in main:
var runIntegrationTests = flag.Bool("integration", false, "Run the integration tests (in addition to the unit tests)")
And then to add an if-statement to the top of every integration test:
if !*runIntegrationTests {
    this.T().Skip("To run this test, use: go test -integration")
}
Is this the best I can do? I searched the testify documentation to see if there is perhaps a naming convention or something that accomplishes this for me, but didn't find anything. Am I missing something?
@Ainar-G suggests several great patterns for separating tests.
This set of Go practices from SoundCloud recommends using build tags (described in the "Build Constraints" section of the build package) to select which tests to run:
Write an integration_test.go, and give it a build tag of integration. Define (global) flags for things like service addresses and connect strings, and use them in your tests.
// +build integration

var fooAddr = flag.String(...)

func TestToo(t *testing.T) {
    f, err := foo.Connect(*fooAddr)
    // ...
}
go test takes build tags just like go build, so you can call go test -tags=integration. It also synthesizes a package main which calls flag.Parse, so any flags declared and visible will be processed and available to your tests.
As a similar option, you could also have integration tests run by default by using a build condition // +build !unit, and then disable them on demand by running go test -tags=unit.
@adamc comments:
For anyone else attempting to use build tags, it's important that the // +build comment is the first line in your file, and that you include a blank line after the comment; otherwise the -tags option will ignore the directive.
Also, the tag used in the build comment cannot have a dash, although underscores are allowed. For example, // +build unit-tests will not work, whereas // +build unit_tests will.
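To make those placement rules concrete, here is a minimal sketch of how such a file would start (the package name is illustrative):

// +build integration

package services

// integration tests for this package follow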
To elaborate on my comment on @Ainar-G's excellent answer, over the past year I have been using the combination of -short with an Integration naming convention to achieve the best of both worlds.
Unit and Integration tests harmony, in the same file
Build tags previously forced me to have multiple files (services_test.go, services_integration_test.go, etc.).
Instead, take this example below where the first two are unit tests and I have an integration test at the end:
package services

import "testing"

func TestServiceFunc(t *testing.T) {
    t.Parallel()
    ...
}

func TestInvalidServiceFunc3(t *testing.T) {
    t.Parallel()
    ...
}

func TestPostgresVersionIntegration(t *testing.T) {
    if testing.Short() {
        t.Skip("skipping integration test")
    }
    ...
}
Notice the last test has the convention of:
using Integration in the test name.
checking whether it is running under the -short flag.
Basically, the spec goes: "write all tests normally. If it is a long-running test, or an integration test, follow this naming convention and check for -short to be nice to your peers."
Run only Unit tests:
go test -v -short
this provides you with a nice set of messages like:
=== RUN TestPostgresVersionIntegration
--- SKIP: TestPostgresVersionIntegration (0.00s)
service_test.go:138: skipping integration test
Run Integration Tests only:
go test -run Integration
This runs only the integration tests; useful for smoke-testing canaries in production.
Obviously, the downside to this approach is that if anyone runs go test without the -short flag, it will run all tests by default, unit and integration alike.
In reality, if your project is large enough to have unit and integration tests, then you are most likely using a Makefile, where you can have simple directives that use go test -short, as sketched below. Or just put it in your README.md file and call it a day.
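For instance, a minimal Makefile along those lines (the target names are arbitrary):

test:
	go test -v -short ./...

test-all:
	go test -v ./...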
I see three possible solutions. The first is to use the short mode for unit tests. So you would use go test -short with unit tests and the same but without the -short flag to run your integration tests as well. The standard library uses the short mode to either skip long-running tests, or make them run faster by providing simpler data.
The second is to use a convention and call your tests either TestUnitFoo or TestIntegrationFoo and then use the -run testing flag to denote which tests to run. So you would use go test -run 'Unit' for unit tests and go test -run 'Integration' for integration tests.
The third option is to use an environment variable, and read it in your test setup with os.Getenv. Then you would use plain go test for unit tests and FOO_TEST_INTEGRATION=true go test for integration tests.
I personally would prefer the -short solution since it's simpler and is used in the standard library, so it seems like it's a de facto way of separating/simplifying long-running tests. But the -run and os.Getenv solutions offer more flexibility (more caution is required as well, since regexps are involved with -run).
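To illustrate the second option, a minimal sketch (the test names and bodies are hypothetical):

package services

import "testing"

// Runs with: go test -run 'Unit'
func TestUnitParseConfig(t *testing.T) {
    // fast, isolated logic test
}

// Runs with: go test -run 'Integration'
func TestIntegrationPostgresQuery(t *testing.T) {
    // talks to a real database
}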
I was trying to find a solution for the same problem recently.
These were my criteria:
The solution must be universal
No separate package for integration tests
The separation should be complete (I should be able to run integration tests only)
No special naming convention for integration tests
It should work well without additional tooling
The aforementioned solutions (custom flag, custom build tag, environment variables) did not really satisfy all the above criteria, so after a little digging and playing I came up with this solution:
package main

import (
    "flag"
    "regexp"
    "testing"
)

func TestIntegration(t *testing.T) {
    if m := flag.Lookup("test.run").Value.String(); m == "" || !regexp.MustCompile(m).MatchString(t.Name()) {
        t.Skip("skipping as execution was not requested explicitly using go test -run")
    }

    t.Parallel()

    t.Run("HelloWorld", testHelloWorld)
    t.Run("SayHello", testSayHello)
}
The implementation is straightforward and minimal. Although it requires a simple naming convention for tests, it is less error-prone. A further improvement could be to extract the check into a helper function, as sketched below.
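A rough sketch of such a helper, reusing the imports from the example above (the function name is hypothetical):

// requireExplicitRun skips the calling test unless it was requested
// explicitly via go test -run <pattern>.
func requireExplicitRun(t *testing.T) {
    t.Helper()
    m := flag.Lookup("test.run").Value.String()
    if m == "" || !regexp.MustCompile(m).MatchString(t.Name()) {
        t.Skip("skipping as execution was not requested explicitly using go test -run")
    }
}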
Usage
Run integration tests only across all packages in a project:
go test -v ./... -run ^TestIntegration$
Run all tests (regular and integration):
go test -v ./... -run .\*
Run only regular tests:
go test -v ./...
This solution works well without tooling, but a Makefile or some aliases can make it easier to use. It can also be easily integrated into any IDE that supports running Go tests.
The full example can be found here: https://github.com/sagikazarmark/modern-go-application
I encourage you to look at Peter Bourgon's approach; it is simple and avoids some problems with the advice in the other answers: https://peter.bourgon.org/blog/2021/04/02/dont-use-build-tags-for-integration-tests.html
There are many downsides to using build tags, short mode, or flags; see the post linked above.
I would recommend using environment variables with a test helper that can be imported into individual packages:
func IntegrationTest(t *testing.T) {
    t.Helper()
    if os.Getenv("INTEGRATION") == "" {
        t.Skip("skipping integration tests, set environment variable INTEGRATION")
    }
}
In your tests you can now easily call this at the start of your test function:
func TestPostgresQuery(t *testing.T) {
    IntegrationTest(t)
    // ...
}
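With the helper in place, integration tests become opt-in:

INTEGRATION=true go test ./...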
Why I would not recommend using either -short or flags:
Someone who checks out your repository for the first time should be able to run go test ./... and have all tests pass, which is often not the case if passing relies on external dependencies.
The problem with the flag package is that it will work until you have integration tests across different packages, where some run flag.Parse() and some do not, which will lead to an error like this:
go test ./... -integration
flag provided but not defined: -integration
Usage of /tmp/go-build3903398677/b001/foo.test:
Environment variables appear to be the most flexible and robust, require the least amount of code, and have no visible downsides.

Using IOC To Configure Unit and Integration Tests

I have a unit test project which uses Ninject to mock the database repositories. I would like to use these same tests as integration tests, and use Ninject to bind my real database repositories back into their respective implementations, so as to test/stress the application against the DB.
Is there a way to do this with Visual Studio 2012 or is there another test framework, other than MSTest, which allows for this type of configuration?
I would really hate to rewrite/copy these unit tests into an integration test project, but I suspect I could add the files as links and have a single test file compiled into two projects (Unit and Integration).
Thanks
Todd
Your requirements sound really odd to me. The difference between a unit test and an integration test is much bigger than just connecting to a database or not. An integration test either has a much bigger scope, or tests whether components communicate correctly. When you write a unit test, the scope of such a unit is normally small (one class/component with all dependencies mocked out), which means there is no need for a DI container.
Let me put it differently: when the tests are exactly the same, why would you want to run the same test both with and without the database? Just leave the database in and test that. Besides these tests, you can add 'real' unit tests that have a much smaller scope.
With NUnit you can do this with TestCase. Say you need to run the unit and integration tests using CustomerRepository and OrderRepository:
[TestFixture]
public class TestCustomerRepository
{
    IKernel _unit;
    IKernel _integration;

    [SetUp]
    public void Setup()
    {
        //setup both kernels
    }

    [TestCase("Unit")]
    [TestCase("Integration")]
    public void DoTest(String type)
    {
        var custRepo = GetRepo<ICustomerRepository>(type);
        var orderRepo = GetRepo<IOrderRepository>(type);
        //do the test here
    }

    protected T GetRepo<T>(String type)
    {
        if (type.Equals("Unit"))
        {
            return _unit.Get<T>();
        }
        return _integration.Get<T>();
    }
}
This is the basic idea.

mvn test does not find JUnit Tests

I have a test class which is named xxxTest.java, so the test class is found. But the unit tests within the class are not run when I execute mvn test.
I am using JUnit 4 and the test methods are annotated with @Test, e.g.:
@Test
public void shouldDoSomeAsserting() {
    // unit test impl
}
If I rename that test method so it is named testShouldDoSomeAsserting(), then mvn test does find and execute that unit test.
I was under the impression that when I use @Test, as long as the method is public and void, it would be considered a test method.
Have I missed something?
Thanks.
Are you absolutely sure you are using JUnit 4? This sounds like JUnit 3 behaviour.
I made a minimal sample project and I couldn't recreate your problem. The test runs as expected.
The sample project is here: https://gist.github.com/1888802
Maybe you can get some kind of hint from it.
I've got it working by manually specifying a provider; more details here: Surefire JUnit 4 config
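Specifying the provider pins Surefire to the JUnit 4 runner, roughly like this (a sketch; pick the version matching your Surefire plugin, see the linked question for details):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <dependencies>
    <dependency>
      <groupId>org.apache.maven.surefire</groupId>
      <artifactId>surefire-junit47</artifactId>
      <version><!-- match your Surefire version --></version>
    </dependency>
  </dependencies>
</plugin>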

Grails: mocked data from a unit test is available in an integration test

I'm having a failing integration test because of test pollution (tests pass or fail depending on which order they are run in).
What baffles me a bit, however, is that it seems that data I mocked in a unit test with mockDomain(Media.class, [new Movie(...)]) is still present and available in other tests, even integration tests.
Is this the expected behaviour? Why doesn't the test framework clean up after itself for each test?
EDIT
Really strange, the documentation states that:
Integration tests differ from unit tests in that you have full access to the Grails environment within the test. Grails will use an in-memory HSQLDB database for integration tests and clear out all the data from the database in between each test.
However, in my integration test I have the following code:
protected void setUp() {
    super.setUp()
    assertEquals("TEST POLLUTION!", 0, Movie.count())
    ...
}
Which gives me the output:
TEST POLLUTION! expected:<0> but was:<1>
Meaning that there is data present when there shouldn't be!
Looking at the data present in Movie.list(), I find that it corresponds to data set in a previous test (a unit test):
protected void setUp() {
    super.setUp()
    //mock the superclass and subclasses as instances
    mockDomain(Media.class, [
        new Movie(id: 1, name: 'testMovie')
    ])
    ...
}
Any ideas why I'm experiencing these issues?
It's also possible that the pollution is in the test database. Check DataSource.groovy to see what is being used for the test environment. If it is set to use a database where dbCreate is set to something other than "create-drop", any previous contents of the database could also be showing up.
If this is the case, the pollution has come from an entirely different source: instead of coming from the unit tests, it has actually come from the database, and when switching to run the integration tests you get connected to a real database with all the data it contains.
We experienced this problem, in that our test environment was set to have dbCreate as "update". Quite why this was set for integration tests puzzled me, so I switched dbCreate to "create-drop" and made sure that test suites start with a clean database.
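For reference, the relevant test block in DataSource.groovy would then look something like this (the JDBC URL is illustrative; adjust to your setup):

environments {
    test {
        dataSource {
            dbCreate = "create-drop"   // wipe and recreate the schema for each test run
            url = "jdbc:hsqldb:mem:testDb"
        }
    }
}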