I am using the scalatest-maven-plugin and I would like to run integration tests separately from unit tests. The test paths are src/it and src/test for integration and unit tests, respectively.
What is the best approach to achieve this goal?
Thanks
One option is to create an object and then use it as a tag in each test:
import org.scalatest.Tag

object IntegrationTag extends Tag("Integration-Test")

test("Test for correct number of records", IntegrationTag) {
  // some stuff
}
Then, if you want to run only the unit tests, simply run the command:
mvn test -DtagsToExclude=Integration-Test
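Conversely, the plugin also has a tagsToInclude property, so you should be able to run only the integration tests with:
mvn test -DtagsToInclude=Integration-Test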
This is one possible solution; there are surely others.
My tests aren't in the same package as my code. I find this a less cluttered way of organising a codebase with a lot of test files, and I've read that it's a good idea in order to limit tests to interacting via the package's public API.
So it looks something like this:
api_client:
Client.go
ArtistService.go
...
api_client_tests
ArtistService.Events_test.go
ArtistService.Info_test.go
UtilityFunction.go
...
I can type go test bandsintown-api/api_client_tests -cover
and see 0.181s coverage: 100.0% of statements. But that's actually just coverage over my UtilityFunction.go (as I saw when I ran go test bandsintown-api/api_client_tests -coverprofile=cover.out and
go tool cover -html=cover.out).
Is there any way to get the coverage for the actual api_client package under test, without bringing it all into the same package?
As mentioned in the comments, you can run
go test -cover -coverpkg "api_client" "api_client_tests"
to run the tests with coverage.
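If you also want the HTML report mentioned in the question, the -coverpkg flag combines with a coverage profile (assuming the same package paths as above):
go test -coverpkg "api_client" -coverprofile=cover.out "api_client_tests"
go tool cover -html=cover.out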
But splitting code files and test files into different directories isn't the Go way.
I suppose you want black-box testing (no package-private identifiers accessible from outside the package, even for tests).
To accomplish this, Go allows tests to live in a separate package without moving the files to another directory; the external test package must be named after the package under test, with a _test suffix. Example:
api_client.go
package api_client

// will not be accessible outside of the package
var privateVar = 10

func Method() {
}
api_client_test.go
package api_client_test

import (
	"api_client" // adjust to the package's real import path
	"testing"
)

func TestClient(t *testing.T) {
	api_client.Method() // privateVar is not accessible here
}
Is there an established best practice for separating unit tests and integration tests in Go (with testify)? I have a mix of unit tests (which do not rely on any external resources and thus run really fast) and integration tests (which do rely on external resources and thus run more slowly). So, I want to be able to control whether or not to include the integration tests when I run go test.
The most straightforward technique would seem to be to define an -integration flag in main:
var runIntegrationTests = flag.Bool("integration", false,
	"Run the integration tests (in addition to the unit tests)")
And then to add an if-statement to the top of every integration test:
if !*runIntegrationTests {
	this.T().Skip("To run this test, use: go test -integration")
}
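Put together with testify's suite package, the whole pattern I'm using looks roughly like this (a sketch; the suite and test names are illustrative):
package services_test

import (
	"flag"
	"testing"

	"github.com/stretchr/testify/suite"
)

var runIntegrationTests = flag.Bool("integration", false,
	"Run the integration tests (in addition to the unit tests)")

type IntegrationSuite struct{ suite.Suite }

// Skipped unless -integration is passed to go test.
func (this *IntegrationSuite) TestAgainstRealDatabase() {
	if !*runIntegrationTests {
		this.T().Skip("To run this test, use: go test -integration")
	}
	// ... exercise the external resource here ...
}

func TestIntegrationSuite(t *testing.T) {
	suite.Run(t, new(IntegrationSuite))
}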
Is this the best I can do? I searched the testify documentation to see if there is perhaps a naming convention or something that accomplishes this for me, but didn't find anything. Am I missing something?
@Ainar-G suggests several great patterns to separate tests.
This set of Go practices from SoundCloud recommends using build tags (described in the "Build Constraints" section of the build package) to select which tests to run:
Write an integration_test.go, and give it a build tag of integration. Define (global) flags for things like service addresses and connect strings, and use them in your tests.
// +build integration

var fooAddr = flag.String(...)

func TestToo(t *testing.T) {
	f, err := foo.Connect(*fooAddr)
	// ...
}
go test takes build tags just like go build, so you can call go test -tags=integration. It also synthesizes a package main which calls flag.Parse, so any flags declared and visible will be processed and available to your tests.
As a similar option, you could also have integration tests run by default by using a build condition // +build !unit, and then disable them on demand by running go test -tags=unit.
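For example, a sketch of that inverse tag (the file contents are illustrative; note the blank line after the build comment):
// +build !unit

package services_test

import "testing"

func TestPostgresConnection(t *testing.T) {
	// runs with a plain `go test`, excluded entirely by `go test -tags=unit`
}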
@adamc comments:
For anyone else attempting to use build tags, it's important that the // +build test comment is the first line in your file, and that you include a blank line after the comment, otherwise the -tags command will ignore the directive.
Also, the tag used in the build comment cannot have a dash, although underscores are allowed. For example, // +build unit-tests will not work, whereas // +build unit_tests will.
To elaborate on my comment to @Ainar-G's excellent answer, over the past year I have been using the combination of -short with an Integration naming convention to achieve the best of both worlds.
Unit and Integration tests harmony, in the same file
Build tags previously forced me to have multiple files (services_test.go, services_integration_test.go, etc.).
Instead, take this example below where the first two are unit tests and I have an integration test at the end:
package services
import "testing"
func TestServiceFunc(t *testing.T) {
	t.Parallel()
	...
}

func TestInvalidServiceFunc3(t *testing.T) {
	t.Parallel()
	...
}

func TestPostgresVersionIntegration(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping integration test")
	}
	...
}
Notice the last test follows the convention of:
using Integration in the test name, and
checking whether it is running under the -short flag directive.
Basically, the spec goes: "write all tests normally. If it is a long-running test, or an integration test, follow this naming convention and check for -short to be nice to your peers."
Run only Unit tests:
go test -v -short
This provides you with a nice set of messages, like:
=== RUN TestPostgresVersionIntegration
--- SKIP: TestPostgresVersionIntegration (0.00s)
service_test.go:138: skipping integration test
Run Integration Tests only:
go test -run Integration
This runs only the integration tests. Useful for smoke testing canaries in production.
Obviously the downside to this approach is that if anyone runs go test without the -short flag, all tests will run by default, unit and integration alike.
In reality, if your project is large enough to have unit and integration tests, then you are most likely using a Makefile, where you can add simple directives that use go test -short. Or just put the command in your README.md file and call it a day.
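For example, a minimal Makefile sketch (the target names are illustrative):
.PHONY: test test-integration

test:
	go test -v -short ./...

test-integration:
	go test -v -run Integration ./...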
I see three possible solutions. The first is to use the short mode for unit tests. So you would use go test -short with unit tests and the same but without the -short flag to run your integration tests as well. The standard library uses the short mode to either skip long-running tests, or make them run faster by providing simpler data.
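As an illustration of that second style, a sketch where -short shrinks the workload instead of skipping the test entirely (the numbers are made up):
func TestHeavyComputation(t *testing.T) {
	iterations := 1000000
	if testing.Short() {
		iterations = 1000 // provide simpler data in -short mode
	}
	// ... run the computation `iterations` times and check the result ...
}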
The second is to use a convention and call your tests either TestUnitFoo or TestIntegrationFoo and then use the -run testing flag to denote which tests to run. So you would use go test -run 'Unit' for unit tests and go test -run 'Integration' for integration tests.
The third option is to use an environment variable, and get it in your tests setup with os.Getenv. Then you would use simple go test for unit tests and FOO_TEST_INTEGRATION=true go test for integration tests.
I personally would prefer the -short solution since it's simpler and is used in the standard library, so it seems like it's a de facto way of separating/simplifying long-running tests. But the -run and os.Getenv solutions offer more flexibility (more caution is required as well, since regexps are involved with -run).
I was recently trying to find a solution to the same problem.
These were my criteria:
The solution must be universal
No separate package for integration tests
The separation should be complete (I should be able to run integration tests only)
No special naming convention for integration tests
It should work well without additional tooling
The aforementioned solutions (custom flag, custom build tag, environment variables) did not really satisfy all the above criteria, so after a little digging and playing I came up with this solution:
package main
import (
	"flag"
	"regexp"
	"testing"
)

func TestIntegration(t *testing.T) {
	if m := flag.Lookup("test.run").Value.String(); m == "" || !regexp.MustCompile(m).MatchString(t.Name()) {
		t.Skip("skipping as execution was not requested explicitly using go test -run")
	}

	t.Parallel()

	t.Run("HelloWorld", testHelloWorld)
	t.Run("SayHello", testSayHello)
}
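For completeness, the subtests referenced above are ordinary functions with the func(t *testing.T) signature (a sketch; the bodies are illustrative):
func testHelloWorld(t *testing.T) {
	// regular test body; runs only when TestIntegration is selected via -run
}

func testSayHello(t *testing.T) {
	// ...
}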
The implementation is straightforward and minimal. Although it requires a simple convention for tests, it's less error-prone. A further improvement could be extracting the code into a helper function.
Usage
Run integration tests only across all packages in a project:
go test -v ./... -run ^TestIntegration$
Run all tests (regular and integration):
go test -v ./... -run .\*
Run only regular tests:
go test -v ./...
This solution works well without tooling, but a Makefile or some aliases can make it easier to use. It can also easily be integrated into any IDE that supports running Go tests.
The full example can be found here: https://github.com/sagikazarmark/modern-go-application
I encourage you to look at Peter Bourgon's approach; it is simple and avoids some problems with the advice in the other answers: https://peter.bourgon.org/blog/2021/04/02/dont-use-build-tags-for-integration-tests.html
There are many downsides to using build tags, short mode or flags; see the post above.
I would recommend using environment variables with a test helper that can be imported into individual packages:
import (
	"os"
	"testing"
)

func IntegrationTest(t *testing.T) {
	t.Helper()
	if os.Getenv("INTEGRATION") == "" {
		t.Skip("skipping integration tests, set environment variable INTEGRATION")
	}
}
In your tests you can now easily call this at the start of your test function:
func TestPostgresQuery(t *testing.T) {
	IntegrationTest(t)
	// ...
}
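Running the integration tests is then just a matter of setting the variable (assuming a POSIX shell):
INTEGRATION=true go test ./...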
Why I would not recommend using either -short or flags:
Someone who checks out your repository for the first time should be able to run go test ./... and have all tests pass, which is often not the case if the tests rely on external dependencies.
The problem with the flag package is that it works until you have integration tests across different packages, where some run flag.Parse() and some do not, which leads to an error like this:
go test ./... -integration
flag provided but not defined: -integration
Usage of /tmp/go-build3903398677/b001/foo.test:
Environment variables appear to be the most flexible and robust option: they require the least amount of code and have no visible downsides.
I inherited a solution with 4 projects: a Front-End project, a Business project, a Data project and a Test project.
The test project is quite... let's say empty... and now, after I changed a few things in some search methods of the business classes, I would like to write some tests to validate the changes I've made.
So my question is: Is there an automatic way to generate an "empty frame test class" to test my actual code? Something like "right-click the class you want to test, click 'generate test class', and choose the project where it will be created", maybe?
Details:
I'm using VS 2012 Ultimate
There are no tests for the class I'm working on
There is built-in functionality that allows you to create unit test classes. I am not sure if that also works in combination with NUnit, though.
Anyway, I never used it. What I do is:
add a test class to the test project
decorate the class with the [TestFixture] attribute
write the first method of what I want to test
decorate the method with the [Test] attribute
write the test
and start the TDD cycle.
A typical test class skeleton will look like this
using NUnit.Framework;

namespace Tests.Framework
{
    [TestFixture]
    public class SomeClassTests
    {
        [Test]
        public void AMeaningfulTestMethodName()
        {
            // the test
        }
    }
}
I also have ReSharper to aid me, so I can run the tests from Visual Studio straight away.
Since it's so little effort for me to add a new test class to the project, I don't see the need to add it via templates. The most annoying part of templates is that they overgenerate: they will produce [SetUp] and [TearDown] methods that I don't always need. I like to keep my classes as clean as possible, but it's a matter of taste.
Here are some links that you might find helpful if you want to:
save your own predefined test class template
use the built-in functionality of Visual Studio
follow an MSDN walkthrough regarding the topic
I am trying to add my first unit test to an existing Open Source project. Specifically, I added a new class, called audio_manager:
src/audio/audio_manager.h
src/audio/audio_manager.cc
I created a src/test directory structure that mirrors the structure of the implementation files, and wrote my googletest unit tests:
src/test/audio/audio_manager.cc
Now, I am trying to set up my Makefile.am to compile and run the unit test:
src/test/audio/Makefile.am
I copied Makefile.am from:
src/audio/Makefile.am
Does anyone have a simple recipe for me, or is it off to the cryptic automake documentation for me? :)
If the existing project already has a test structure in place, then you should just add:
TESTS += audio_manager
to the existing tests/Makefile.am. If the existing project does not have a test structure in place, you should run screaming for the hills.
If running for the hills is not acceptable, there's a fair bit of work in getting the test structure in place, but it's not insurmountable. You might prefer to make tests a sibling of src, but that's not necessary. It's probably easier to start with a fresh Makefile.am rather than copying the Makefile.am from src, but maybe not. Possibly, all you'll need to do is change lines of the form:
bin_PROGRAMS = ...
to
check_PROGRAMS = ...
add the line
TESTS = test-audio-manager
change the name of audio_manager.cc to test-audio-manager.cc (that's not strictly necessary, but will help maintainability. I changed _ to - purely out of personal preference) and add a
SUBDIRS = tests/audio
to src/Makefile.am. (If there's already a SUBDIRS directive, append to that assignment or use +=)
William's answer got me where I needed to go. Just for the sake of the community, here's what I ended up doing:
I moved my tests back into the main directory structure and prepended test_, as per William's suggestions.
I added a few lines to src/audio/Makefile.am to enable unit tests:
# Unit tests
noinst_PROGRAMS = test_audio_manager
test_audio_manager_SOURCES = $(libadonthell_audio_la_SOURCES) test_audio_manager.cc
test_audio_manager_CXXFLAGS = $(libadonthell_audio_la_CXXFLAGS)
test_audio_manager_LDADD = $(libadonthell_audio_la_LIBADD) -lgtest
TESTS = test_audio_manager
Now, running "make check" fires the unit tests!
All of this can be seen here: http://github.com/ksterker/adonthell/commit/aacdb0fe22f59e61ef0f5986827af180c56ae9f3
Complementing the information in the other answers, you can also specify multiple tests in TESTS.
Regardless of how many tests you specify, you don't actually have to specify them twice. Instead, just set TESTS to $(check_PROGRAMS). This helps prevent the accidental situation where you add your test to check_PROGRAMS but forget to add it to TESTS, so the new test gets built but is never run by make check:
# Unit tests
check_PROGRAMS = test_audio_manager
test_audio_manager_SOURCES = test_audio_manager.cc
TESTS = $(check_PROGRAMS)
...or to do the same with multiple tests:
# Unit tests
check_PROGRAMS = test_audio_manager test_video_manager
test_audio_manager_SOURCES = test_audio_manager.cc
test_video_manager_SOURCES = test_video_manager.cc
TESTS = $(check_PROGRAMS)