I am using the Boost.Test library to implement unit test cases in C++. Suppose I have two suites, such as:
BOOST_AUTO_TEST_SUITE(TestA)

BOOST_AUTO_TEST_CASE(CorrectAddition)
{
    BOOST_CHECK_EQUAL(2 + 2, 4);
}

BOOST_AUTO_TEST_CASE(WrongAddition)
{
    BOOST_CHECK_EQUAL(2 + 2, 5);
}

BOOST_AUTO_TEST_SUITE_END()

BOOST_AUTO_TEST_SUITE(TestB)

BOOST_AUTO_TEST_CASE(CorrectAddition)
{
    bool ret = true;
    BOOST_CHECK_EQUAL(ret, true);
}

BOOST_AUTO_TEST_CASE(WrongAddition)
{
    BOOST_CHECK_EQUAL(2 + 2, 5);
}

BOOST_AUTO_TEST_SUITE_END()
and I would like to run only, say, suite 'TestB'. How shall I execute it?
Thank you for your time and help. Sorry if this question has been posted or documented elsewhere.
According to this documentation, the OP should call the unit test executable with the following parameter:
--run_test=TestB
to run only the unit tests of test suite TestB.
If the unit test CorrectAddition should be run in every test suite, then the parameter is:
--run_test=*/CorrectAddition
The wildcard support of Boost.Test is quite powerful, so the parameter could also be written as:
--run_test=*/C*
Assuming you are using the library-supplied main entry point, command-line parsing, etc., and haven't rolled your own, you can select specific test suites and test cases by name or pattern via a command-line switch at run time.
See this page in the documentation for a good example.
The built-in unit testing functionality (unittest {...} code blocks) seems to only be activated when running a compiled executable.
How can I activate unit tests in a library with no main function?
This is somewhat related to this SO question, although the accepted answer there deals with a workaround via the main function.
As an example, I would expect unit testing to fail on a file containing only this code:
int foo(int i) { return i + 1; }
unittest {
    assert(foo(1) == 1); // should fail
}
You'll notice I don't have a module declared at the top. I'm not sure if that matters for this specific question, but in reality I would have a module statement at the top.
How can I activate unit tests in a library with no main function?
You can use DMD's -main switch, or rdmd's --main switch, to add an empty main function to the set of compiled source files. This allows creating a unit test binary for your library.
If you use Dub, dub test will do something like the above automatically.
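For example, something along the lines of rdmd -unittest --main mylib.d (or dmd -unittest -main mylib.d, then running the resulting binary) should build and run the library's unittest blocks; -unittest is the switch that compiles them in, and dub test passes it for you. The file name here is just a placeholder.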
I need to specify a specific unit test to run. Of course, I looked at Stack Overflow answers after I looked at the documentation, but neither used a fixture.
How do I specify a test case to run if I have:
BOOST_AUTO_TEST_SUITE(mysuite)

struct Fixture
{
    Fixture()
    {
        BOOST_TEST_MESSAGE("Setup");
    }

    ~Fixture()
    {
        BOOST_TEST_MESSAGE("Teardown");
    }
};

BOOST_FIXTURE_TEST_CASE(add_remove, Fixture)
{
}

BOOST_AUTO_TEST_SUITE_END()
If I pass --run_test=add_remove, the process returns with a message Test setup error: no test cases matching filter or all test cases were disabled.
I've looked at:
http://www.boost.org/doc/libs/1_64_0/libs/test/doc/html/boost_test/runtime_config/test_unit_filtering.html
Is it possible to run only subsets of a Boost unit test module?
How to set which Boost unit test to run
When you're using the path to the test case as the argument to --run_test, it must also include the name of the suite. In your case, pass --run_test=mysuite/add_remove. This is described in the documentation (look at the table in that section).
Live demo
Is there an established best practice for separating unit tests and integration tests in Go (testify)? I have a mix of unit tests (which do not rely on any external resources and thus run really fast) and integration tests (which do rely on external resources and thus run slower). So, I want to be able to control whether or not to include the integration tests when I run go test.
The most straightforward technique would seem to be to define an -integration flag in main:
var runIntegrationTests = flag.Bool("integration", false,
	"Run the integration tests (in addition to the unit tests)")
And then to add an if-statement to the top of every integration test:
if !*runIntegrationTests {
	this.T().Skip("To run this test, use: go test -integration")
}
Is this the best I can do? I searched the testify documentation to see if there is perhaps a naming convention or something that accomplishes this for me, but didn't find anything. Am I missing something?
@Ainar-G suggests several great patterns to separate tests.
This set of Go practices from SoundCloud recommends using build tags (described in the "Build Constraints" section of the build package) to select which tests to run:
Write an integration_test.go, and give it a build tag of integration. Define (global) flags for things like service addresses and connect strings, and use them in your tests.
// +build integration

var fooAddr = flag.String(...)

func TestToo(t *testing.T) {
	f, err := foo.Connect(*fooAddr)
	// ...
}
go test takes build tags just like go build, so you can call go test -tags=integration. It also synthesizes a package main which calls flag.Parse, so any flags declared and visible will be processed and available to your tests.
As a similar option, you could also have integration tests run by default by using a build condition // +build !unit, and then disable them on demand by running go test -tags=unit.
@adamc comments:
For anyone else attempting to use build tags, it's important that the // +build test comment is the first line in your file, and that you include a blank line after the comment, otherwise the -tags command will ignore the directive.
Also, the tag used in the build comment cannot have a dash, although underscores are allowed. For example, // +build unit-tests will not work, whereas // +build unit_tests will.
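Putting the tag, the blank line, and a flag together, a complete tagged test file might look roughly like the sketch below. This is only an illustration: the package name, the -foo-addr flag, its default value, and the TCP dial are invented, not part of the original recommendation.
// +build integration

package foo_test

import (
	"flag"
	"net"
	"testing"
)

// fooAddr is a hypothetical flag naming the external service the test talks to.
var fooAddr = flag.String("foo-addr", "localhost:5432", "address of the foo service")

// TestFooIntegration is only compiled (and therefore only run) when the build
// is invoked with -tags=integration, e.g. go test -tags=integration ./...
func TestFooIntegration(t *testing.T) {
	conn, err := net.Dial("tcp", *fooAddr)
	if err != nil {
		t.Fatalf("connecting to foo at %s: %v", *fooAddr, err)
	}
	defer conn.Close()
	// Exercise the real service here...
}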
To elaborate on my comment on @Ainar-G's excellent answer, over the past year I have been using the combination of -short with an Integration naming convention to achieve the best of both worlds.
Unit and integration tests in harmony, in the same file
Build flags previously forced me to have multiple files (services_test.go, services_integration_test.go, etc).
Instead, take this example below where the first two are unit tests and I have an integration test at the end:
package services

import "testing"

func TestServiceFunc(t *testing.T) {
	t.Parallel()
	...
}

func TestInvalidServiceFunc3(t *testing.T) {
	t.Parallel()
	...
}

func TestPostgresVersionIntegration(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping integration test")
	}
	...
}
Notice the last test has the convention of:
using Integration in the test name.
checking whether it is running under the -short flag.
Basically, the spec goes: "Write all tests normally. If it is a long-running test, or an integration test, follow this naming convention and check for -short to be nice to your peers."
Run only Unit tests:
go test -v -short
this provides you with a nice set of messages like:
=== RUN TestPostgresVersionIntegration
--- SKIP: TestPostgresVersionIntegration (0.00s)
service_test.go:138: skipping integration test
Run Integration Tests only:
go test -run Integration
This runs only the integration tests. Useful for smoke testing canaries in production.
Obviously, the downside to this approach is that if anyone runs go test without the -short flag, it will default to running all tests, unit and integration alike.
In reality, if your project is large enough to have unit and integration tests, then you are most likely using a Makefile, where you can have simple directives to run go test -short. Or, just put the command in your README.md file and call it a day.
I see three possible solutions. The first is to use the short mode for unit tests. So you would use go test -short with unit tests and the same but without the -short flag to run your integration tests as well. The standard library uses the short mode to either skip long-running tests, or make them run faster by providing simpler data.
The second is to use a convention and call your tests either TestUnitFoo or TestIntegrationFoo and then use the -run testing flag to denote which tests to run. So you would use go test -run 'Unit' for unit tests and go test -run 'Integration' for integration tests.
The third option is to use an environment variable, and get it in your tests setup with os.Getenv. Then you would use simple go test for unit tests and FOO_TEST_INTEGRATION=true go test for integration tests.
I personally would prefer the -short solution since it's simpler and is used in the standard library, so it seems like it's a de facto way of separating/simplifying long-running tests. But the -run and os.Getenv solutions offer more flexibility (more caution is required as well, since regexps are involved with -run).
I was recently trying to find a solution to the same problem.
These were my criteria:
The solution must be universal
No separate package for integration tests
The separation should be complete (I should be able to run integration tests only)
No special naming convention for integration tests
It should work well without additional tooling
The aforementioned solutions (custom flag, custom build tag, environment variables) did not really satisfy all the above criteria, so after a little digging and playing I came up with this solution:
package main

import (
	"flag"
	"regexp"
	"testing"
)

func TestIntegration(t *testing.T) {
	if m := flag.Lookup("test.run").Value.String(); m == "" || !regexp.MustCompile(m).MatchString(t.Name()) {
		t.Skip("skipping as execution was not requested explicitly using go test -run")
	}

	t.Parallel()

	t.Run("HelloWorld", testHelloWorld)
	t.Run("SayHello", testSayHello)
}

// The actual integration sub-tests live in unexported functions like these.
func testHelloWorld(t *testing.T) { /* ... */ }
func testSayHello(t *testing.T)   { /* ... */ }
The implementation is straightforward and minimal. Although it requires a simple convention for tests, it is less error-prone. A further improvement could be extracting the check into a helper function, as sketched below.
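For instance, the check could be pulled out like this (a sketch only; the helper name requireExplicitRun is invented, and the package and imports mirror the block above):
package main

import (
	"flag"
	"regexp"
	"testing"
)

// requireExplicitRun skips the calling test unless the test binary was started
// with a -run pattern that explicitly matches the test's name.
func requireExplicitRun(t *testing.T) {
	t.Helper()
	pattern := flag.Lookup("test.run").Value.String()
	if pattern == "" || !regexp.MustCompile(pattern).MatchString(t.Name()) {
		t.Skip("skipping as execution was not requested explicitly using go test -run")
	}
}

// TestIntegration then only carries the helper call plus the sub-tests.
func TestIntegration(t *testing.T) {
	requireExplicitRun(t)
	t.Parallel()
	// t.Run("HelloWorld", testHelloWorld)
	// ...
}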
Usage
Run integration tests only across all packages in a project:
go test -v ./... -run ^TestIntegration$
Run all tests (regular and integration):
go test -v ./... -run .\*
Run only regular tests:
go test -v ./...
This solution works well without tooling, but a Makefile or some aliases can make it easier to use. It can also be easily integrated into any IDE that supports running go tests.
The full example can be found here: https://github.com/sagikazarmark/modern-go-application
I encourage you to look at Peter Bourgon's approach; it is simple and avoids some problems with the advice in the other answers: https://peter.bourgon.org/blog/2021/04/02/dont-use-build-tags-for-integration-tests.html
There are many downsides to using build tags, short mode, or flags; see here.
I would recommend using environment variables with a test helper that can be imported into individual packages:
func IntegrationTest(t *testing.T) {
	t.Helper()
	if os.Getenv("INTEGRATION") == "" {
		t.Skip("skipping integration tests, set environment variable INTEGRATION")
	}
}
In your tests you can now easily call this at the start of your test function:
func TestPostgresQuery(t *testing.T) {
	IntegrationTest(t)
	// ...
}
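With this helper in place, a plain go test ./... skips the integration tests, while something like INTEGRATION=true go test ./... (any non-empty value works) runs them as well.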
Why I would not recommend using either -short or flags:
Someone who checks out your repository for the first time should be able to run go test ./... and have all tests pass, which is often not the case if they rely on external dependencies.
The problem with the flag package is that it will work until you have integration tests across different packages, where some run flag.Parse() and some do not, which will lead to an error like this:
go test ./... -integration
flag provided but not defined: -integration
Usage of /tmp/go-build3903398677/b001/foo.test:
Environment variables appear to be the most flexible and robust option; they require the least amount of code and have no visible downsides.
I'm starting out using unit tests. I have a situation and don't know how to proceed:
For example:
I have a class that opens and reads a file.
In my unit test, I want to test the open method and the read method, but to read the file I need to open the file first.
If the "open file" test fails, the "read file" test would fail too!
So, how do I make it explicit that the read failed because of the open? Do I test the open inside the read?
The key feature of unit tests is isolation: one specific unit test should cover one specific functionality - and if it fails, it should report it.
In your example, read clearly depends on the open functionality: if the latter is broken, there's no reason to test the former, as we already know the result. Moreover, reporting a read failure will only add irrelevant noise to your test results.
What can (and should) be reported for read in this case is test skipped or something similar. That's how it's done in PHPUnit, for example:
class DependencyFailureTest extends PHPUnit_Framework_TestCase
{
    public function testOne()
    {
        $this->assertTrue(FALSE);
    }

    /**
     * @depends testOne
     */
    public function testTwo()
    {
    }
}
Here we have testTwo dependent on testOne. And that's what's shown when the test is run:
There was 1 failure:
1) testOne(DependencyFailureTest)
Failed asserting that <boolean:false> is true.
/home/sb/DependencyFailureTest.php:6
There was 1 skipped test:
1) testTwo(DependencyFailureTest)
This test depends on "DependencyFailureTest::testOne" to pass.
FAILURES!
Tests: 2, Assertions: 1, Failures: 1, Skipped: 1.
Explanation:
To quickly localize defects, we want our attention to be focused on
relevant failing tests. This is why PHPUnit skips the execution of a
test when a depended-upon test has failed.
Opening the file is a prerequisite to reading the file, so it's fine to include that in the test. You can throw an exception in your code if the file fails to open. The error message in the test will then make it clear why the test failed.
I would also recommend that you consider creating the file in the test itself to remove any dependencies on existing files. That way you ensure that you always have a valid file to reference.
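To make that concrete, here is a minimal sketch in Go (the question does not name a language; the file name, its content, and the os.ReadFile call are stand-ins for whatever open-and-read code is actually under test):
package reader_test

import (
	"os"
	"path/filepath"
	"testing"
)

func TestReadFile(t *testing.T) {
	// Create the input file inside the test, so the test never depends on a
	// pre-existing file on disk; t.TempDir is cleaned up automatically.
	dir := t.TempDir()
	path := filepath.Join(dir, "input.txt")
	if err := os.WriteFile(path, []byte("hello"), 0o644); err != nil {
		t.Fatalf("setting up test file: %v", err)
	}

	got, err := os.ReadFile(path) // stand-in for the code under test
	if err != nil {
		t.Fatalf("read failed: %v", err)
	}
	if string(got) != "hello" {
		t.Errorf("got %q, want %q", got, "hello")
	}
}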
Generally speaking, you wouldn't find yourself testing your proposed scenario of unit testing the ability to read from a file, since you will usually end up using a file manipulation library of some kind and can usually safely assume that the maintainers of said library have the appropriate unit tests already in place (for example, I feel pretty confident that I can use the File class in .NET without worry).
That being said, the idea of one condition being an impediment to testing a second is certainly valid. That's why mock frameworks were created, so that you can easily set up a mock object that will always behave in a defined manner that can then be substituted for the initial dependency. This allows you to focus squarely on unit testing the second object/condition/etc. in a test scenario.
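As an illustration of that idea, here is a small hand-rolled mock in Go (used only because some concrete language is needed; the Opener interface, the stub types, and readAll are all invented for the example):
package reader

import (
	"errors"
	"io"
	"strings"
	"testing"
)

// Opener abstracts the "open" step so the "read" step can be tested in
// isolation from the filesystem.
type Opener interface {
	Open(name string) (io.Reader, error)
}

// stubOpener always succeeds with canned content, so a failure in the test
// below can only come from the reading logic, not from opening a file.
type stubOpener struct{ content string }

func (s stubOpener) Open(string) (io.Reader, error) {
	return strings.NewReader(s.content), nil
}

// failingOpener simulates a broken open, useful for testing error handling.
type failingOpener struct{}

func (failingOpener) Open(string) (io.Reader, error) {
	return nil, errors.New("cannot open file")
}

// readAll is the hypothetical unit under test: open, then read everything.
func readAll(o Opener, name string) (string, error) {
	r, err := o.Open(name)
	if err != nil {
		return "", err
	}
	b, err := io.ReadAll(r)
	return string(b), err
}

func TestReadAllWithStubbedOpen(t *testing.T) {
	got, err := readAll(stubOpener{content: "hello"}, "ignored.txt")
	if err != nil || got != "hello" {
		t.Fatalf("got %q, %v; want %q, <nil>", got, err, "hello")
	}
}

func TestReadAllReportsOpenError(t *testing.T) {
	if _, err := readAll(failingOpener{}, "ignored.txt"); err == nil {
		t.Fatal("expected an error from the failing opener")
	}
}
Only readAll is exercised here; the behaviour of the real open would be covered by its own tests, or simply trusted to the underlying library, as noted above.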
I am looking for a test framework that suits my requirements. The following are the steps that I need to perform during automated testing:
SetUp (there are some input files that need to be read or copied into specific folders)
Execute (run the standalone executable)
Tear Down (clean up to bring the system back to its old state)
Apart from this, I also want some intelligence to make sure that if a .cc file changed, all the tests that can validate the change are run.
I am evaluating PyUnit and CppUnit with SCons for this. I thought I would ask this question to make sure I am headed in the right direction. Can you suggest any other test framework tools? And what other requirements should be considered when selecting the right test framework?
Try googletest, a.k.a. gTest. It is no worse than any other unit test framework, and it can beat some in ease of use. It is not exactly the integration testing tool you are looking for, but it can easily be applied in most cases. This Wikipedia page might also be useful to you.
Here is a copy of a sample on the gTest project page:
#include <gtest/gtest.h>

namespace {

// The fixture for testing class Foo.
class FooTest : public ::testing::Test {
 protected:
  // You can remove any or all of the following functions if their bodies
  // are empty.

  FooTest() {
    // You can do set-up work for each test here.
  }

  virtual ~FooTest() {
    // You can do clean-up work that doesn't throw exceptions here.
  }

  // If the constructor and destructor are not enough for setting up
  // and cleaning up each test, you can define the following methods:

  virtual void SetUp() {
    // Code here will be called immediately after the constructor (right
    // before each test).
  }

  virtual void TearDown() {
    // Code here will be called immediately after each test (right
    // before the destructor).
  }

  // Objects declared here can be used by all tests in the test case for Foo.
};

// Tests that Foo does Xyz.
TEST_F(FooTest, DoesXyz) {
  // Exercises the Xyz feature of Foo.
}

}  // namespace
SCons can take care of rebuilding your .cc files when they change; gTest can be used to set up and tear down your tests.
I can only add that we are using gTest in some cases, and a custom in-house test automation framework in almost all others. It is often the case with such tools that it is easier to write your own than to adjust and tweak another one to match your requirements.
One good option IMO, and it is something our test automation framework is moving towards, is using nosetests, coupled with a library of common routines (like start/stop services, get the status of something, enable/disable logging in certain components, etc.). This gives you a flexible system that is also fairly easy to use. And since it uses Python and not C++, more people can be busy creating test cases, including QEs, who do not necessarily need to be able to write C++.
After reading this article http://gamesfromwithin.com/exploring-the-c-unit-testing-framework-jungle some time ago, I went for CxxTest.
Once you have the thing set up (you need to install Python, for instance), it's pretty easy to write tests (I was completely new to unit testing).
I use it at work, integrated as a Visual Studio project in my solution. It produces clickable output when a test fails, and the tests are built and run each time I build the solution.