How To Write Pending Tests In Go - unit-testing

Is there a legitimate way to write down a test case for which I intend to write the full test function later on, like the pending tests in mochajs?

The package docs describe such an example with testing.(*T).Skip:
Tests and benchmarks may be skipped if not applicable with a call to the Skip method of *T and *B:
func TestTimeConsuming(t *testing.T) {
    if testing.Short() {
        t.Skip("skipping test in short mode.")
    }
    ...
}
The message you provide to Skip will be printed if you run go test with the -v flag (in this example you will also need to pass the -short flag to see the skip message).
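If you just want a placeholder for a test you intend to flesh out later, a minimal sketch is to call t.Skip unconditionally at the top of the stub, so it shows up as SKIP in verbose output instead of silently passing (the function name and message here are only illustrative):

func TestFrobnicate(t *testing.T) {
    t.Skip("pending: not implemented yet")
    // TODO: add real assertions once Frobnicate exists.
}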

Related

Questions about google test and assertion output (test results); can I trust when gtest says a test passed?

When I create a TEST or TEST_F test, how can I know that my assertion is actually executing?
The problem I have is, when I have an empty TEST_F, for example,
TEST_F(myFixture, test1) {}
When it runs, gtest says this test passes. I would have expected the test to fail, until I write test code. Anyway.
So, my problem is that when gtest says a test is "OK" or that it passed, I can't trust it, because a test could "pass" even though it contains no test code.
It would be nice to print what my EXPECT_ or ASSERT calls are doing and then see that they pass. The problem is that if I add any std::cout calls, their output appears out of sync with the test results that are printed at the end.
Is there a verbose option to google test? How can I be sure the EXPECT that I coded is actually running?
You might consider looking at TDD, Test Driven Development, https://en.wikipedia.org/wiki/Test-driven_development
write one test => it will fail
write code to make the test pass => test passes
Rinse and repeat: express each requirement as a test that initially fails, then write the code that makes it pass.

How to generate unit test coverage when using command line flags in Golang subprocess testing?

I have unit tests for most of our code, but I cannot figure out how to generate test coverage for certain code in main() in the main package.
The main function is pretty simple. It is basically a select block. It reads flags, then either calls another function/executes something, or simply prints help on screen. However, if the command-line options are not set correctly, it will exit with various error codes. Hence the need for sub-process testing.
I tried the sub-process testing technique, but modified the code so that it includes a flag for coverage:
cmd := exec.Command(os.Args[0], "-test.run=TestMain -test.coverprofile=/vagrant/ucover/coverage2.out")
Here is original code: https://talks.golang.org/2014/testing.slide#23
Explanation of above slide: http://youtu.be/ndmB0bj7eyw?t=47m16s
But it doesn't generate a cover profile, and I haven't been able to figure out why not. It does generate a cover profile for the main process executing the tests, but any code executed in the sub-process, of course, is not marked as executed.
I am trying to achieve as much code coverage as possible. I am not sure if I am missing something, if there is an easier way to do this, or if it is just not possible.
Any help is appreciated.
I went with another approach which didn't involve refactoring main(); see this commit.
I use a global (unexported) variable:
var args []string
And then in main(), I use os.Args unless the private var args was set:
a := os.Args[1:]
if args != nil {
    a = args
}
flag.CommandLine.Parse(a)
In my test, I can set the parameters I want:
args = []string{"-v", "-audit", "_tests/p1/conf/gitolite.conf"}
main()
And I still achieve a 100% code coverage, even over main().
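Put together, the approach looks roughly like this (a sketch only; the flag name and file layout are my own illustration, not taken from the commit above):

// main.go
package main

import (
    "flag"
    "fmt"
    "os"
)

// args lets tests inject command-line arguments; it stays nil in production.
var args []string

func main() {
    a := os.Args[1:]
    if args != nil {
        a = args
    }
    name := flag.String("name", "world", "who to greet") // hypothetical flag
    flag.CommandLine.Parse(a)
    fmt.Println("hello,", *name)
}

// main_test.go
package main

import "testing"

func TestMainWithName(t *testing.T) {
    args = []string{"-name", "gopher"}
    main() // runs in-process, so go test -cover counts main()'s statements
}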
I would factor the logic that needs to be tested out of main():
func main() {
    start(os.Args)
}

func start(args []string) {
    // old main() logic
}
This way you can unit-test start() without mutating os.Args.
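A test can then call start() directly with whatever arguments it needs; a minimal sketch (the flag values are invented for illustration):

func TestStartAudit(t *testing.T) {
    // The first element mimics the program name in os.Args.
    start([]string{"app", "-audit", "testdata/gitolite.conf"})
    // ... assert on the observable effects of start() ...
}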
Using #VonC's solution with Go 1.11, I found I had to reset flag.CommandLine in each test that redefines the flags, to avoid a "flag redefined" panic:
for _, check := range checks {
    t.Run("flagging "+check.arg, func(t *testing.T) {
        flag.CommandLine = flag.NewFlagSet(cmd, flag.ContinueOnError)
        args = []string{check.arg}
        main()
    })
}

Separating unit tests and integration tests in Go

Is there an established best practice for separating unit tests and integration tests in Go (testify)? I have a mix of unit tests (which do not rely on any external resources and thus run really fast) and integration tests (which do rely on external resources and thus run slower). So, I want to be able to control whether or not to include the integration tests when I run go test.
The most straightforward technique would seem to be to define an -integration flag in main:
var runIntegrationTests = flag.Bool("integration", false,
    "Run the integration tests (in addition to the unit tests)")
And then to add an if-statement to the top of every integration test:
if !*runIntegrationTests {
    this.T().Skip("To run this test, use: go test -integration")
}
Is this the best I can do? I searched the testify documentation to see if there is perhaps a naming convention or something that accomplishes this for me, but didn't find anything. Am I missing something?
#Ainar-G suggests several great patterns to separate tests.
This set of Go practices from SoundCloud recommends using build tags (described in the "Build Constraints" section of the build package) to select which tests to run:
Write an integration_test.go, and give it a build tag of integration. Define (global) flags for things like service addresses and connect strings, and use them in your tests.
// +build integration

var fooAddr = flag.String(...)

func TestToo(t *testing.T) {
    f, err := foo.Connect(*fooAddr)
    // ...
}
go test takes build tags just like go build, so you can call go test -tags=integration. It also synthesizes a package main which calls flag.Parse, so any flags declared and visible will be processed and available to your tests.
As a similar option, you could also have integration tests run by default by using a build condition // +build !unit, and then disable them on demand by running go test -tags=unit.
#adamc comments:
For anyone else attempting to use build tags: it's important that the // +build test comment is the first line in your file and that you include a blank line after the comment; otherwise the -tags flag will ignore the directive.
Also, the tag used in the build comment cannot have a dash, although underscores are allowed. For example, // +build unit-tests will not work, whereas // +build unit_tests will.
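For illustration, a minimal file that satisfies both constraints might look like this (the package and test names are placeholders):

// +build integration

package services

import "testing"

func TestDatabaseIntegration(t *testing.T) {
    // compiled and run only with: go test -tags=integration
}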
To elaborate on my comment to #Ainar-G's excellent answer, over the past year I have been using the combination of -short with an Integration naming convention to achieve the best of both worlds.
Unit and Integration tests harmony, in the same file
Build tags previously forced me to have multiple files (services_test.go, services_integration_test.go, etc.).
Instead, take this example below where the first two are unit tests and I have an integration test at the end:
package services

import "testing"

func TestServiceFunc(t *testing.T) {
    t.Parallel()
    ...
}

func TestInvalidServiceFunc3(t *testing.T) {
    t.Parallel()
    ...
}

func TestPostgresVersionIntegration(t *testing.T) {
    if testing.Short() {
        t.Skip("skipping integration test")
    }
    ...
}
Notice the last test has the convention of:
using Integration in the test name.
checking if running with the -short flag.
Basically, the spec goes: "write all tests normally; if it is a long-running test or an integration test, follow this naming convention and check for -short to be nice to your peers."
Run only Unit tests:
go test -v -short
This provides you with a nice set of messages like:
=== RUN TestPostgresVersionIntegration
--- SKIP: TestPostgresVersionIntegration (0.00s)
service_test.go:138: skipping integration test
Run Integration Tests only:
go test -run Integration
This runs only the integration tests. Useful for smoke testing canaries in production.
Obviously the downside to this approach is that if anyone runs go test without the -short flag, it will default to running all tests - unit and integration alike.
In reality, if your project is large enough to have unit and integration tests, then you most likely are using a Makefile, where you can have simple directives to run go test -short. Or, just put it in your README.md file and call it a day.
I see three possible solutions. The first is to use the short mode for unit tests. So you would use go test -short with unit tests and the same but without the -short flag to run your integration tests as well. The standard library uses the short mode to either skip long-running tests, or make them run faster by providing simpler data.
The second is to use a convention and call your tests either TestUnitFoo or TestIntegrationFoo and then use the -run testing flag to denote which tests to run. So you would use go test -run 'Unit' for unit tests and go test -run 'Integration' for integration tests.
The third option is to use an environment variable, and read it in your test setup with os.Getenv. Then you would use plain go test for unit tests and FOO_TEST_INTEGRATION=true go test for integration tests.
I personally would prefer the -short solution since it's simpler and is used in the standard library, so it seems like it's a de facto way of separating/simplifying long-running tests. But the -run and os.Getenv solutions offer more flexibility (more caution is required as well, since regexps are involved with -run).
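To illustrate the second option, the convention is carried entirely by the test names (the functions below are invented for illustration):

func TestUnitParseConfig(t *testing.T) {
    // fast, no external dependencies; selected with: go test -run Unit
}

func TestIntegrationPostgres(t *testing.T) {
    // talks to a real database; selected with: go test -run Integration
}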
I was recently trying to find a solution for the same problem.
These were my criteria:
The solution must be universal
No separate package for integration tests
The separation should be complete (I should be able to run integration tests only)
No special naming convention for integration tests
It should work well without additional tooling
The aforementioned solutions (custom flag, custom build tag, environment variables) did not really satisfy all the above criteria, so after a little digging and playing I came up with this solution:
package main

import (
    "flag"
    "regexp"
    "testing"
)

func TestIntegration(t *testing.T) {
    if m := flag.Lookup("test.run").Value.String(); m == "" || !regexp.MustCompile(m).MatchString(t.Name()) {
        t.Skip("skipping as execution was not requested explicitly using go test -run")
    }

    t.Parallel()

    t.Run("HelloWorld", testHelloWorld)
    t.Run("SayHello", testSayHello)
}
The implementation is straightforward and minimal. It requires a simple convention for tests, but it is less error-prone. A further improvement could be extracting the check into a helper function.
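Such a helper could look roughly like this (a sketch; the function name is my own, not part of the example above):

// requireExplicitRun skips the calling test unless it was requested explicitly via go test -run.
func requireExplicitRun(t *testing.T) {
    t.Helper()
    m := flag.Lookup("test.run").Value.String()
    if m == "" || !regexp.MustCompile(m).MatchString(t.Name()) {
        t.Skip("skipping as execution was not requested explicitly using go test -run")
    }
}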
Usage
Run integration tests only across all packages in a project:
go test -v ./... -run ^TestIntegration$
Run all tests (regular and integration):
go test -v ./... -run .\*
Run only regular tests:
go test -v ./...
This solution works well without tooling, but a Makefile or some aliases can make it easier to use. It can also be easily integrated into any IDE that supports running Go tests.
The full example can be found here: https://github.com/sagikazarmark/modern-go-application
I encourage you to look at Peter Bourgon's approach; it is simple and avoids some problems with the advice in the other answers: https://peter.bourgon.org/blog/2021/04/02/dont-use-build-tags-for-integration-tests.html
There are many downsides to using build tags, short mode or flags; see the post linked above.
I would recommend using environment variables with a test helper that can be imported into individual packages:
func IntegrationTest(t *testing.T) {
    t.Helper()

    if os.Getenv("INTEGRATION") == "" {
        t.Skip("skipping integration tests, set environment variable INTEGRATION")
    }
}
In your tests you can now easily call this at the start of your test function:
func TestPostgresQuery(t *testing.T) {
    IntegrationTest(t)

    // ...
}
Why I would not recommend using either -short or flags:
Someone who checks out your repository for the first time should be able to run go test ./... and have all tests pass, which is often not the case if passing relies on external dependencies.
The problem with the flag package is that it will work until you have integration tests across different packages: some will run flag.Parse() and some will not, which will lead to an error like this:
go test ./... -integration
flag provided but not defined: -integration
Usage of /tmp/go-build3903398677/b001/foo.test:
Environment variables appear to be the most flexible and robust option, and they require the least amount of code, with no visible downsides.

Ignore code blocks in Golang test coverage calculation

I am writing unit tests for my Go code, and there are a couple of methods that I would like to be ignored when coverage is calculated. Is this possible? If so, how?
One way to do it would be to put the functions you don't want tested in a separate Go file, and use a build tag to keep that file from being included during tests. For example, I do this sometimes with applications where I have a main.go file containing the main function, maybe a usage function, etc., that don't get tested. Then you can add a test tag or something, like go test -v -cover -tags test, and the main might look something like:
//+build !test

package main

func main() {
    // do stuff
}

func usage() {
    // show some usage info
}

How to mark a Google Test test-case as "expected to fail"?

I want to add a test case for functionality that is not yet implemented and mark this test case as "it's OK that it fails".
Is there a way to do this?
EDIT:
I want the test to be executed and the framework should verify it is failing as long as the testcase is in the "expected fail" state.
EDIT2:
It seems that the feature I am interested in does not exist in google-test, but it does exist in the Boost Unit Test Framework, and in LIT.
EXPECT_NONFATAL_FAILURE is what you want to wrap around the code that you expect to fail. Note you will have to include the gtest-spi.h header file:
#include "gtest-spi.h"
// ...
TEST_F( testclass, testname )
{
EXPECT_NONFATAL_FAILURE(
// your code here, or just call:
FAIL()
,"Some optional text that would be associated with"
" the particular failure you were expecting, if you"
" wanted to be sure to catch the correct failure mode" );
}
Link to docs: https://github.com/google/googletest/blob/955c7f837efad184ec63e771c42542d37545eaef/docs/advanced.md#catching-failures
You can prefix the test name with DISABLED_.
I'm not aware of a direct way to do this, but you can fake it with something like this:
try {
    // do something that should fail and throw an exception
    ...
    EXPECT_TRUE(false); // this should not be reached!
} catch (...) {
    // return or print a message, etc.
}
Basically, the test will fail if it reaches the contradictory expectation.
It would be unusual to have a unit test in an expected-to-fail state. Unit tests can test for positive conditions ("expect x to equal 2") or negative conditions ("expect save to throw an exception if name is null"), and can be flagged not to run at all (if the feature is pending and you don't want the noise in your test output). But what you seem to be asking for is a way to negate a feature's test while you're working on it. This is against the tenets of Test Driven Development.
In TDD, what you should do is write tests that accurately describe what a feature should do. If that feature isn't written yet then, by definition, those tests will and should fail. Then you implement the feature until, one by one, all those tests pass. You want all the tests to start as failing and then move to passing. That's how you know when your feature is complete.
Think of how it would look if you were able to mark failing tests as passing as you suggest: all tests would pass and everything would look complete when the feature didn't work. Then, once you were done and the feature worked as expected, suddenly your tests would start to fail until you went in and unflagged them. Beyond being a strange way to work, this workflow would be very prone to error and false-positives.