How do I unit test command line flags in Go? - unit-testing

I would like a unit test that verifies a particular command line flag is within an enumeration.
Here is the code I would like to write tests against:
var formatType string
const (
text = "text"
json = "json"
hash = "hash"
)
func init() {
const (
defaultFormat = "text"
formatUsage = "desired output format"
)
flag.StringVar(&formatType, "format", defaultFormat, formatUsage)
flag.StringVar(&formatType, "f", defaultFormat, formatUsage+" (shorthand)")
}
func main() {
flag.Parse()
}
The desired test would pass only if -format equalled one of the const values given above. This value would be available in formatType. An example correct call would be: program -format text
What is the best way to test the desired behaviors?
Note: Perhaps I have phrased this poorly, but the displayed code is not the unit test itself; it is the code I want to write unit tests against. This is a simple example from the tool I am writing, and I wanted to ask if there is a good way to test valid inputs to the tool.

Custom testing and processing of flags can be achieved with the flag.Var function in the flag package.
Flag.Var "defines a flag with the specified name and usage string. The type and value of the flag are represented by the first argument, of type Value, which typically holds a user-defined implementation of Value."
A flag.Value is any type that satisfies the Value interface, defined as:
type Value interface {
String() string
Set(string) error
}
There is a good example in the example_test.go file in the flag package source.
For your use case you could use something like:
package main
import (
"errors"
"flag"
"fmt"
)
type formatType string
func (f *formatType) String() string {
return fmt.Sprint(*f)
}
func (f *formatType) Set(value string) error {
if len(*f) > 0 && *f != "text" {
return errors.New("format flag already set")
}
if value != "text" && value != "json" && value != "hash" {
return errors.New("Invalid Format Type")
}
*f = formatType(value)
return nil
}
var typeFlag formatType
func init() {
typeFlag = "text"
usage := `Format type. Must be "text", "json" or "hash". Defaults to "text".`
flag.Var(&typeFlag, "format", usage)
flag.Var(&typeFlag, "f", usage+" (shorthand)")
}
func main() {
flag.Parse()
fmt.Println("Format type is", typeFlag)
}
This is probably overkill for such a simple example, but it may be very useful when defining more complex flag types (the linked example converts a comma-separated list of intervals into a slice of a custom type based on time.Duration).
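Condensed, the idea in that example (paraphrased here as a sketch, not the exact source; it needs the errors, fmt, strings, and time imports) looks like:
type interval []time.Duration

func (i *interval) String() string {
    return fmt.Sprint(*i)
}

func (i *interval) Set(value string) error {
    if len(*i) > 0 {
        return errors.New("interval flag already set")
    }
    // parse each comma-separated duration and accumulate it in the slice
    for _, dt := range strings.Split(value, ",") {
        duration, err := time.ParseDuration(dt)
        if err != nil {
            return err
        }
        *i = append(*i, duration)
    }
    return nil
}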
EDIT: In answer to how to run unit tests against flags, the most canonical example is flag_test.go in the flag package source. The section related to testing custom flag variables starts at Line 181.
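If you want a self-contained unit test for the custom type above, one approach (a sketch; it would live in a _test.go file in the same package and uses a throwaway flag.FlagSet so the global flag.CommandLine is untouched) is:
package main

import (
    "flag"
    "testing"
)

func TestFormatTypeFlag(t *testing.T) {
    tests := []struct {
        value   string
        wantErr bool
    }{
        {"text", false},
        {"json", false},
        {"hash", false},
        {"xml", true},
    }
    for _, tc := range tests {
        var f formatType = "text"
        // a fresh FlagSet per case keeps parsing isolated from other tests
        fs := flag.NewFlagSet("test", flag.ContinueOnError)
        fs.Var(&f, "format", "desired output format")
        err := fs.Parse([]string{"-format", tc.value})
        if tc.wantErr && err == nil {
            t.Errorf("-format %s: expected an error, got none", tc.value)
        }
        if !tc.wantErr && err != nil {
            t.Errorf("-format %s: unexpected error: %v", tc.value, err)
        }
        if !tc.wantErr && string(f) != tc.value {
            t.Errorf("-format %s: flag value is %q, want %q", tc.value, f, tc.value)
        }
    }
}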

You can do this
func main() {
var name string
var password string
flag.StringVar(&name, "name", "", "")
flag.StringVar(&password, "password", "", "")
flag.Parse()
for _, v := range os.Args {
fmt.Println(v)
}
if len(strings.TrimSpace(name)) == 0 || len(strings.TrimSpace(password)) == 0 {
log.Panicln("no name or no passward")
}
fmt.Printf("name:%s\n", name)
fmt.Printf("password:%s\n", password)
}
func TestMainApp(t *testing.T) {
os.Args = []string{"test", "-name", "Hello", "-password", "World"}
main()
}

You can test main() by:
Making a test that runs a command
Which then calls the app test binary, built from go test, directly
Passing the desired flags you want to test
Passing back the exit code, stdout, and stderr which you can assert on.
NOTE: This only works when main exits; otherwise the test would run forever or get caught in a recursive loop.
Given your main.go looks like:
package main
import (
"flag"
"fmt"
"os"
)
var formatType string
const (
text = "text"
json = "json"
hash = "hash"
)
func init() {
const (
defaultFormat = "text"
formatUsage = "desired output format"
)
flag.StringVar(&formatType, "format", defaultFormat, formatUsage)
flag.StringVar(&formatType, "f", defaultFormat, formatUsage+" (shorthand)")
}
func main() {
flag.Parse()
fmt.Printf("format type = %v\n", formatType)
os.Exit(0)
}
Your main_test.go may then look something like:
package main
import (
"fmt"
"os"
"os/exec"
"path"
"runtime"
"strings"
"testing"
)
// This will be used to pass args to app and keep the test framework from looping
const subCmdFlags = "FLAGS_FOR_MAIN"
func TestMain(m *testing.M) {
// Only runs when this environment variable is set.
if os.Getenv(subCmdFlags) != "" {
runAppMain()
}
// Run all tests
exitCode := m.Run()
// Clean up
os.Exit(exitCode)
}
func TestMainForCorrectness(tester *testing.T) {
var tests = []struct {
name string
wantCode int
args []string
}{
{"formatTypeJson", 0, []string{"-format", "json"}},
}
for _, test := range tests {
tester.Run(test.name, func(t *testing.T) {
cmd := getTestBinCmd(test.args)
cmdOut, cmdErr := cmd.CombinedOutput()
got := cmd.ProcessState.ExitCode()
// Debug
showCmdOutput(cmdOut, cmdErr)
if got != test.wantCode {
t.Errorf("unexpected error on exit. want %q, got %q", test.wantCode, got)
}
})
}
}
// private helper methods.
// Used for running the application's main function from other tests.
func runAppMain() {
// The test framework has already processed its own flags,
// so now we can remove them and replace them with the flags we want to pass to main.
// We pull them out of the environment variable we set.
args := strings.Split(os.Getenv(subCmdFlags), " ")
os.Args = append([]string{os.Args[0]}, args...)
// Debug stmt, can be removed
fmt.Printf("\nos args = %v\n", os.Args)
main() // will run and exit, signaling the test framework to stop and return the exit code.
}
// getTestBinCmd returns a command to run your app (test) binary directly; `TestMain` will be run automatically.
func getTestBinCmd(args []string) *exec.Cmd {
// call the generated test binary directly
// and have it run the function runAppMain.
cmd := exec.Command(os.Args[0], "-args", strings.Join(args, " "))
// Run in the context of the source directory.
_, filename, _, _ := runtime.Caller(0)
cmd.Dir = path.Dir(filename)
// Set an environment variable that:
// 1. Only exists for the life of the test that calls this function.
// 2. Passes arguments/flags to your app.
// 3. Lets TestMain know when to run the main function.
subEnvVar := subCmdFlags + "=" + strings.Join(args, " ")
cmd.Env = append(os.Environ(), subEnvVar)
return cmd
}
func showCmdOutput(cmdOut []byte, cmdErr error) {
if cmdOut != nil {
fmt.Printf("\nBEGIN sub-command out:\n%v", string(cmdOut))
fmt.Print("END sub-command\n")
}
if cmdErr != nil {
fmt.Printf("\nBEGIN sub-command stderr:\n%v", cmdErr.Error())
fmt.Print("END sub-command\n")
}
}

I'm not sure whether we agree on the term 'unit test'. What you want to achieve seems to me
more like a pretty normal test in a program. You probably want to do something like this:
func main() {
flag.Parse()
if formatType != text && formatType != json && formatType != hash {
flag.Usage()
return
}
// ...
}
Sadly, it is not easily possible to extend the flag parser with your own value verifiers,
so you have to stick with this for now.
See Intermernet's answer for a solution which defines a custom format type and its validator.
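If you still want a conventional unit test for this check, one option (just a sketch; validFormat is a helper name introduced here, not part of the original code) is to pull the validation out of main into its own function:
func validFormat(s string) bool {
    return s == text || s == json || s == hash
}

func main() {
    flag.Parse()
    if !validFormat(formatType) {
        flag.Usage()
        return
    }
    // ...
}
That function can then be covered with an ordinary table-driven test:
func TestValidFormat(t *testing.T) {
    cases := []struct {
        in   string
        want bool
    }{
        {"text", true},
        {"json", true},
        {"hash", true},
        {"yaml", false},
    }
    for _, c := range cases {
        if got := validFormat(c.in); got != c.want {
            t.Errorf("validFormat(%q) = %v, want %v", c.in, got, c.want)
        }
    }
}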

Related

How should I test functions that deal with setting a large number of environment configs/OS arguments?

I've written a Go application, and all of the packages have full test coverage. I'm in the process of writing my main package - which will handle all of the initial setup for the application in the main() function - this function currently reads in 14 environment variables and then sets the relevant variable in the application. A simple overview of the code is:
func main() {
myStruct1 := privatePackage.myStructType{}
myStruct2 := publicPackage.otherStructType{}
if config1 := os.Getenv("CONFIG_FOO"); config1 != "" {
myStruct1.attribute1 = config1
}
// ....
if config14 := os.Getenv("CONFIG_BAR"); config14 != "" {
myStruct2.attribute5 = config14
}
}
When I test unit env variables/OS args, I typically just set the env variable directly in the test function - so something like:
func TestMyArgument(t *testing.T) {
os.Setenv("CONFIG_BAZ", "apple")
//Invoke function that depends on CONFIG_BAZ
//Assert that expected outcome occurred
}
I pretty much always use table-driven tests, so the above snippet is a simplified example.
The issue is that my main() function takes in 14 (and growing) env variables, and whilst some env variables are essentially enums (so there's a small number of valid options - for example there's a small number of database drivers to choose from), other env variables have virtually unlimited potential values. So how can I effectively cover all of the (or enough of the) permutations of potential configs?
EDIT: When this application is deployed, it's going into a K8s cluster. Some of these variables are secrets that will be pulled in from secure store. Using a JSON file isn't viable because some of the values need to be encrypted/changed easily.
Also, using a JSON file would require me to store this file and share it between hundreds/thousands of running pods - this storage would then act as a point of failure.
To clarify, this question isn't about env vars VS config files; this question is about the best way to approach testing when there's a significant number of configurable variables - with each variables having a vast number of potential values - resulting in thousands of possible configuration permutations. How do I guarantee sufficient test coverage in such a scenario?
@Steven Penny is right: use JSON. And using reflect can make the code simpler:
package main
import (
"encoding/json"
"fmt"
"os"
"reflect"
"strconv"
)
type MyStructType struct {
Attribute1 string `json:"CONFIG_FOO"`
Attribute2 string `json:"CONFIG_BAZ"`
Attribute3 int `json:"CONFIG_BAR"`
}
func NewMyStructTypeFormEnv() *MyStructType {
myStructType := MyStructType{}
ReflectMyStructType(&myStructType)
fmt.Println("myStructType is now", myStructType)
return &myStructType
}
func NewMyStructTypeFormJson() *MyStructType {
myStructType := MyStructType{}
f, e := os.Open("file.json")
if e != nil {
panic(e)
}
defer f.Close()
json.NewDecoder(f).Decode(&myStructType)
fmt.Println("myStructType is now", myStructType)
return &myStructType
}
func ReflectMyStructType(ptr interface{}) {
v := reflect.ValueOf(ptr).Elem()
fmt.Printf("%v\n", v.Type())
for i := 0; i < v.NumField(); i++ {
envStr := v.Type().Field(i).Tag.Get("json")
if envStr == "" {
continue
}
if config := os.Getenv(envStr); config != "" {
if v.Field(i).Kind() == reflect.String {
v.Field(i).SetString(config)
} else if v.Field(i).Kind() == reflect.Int {
iConfig, _ := strconv.Atoi(config)
v.Field(i).SetInt(int64(iConfig))
}
}
}
}
func main() {
NewMyStructTypeFormJson()
os.Setenv("CONFIG_FOO", "apple")
os.Setenv("CONFIG_BAZ", "apple")
os.Setenv("CONFIG_BAR", "1")
NewMyStructTypeFormEnv()
}
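To exercise the environment-driven path itself, a table-driven sketch might look like the following (it assumes the MyStructType and ReflectMyStructType shown above, goes in a _test.go file, and uses os.Setenv/Unsetenv to keep cases isolated; on Go 1.17+ t.Setenv would be a cleaner choice):
package main

import (
    "os"
    "testing"
)

func TestReflectMyStructType(t *testing.T) {
    keys := []string{"CONFIG_FOO", "CONFIG_BAZ", "CONFIG_BAR"}
    cases := []struct {
        name string
        env  map[string]string
        want MyStructType
    }{
        {"all set",
            map[string]string{"CONFIG_FOO": "apple", "CONFIG_BAZ": "pear", "CONFIG_BAR": "7"},
            MyStructType{Attribute1: "apple", Attribute2: "pear", Attribute3: 7}},
        {"unset fields keep zero values",
            map[string]string{"CONFIG_FOO": "apple"},
            MyStructType{Attribute1: "apple"}},
    }
    for _, c := range cases {
        t.Run(c.name, func(t *testing.T) {
            // start each case from a clean slate, then apply its environment
            for _, k := range keys {
                os.Unsetenv(k)
            }
            for k, v := range c.env {
                os.Setenv(k, v)
            }
            var got MyStructType
            ReflectMyStructType(&got)
            if got != c.want {
                t.Errorf("got %+v, want %+v", got, c.want)
            }
        })
    }
}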
Beyond one or two, I don't think using environment variables is the right approach, unless it's required (calling something with os/exec). Instead, it would be better to read from a config file. Here is an example with JSON:
{
"CONFIG_BAR": "east",
"CONFIG_BAZ": "south",
"CONFIG_FOO": "north"
}
package main
import (
"encoding/json"
"fmt"
"os"
)
func main() {
f, e := os.Open("file.json")
if e != nil {
panic(e)
}
defer f.Close()
var s struct { CONFIG_BAR, CONFIG_BAZ, CONFIG_FOO string }
json.NewDecoder(f).Decode(&s)
// {CONFIG_BAR:east CONFIG_BAZ:south CONFIG_FOO:north}
fmt.Printf("%+v\n", s)
}
TOML would be a good choice as well.
https://golang.org/pkg/encoding/json
https://pkg.go.dev/github.com/pelletier/go-toml
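For example, with go-toml the equivalent would look roughly like this (a sketch assuming its Decoder API; file.toml is a hypothetical file containing lines such as CONFIG_BAR = "east"):
package main

import (
    "fmt"
    "os"

    toml "github.com/pelletier/go-toml"
)

func main() {
    f, e := os.Open("file.toml")
    if e != nil {
        panic(e)
    }
    defer f.Close()
    var s struct{ CONFIG_BAR, CONFIG_BAZ, CONFIG_FOO string }
    if err := toml.NewDecoder(f).Decode(&s); err != nil {
        panic(err)
    }
    // e.g. {CONFIG_BAR:east CONFIG_BAZ:south CONFIG_FOO:north}
    fmt.Printf("%+v\n", s)
}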

Learning to write unit tests

I am trying to learn how to write tests for my code in order to write better code, but I just seem to have the hardest time figuring out how to actually test some code I have written. I have read so many tutorials, most of which seem to only cover functions that add two numbers or mock some database or server.
I have a simple function I wrote below that takes a text template and a CSV file as input and executes the template using the values of the CSV. I have "tested" the code by trial and error, passing files, and printing values, but I would like to learn how to write proper tests for it. I feel that learning to test my own code will help me understand and learn faster and better. Any help is appreciated.
// generateCmds generates configuration commands from a text template using
// the values from a CSV file. Multiple commands in the text template must
// be delimited by a semicolon. The first row of the CSV file is assumed to
// be the header row and the header values are used for key access in the
// text template.
func generateCmds(cmdTmpl string, filename string) ([]string, error) {
t, err := template.New("cmds").Parse(cmdTmpl)
if err != nil {
return nil, fmt.Errorf("parsing template: %v", err)
}
f, err := os.Open(filename)
if err != nil {
return nil, fmt.Errorf("reading file: %v", err)
}
defer f.Close()
records, err := csv.NewReader(f).ReadAll()
if err != nil {
return nil, fmt.Errorf("reading records: %v", err)
}
if len(records) == 0 {
return nil, errors.New("no records to process")
}
var (
b bytes.Buffer
cmds []string
keys = records[0]
vals = make(map[string]string, len(keys))
)
for _, rec := range records[1:] {
for k, v := range rec {
vals[keys[k]] = v
}
if err := t.Execute(&b, vals); err != nil {
return nil, fmt.Errorf("executing template: %v", err)
}
for _, s := range strings.Split(b.String(), ";") {
if cmd := strings.TrimSpace(s); cmd != "" {
cmds = append(cmds, cmd)
}
}
b.Reset()
}
return cmds, nil
}
Edit: Thanks for all the suggestions so far! My question was flagged as being too broad, so I have some specific questions regarding my example.
Would a test table be useful in a function like this? And, if so, would the test struct need to include the returned cmds string slice and the value of err? For example:
type tmplTest struct {
name string // test name
tmpl string // the text template
filename string // CSV file with template values
expected []string // expected configuration commands
err error // expected error
}
How do you handle errors that are supposed to be returned for specific test cases? For example, os.Open() returns an error of type *PathError if an error is encountered. How do I initialize a *PathError that is equivalent to the one returned by os.Open()? Same idea for template.Parse(), template.Execute(), etc.
Edit 2: Below is a test function I came up with. My two question from the first edit still stand.
package cmd
import (
"testing"
"strings"
"path/filepath"
)
type tmplTest struct {
name string // test name
tmpl string // text template to execute
filename string // CSV containing template text values
cmds []string // expected configuration commands
}
var tests = []tmplTest{
{"empty_error", ``, "", nil},
{"file_error", ``, "fake_file.csv", nil},
{"file_empty_error", ``, "empty.csv", nil},
{"file_fmt_error", ``, "fmt_err.csv", nil},
{"template_fmt_error", `{{ }{{`, "test_values.csv", nil},
{"template_key_error", `{{.InvalidKey}}`, "test_values.csv", nil},
}
func TestGenerateCmds(t *testing.T) {
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
cmds, err := generateCmds(tc.tmpl, filepath.Join("testdata", tc.filename))
if err != nil {
// Unexpected error. Fail the test.
if !strings.Contains(tc.name, "error") {
t.Fatal(err)
}
// TODO: Otherwise, check that the function failed at the expected point.
}
if tc.cmds == nil && cmds != nil {
t.Errorf("expected no commands; got %d", len(cmds))
}
if len(cmds) != len(tc.cmds) {
t.Errorf("expected %d commands; got %d", len(tc.cmds), len(cmds))
}
for i := range cmds {
if cmds[i] != tc.cmds[i] {
t.Errorf("expected %q; got %q", tc.cmds[i], cmds[i])
}
}
})
}
}
You basically need some sample files with the contents you want to test. Then, in your test code, you can call the generateCmds function, passing in the template string and the files, and verify that the results are what you expect.
It is not so different from the examples you probably saw for simpler cases.
You can place the files under a testdata folder inside the same package (testdata is a special name that the Go tools will ignore during build).
Then you can do something like:
func TestCSVProcessing(t *testing.T) {
templateStr := `<your template here>`
testFile := "testdata/yourtestfile.csv"
result, err := generateCmds(templateStr, testFile)
if err != nil {
// fail the test here, unless you expected an error with this file
}
// compare the "result" contents with what you expected
// failing the test if it does not match
}
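For instance (purely illustrative names and contents, not from the question), a testdata file and template pair could be:
testdata/device.csv:
hostname,vlan
sw1,10
sw2,20
Template:
interface {{.hostname}}; switchport access vlan {{.vlan}}
Expected result:
[]string{"interface sw1", "switchport access vlan 10", "interface sw2", "switchport access vlan 20"}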
EDIT
About the specific questions you added later:
Would a test table be useful in a function like this? And, if so, would the test struct need to include the returned cmds string slice and the value of err?
Yes, it'd make sense to include both the expected strings to be returned as well as the expected error (if any).
How do you handle errors that are supposed to be returned for specific test cases? For example, os.Open() returns an error of type *PathError if an error is encountered. How do I initialize a *PathError that is equivalent to the one returned by os.Open()?
I don't think you'll be able to "initialize" an equivalent error for each case. Sometimes the libraries might use internal types for their errors making this impossible. Easiest would be to "initialize" a regular error with the same value returned in its Error() method, then just compare the returned error's Error() value with the expected one.
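A related option (a sketch, not from the original answer): if you change the error wrapping in generateCmds from %v to %w (Go 1.13+), you can assert on the error's type or sentinel value inside the test instead of comparing error text. This fragment assumes the errors, os, and path/filepath imports:
// In generateCmds, wrap the cause so callers can inspect it:
//     return nil, fmt.Errorf("reading file: %w", err)

_, err := generateCmds("", filepath.Join("testdata", "fake_file.csv"))
var pathErr *os.PathError
if !errors.As(err, &pathErr) {
    t.Fatalf("expected a *os.PathError, got %v", err)
}
if !errors.Is(err, os.ErrNotExist) {
    t.Fatalf("expected a 'file does not exist' error, got %v", err)
}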

How to test code using the Go logging package glog ?

I have implemented a type wrapping glog so that I can add a prefix to log messages identifying the emitter of the log in my program, and so that I can change the log level per emitter.
How could I implement the unit tests? The problem is that glog outputs text to stderr.
The code is trivial, but I would like to have unit tests and 100% coverage like the rest of the code. This programming effort has already paid off.
Test which captures stderr:
package main
import (
"bytes"
"io"
"os"
"testing"
"github.com/golang/glog"
"strings"
)
func captureStderr(f func()) (string, error) {
old := os.Stderr // keep backup of the real stderr
r, w, err := os.Pipe()
if err != nil {
return "", err
}
os.Stderr = w
outC := make(chan string)
// copy the output in a separate goroutine so printing can't block indefinitely
go func() {
var buf bytes.Buffer
io.Copy(&buf, r)
outC <- buf.String()
}()
// calling function which stderr we are going to capture:
f()
// back to normal state
w.Close()
os.Stderr = old // restoring the real stderr
return <-outC, nil
}
func TestGlogError(t *testing.T) {
stdErr, err := captureStderr(func() {
glog.Error("Test error")
})
if err != nil {
t.Errorf("should not be error, instead: %+v", err)
}
if !strings.HasSuffix(strings.TrimSpace(stdErr), "Test error") {
t.Errorf("stderr should end by 'Test error' but it doesn't: %s", stdErr)
}
}
running test:
go test -v
=== RUN TestGlogError
--- PASS: TestGlogError (0.00s)
PASS
ok command-line-arguments 0.007s
Write an interface that describes your usage. This won't be very pretty if you use the V method, but you have a wrapper so you've already done the hard work that fixing that would entail.
For each package you need to test, define
type Logger interface {
Infoln(...interface{}) // the methods you actually use in this package
}
And then you can easily swap it out by not referring to glog types directly in your code.
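A minimal sketch of what that swap can look like in a test (the package name, fakeLogger, and doWork are illustrative, not from the question's code):
package mypkg

import (
    "fmt"
    "testing"
)

// Logger is the interface the package depends on instead of glog directly.
type Logger interface {
    Infoln(args ...interface{})
}

// fakeLogger records what was logged so the test can assert on it.
type fakeLogger struct {
    lines []string
}

func (f *fakeLogger) Infoln(args ...interface{}) {
    f.lines = append(f.lines, fmt.Sprintln(args...))
}

// doWork stands in for code that previously called glog.Infoln directly.
func doWork(log Logger) {
    log.Infoln("starting work")
}

func TestDoWorkLogs(t *testing.T) {
    fake := &fakeLogger{}
    doWork(fake)
    if len(fake.lines) != 1 {
        t.Fatalf("expected 1 log line, got %d", len(fake.lines))
    }
}
In production code, your existing glog wrapper satisfies the interface, so nothing else has to change.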

Testing os.Exit scenarios in Go with coverage information (coveralls.io/Goveralls)

This question: How to test os.exit scenarios in Go (and the highest voted answer therein) sets out how to test os.Exit() scenarios within go. As os.Exit() cannot easily be intercepted, the method used is to reinvoke the binary and check the exit value. This method is described at slide 23 on this presentation by Andrew Gerrand (one of the core members of the Go team); the code is very simple and is reproduced in full below.
The relevant test and main files look like this (note that this pair of files alone is an MVCE):
package foo
import (
"fmt"
"os"
"os/exec"
"testing"
)
func TestCrasher(t *testing.T) {
if os.Getenv("BE_CRASHER") == "1" {
Crasher() // This causes os.Exit(1) to be called
return
}
cmd := exec.Command(os.Args[0], "-test.run=TestCrasher")
cmd.Env = append(os.Environ(), "BE_CRASHER=1")
err := cmd.Run()
if e, ok := err.(*exec.ExitError); ok && !e.Success() {
fmt.Printf("Error is %v\n", e)
return
}
t.Fatalf("process ran with err %v, want exit status 1", err)
}
and
package foo
import (
"fmt"
"os"
)
// Coverage testing thinks (incorrectly) that the func below is
// never being called
func Crasher() {
fmt.Println("Going down in flames!")
os.Exit(1)
}
However, this method appears to suffer certain limitations:
Coverage testing with goveralls / coveralls.io does not work - see for instance the example here (the same code as above but put into github for your convenience) which produces the coverage test here, i.e. it does not record the test functions being run. NOTE that you don't need to follow those links to answer the question - the above example will work fine - they are just there to show what happens if you put the above into github, and take it all the way through travis to coveralls.io.
Rerunning the test binary appears fragile.
Specifically, as requested, here is a screenshot (rather than a link) for the coverage failure; the red shading indicates that as far as coveralls.io is concerned, Crasher() is not being called.
Is there a way around this? Particularly the first point.
At a golang level the problem is this:
The Goveralls framework runs go test -cover ..., which invokes the test above.
The test above calls exec.Command / .Run without -cover in the OS arguments
Unconditionally putting -cover etc. in the argument list is unattractive as it would then run a coverage test (as the subprocess) within a non-coverage test, and parsing the argument list for the presence of -cover etc. seems a heavy duty solution.
Even if I put -cover etc. in the argument list, my understanding is that I'd then have two coverage outputs written to the same file, which isn't going to work - these would need merging somehow. The closest I've got to that is this golang issue.
Summary
What I am after is a simple way to run go coverage testing (preferably via travis, goveralls, and coveralls.io), where it is possible to both test cases where the tested routine exits with OS.exit(), and where the coverage of that test is noted. I'd quite like it to use the re-exec method above (if that can be made to work) if that can be made to work.
The solution should show coverage testing of Crasher(). Excluding Crasher() from coverage testing is not an option, as in the real world what I am trying to do is test a more complex function, where somewhere deep within, under certain conditions, it calls e.g. log.Fatalf(); what I am coverage testing is that the tests for those conditions work properly.
With a slight refactoring, you may easily achieve 100% coverage.
foo/bar.go:
package foo
import (
"fmt"
"os"
)
var osExit = os.Exit
func Crasher() {
fmt.Println("Going down in flames!")
osExit(1)
}
And the testing code: foo/bar_test.go:
package foo
import "testing"
func TestCrasher(t *testing.T) {
// Save current function and restore at the end:
oldOsExit := osExit
defer func() { osExit = oldOsExit }()
var got int
myExit := func(code int) {
got = code
}
osExit = myExit
Crasher()
if exp := 1; got != exp {
t.Errorf("Expected exit code: %d, got: %d", exp, got)
}
}
Running go test -cover:
Going down in flames!
PASS
coverage: 100.0% of statements
ok foo 0.002s
Yes, you might say this works if os.Exit() is called explicitly, but what if os.Exit() is called by someone else, e.g. log.Fatalf()?
The same technique works there too; you just have to swap out log.Fatalf() instead of os.Exit(), e.g.:
Relevant part of foo/bar.go:
var logFatalf = log.Fatalf
func Crasher() {
fmt.Println("Going down in flames!")
logFatalf("Exiting with code: %d", 1)
}
And the testing code: TestCrasher() in foo/bar_test.go:
func TestCrasher(t *testing.T) {
// Save current function and restore at the end:
oldLogFatalf := logFatalf
defer func() { logFatalf = oldLogFatalf }()
var gotFormat string
var gotV []interface{}
myFatalf := func(format string, v ...interface{}) {
gotFormat, gotV = format, v
}
logFatalf = myFatalf
Crasher()
expFormat, expV := "Exiting with code: %d", []interface{}{1}
if gotFormat != expFormat || !reflect.DeepEqual(gotV, expV) {
t.Error("Something went wrong")
}
}
Running go test -cover:
Going down in flames!
PASS
coverage: 100.0% of statements
ok foo 0.002s
Interfaces and mocks
Using Go interfaces, it is possible to create mockable compositions. A type can have interfaces as bound dependencies, and these dependencies can easily be substituted with mocks that satisfy the interfaces.
type Exiter interface {
Exit(int)
}
type osExit struct{}
func (o *osExit) Exit(code int) {
os.Exit(code)
}
type Crasher struct {
Exiter
}
func (c *Crasher) Crash() {
fmt.Println("Going down in flames!")
c.Exit(1)
}
Testing
type MockOsExit struct {
ExitCode int
}
func (m *MockOsExit) Exit(code int) {
m.ExitCode = code
}
func TestCrasher(t *testing.T) {
crasher := &Crasher{&MockOsExit{}}
crasher.Crash() // This causes os.Exit(1) to be called
f := crasher.Exiter.(*MockOsExit)
if f.ExitCode == 1 {
fmt.Printf("Error code is %d\n", f.ExitCode)
return
}
t.Fatalf("Process ran with err code %d, want exit status 1", f.ExitCode)
}
Disadvantages
The original Exit method still won't be tested, so it should be responsible only for exiting, nothing more.
Functions are first class citizens
Parameter dependency
Functions are first-class citizens in Go. Many operations are allowed on functions, so we can do some tricks with them directly.
Using the 'pass as parameter' operation, we can do dependency injection:
type osExit func(code int)
func Crasher(os_exit osExit) {
fmt.Println("Going down in flames!")
os_exit(1)
}
Testing:
var exit_code int
func os_exit_mock(code int) {
exit_code = code
}
func TestCrasher(t *testing.T) {
Crasher(os_exit_mock) // This causes os.Exit(1) to be called
if exit_code == 1 {
fmt.Printf("Error code is %d\n", exit_code)
return
}
t.Fatalf("Process ran with err code %v, want exit status 1", exit_code)
}
Disadvantages
You must pass the dependency as a parameter. If you have many dependencies, the parameter list could become quite long.
Variable substitution
Actually, it is possible to do this with the "assign to variable" operation, without explicitly passing a function as a parameter.
var osExit = os.Exit
func Crasher() {
fmt.Println("Going down in flames!")
osExit(1)
}
Testing
var exit_code int
func osExitMock(code int) {
exit_code = code
}
func TestCrasher(t *testing.T) {
origOsExit := osExit
osExit = osExitMock
// Don't forget to switch functions back!
defer func() { osExit = origOsExit }()
Crasher()
if exit_code != 1 {
t.Fatalf("Process ran with err code %v, want exit status 1", exit_code)
}
}
disadvantages
It is implicit and easy to break.
Design notes
If you plan to have some logic after the Exit call, that logic must be isolated with an else block or an extra return after the exit, because the mock won't stop execution.
func (c *Crasher) Crash() {
if SomeCondition == true {
fmt.Println("Going down in flames!")
c.Exit(1) // Exit in real situation, invoke mock when testing
} else {
DoSomeOtherStuff()
}
}

Subset of table driven test

For test functions, I can select which will run with the -run option:
go test -run regex
It is very common, when we have dozens of test cases, to put them into an array so that we don't have to write a separate function for each of them:
cases := []struct {
arg, expected string
} {
{"%a", "[%a]"},
{"%-a", "[%-a]"},
// and many others
}
for _, c := range cases {
res := myfn(c.arg)
if res != c.expected {
t.Errorf("myfn(%q) should return %q, but it returns %q", c.arg, c.expected, res)
}
}
This works well, but there is a maintenance problem. When I add a new test case, while debugging I want to run just that new case, but I cannot say something like:
go test -run TestMyFn.onlyThirdCase
Is there any elegant way to have many test cases in an array together with the ability to choose which test case will run?
With Go 1.6 (and below)
This is not supported directly by the testing package in Go 1.6 and below. You have to implement it yourself.
But it's not that hard. You can use flag package to easily access command line arguments.
Let's see an example. We define an "idx" command line parameter; if it is present, only the case at that index will be executed, otherwise all test cases run.
Define flag:
var idx = flag.Int("idx", -1, "specify case index to run only")
Parse command line flags (actually, this is not required as go test already calls this, but just to be sure / complete):
func init() {
flag.Parse()
}
Using this parameter:
for i, c := range cases {
if *idx != -1 && *idx != i {
println("Skipping idx", i)
continue
}
if res := myfn(c.arg); res != c.expected {
t.Errorf("myfn(%q) should return %q, but it returns %q", c.arg, c.expected, res)
}
}
Testing it with 3 test cases:
cases := []struct {
arg, expected string
}{
{"%a", "[%a]"},
{"%-a", "[%-a]"},
{"%+a", "[%+a]"},
}
Without idx parameter:
go test
Output:
PASS
ok play 0.172s
Specifying an index:
go test -idx=1
Output:
Skipping idx 0
Skipping idx 2
PASS
ok play 0.203s
Of course you can implement more sophisticated filtering logic, e.g. you can have minidx and maxidx flags to run cases in a range:
var (
minidx = flag.Int("minidx", 0, "min case idx to run")
maxidx = flag.Int("maxidx", -1, "max case idx to run")
)
And the filtering:
if i < *minidx || *maxidx != -1 && i > *maxidx {
println("Skipping idx", i)
continue
}
Using it:
go test -maxidx=1
Output:
Skipping idx 2
PASS
ok play 0.188s
Starting with Go 1.7
Go 1.7 (to be released on August 18, 2016) adds the definition of subtests and sub-benchmarks:
The testing package now supports the definition of tests with subtests and benchmarks with sub-benchmarks. This support makes it easy to write table-driven benchmarks and to create hierarchical tests. It also provides a way to share common setup and tear-down code. See the package documentation for details.
With that, you can do things like:
func TestFoo(t *testing.T) {
// <setup code>
t.Run("A=1", func(t *testing.T) { ... })
t.Run("A=2", func(t *testing.T) { ... })
t.Run("B=1", func(t *testing.T) { ... })
// <tear-down code>
}
Where the subtests are named "A=1", "A=2", "B=1".
The argument to the -run and -bench command-line flags is a slash-separated list of regular expressions that match each name element in turn. For example:
go test -run Foo # Run top-level tests matching "Foo".
go test -run Foo/A= # Run subtests of Foo matching "A=".
go test -run /A=1 # Run all subtests of a top-level test matching "A=1".
How does this help your case? The names of subtests are string values, which can be generated on-the-fly, e.g.:
for i, c := range cases {
name := fmt.Sprintf("C=%d", i)
t.Run(name, func(t *testing.T) {
if res := myfn(c.arg); res != c.expected {
t.Errorf("myfn(%q) should return %q, but it returns %q",
c.arg, c.expected, res)
}
})
}
To run the case at index 2, you could start it like
go test -run /C=2
or
go test -run TestName/C=2
I wrote some simple code that works fine with both, although with slightly different command line options. The version for 1.7 is:
// +build go1.7
package plist
import "testing"
func runTest(name string, fn func(t *testing.T), t *testing.T) {
t.Run(name, fn)
}
and for 1.6 and older:
// +build !go1.7
package plist
import (
"flag"
"testing"
"runtime"
"strings"
"fmt"
)
func init() {
flag.Parse()
}
var pattern = flag.String("pattern", "", "specify which test(s) should be executed")
var verbose = flag.Bool("verbose", false, "write whether test was done")
// This is a hack that somewhat simulates the t.Run available from go1.7
func runTest(name string, fn func(t *testing.T), t *testing.T) {
// obtain name of caller
var pc [10]uintptr
runtime.Callers(2, pc[:])
var fnName = ""
f := runtime.FuncForPC(pc[0])
if f != nil {
fnName = f.Name()
}
names := strings.Split(fnName, ".")
fnName = names[len(names)-1] + "/" + name
if strings.Contains(fnName, *pattern) {
if *verbose {
fmt.Printf("%s is executed\n", fnName)
}
fn(t)
} else {
if *verbose {
fmt.Printf("%s is skipped\n", fnName)
}
}
}
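For completeness, a sketch of how the shim above would be called from a table-driven test (myfn and the cases are placeholders from the question; the fragment needs the fmt and testing imports):
func TestMyFn(t *testing.T) {
    cases := []struct {
        arg, expected string
    }{
        {"%a", "[%a]"},
        {"%-a", "[%-a]"},
        {"%+a", "[%+a]"},
    }
    for i, c := range cases {
        // the generated name becomes part of the subtest / pattern match
        name := fmt.Sprintf("C=%d", i)
        runTest(name, func(t *testing.T) {
            if res := myfn(c.arg); res != c.expected {
                t.Errorf("myfn(%q) should return %q, but it returns %q", c.arg, c.expected, res)
            }
        }, t)
    }
}
With Go 1.7 you would then run go test -run TestMyFn/C=2, and with the older shim go test -pattern TestMyFn/C=2.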