Running a subset of a table-driven test - unit-testing

When testing, I can select which test functions run with the -run option:
go test -run regex
When there are dozens of test cases, it is common to put them into a slice rather than writing a separate test function for each one:
cases := []struct {
	arg, expected string
}{
	{"%a", "[%a]"},
	{"%-a", "[%-a]"},
	// and many others
}
for _, c := range cases {
	res := myfn(c.arg)
	if res != c.expected {
		t.Errorf("myfn(%q) should return %q, but it returns %q", c.arg, c.expected, res)
	}
}
This works well, but the problem is maintenance. When I add a new test case, while debugging I want to run just that new case, but I cannot say something like:
go test -run TestMyFn.onlyThirdCase
Is there an elegant way to keep many test cases in a slice, together with the ability to choose which of them will run?

With Go 1.6 (and below)
This is not supported directly by the testing package in Go 1.6 and below. You have to implement it yourself.
But it's not that hard. You can use the flag package to easily access command line arguments.
Let's see an example. We define an "idx" command line parameter; if it is present, only the case at that index is executed, otherwise all test cases run.
Define flag:
var idx = flag.Int("idx", -1, "specify case index to run only")
Parse command line flags (actually, this is not required as go test already calls this, but just to be sure / complete):
func init() {
flag.Parse()
}
Using this parameter:
for i, c := range cases {
if *idx != -1 && *idx != i {
println("Skipping idx", i)
continue
}
if res := myfn(c.arg); res != c.expected {
t.Errorf("myfn(%q) should return %q, but it returns %q", c.arg, c.expected, res)
}
}
Testing it with 3 test cases:
cases := []struct {
arg, expected string
}{
{"%a", "[%a]"},
{"%-a", "[%-a]"},
{"%+a", "[%+a]"},
}
Without idx parameter:
go test
Output:
PASS
ok play 0.172s
Specifying an index:
go test -idx=1
Output:
Skipping idx 0
Skipping idx 2
PASS
ok play 0.203s
Of course you can implement more sophisticated filtering logic, e.g. you can have minidx and maxidx flags to run cases in a range:
var (
minidx = flag.Int("minidx", 0, "min case idx to run")
maxidx = flag.Int("maxidx", -1, "max case idx to run")
)
And the filtering:
if i < *minidx || *maxidx != -1 && i > *maxidx {
println("Skipping idx", i)
continue
}
Using it:
go test -maxidx=1
Output:
Skipping idx 2
PASS
ok play 0.188s
Starting with Go 1.7
Go 1.7 (released in August 2016) adds the definition of subtests and sub-benchmarks:
The testing package now supports the definition of tests with subtests and benchmarks with sub-benchmarks. This support makes it easy to write table-driven benchmarks and to create hierarchical tests. It also provides a way to share common setup and tear-down code. See the package documentation for details.
With that, you can do things like:
func TestFoo(t *testing.T) {
// <setup code>
t.Run("A=1", func(t *testing.T) { ... })
t.Run("A=2", func(t *testing.T) { ... })
t.Run("B=1", func(t *testing.T) { ... })
// <tear-down code>
}
Where the subtests are named "A=1", "A=2", "B=1".
The argument to the -run and -bench command-line flags is a slash-separated list of regular expressions that match each name element in turn. For example:
go test -run Foo # Run top-level tests matching "Foo".
go test -run Foo/A= # Run subtests of Foo matching "A=".
go test -run /A=1 # Run all subtests of a top-level test matching "A=1".
How does this help your case? The names of subtests are string values, which can be generated on-the-fly, e.g.:
for i, c := range cases {
name := fmt.Sprintf("C=%d", i)
t.Run(name, func(t *testing.T) {
if res := myfn(c.arg); res != c.expected {
t.Errorf("myfn(%q) should return %q, but it returns %q",
c.arg, c.expected, res)
}
})
}
To run the case at index 2, you could start it like
go test -run /C=2
or
go test -run TestName/C=2

I wrote some simple code that works fine with both, although with slightly different command line options. The version for Go 1.7 is:
// +build go1.7

package plist

import "testing"

func runTest(name string, fn func(t *testing.T), t *testing.T) {
	t.Run(name, fn)
}
and for Go 1.6 and older:
// +build !go1.7

package plist

import (
	"flag"
	"fmt"
	"runtime"
	"strings"
	"testing"
)

func init() {
	flag.Parse()
}

var pattern = flag.String("pattern", "", "specify which test(s) should be executed")
var verbose = flag.Bool("verbose", false, "report whether each test was executed or skipped")

// This is a hack that somewhat simulates t.Run, which is available from go1.7.
func runTest(name string, fn func(t *testing.T), t *testing.T) {
	// obtain the name of the calling test function
	var pc [10]uintptr
	runtime.Callers(2, pc[:])
	fnName := ""
	if f := runtime.FuncForPC(pc[0]); f != nil {
		fnName = f.Name()
	}
	names := strings.Split(fnName, ".")
	fnName = names[len(names)-1] + "/" + name
	if strings.Contains(fnName, *pattern) {
		if *verbose {
			fmt.Printf("%s is executed\n", fnName)
		}
		fn(t)
	} else if *verbose {
		fmt.Printf("%s is skipped\n", fnName)
	}
}
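For completeness, here is a short sketch (not part of the original answer) of how runTest might be invoked from an ordinary test function, reusing myfn and the cases from the question (the test file would also need to import fmt and testing):
func TestMyFn(t *testing.T) {
	cases := []struct {
		arg, expected string
	}{
		{"%a", "[%a]"},
		{"%-a", "[%-a]"},
		{"%+a", "[%+a]"},
	}
	for i, c := range cases {
		runTest(fmt.Sprintf("C=%d", i), func(t *testing.T) {
			if res := myfn(c.arg); res != c.expected {
				t.Errorf("myfn(%q) should return %q, but it returns %q", c.arg, c.expected, res)
			}
		}, t)
	}
}
With the go1.7 build this dispatches to t.Run, so a single case can be selected with go test -run TestMyFn/C=2; with the older build the same case is selected with the -pattern flag defined above, e.g. go test -pattern=TestMyFn/C=2.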

Related

How should I test functions that deal with setting a large number of environment configs/OS arguments?

I've written a Go application, and all of the packages have full test coverage. I'm in the process of writing my main package - which will handle all of the initial setup for the application in the main() function - this function currently reads in 14 environment variables and then sets the relevant variable in the application. A simple overview of the code is:
func main() {
myStruct1 := privatePackage.myStructType{}
myStruct2 := publicPackage.otherStructType{}
if config1 := os.Getenv("CONFIG_FOO"); config1 != "" {
myStruct1.attribute1 = config1
}
// ....
if config14 := os.Getenv("CONFIG_BAR"); config14 != "" {
myStruct2.attribute5 = config14
}
}
When I unit test env variables/OS args, I typically just set the env variable directly in the test function - so something like:
func TestMyArgument(t *testing.T) {
os.Setenv("CONFIG_BAZ", "apple")
//Invoke function that depends on CONFIG_BAZ
//Assert that expected outcome occurred
}
I pretty much always use table-driven tests, so the above snippet is a simplified example.
The issue is that my main() function takes in 14 (and growing) env variables, and whilst some env variables are essentially enums (so there's a small number of valid options - for example there's a small number of database drivers to choose from), other env variables have virtually unlimited potential values. So how can I effectively cover all of the (or enough of the) permutations of potential configs?
EDIT: When this application is deployed, it's going into a K8s cluster. Some of these variables are secrets that will be pulled in from secure store. Using a JSON file isn't viable because some of the values need to be encrypted/changed easily.
Also, using a JSON file would require me to store this file and share it between hundreds/thousands of running pods - this storage would then act as a point of failure.
To clarify, this question isn't about env vars VS config files; this question is about the best way to approach testing when there's a significant number of configurable variables - with each variables having a vast number of potential values - resulting in thousands of possible configuration permutations. How do I guarantee sufficient test coverage in such a scenario?
@Steven Penny is right: use JSON.
Using reflect can also make the code simpler:
package main
import (
"encoding/json"
"fmt"
"os"
"reflect"
"strconv"
)
type MyStructType struct {
Attribute1 string `json:"CONFIG_FOO"`
Attribute2 string `json:"CONFIG_BAZ"`
Attribute3 int `json:"CONFIG_BAR"`
}
func NewMyStructTypeFromEnv() *MyStructType {
myStructType := MyStructType{}
ReflectMyStructType(&myStructType)
fmt.Println("myStructType is now", myStructType)
return &myStructType
}
func NewMyStructTypeFromJson() *MyStructType {
myStructType := MyStructType{}
f, e := os.Open("file.json")
if e != nil {
panic(e)
}
defer f.Close()
json.NewDecoder(f).Decode(&myStructType)
fmt.Println("myStructType is now", myStructType)
return &myStructType
}
func ReflectMyStructType(ptr interface{}) {
v := reflect.ValueOf(ptr).Elem()
fmt.Printf("%v\n", v.Type())
for i := 0; i < v.NumField(); i++ {
envStr := v.Type().Field(i).Tag.Get("json")
if envStr == "" {
continue
}
if config := os.Getenv(envStr); config != "" {
if v.Field(i).Kind() == reflect.String {
v.Field(i).SetString(config)
} else if v.Field(i).Kind() == reflect.Int {
iConfig, _ := strconv.Atoi(config)
v.Field(i).SetInt(int64(iConfig))
}
}
}
}
func main() {
NewMyStructTypeFromJson()
os.Setenv("CONFIG_FOO", "apple")
os.Setenv("CONFIG_BAZ", "apple")
os.Setenv("CONFIG_BAR", "1")
NewMyStructTypeFromEnv()
}
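Not part of the original answer, but to tie this back to testing: with Go 1.17+ the reflect-based loader above can be driven from a table of environment values in a main_test.go, using t.Setenv, which restores each variable when the subtest finishes. A minimal sketch:
func TestReflectMyStructType(t *testing.T) {
	cases := []struct {
		name string
		env  map[string]string
		want MyStructType
	}{
		{
			name: "all set",
			env:  map[string]string{"CONFIG_FOO": "north", "CONFIG_BAZ": "south", "CONFIG_BAR": "7"},
			want: MyStructType{Attribute1: "north", Attribute2: "south", Attribute3: 7},
		},
		{
			name: "nothing set", // assumes these variables are not set in the outer environment
			env:  nil,
			want: MyStructType{},
		},
	}
	for _, c := range cases {
		t.Run(c.name, func(t *testing.T) {
			for k, v := range c.env {
				t.Setenv(k, v) // automatically restored after this subtest
			}
			var got MyStructType
			ReflectMyStructType(&got)
			if got != c.want {
				t.Errorf("got %+v, want %+v", got, c.want)
			}
		})
	}
}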
Beyond one or two, I don't think using environment variables is the right approach, unless it's required (e.g. calling something with os/exec). Instead, it would be better to read from a config file. Here is an example with JSON:
{
"CONFIG_BAR": "east",
"CONFIG_BAZ": "south",
"CONFIG_FOO": "north"
}
package main
import (
"encoding/json"
"fmt"
"os"
)
func main() {
f, e := os.Open("file.json")
if e != nil {
panic(e)
}
defer f.Close()
var s struct { CONFIG_BAR, CONFIG_BAZ, CONFIG_FOO string }
json.NewDecoder(f).Decode(&s)
// {CONFIG_BAR:east CONFIG_BAZ:south CONFIG_FOO:north}
fmt.Printf("%+v\n", s)
}
TOML would be a good choice as well.
https://golang.org/pkg/encoding/json
https://pkg.go.dev/github.com/pelletier/go-toml
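Not part of the original answer, but since go-toml is linked above, a minimal sketch of the same program reading TOML (assuming go-toml v2's Unmarshal) could look like this:
file.toml:
CONFIG_BAR = "east"
CONFIG_BAZ = "south"
CONFIG_FOO = "north"
main.go:
package main

import (
	"fmt"
	"os"

	toml "github.com/pelletier/go-toml/v2"
)

func main() {
	b, e := os.ReadFile("file.toml")
	if e != nil {
		panic(e)
	}
	var s struct{ CONFIG_BAR, CONFIG_BAZ, CONFIG_FOO string }
	if e := toml.Unmarshal(b, &s); e != nil {
		panic(e)
	}
	// {CONFIG_BAR:east CONFIG_BAZ:south CONFIG_FOO:north}
	fmt.Printf("%+v\n", s)
}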

How to test if a goroutine has been called while unit testing in Golang?

Suppose that we have a method like this:
func method(intr MyInterface) {
go intr.exec()
}
When unit testing method, we want to assert that intr.exec has been called once and only once; so in tests we can mock it with a struct that lets us check whether it has been called:
type mockInterface struct{
CallCount int
}
func (m *mockInterface) exec() {
m.CallCount += 1
}
And in unit tests:
func TestMethod(t *testing.T) {
	mock := &mockInterface{}
	method(mock)
	if mock.CallCount != 1 {
		t.Errorf("Expected exec to be called only once but it ran %d times", mock.CallCount)
	}
}
Now, the problem is that since intr.exec is called with the go keyword, we can't be sure whether it has run by the time we reach the assertion in the test.
Possible Solution 1:
Adding a channel to the arguments of intr.exec may solve this: in the test we could wait to receive a value on that channel, and only then continue with the assertion. The channel would be completely unused in production (non-test) code.
This works, but it adds unnecessary complexity to non-test code and may make a large codebase harder to follow.
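To make that trade-off concrete, here is a rough sketch of what Solution 1 could look like; the done parameter and the changed signatures are hypothetical, not from the question:
type MyInterface interface {
	exec(done chan<- struct{})
}

func method(intr MyInterface, done chan<- struct{}) {
	go intr.exec(done)
}

// in the test file:
type mockInterface struct {
	CallCount int
}

func (m *mockInterface) exec(done chan<- struct{}) {
	m.CallCount += 1
	close(done) // signal that exec has run
}

func TestMethod(t *testing.T) {
	mock := &mockInterface{}
	done := make(chan struct{})
	method(mock, done)
	<-done // blocks until exec signals
	if mock.CallCount != 1 {
		t.Errorf("Expected exec to be called only once but it ran %d times", mock.CallCount)
	}
}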
Possible Solution 2:
Adding a relatively small sleep to the test before the assertion may give us some assurance that the goroutine has run before the sleep finishes:
func TestMethod(t *testing.T) {
	mock := &mockInterface{}
	method(mock)
	time.Sleep(100 * time.Millisecond)
	if mock.CallCount != 1 {
		t.Errorf("Expected exec to be called only once but it ran %d times", mock.CallCount)
	}
}
This leaves non-test code as it is now.
The problem is that it makes the tests slower and flaky, since they may break under random circumstances.
Possible Solution 3:
Creating a utility function like this:
var Go = func(function func()) {
go function()
}
And rewrite method like this:
func method(intr MyInterface) {
	Go(intr.exec)
}
In tests, we could change Go to this:
var Go = func(function func()) {
function()
}
So, when we're running tests, intr.exec will be called synchronously, and we can be sure that our mock method is called before the assertion.
The only problem with this solution is that it overrides a fundamental construct of Go, which is not the right thing to do.
These are the solutions I could find, but none of them is satisfactory as far as I can see. What is the best solution?
Use a sync.WaitGroup inside the mock
You can extend mockInterface to allow it to wait for the other goroutine to finish:
type mockInterface struct{
wg sync.WaitGroup // create a wait group, this will allow you to block later
CallCount int
}
func (m *mockInterface) exec() {
m.CallCount += 1
m.wg.Done() // record the call to exec (after updating CallCount, so the waiting goroutine sees it)
}
func (m *mockInterface) currentCount() int {
m.wg.Wait() // wait for all the call to happen. This will block until wg.Done() is called.
return m.CallCount
}
In the tests you can do:
mock := &mockInterface{}
mock.wg.Add(1) // set up the fact that you want it to block until Done is called once.
method(mock)
if mock.currentCount() != 1 { // this line will block
// trimmed
}
This test won't hang forever, as the sync.WaitGroup solution proposed above can. It will wait for at most a second (in this particular example) when there is no call to mock.exec:
package main

import (
	"testing"
	"time"
)

type mockInterface struct {
	closeCh chan struct{}
}

func (m *mockInterface) exec() {
	close(m.closeCh)
}

func TestMethod(t *testing.T) {
	mock := &mockInterface{
		closeCh: make(chan struct{}),
	}
	method(mock)
	select {
	case <-mock.closeCh:
	case <-time.After(time.Second):
		t.Fatalf("expected call to mock.exec method")
	}
}
This is basically what mc.Wait(time.Second) does in my other answer.
First of all, I would use a mock generator, e.g. github.com/gojuno/minimock, instead of writing mocks yourself:
minimock -f example.go -i MyInterface -o my_interface_mock_test.go
Then your test can look like this (by the way, the test stub was also generated, with github.com/hexdigest/gounit):
func Test_method(t *testing.T) {
type args struct {
intr MyInterface
}
tests := []struct {
name string
args func(t minimock.Tester) args
}{
{
name: "check if exec is called",
args: func(t minimock.Tester) args {
return args{
intr: NewMyInterfaceMock(t).execMock.Return(),
}
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
mc := minimock.NewController(t)
defer mc.Wait(time.Second)
tArgs := tt.args(mc)
method(tArgs.intr)
})
}
}
In this test
defer mc.Wait(time.Second)
Waits for all mocked methods to be called.

What are the general rules to make 'go test -run <case>' succeed like 'go test' does?

I found that 'go test' passes, but if I run a specific subtest it fails. Here I give an example with a global variable: 'go test' will PASS, but 'go test -run Test_f/simple2' will FAIL.
I wonder what general rules I should follow to prevent such problems?
t.go
package main
import "fmt"
var g string
func f(s string) string {
g = g + s
return s + g
}
func main() {
fmt.Println(f("a"))
}
t_test.go
package main
import (
"testing"
)
func Test_f(t *testing.T) {
tests := []struct {
name string
g string
s string
r string
}{
{"simple", "g1", "s1", "s1s1"},
{"simple2", "g2", "s2", "s2s1s2"},
{"simple3", "g3", "s3", "s3s1s2s3"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
//g = tt.g
r := f(tt.s)
if r != tt.r {
t.Error("r=", r, "expect r=", tt.r)
}
})
}
}
If global variables are the problem: I often write something like osExit or fmtPrint to replace os.Exit and fmt.Print in testing. How do I overcome these?
The way to prevent such problems is to not have tests that depend on each other. In your case, the global variable is indeed a problem. If you run the tests in order, they have the expected global state. If you run them out of order, they don't, due to the inter-dependency between the tests.
The solution is to have each test work independently, by having it set up its own expected state. In this case, that would mean setting the global g variable to the expected value for each test.
A better solution is to refactor so you don't have global variables at all.
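To illustrate the first fix, the table can carry the global state each case depends on, and each subtest can reset g before calling f. This is only a sketch; the g values below are chosen so that the original expected results still hold:
tests := []struct {
	name string
	g    string // global state this case expects before calling f
	s    string
	r    string
}{
	{"simple", "", "s1", "s1s1"},
	{"simple2", "s1", "s2", "s2s1s2"},
	{"simple3", "s1s2", "s3", "s3s1s2s3"},
}
for _, tt := range tests {
	t.Run(tt.name, func(t *testing.T) {
		g = tt.g // each case sets up its own state
		r := f(tt.s)
		if r != tt.r {
			t.Error("r=", r, "expect r=", tt.r)
		}
	})
}
With this change, go test -run Test_f/simple2 passes on its own, because the case no longer depends on the earlier ones having run.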

Testing os.Exit scenarios in Go with coverage information (coveralls.io/Goveralls)

This question: How to test os.exit scenarios in Go (and the highest voted answer therein) sets out how to test os.Exit() scenarios within Go. As os.Exit() cannot easily be intercepted, the method used is to reinvoke the binary and check the exit value. This method is described at slide 23 of this presentation by Andrew Gerrand (one of the core members of the Go team); the code is very simple and is reproduced in full below.
The relevant test and main files look like this (note that this pair of files alone is an MVCE):
package foo
import (
"fmt"
"os"
"os/exec"
"testing"
)
func TestCrasher(t *testing.T) {
if os.Getenv("BE_CRASHER") == "1" {
Crasher() // This causes os.Exit(1) to be called
return
}
cmd := exec.Command(os.Args[0], "-test.run=TestCrasher")
cmd.Env = append(os.Environ(), "BE_CRASHER=1")
err := cmd.Run()
if e, ok := err.(*exec.ExitError); ok && !e.Success() {
fmt.Printf("Error is %v\n", e)
return
}
t.Fatalf("process ran with err %v, want exit status 1", err)
}
and
package foo
import (
"fmt"
"os"
)
// Coverage testing thinks (incorrectly) that the func below is
// never being called
func Crasher() {
fmt.Println("Going down in flames!")
os.Exit(1)
}
However, this method appears to suffer certain limitations:
Coverage testing with goveralls / coveralls.io does not work - see for instance the example here (the same code as above, but put on GitHub for your convenience), which produces the coverage result here, i.e. it does not record the test functions being run. NOTE that you don't need those links to answer the question - the above example will work fine - they are just there to show what happens if you put the above into GitHub and take it all the way through Travis to coveralls.io.
Rerunning the test binary appears fragile.
Specifically, as requested, here is a screenshot (rather than a link) for the coverage failure; the red shading indicates that as far as coveralls.io is concerned, Crasher() is not being called.
Is there a way around this? Particularly the first point.
At the Go level, the problem is this:
The Goveralls framework runs go test -cover ..., which invokes the test above.
The test above calls exec.Command / .Run without -cover in the OS arguments
Unconditionally putting -cover etc. in the argument list is unattractive as it would then run a coverage test (as the subprocess) within a non-coverage test, and parsing the argument list for the presence of -cover etc. seems a heavy duty solution.
Even if I put -cover etc. in the argument list, my understanding is that I'd then have two coverage outputs written to the same file, which isn't going to work - these would need merging somehow. The closest I've got to that is this golang issue.
Summary
What I am after is a simple way to run Go coverage testing (preferably via travis, goveralls, and coveralls.io) where it is possible both to test cases in which the tested routine exits with os.Exit(), and to have the coverage of that test recorded. I'd quite like it to use the re-exec method above, if that can be made to work.
The solution should show coverage testing of Crasher(). Excluding Crasher() from coverage testing is not an option, as in the real world what I am trying to do is test a more complex function, where somewhere deep within, under certain conditions, it calls e.g. log.Fatalf(); what I am coverage testing is that the tests for those conditions works properly.
With a slight refactoring, you may easily achieve 100% coverage.
foo/bar.go:
package foo
import (
"fmt"
"os"
)
var osExit = os.Exit
func Crasher() {
fmt.Println("Going down in flames!")
osExit(1)
}
And the testing code: foo/bar_test.go:
package foo
import "testing"
func TestCrasher(t *testing.T) {
// Save current function and restore at the end:
oldOsExit := osExit
defer func() { osExit = oldOsExit }()
var got int
myExit := func(code int) {
got = code
}
osExit = myExit
Crasher()
if exp := 1; got != exp {
t.Errorf("Expected exit code: %d, got: %d", exp, got)
}
}
Running go test -cover:
Going down in flames!
PASS
coverage: 100.0% of statements
ok foo 0.002s
Yes, you might say this works if os.Exit() is called explicitly, but what if os.Exit() is called by someone else, e.g. log.Fatalf()?
The same technique works there too; you just have to switch out log.Fatalf() instead of os.Exit(), e.g.:
Relevant part of foo/bar.go:
var logFatalf = log.Fatalf
func Crasher() {
fmt.Println("Going down in flames!")
logFatalf("Exiting with code: %d", 1)
}
And the testing code: TestCrasher() in foo/bar_test.go:
func TestCrasher(t *testing.T) {
// Save current function and restore at the end:
oldLogFatalf := logFatalf
defer func() { logFatalf = oldLogFatalf }()
var gotFormat string
var gotV []interface{}
myFatalf := func(format string, v ...interface{}) {
gotFormat, gotV = format, v
}
logFatalf = myFatalf
Crasher()
expFormat, expV := "Exiting with code: %d", []interface{}{1}
if gotFormat != expFormat || !reflect.DeepEqual(gotV, expV) {
t.Error("Something went wrong")
}
}
Running go test -cover:
Going down in flames!
PASS
coverage: 100.0% of statements
ok foo 0.002s
Interfaces and mocks
Using Go interfaces, it is possible to create mockable compositions. A type can have interfaces as bound dependencies, and these dependencies can easily be substituted with mocks that satisfy the interfaces.
type Exiter interface {
Exit(int)
}
type osExit struct{}
func (o *osExit) Exit(code int) {
os.Exit(code)
}
type Crasher struct {
Exiter
}
func (c *Crasher) Crash() {
fmt.Println("Going down in flames!")
c.Exit(1)
}
Testing
type MockOsExit struct {
ExitCode int
}
func (m *MockOsExit) Exit(code int){
m.ExitCode = code
}
func TestCrasher(t *testing.T) {
crasher := &Crasher{&MockOsExit{}}
crasher.Crash() // This causes os.Exit(1) to be called
f := crasher.Exiter.(*MockOsExit)
if f.ExitCode == 1 {
fmt.Printf("Error code is %d\n", f.ExitCode)
return
}
t.Fatalf("Process ran with err code %d, want exit status 1", f.ExitCode)
}
Disadvantages
The original Exit method still won't be tested, so it should be responsible only for exiting, nothing more.
Functions are first class citizens
Parameter dependency
Functions are first-class citizens in Go. Many operations are allowed on functions, so we can use them directly for some tricks.
Using the 'pass as parameter' operation, we can do dependency injection:
type osExit func(code int)
func Crasher(os_exit osExit) {
fmt.Println("Going down in flames!")
os_exit(1)
}
Testing:
var exit_code int
func os_exit_mock(code int) {
exit_code = code
}
func TestCrasher(t *testing.T) {
Crasher(os_exit_mock) // This causes os.Exit(1) to be called
if exit_code == 1 {
fmt.Printf("Error code is %d\n", exit_code)
return
}
t.Fatalf("Process ran with err code %v, want exit status 1", exit_code)
}
Disadvantages
You must pass a dependency as a parameter. If you have many dependencies, the parameter list could become huge.
Variable substitution
Actually, it is possible to do this using the "assign to variable" operation, without explicitly passing a function as a parameter.
var osExit = os.Exit
func Crasher() {
fmt.Println("Going down in flames!")
osExit(1)
}
Testing
var exit_code int
func osExitMock(code int) {
exit_code = code
}
func TestCrasher(t *testing.T) {
origOsExit := osExit
osExit = osExitMock
// Don't forget to switch functions back!
defer func() { osExit = origOsExit }()
Crasher()
if exit_code != 1 {
t.Fatalf("Process ran with err code %v, want exit status 1", exit_code)
}
}
Disadvantages
It is implicit and easy to break.
Design notes
If you plan to put some logic after the Exit call, the exit logic must be isolated with an else block or an extra return after the exit, because the mock won't stop execution.
func (c *Crasher) Crash() {
if SomeCondition == true {
fmt.Println("Going down in flames!")
c.Exit(1) // Exit in real situation, invoke mock when testing
} else {
DoSomeOtherStuff()
}
}

How do I unit test command line flags in Go?

I would like a unit test that verifies a particular command line flag is within an enumeration.
Here is the code I would like to write tests against:
var formatType string
const (
text = "text"
json = "json"
hash = "hash"
)
func init() {
const (
defaultFormat = "text"
formatUsage = "desired output format"
)
flag.StringVar(&formatType, "format", defaultFormat, formatUsage)
flag.StringVar(&formatType, "f", defaultFormat, formatUsage+" (shorthand)")
}
func main() {
flag.Parse()
}
The desired test would pass only if -format equalled one of the const values given above. This value would be available in formatType. An example correct call would be: program -format text
What is the best way to test the desired behaviors?
Note: Perhaps I have phrased this poorly, but the displayed code is not the unit test itself; it is the code I want to write unit tests against. This is a simple example from the tool I am writing, and I wanted to ask if there is a good way to test valid inputs to the tool.
Custom testing and processing of flags can be achieved with the flag.Var function in the flag package.
Flag.Var "defines a flag with the specified name and usage string. The type and value of the flag are represented by the first argument, of type Value, which typically holds a user-defined implementation of Value."
A flag.Value is any type that satisfies the Value interface, defined as:
type Value interface {
String() string
Set(string) error
}
There is a good example in the example_test.go file in the flag package source
For your use case you could use something like:
package main
import (
"errors"
"flag"
"fmt"
)
type formatType string
func (f *formatType) String() string {
return fmt.Sprint(*f)
}
func (f *formatType) Set(value string) error {
if len(*f) > 0 && *f != "text" {
return errors.New("format flag already set")
}
if value != "text" && value != "json" && value != "hash" {
return errors.New("Invalid Format Type")
}
*f = formatType(value)
return nil
}
var typeFlag formatType
func init() {
typeFlag = "text"
usage := `Format type. Must be "text", "json" or "hash". Defaults to "text".`
flag.Var(&typeFlag, "format", usage)
flag.Var(&typeFlag, "f", usage+" (shorthand)")
}
func main() {
flag.Parse()
fmt.Println("Format type is", typeFlag)
}
This is probably overkill for such a simple example, but may be very useful when defining more complex flag types (The linked example converts a comma separated list of intervals into a slice of a custom type based on time.Duration).
EDIT: In answer to how to run unit tests against flags, the most canonical example is flag_test.go in the flag package source. The section related to testing custom flag variables starts at Line 181.
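As a small illustration (not from the linked file), the Set method of the custom formatType above can be unit tested directly, without going through flag.Parse, in a test file of the same package:
func TestFormatTypeSet(t *testing.T) {
	cases := []struct {
		in      string
		wantErr bool
	}{
		{"text", false},
		{"json", false},
		{"hash", false},
		{"xml", true},
	}
	for _, c := range cases {
		f := formatType("text") // same default as in init
		if err := f.Set(c.in); (err != nil) != c.wantErr {
			t.Errorf("Set(%q) returned error %v, wantErr %v", c.in, err, c.wantErr)
		}
	}
}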
You can do this
func main() {
var name string
var password string
flag.StringVar(&name, "name", "", "")
flag.StringVar(&password, "password", "", "")
flag.Parse()
for _, v := range os.Args {
fmt.Println(v)
}
if len(strings.TrimSpace(name)) == 0 || len(strings.TrimSpace(password)) == 0 {
log.Panicln("no name or no passward")
}
fmt.Printf("name:%s\n", name)
fmt.Printf("password:%s\n", password)
}
func TestMainApp(t *testing.T) {
os.Args = []string{"test", "-name", "Hello", "-password", "World"}
main()
}
You can test main() by:
Making a test that runs a command
Which then calls the app test binary, built from go test, directly
Passing the desired flags you want to test
Passing back the exit code, stdout, and stderr, which you can assert on.
NOTE: This only works when main exits, so that the test does not run infinitely or get caught in a recursive loop.
Given your main.go looks like:
package main
import (
"flag"
"fmt"
"os"
)
var formatType string
const (
text = "text"
json = "json"
hash = "hash"
)
func init() {
const (
defaultFormat = "text"
formatUsage = "desired output format"
)
flag.StringVar(&formatType, "format", defaultFormat, formatUsage)
flag.StringVar(&formatType, "f", defaultFormat, formatUsage+" (shorthand)")
}
func main() {
flag.Parse()
fmt.Printf("format type = %v\n", formatType)
os.Exit(0)
}
Your main_test.go may then look something like:
package main
import (
"fmt"
"os"
"os/exec"
"path"
"runtime"
"strings"
"testing"
)
// This will be used to pass args to app and keep the test framework from looping
const subCmdFlags = "FLAGS_FOR_MAIN"
func TestMain(m *testing.M) {
// Only runs when this environment variable is set.
if os.Getenv(subCmdFlags) != "" {
runAppMain()
}
// Run all tests
exitCode := m.Run()
// Clean up
os.Exit(exitCode)
}
func TestMainForCorrectness(tester *testing.T) {
var tests = []struct {
name string
wantCode int
args []string
}{
{"formatTypeJson", 0, []string{"-format", "json"}},
}
for _, test := range tests {
tester.Run(test.name, func(t *testing.T) {
cmd := getTestBinCmd(test.args)
cmdOut, cmdErr := cmd.CombinedOutput()
got := cmd.ProcessState.ExitCode()
// Debug
showCmdOutput(cmdOut, cmdErr)
if got != test.wantCode {
t.Errorf("unexpected exit code: want %d, got %d", test.wantCode, got)
}
})
}
}
// private helper methods.
// Used for running the application's main function from other test.
func runAppMain() {
// The test framework has processed its flags,
// so now we can remove them and replace them with the flags we want to pass to main.
// We pull them out of the environment variable we set.
args := strings.Split(os.Getenv(subCmdFlags), " ")
os.Args = append([]string{os.Args[0]}, args...)
// Debug stmt, can be removed
fmt.Printf("\nos args = %v\n", os.Args)
main() // will run and exit, signaling the test framework to stop and return the exit code.
}
// getTestBinCmd return a command to run your app (test) binary directly; `TestMain`, will be run automatically.
func getTestBinCmd(args []string) *exec.Cmd {
// call the generated test binary directly
// Have it run the function runAppMain (via TestMain).
cmd := exec.Command(os.Args[0], "-args", strings.Join(args, " "))
// Run in the context of the source directory.
_, filename, _, _ := runtime.Caller(0)
cmd.Dir = path.Dir(filename)
// Set an environment variable that:
// 1. Only exists for the life of the test that calls this function.
// 2. Passes arguments/flags to your app.
// 3. Lets TestMain know when to run the main function.
subEnvVar := subCmdFlags + "=" + strings.Join(args, " ")
cmd.Env = append(os.Environ(), subEnvVar)
return cmd
}
func showCmdOutput(cmdOut []byte, cmdErr error) {
if cmdOut != nil {
fmt.Printf("\nBEGIN sub-command out:\n%v", string(cmdOut))
fmt.Print("END sub-command\n")
}
if cmdErr != nil {
fmt.Printf("\nBEGIN sub-command stderr:\n%v", cmdErr.Error())
fmt.Print("END sub-command\n")
}
}
I'm not sure whether we agree on the term 'unit test'. What you want to achieve seems to me
more like a pretty normal test in a program. You probably want to do something like this:
func main() {
	flag.Parse()
	if formatType != text && formatType != json && formatType != hash {
		flag.Usage()
		return
	}
	// ...
}
Sadly, it is not easily possible to extend the flag parser with your own value verifiers, so you have to stick with this for now.
See Intermernet's answer for a solution which defines a custom format type and its validator.
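If you do want a conventional unit test for that check, one option (a sketch, not from the answer above) is to pull the validation into its own function and test it in isolation:
func validFormat(s string) bool {
	return s == text || s == json || s == hash
}

func TestValidFormat(t *testing.T) {
	cases := []struct {
		in   string
		want bool
	}{
		{"text", true},
		{"json", true},
		{"hash", true},
		{"xml", false},
	}
	for _, c := range cases {
		if got := validFormat(c.in); got != c.want {
			t.Errorf("validFormat(%q) = %v, want %v", c.in, got, c.want)
		}
	}
}
main would then call validFormat(formatType) after flag.Parse() and invoke flag.Usage() when it returns false.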