I've been trying to learn about Go's built-in testing framework and getting proper test coverage.
In one of the files I'm testing I'm only getting ~87% coverage:
coverage: 87.5% of statements
Here's a section of the code covered in the test:
// Check that the working directory is set
if *strctFlags.PtrWorkDir == "" {
    // If the working directory is empty, set it to the current directory
    strTemp, err := os.Getwd()
    if err != nil {
        return "", errors.New("Could not get current working directory")
    }
    *strctFlags.PtrWorkDir = strTemp
} else if stat, err := os.Stat(*strctFlags.PtrWorkDir); err != nil || !stat.IsDir() {
    // Existence check of the work dir
    return "", errors.New("Specified directory \"" + *strctFlags.PtrWorkDir + "\" could not be found or was not a directory")
}

*strctFlags.PtrWorkDir, err = filepath.Abs(*strctFlags.PtrWorkDir)
if err != nil {
    return "", errors.New("Could not determine absolute filepath: " + err.Error())
}
According to the .out file, the parts not covered by the test are the "if err != nil {}" blocks, which handle errors returned from standard library calls.
While I think the likelihood of the standard library returning errors here is slim short of hardware failure, I would still like to know that those errors are handled properly in the application. Also, checking returned errors is, to my understanding, idiomatic Go, so it would be good to be able to test the error handling properly.
How do people handle testing errors like the situations above? Is it possible to get 100% coverage, or am I doing or structuring something incorrectly? Or do people skip testing those conditions?
As #flimzy explained in his answer, don't aim for 100% coverage; aim for useful test coverage.
That said, you can test the system calls with a slight modification to the code, like this:
package foo

import (
    "errors"
    "os"
)

// osGetWd is a package-level variable, so tests can swap in a stub for os.Getwd.
var osGetWd = os.Getwd

func GetWorkingDirectory() (string, error) {
    strTemp, err := osGetWd() // Using the variable defined above
    if err != nil {
        return "", errors.New("Could not get current working directory")
    }
    return strTemp, nil
}
And while testing:
package foo

import (
    "errors"
    "testing"
)

func TestGetWdError(t *testing.T) {
    // Mocked function for os.Getwd
    myGetWd := func() (string, error) {
        return "", errors.New("Simulated error")
    }
    // Swap the var to this mocked function
    osGetWd = myGetWd
    // This must now return an error
    if _, err := GetWorkingDirectory(); err == nil {
        t.Error("expected an error, got nil")
    }
}
This will help you to achieve 100% coverage
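One caveat: osGetWd is package state, so a test that replaces it should restore the original, or later tests in the package will still see the stub. A minimal sketch of the same test with a defer-based restore:

func TestGetWdErrorWithRestore(t *testing.T) {
    // Restore the real os.Getwd when this test finishes, so later
    // tests see the original behavior again.
    defer func() { osGetWd = os.Getwd }()

    osGetWd = func() (string, error) {
        return "", errors.New("Simulated error")
    }
    if _, err := GetWorkingDirectory(); err == nil {
        t.Error("expected an error, got nil")
    }
}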
There are many non-hardware failure scenarios where most standard library functions might fail. Whether you care to test those is another question. For os.Getwd(), for instance, I might expect that call to fail if the working directory doesn't exist (and you could go to the effort of testing this scenario).
What's probably more useful (and a better testing approach in general), would be to mock those calls, so that you can trigger errors during testing, just so that you can test your error-case code.
But please, for the love of code, don't aim for 100% test coverage. Aim for useful test coverage. It is possible to make a tool report 100% coverage without covering useful cases, and it is possible to cover useful cases without making the tool report 100%.
But true 100% coverage is a literal impossibility in most programs (even the simple "Hello World!"). So don't aim for it.
Related
I'm building a UI for CLI apps. I completed the functions, but I couldn't figure out how to test them.
Repo: https://github.com/erdaltsksn/cui
// Success prints a success message and exits with status 0
func Success(message string) {
    color.Success.Println("√", message)
    os.Exit(0)
}

// Error prints an error message and exits with status 1
func Error(message string, err ...error) {
    color.Danger.Println("X", message)
    if len(err) > 0 {
        for _, e := range err {
            fmt.Println("  ", e.Error())
        }
    }
    os.Exit(1)
}
I want to write unit tests for these functions. The problem is that the functions contain both printing and os.Exit(), and I couldn't figure out how to write a test for both.
This topic: How to test a function's output (stdout/stderr) in unit tests helped me test the print part. I still need to handle os.Exit().
My Solution for now:
// captureOutput redirects the log package's output to a buffer while f runs.
// Note: it only captures log output, not writes made directly to stdout.
func captureOutput(f func()) string {
    var buf bytes.Buffer
    log.SetOutput(&buf)
    f()
    log.SetOutput(os.Stderr)
    return buf.String()
}
func TestSuccess(t *testing.T) {
    type args struct {
        message string
    }
    tests := []struct {
        name   string
        args   args
        output string
    }{
        {"Add test cases.", args{message: "my message"}, "\x1b[1;32m"},
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            want := tt.output
            got := captureOutput(func() {
                cui.Success(tt.args.message)
            })
            if got != want {
                t.Error("Got:", got, ",", "Want:", want)
            }
        })
    }
}
The usual answer in TDD is that you divide your function into two parts: one part that is easy to test but not tightly coupled to specific file handles or to a specific implementation of os.Exit; another part that is tightly coupled to these things, but so simple it obviously has no deficiencies.
Your "unit tests" are mistake detectors that measure the first part.
The second part you write once, inspect it "by hand", and then leave it alone. The idea here being that things are so simple that, once implemented correctly, they don't need to change.
// Warning: untested code ahead
func Foo_is_very_stable() {
    bar_is_easy_to_test(os.Stdin, os.Stdout, os.Exit)
}

func bar_is_easy_to_test(in *os.File, out *os.File, exit func(int)) {
    // Do the complicated things here.
}
Now, we are cheating a little bit -- os.Exit is special magic that never returns, but bar_is_easy_to_test doesn't really know that.
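To see why the func(int) parameter pays off, here is a hedged sketch of a test that passes a fake exit function and records the code instead of terminating the test binary (it assumes bar's complicated logic ends up calling exit(1) in the scenario under test):

// Also an untested sketch.
func TestBarCallsExit(t *testing.T) {
    gotCode := -1
    fakeExit := func(code int) {
        // Unlike os.Exit, this fake returns, so code after the
        // exit call keeps running inside the test.
        gotCode = code
    }
    bar_is_easy_to_test(os.Stdin, os.Stdout, fakeExit)
    if gotCode != 1 {
        t.Errorf("want exit code 1, got %d", gotCode)
    }
}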
Another design that is a bit more fair is to put the complicated code into a state machine. The state machine decides what to do, and the host invoking the machine decides how to do that....
// More untested code
for {
    switch state_machine.next() {
    case OUT:
        println(state_machine.line())
        state_machine.onOut()
    case EXIT:
        os.Exit(state_machine.exitCode())
    }
}
Again, you get a complicated piece that is easy to test (the state machine) and a much simpler piece that is stable and easily verified by inspection.
This is one of the core ideas underlying TDD - that we deliberately design our code in such a way that it is "easy to test". The justification for this is the claim that code that is easy to test is also easy to maintain (because mistakes are easily detected and because the designs themselves are "cleaner").
Recommended viewing
Boundaries, by Gary Bernhardt
Building Protocol Libraries..., by Cory Benfield
What you have there is called a "side effect": a situation where the execution of your application reaches beyond its environment, its address space. And the thing is, you don't test side effects. It is not always possible, and when it is, it is unreasonably complicated and ugly.
The basic idea is to have your side effects, like CLI output or os.Exit() (or network connections, or file access), decoupled from your main body of logic. There are plenty of ways to do it; the entire "software design" discipline is devoted to that, and #VoiceOfUnreason gives a couple of viable examples.
In your example I would go with wrapping the side effects in functions and arranging some way to inject dependencies into Success() & Error(). If you want to keep those two as plain functions, then it's either a function argument or a global variable holding a function for exiting (as per #Peter's comment), as sketched below. But I'd recommend going the OO way, employing some patterns, and achieving much greater flexibility for your lib.
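For example, a minimal sketch of the global-variable variant (exitFunc is a name I'm inventing for illustration; the color calls are as in the question):

// exitFunc defaults to os.Exit; tests swap it out so that calling
// Success doesn't terminate the test binary.
var exitFunc = os.Exit

// Success prints a success message and exits with status 0.
func Success(message string) {
    color.Success.Println("√", message)
    exitFunc(0)
}

A test can then record the exit code instead of dying:

func TestSuccessExitCode(t *testing.T) {
    code := -1
    exitFunc = func(c int) { code = c }
    defer func() { exitFunc = os.Exit }()

    Success("my message")

    if code != 0 {
        t.Errorf("want exit code 0, got %d", code)
    }
}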
I have a service that only makes queries (read / write) to influxDB.
I want to unit test this, but I'm not sure how to do it. I've read a bunch of tutorials about mocking. A lot of them deal with components like go-sqlmock, but since I am using influxDB, I could not use it.
I also found other components I tried, like gomock or testify, to be over-complicated.
What I'm thinking of doing is creating a repository layer: an interface declaring all the methods I need to run / test, with concrete implementations passed in via dependency injection.
I think it could work, but is it the easiest way to do it?
Having repositories everywhere, even for small services, just so they are testable, seems over-engineered.
I can give you code if needed, but I think my question is more theoretical than practical: what is the easiest way to mock a custom DB for unit testing?
To expand on #Markus W Mahlberg's answer:
If the goal is to verify the queries are valid and actually execute against influx there's no shortcut for actually performing these against influx. These are usually considered to be "integration" tests. I have found with docker-compose that these tests can be just as reliable as unit tests, and fast enough to be integrated into CI. Having the tests execute in CI enables local engineers to easily run these tests to verify their query changes as well.
I guess having Repositories everywhere, even for small services, just for them to be testable, seems to be over-engineered.
I have found this to be a pretty polarizing discussion. A test implementation IS a concrete implementation, and it paves the way for reliable, repeatable tests that support easily isolating and exercising specific components of your code.
I want to unit test this, but I'm not sure how to do it,
This is pretty nuanced; IMO unit testing the queries themselves provides negative value. The value comes from using a repository interface that lets your unit tests explicitly configure the responses you would receive from influx, in order to fully exercise your application code. This provides no feedback on influx itself, which is why the integration tests are essential: they verify that your application can validly configure, connect, and query against influx. Otherwise, that validation implicitly happens when you deploy your application, at which point the feedback is far more expensive than verifying it locally and in CI with integration tests.
I created a diagram to try and illustrate these differences: unit tests with a repository are focused on your application code and provide little feedback/value on anything to do with influx, while integration tests are useful for verifying the client. (They could be extended to exercise your application as well, depending on where the tests cut in, but I prefer to bound them to the client, since you already get static feedback from Go on the interfaces and calls.) Finally, as #Markus points out, the step from integration tests to e2e tests is pretty small, and those let you test your full service.
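To make the repository idea concrete, here is a hedged sketch (all names are invented for illustration; nothing here comes from the influx client API):

// Repository is the narrow interface your application code depends on.
type Repository interface {
    WriteMeasurement(name string, value float64) error
    LastMeasurement(name string) (float64, error)
}

// fakeRepo is a hand-rolled test double with configurable responses,
// letting unit tests exercise error paths without touching influx.
type fakeRepo struct {
    last    float64
    lastErr error
}

func (f *fakeRepo) WriteMeasurement(name string, value float64) error { return nil }
func (f *fakeRepo) LastMeasurement(name string) (float64, error)      { return f.last, f.lastErr }

Your service takes a Repository; production wires in the real influx-backed implementation, and unit tests hand it a *fakeRepo configured with exactly the responses (including errors) you want to exercise.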
By its very definition, if you test your integration with an external resource, we are talking about integration tests, not unit tests. So we have two problems to solve here.
Unit tests
What you typically do is have a data access layer which accepts interfaces, which in turn are easy to mock, so you can unit-test your application logic.
package main

import (
    "errors"
    "fmt"
)

var (
    values   = map[string]string{"foo": "bar", "bar": "baz"}
    Expected = errors.New("Expected error")
)

type Getter interface {
    Get(name string) (string, error)
}

// ErrorGetter implements Getter and always returns an error to test
// the error handling code of the caller.
// ofc, you could (and prolly should) use some mocking here in order
// to be able to test various other cases
type ErrorGetter struct{}

func (e ErrorGetter) Get(name string) (string, error) {
    return "", Expected
}

// MapGetter implements Getter and uses a map as its datasource.
// Here you can see that you actually get an advantage: you decouple
// your logic from the data source, making refactoring (and debugging)
// **much** easier WTSHTF.
type MapGetter struct {
    data map[string]string
}

func (m MapGetter) Get(name string) (string, error) {
    if v, ok := m.data[name]; ok {
        return v, nil
    }
    return "", fmt.Errorf("No value found for %s", name)
}

type retriever struct {
    g Getter
}

func (r retriever) retrieve(name string) (string, error) {
    return r.g.Get(name)
}

func main() {
    // Assume this is test code. No tests possible on playground ;)
    bad := retriever{g: ErrorGetter{}}
    s, err := bad.retrieve("baz")
    if s != "" || err == nil {
        panic("Something went seriously wrong")
    }

    // Needs to fail as well, as "baz" is not in values
    good := retriever{g: MapGetter{values}}
    s, err = good.retrieve("baz")
    if s != "" || err == nil {
        panic("Something went seriously wrong")
    }

    s, err = good.retrieve("foo")
    if s != "bar" || err != nil {
        panic("Something went seriously wrong")
    }
}
In the example above, I actually had to implement two Getters to cover all test cases, since I could not use a mocking library, but you get the picture.
As for the over-engineering: plain and simple, no, that is not over-engineering. It is what I personally call proper craftsmanship. It will pay in the long run to get used to it. Maybe not in this project, but in one to come.
Integration tests
Dodgy. What I tend to do is to make sure my queries are correct before I commit them ;)
In the rare case I really want to verify my queries in a CI for example, I usually create a Makefile which in turn spins up a docker(-compose) which provides the stuff I want to integrate against and then runs the tests.
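On the Go side, a common companion pattern (a sketch under my own assumptions, not from the answer above) is to guard the integration tests with a build tag and an address that the Makefile / docker-compose target provides:

//go:build integration

package store

import (
    "os"
    "testing"
)

func TestQueriesAgainstRealInflux(t *testing.T) {
    // INFLUX_ADDR is an assumed env var, set by the Makefile target
    // after docker-compose brings the database up.
    addr := os.Getenv("INFLUX_ADDR")
    if addr == "" {
        t.Skip("INFLUX_ADDR not set; skipping integration test")
    }
    // ... connect to addr and run the real queries here ...
}

Running go test -tags integration ./... from the Makefile target executes them, while a plain go test keeps the unit stage fast.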
I am implementing a storer backed by leveldb (https://github.com/syndtr/goleveldb) in Go. I am new to Go and am trying to figure out how to get test coverage for the condition in the code below where perr != nil. I can test my own errors fine, but I can't figure out how to reliably get the Put method of leveldb to return an error.
Mocking out a DB just to raise test coverage for a few edge cases seems like a lot of work for not much reward. Is mocking leveldb my only real choice here? If so, what's the recommended mocking framework for Go? If there's another way, what is it?
if err == leveldb.ErrNotFound {
    store.Lock()
    perr := store.ldb.Put(itob(p.ID), p.ToBytes(), nil)
    if perr != nil {
        store.Unlock()
        return &StorerError{
            Message: fmt.Sprintf("leveldb put error could not create puppy %d : %s", p.ID, perr),
            Code:    501,
        }
    }
    store.Unlock()
    return nil
}
Mocking is the generally chosen approach for this kind of test, which is why golang/mock, for instance, provides a mockgen command to generate the test code.
mockgen has two modes of operation: source and reflect.
Source mode generates mock interfaces from a source file.
It is enabled by using the -source flag.
Other flags that may be useful in this mode are -imports and -aux_files.
Example:
mockgen -source=foo.go [other options]
Reflect mode generates mock interfaces by building a program
that uses reflection to understand interfaces.
It is enabled by passing two non-flag arguments: an import path, and a
comma-separated list of symbols.
Example:
mockgen database/sql/driver Conn,Driver
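Applied to the leveldb question, the first step is to depend on an interface rather than on *leveldb.DB directly; mockgen (or a hand-written stub, as in this sketch with invented names) can then supply a failing implementation. The Put signature below mirrors goleveldb's:

package store

import (
    "errors"

    "github.com/syndtr/goleveldb/leveldb/opt"
)

// kvPutter is the narrow slice of *leveldb.DB the storer needs;
// *leveldb.DB satisfies it without any changes.
type kvPutter interface {
    Put(key, value []byte, wo *opt.WriteOptions) error
}

// failingPutter is a hand-written stub that forces the perr != nil branch.
type failingPutter struct{}

func (failingPutter) Put(key, value []byte, wo *opt.WriteOptions) error {
    return errors.New("simulated put failure")
}

If store.ldb is typed as kvPutter, a test can construct the store with failingPutter{} and assert that the Put error comes back wrapped in the expected StorerError.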
In the Golang project under test, there's a method that loads a JSON config file into a variable. Its code is like this:
// Load the JSON config file
func Load(configFile string, outputObj interface{}) *errors.ErrorSt {
    // Read the config file
    jsonBytes, err := ioutil.ReadFile(configFile)
    if err != nil {
        fmt.Println(err.Error())
        return errors.File().AddDetails(err.Error())
    }
    // Parse the config
    if err := json.Unmarshal(jsonBytes, outputObj); err != nil {
        return errors.JSON().AddDetails("Could not parse " + configFile + ": " + err.Error())
    }
    return nil
}
I wish to test it, but I don't know if I should create a fake JSON file for the test cases, or just mock the whole function. My Java background has me leaning towards the latter.
Exploring that, I found that the testify framework I'm using has a package for mocking methods, but what I'm attempting to mock doesn't belong to an interface (the pitfalls of non-OOP languages!!).
There are a couple of ways to do it. It's certainly not unusual to have a sample data file for testing the loading and parsing of a file (you'll find this in places in the standard library).
It's also a pretty common practice for a function like this to take an io.Reader rather than a file path, so that in testing you can just pass in e.g. a bytes.Reader to effectively "mock" the file while testing everything else.
Which method to use (or both, if you choose) depends on your use case and design objectives; switching to an io.Reader gives you more flexibility, but only you know whether that flexibility has any value in context. If not, just keep a test file along with your tests and read that.
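A hedged sketch of the io.Reader variant (I'm simplifying the return type to a plain error; the question's errors.ErrorSt would slot in the same way):

package config

import (
    "encoding/json"
    "fmt"
    "io"
)

// Load parses JSON config from any reader: an *os.File in production,
// a strings.Reader or bytes.Reader in tests.
func Load(r io.Reader, outputObj interface{}) error {
    jsonBytes, err := io.ReadAll(r)
    if err != nil {
        return fmt.Errorf("could not read config: %w", err)
    }
    if err := json.Unmarshal(jsonBytes, outputObj); err != nil {
        return fmt.Errorf("could not parse config: %w", err)
    }
    return nil
}

In tests, Load(strings.NewReader(`{"port":8080}`), &cfg) exercises the happy path with no file on disk, and iotest.ErrReader from testing/iotest gives you the read-error branch.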
I have unit tests for most of our code, but I cannot figure out how to generate unit test coverage for certain code in main() in the main package.
The main function is pretty simple. It is basically a select block: it reads flags, then either calls another function / executes something, or simply prints help on screen. However, if command-line options are not set correctly, it exits with various error codes. Hence the need for sub-process testing.
I tried the sub-process testing technique, but modified the code so that it includes a flag for coverage:
cmd := exec.Command(os.Args[0], "-test.run=TestMain", "-test.coverprofile=/vagrant/ucover/coverage2.out")
Here is original code: https://talks.golang.org/2014/testing.slide#23
Explanation of above slide: http://youtu.be/ndmB0bj7eyw?t=47m16s
But it doesn't generate a cover profile, and I haven't been able to figure out why not. It does generate a cover profile for the main process executing the tests, but any code executed in the sub-process is, of course, not marked as executed.
I am trying to achieve as much code coverage as possible, and I am not sure if I am missing something, if there is an easier way to do this, or if it is just not possible.
Any help is appreciated.
I went with another approach, which didn't involve refactoring main(); see this commit:
I use a global (unexported) variable:
var args []string
And then in main(), I use os.Args unless the private var args was set:
a := os.Args[1:]
if args != nil {
    a = args
}
flag.CommandLine.Parse(a)
In my test, I can set the parameters I want:
args = []string{"-v", "-audit", "_tests/p1/conf/gitolite.conf"}
main()
And I still achieve 100% code coverage, even over main().
I would factor the logic that needs to be tested out of main():
func main() {
    start(os.Args)
}

func start(args []string) {
    // old main() logic
}
This way you can unit-test start() without mutating os.Args.
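A small extension of that idea (my sketch, not part of the answer above) has start report an exit code, leaving os.Exit as the only unexercised line in main():

func main() {
    os.Exit(start(os.Args))
}

// start holds the old main() logic and returns the intended exit
// code instead of calling os.Exit itself.
func start(args []string) int {
    fs := flag.NewFlagSet(args[0], flag.ContinueOnError)
    verbose := fs.Bool("v", false, "verbose output")
    if err := fs.Parse(args[1:]); err != nil {
        return 2
    }
    _ = verbose // ... real work here ...
    return 0
}

// In the test file:
func TestStartBadFlag(t *testing.T) {
    if got := start([]string{"myprog", "-no-such-flag"}); got != 2 {
        t.Errorf("want exit code 2, got %d", got)
    }
}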
Using #VonC's solution with Go 1.11, I found I had to reset flag.CommandLine in each test that redefines the flags, to avoid a "flag redefined" panic:
for _, check := range checks {
    t.Run("flagging "+check.arg, func(t *testing.T) {
        flag.CommandLine = flag.NewFlagSet(cmd, flag.ContinueOnError)
        args = []string{check.arg}
        main()
    })
}