How to test db errors in Go? - unit-testing

I am implementing a storer backed by leveldb (https://github.com/syndtr/goleveldb) in Go. I am new to Go and am trying to figure out how to get test coverage for the condition in the code below where perr != nil. I can test my own errors fine, but I can't figure out how to reliably get leveldb's Put method to return an error.
Mocking out a db just to raise test coverage for a few edge cases seems like a lot of work for not much reward. Is mocking leveldb my only real choice here? If so, what's the recommended mocking framework for Go? If there's another way, what is it?
if err == leveldb.ErrNotFound {
    store.Lock()
    perr := store.ldb.Put(itob(p.ID), p.ToBytes(), nil)
    if perr != nil {
        store.Unlock()
        return &StorerError{
            Message: fmt.Sprintf("leveldb put error could not create puppy %d : %s", p.ID, perr),
            Code:    501,
        }
    }
    store.Unlock()
    return nil
}

Mocking is the generally chosen approach for this kind of test, which is why golang/mock, for instance, has a mockgen command to generate the test code.
mockgen has two modes of operation: source and reflect.
Source mode generates mock interfaces from a source file.
It is enabled by using the -source flag.
Other flags that may be useful in this mode are -imports and -aux_files.
Example:
mockgen -source=foo.go [other options]
Reflect mode generates mock interfaces by building a program that uses reflection to understand interfaces. It is enabled by passing two non-flag arguments: an import path, and a comma-separated list of symbols.
Example:
mockgen database/sql/driver Conn,Driver
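If you'd rather not pull in a framework for one error path, a hand-rolled fake behind a tiny interface also works. Below is a minimal, self-contained sketch of that approach; the putter interface, failingPutter fake, and the cut-down store type are illustrative names, not part of goleveldb:

package store

import (
    "errors"
    "fmt"
    "sync"
    "testing"

    "github.com/syndtr/goleveldb/leveldb/opt"
)

// putter captures the single goleveldb method this code path needs.
// *leveldb.DB satisfies it, so production code only changes the field type.
type putter interface {
    Put(key, value []byte, wo *opt.WriteOptions) error
}

// failingPutter always fails, driving the perr != nil branch.
type failingPutter struct{ err error }

func (f failingPutter) Put(_, _ []byte, _ *opt.WriteOptions) error { return f.err }

// store is a cut-down stand-in for the questioner's type.
type store struct {
    sync.Mutex
    ldb putter
}

func (s *store) create(id int, data []byte) error {
    s.Lock()
    defer s.Unlock()
    if perr := s.ldb.Put([]byte{byte(id)}, data, nil); perr != nil {
        return fmt.Errorf("leveldb put error could not create puppy %d : %s", id, perr)
    }
    return nil
}

func TestCreatePutError(t *testing.T) {
    s := &store{ldb: failingPutter{err: errors.New("simulated failure")}}
    if err := s.create(1, []byte("fido")); err == nil {
        t.Fatal("expected an error when Put fails")
    }
}

The only production change is declaring the ldb field as the interface type; the real *leveldb.DB still satisfies it, and the test forces the Put failure deterministically.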

Related

What should be the easiest way to unit test influxdb queries

I have a service that only makes queries (read / write) to influxDB.
I want to unit test this, but I'm not sure how to do it. I've read a bunch of tutorials about mocking; a lot of them deal with components like go-sqlmock, but as I am using influxDB I could not use it.
I also found other components I've tried, like GoMock or testify, to be over-complicated.
What I'm thinking of doing is creating a repository layer, an interface that declares all the methods I need to run / test, and passing concrete classes with dependency injection.
I think it could work, but is it the easiest way to do it?
I guess having repositories everywhere, even for small services, just for them to be testable, seems over-engineered.
I can give you code if needed, but I think my question is a bit more theoretical than practical: what is the easiest way to mock a custom DB for unit testing?
To expand on @Markus W Mahlberg's answer:
If the goal is to verify that the queries are valid and actually execute against influx, there's no shortcut for actually performing them against influx. These are usually considered "integration" tests. I have found with docker-compose that these tests can be just as reliable as unit tests, and fast enough to be integrated into CI. Having the tests execute in CI also enables engineers to easily run them locally to verify their query changes.
I guess having repositories everywhere, even for small services, just for them to be testable, seems over-engineered.
I have found this to be a pretty polarizing discussion. A test implementation IS a concrete implementation, and it paves the way for reliable, repeatable tests that make it easy to isolate and exercise specific components of your code.
I want to unit test this, but I'm not sure how to do it,
I think this is pretty nuanced; IMO unit testing queries provides negative value. The value comes from using a repository interface that lets your unit tests explicitly configure the responses you would receive from influx, in order to fully exercise your application code. This provides no feedback on influx itself, which is why the integration tests are essential: they verify that your application can validly configure, connect, and query against influx. Otherwise that validation happens implicitly when you deploy your application, at which point the feedback is far more expensive than verifying locally and in CI with integration tests.
I created a diagram to try and illustrate these differences:
Unit tests with a repository are focused on your application code and provide little feedback/value on anything to do with influx. Integration tests are useful for verifying your client (they could be extended to your application, depending on where the tests exercise it, but I prefer to bound them to the client, since you already have static feedback from Go on the interfaces and calls). Finally, as @Markus points out, the step from integration tests to e2e tests is pretty small, and those let you test your full service.
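To make that repository seam concrete, here is a minimal Go sketch (PointWriter and fakeWriter are illustrative names, not part of any influx client library):

package metrics

import "context"

// PointWriter is the seam between application code and the influx client.
// Production code wraps the real client behind it; unit tests use fakeWriter.
type PointWriter interface {
    WritePoint(ctx context.Context, measurement string, fields map[string]interface{}) error
}

// fakeWriter counts calls and returns a configured error, letting unit
// tests exercise the application's error handling without any influx.
type fakeWriter struct {
    calls int
    err   error
}

func (f *fakeWriter) WritePoint(ctx context.Context, measurement string, fields map[string]interface{}) error {
    f.calls++
    return f.err
}

Application code depends only on PointWriter; the integration tests described above then cover the real client against influx.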
By its very definition, if you test your integration with an external resource, we are talking of integration tests, not unit tests. So we have two problems to solve here.
Unit tests
What you typically do is have a data access layer which accepts interfaces, which in turn are easy to mock, letting you unit-test your application logic.
package main

import (
    "errors"
    "fmt"
)

var (
    values   = map[string]string{"foo": "bar", "bar": "baz"}
    Expected = errors.New("Expected error")
)

type Getter interface {
    Get(name string) (string, error)
}

// ErrorGetter implements Getter and always returns an error to test
// the error handling code of the caller.
// Of course, you could (and probably should) use some mocking here in
// order to be able to test various other cases.
type ErrorGetter struct{}

func (e ErrorGetter) Get(name string) (string, error) {
    return "", Expected
}

// MapGetter implements Getter and uses a map as its data source.
// Here you can see that you actually get an advantage: you decouple
// your logic from the data source, making refactoring (and debugging)
// *much* easier when things go wrong.
type MapGetter struct {
    data map[string]string
}

func (m MapGetter) Get(name string) (string, error) {
    if v, ok := m.data[name]; ok {
        return v, nil
    }
    return "", fmt.Errorf("no value found for %s", name)
}

type retriever struct {
    g Getter
}

func (r retriever) retrieve(name string) (string, error) {
    return r.g.Get(name)
}

func main() {
    // Assume this is test code. No tests possible on playground ;)
    bad := retriever{g: ErrorGetter{}}
    s, err := bad.retrieve("baz")
    if s != "" || err == nil {
        panic("Something went seriously wrong")
    }

    // Needs to fail as well, as "baz" is not in values.
    good := retriever{g: MapGetter{values}}
    s, err = good.retrieve("baz")
    if s != "" || err == nil {
        panic("Something went seriously wrong")
    }

    s, err = good.retrieve("foo")
    if s != "bar" || err != nil {
        panic("Something went seriously wrong")
    }
}
In the example above, I actually had to implement two Getters to cover all test cases, since I could not use a mocking library, but you get the picture.
As for the over-engineering: plain and simple, no, that is not over-engineering. It is what I personally call proper craftsmanship. Getting used to it will pay off in the long run. Maybe not in this project, but in one to come.
Integration tests
Dodgy. What I tend to do is make sure my queries are correct before I commit them ;)
In the rare case I really want to verify my queries in CI, for example, I usually create a Makefile which spins up docker(-compose) to provide the stuff I want to integrate against, and then runs the tests.
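On the Go side, a common companion to that Makefile is a build tag that keeps these tests out of the ordinary unit-test run; a sketch (the tag name is arbitrary):

//go:build integration

package metrics

import "testing"

// Run with: go test -tags=integration ./...
// after docker-compose has started the database dependency.
func TestQueriesAgainstRealDB(t *testing.T) {
    // Connect to the instance started by docker-compose and run the
    // real queries here; this file is skipped entirely without the tag.
}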

Coverage test for standard library errors

I've been trying to learn about Go's built-in testing framework and getting proper test coverage.
In one of the files I'm testing I'm only getting ~87% coverage:
coverage: 87.5% of statements
Here's a section of the code covered in the test:
// Check that the working directory is set
if *strctFlags.PtrWorkDir == "" {
    // If the working directory is empty, set it to the current directory
    strTemp, err := os.Getwd()
    if err != nil {
        return "", errors.New("Could not get current working directory")
    }
    *strctFlags.PtrWorkDir = strTemp
} else if stat, err := os.Stat(*strctFlags.PtrWorkDir); err != nil || !stat.IsDir() {
    // Existence check of the work dir
    return "", errors.New("Specified directory \"" + *strctFlags.PtrWorkDir + "\" could not be found or was not a directory")
}

*strctFlags.PtrWorkDir, err = filepath.Abs(*strctFlags.PtrWorkDir)
if err != nil {
    return "", errors.New("Could not determine absolute filepath: " + err.Error())
}
The parts not covered by the test, according to the .out file, are the "if err != nil {}" blocks, i.e. errors returned from standard library calls.
While I think the likelihood of the standard library returning errors here is slim short of hardware failure, it would be good to know that the errors are handled properly in the application. Also, checking returned errors is, to my understanding, idiomatic Go, so I would expect it to be possible to test the error handling properly.
How do people handle testing errors in situations like the above? Is it possible to get 100% coverage, or am I doing or structuring something incorrectly? Or do people skip testing those conditions?
As @flimzy explained in his answer, it is not good to aim for 100% coverage; aim for useful test coverage instead.
That said, you can test the system calls with a slight modification to the code, like this:
package foo

import (
    "errors"
    "os"
)

// osGetWd is a package-level variable holding os.Getwd,
// so tests can swap in a fake.
var osGetWd = os.Getwd

func GetWorkingDirectory() (string, error) {
    strTemp, err := osGetWd() // calls the function variable defined above
    if err != nil {
        return "", errors.New("Could not get current working directory")
    }
    return strTemp, nil
}
And while testing:

package foo

import (
    "errors"
    "testing"
)

func TestGetWdError(t *testing.T) {
    // Mocked function for os.Getwd
    myGetWd := func() (string, error) {
        return "", errors.New("simulated error")
    }

    // Point the variable at the mock, restoring the original afterwards
    orig := osGetWd
    osGetWd = myGetWd
    defer func() { osGetWd = orig }()

    // This will now return the simulated error
    if _, err := GetWorkingDirectory(); err == nil {
        t.Fatal("expected an error")
    }
}
This will help you achieve 100% coverage.
There are many non-hardware-failure scenarios in which most standard library functions might fail. Whether you care to test those is another question. os.Getwd(), for instance, might be expected to fail if the working directory doesn't exist (and you could go to the effort of testing this scenario).
What's probably more useful (and a better testing approach in general) would be to mock those calls, so that you can trigger errors during testing, just so that you can exercise your error-case code.
But please, for the love of code, don't aim for 100% test coverage. Aim for useful test coverage. It is possible to make a tool report 100% coverage without covering useful cases, and it is possible to cover useful cases without making the tool report 100%.
Besides, true 100% coverage is a literal impossibility in most programs (even the simple "Hello World!"), so don't aim for it.

Mocking method from golang package

I have been unable to find a solution for mocking methods from Go packages.
For example, my project has code that attempts to recover when os.Getwd() returns an error. The easiest way I can think of to unit-test this is to mock os.Getwd() to return an error, and verify that the code behaves accordingly.
I tried using testify, but it does not seem to be possible.
Does anyone have any experience with that?
My own solution was to take the method as an argument, which allows injecting a "mock" when testing. Additionally, create an exported method as a public facade and an unexported one for testing.
Example:
// Foo is the exported facade, wired to the real os.Getpid.
func Foo() int {
    return foo(os.Getpid)
}

// foo takes its dependency as a parameter, so a test can inject a stub.
func foo(getpid func() int) int {
    return getpid()
}
Taking a look at the tests for os.Getwd could give you some examples of how you could test your code. Look for the functions TestChdirAndGetwd and TestProgWideChdir.
From reading those, it seems that the tests create temporary folders.
So a pragmatic approach would be to create temporary folders, like the tests mentioned above do, then break them so that os.Getwd returns an error for you to catch in your test.
Just be careful doing these operations, as they can mess up your system. I'd suggest testing in a lightweight container or a virtual machine.
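For illustration, a sketch of that "break the directory" idea using Go's testing helpers, confined to the test's own temporary folder so nothing on the system is harmed (whether os.Getwd actually fails after the directory is removed is OS-dependent, hence the hedged skip):

package foo

import (
    "os"
    "testing"
)

func TestGetwdAfterDirRemoved(t *testing.T) {
    orig, err := os.Getwd()
    if err != nil {
        t.Fatal(err)
    }
    defer os.Chdir(orig) // restore the working directory for other tests

    dir := t.TempDir()
    if err := os.Chdir(dir); err != nil {
        t.Fatal(err)
    }
    if err := os.RemoveAll(dir); err != nil {
        t.Fatal(err)
    }
    if _, err := os.Getwd(); err == nil {
        t.Skip("os.Getwd did not fail on this platform")
    }
}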
I know this is a bit late, but here is how you can do it.
Testing a DAL, system calls, or package calls is usually difficult. My approach to this problem is to push the system function calls behind an interface and then mock the functions of that interface. For example:
type SystemCalls interface {
    Getwd() (string, error)
}

type SystemCallsImplementation struct{}

func (SystemCallsImplementation) Getwd() (string, error) {
    return os.Getwd()
}

func MyFunc(sysCall SystemCalls) error {
    _, err := sysCall.Getwd()
    return err
}
With this you inject the interface containing the system calls into your function. Now you can easily create a mock implementation of the interface for testing, like:
type MockSystemCallsImplementation struct {
    err error
}

func (m MockSystemCallsImplementation) Getwd() (string, error) {
    return "", m.err // set err to nil or some value in your test function
}
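A test can then pass the mock and assert that the error surfaces; a sketch, assuming the types above are in the same package (imports of errors and testing omitted, as in the snippets above):

func TestMyFuncGetwdError(t *testing.T) {
    mock := MockSystemCallsImplementation{err: errors.New("simulated failure")}
    if err := MyFunc(mock); err == nil {
        t.Fatal("expected MyFunc to propagate the Getwd error")
    }
}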
Hope this answers your question.
This is a limitation of the Go compiler: its developers don't want to allow any hooks or monkey patching. If unit tests are important to you, then you have to pick a method of "source code poisoning". All these methods come down to the following:
You can't use global packages directly.
You have to create an isolated version of the method and test it.
The production version of the method combines the isolated version with the global package.
But the best solution is to avoid the Go language completely (if possible).

Exclude groovy slf4j logging from condition coverage in Sonar with Jacoco

We are using SonarQube 5.1 with the Jacoco maven plugin 0.7.4, and all of our slf4j logging statements, such as log.debug('Something happened'), show that only 1 of 2 branches is covered. I understand that this is because slf4j internally does an "if debug" check, and that's great, but we don't want it to throw off our numbers. We aren't interested in testing slf4j, and we'd rather not run every test multiple times at different logging levels.
So, how can we tell Sonar and/or Jacoco to exclude these lines from coverage? Both have configurable file exclusions, but from what I can tell those only exclude your own classes from coverage (using the target dir), not imported libraries. I tried adding 'groovy.util.logging.*' to the exclusion list anyway, but it didn't do anything.
The question "logger.isDebugEnabled() is killing my code coverage. I'm planning to exclude it while running cobertura" is similar, and suggests that for Cobertura the 'ignore' property should be used instead of 'exclude'. I don't see anything like that for Jacoco or Sonar in the settings or documentation.
EDIT:
Example image from Eclipse attached, after running Jacoco coverage (Sonar shows the same thing in their GUI). This is actual code from one of our classes.
EDIT 2:
We are using the Slf4j annotation. Docs here:
http://docs.groovy-lang.org/next/html/gapi/groovy/util/logging/Slf4j.html
This local transform adds a logging ability to your program using LogBack logging. Every method call on an unbound variable named log will be mapped to a call to the logger. For this, a log field will be inserted in the class. If the field already exists, the usage of this transform will cause a compilation error. The method name will be used to determine what to call on the logger.
log.name(exp)
is mapped to
if (log.isNameLoggable()) {
    log.name(exp)
}
Here name is a placeholder for info, debug, warning, error, etc. If the expression exp is a constant or only a variable access, the method call will not be transformed. But this will still cause a call on the injected logger.
Hopefully this clarifies what's going on. Our log statements become two-branch ifs to avoid expensive string building for log levels that aren't enabled (a common practice, as far as I know). But that means that to guarantee coverage of all these branches, we would have to run every test repeatedly for every logging level.
I did not find a general solution for excluding it, but if your codebase allows it, you can wrap your logging statements in a method carrying an annotation with "Generated" in its name.
A simple example:
package org.example.logging

import groovy.transform.Generated
import groovy.util.logging.Slf4j

@Slf4j
class Greeter {
    void greet(name) {
        logDebug("called greet for ${name}")
        println "Hello, ${name}!"
    }

    @Generated
    private logDebug(message) {
        log.debug message
    }
}
Unfortunately javax.annotation.Generated is not suitable, because it only has a retention of SOURCE; therefore I (ab)used groovy.transform.Generated here, but you can easily create your own annotation for that purpose.
I found that solution here: How would I add an annotation to exclude a method from a jacoco code coverage report?
UPDATE: In Groovy you can solve it most elegantly with a trait:
package org.example.logging

import groovy.transform.Generated
import groovy.util.logging.Slf4j

@Slf4j
trait LoggingTrait {
    @Generated
    void logDebug(String message) {
        log.debug message
    }
}
...and then...
package org.example.logging

import groovy.util.logging.Slf4j

@Slf4j
class Greeter implements LoggingTrait {
    void greet(name) {
        logDebug "called greet for ${name}"
        println "Hello, ${name}!"
    }
}
Unfortunately the property log is interpreted as a property of Greeter, not of LoggingTrait, so you must attach @Slf4j to both the trait and the class implementing it.
Nevertheless, doing so gives you the expected logger - the one of the implementing class:
14:25:09.932 [main] DEBUG org.example.logging.Greeter - called greet for world

How to use "Pex and Moles" library with Entity Framework?

This is a tough one, because not too many people use Pex & Moles, or so I think (even though Pex is a really great product - much better than any other unit testing tool).
I have a Data project with a very simple model containing just one entity (DBItem). I've also written a DBRepository within this project that manipulates this EF model. The repository has a method called GetItems() that returns a list of business layer items (BLItem) and looks similar to this (simplified example):
public IList<BLItem> GetItems()
{
    using (var ctx = new EFContext("name=MyWebConfigConnectionName"))
    {
        DateTime limit = DateTime.Today.AddDays(-10);
        IList<DBItem> result = ctx.Items.Where(i => i.Changed > limit).ToList();
        return result.ConvertAll(i => i.ToBusinessObject());
    }
}
So now I'd like to create some unit tests for this particular method. I'm using Pex & Moles. I created my moles and stubs for my EF object context.
I would like to write a parametrised unit test (I know I've written my production code first, but I had to, since I'm testing Pex & Moles) that verifies that this method returns a valid list of items.
This is my test class:
[PexClass]
public class RepoTest
{
    [PexMethod]
    public void GetItemsTest(ObjectSet<DBItem> items)
    {
        // Redirect the moled EFContext constructor to a stub context.
        MEFContext.ConstructorString = (@this, name) =>
        {
            var mole = new SEFContext();
        };

        DBRepository repo = new DBRepository();
        IList<BLItem> result = repo.GetItems();
        IList<DBItem> manual = items.Where(i => i.Changed > DateTime.Today.AddDays(-10)).ToList();

        if (result.Count != manual.Count)
        {
            throw new Exception();
        }
    }
}
Then I run Pex explorations for this particular parametrised unit test, but I get a "path bounds exceeded" error. Pex starts this test by providing null to the test method (so items = null). This is the code that Pex is running:
[Test]
[PexGeneratedBy(typeof(RepoTest))]
[Ignore("the test state was: path bounds exceeded")]
public void DBRepository_GetTasks22301()
{
    this.GetItemsTest((ObjectSet<DBItem>)null);
}
This was the additional comment provided by Pex:
The test case ran too long for these inputs, and Pex stopped the analysis. Please notice: The method Oblivious.Data.Test.Repositories.TaskRepositoryTest.b__0 was called 50 times; please check that the code is not stuck in an infinite loop or recursion. Otherwise, click on 'Set MaxStack=200', and run Pex again.
Update attribute [PexMethod(MaxStack = 200)]
Question
Am I doing this the correct way, or not? Should I use an EFContext stub instead? Do I have to add additional attributes to the test method so the Moles host will be running (I'm not sure it does now)? I'm running just Pex & Moles - no VS test or nUnit or anything else.
I guess I should probably set some limit on how many items Pex should provide for this particular test method.
Moles is not designed to test the parts of your application that have external dependencies (e.g. file access, network access, database access, etc.). Instead, Moles allows you to mock these parts of your app so that you can do true unit testing on the parts that don't have external dependencies.
So I think you should just mock your EF objects and queries, e.g. by creating in-memory lists and having query methods return fake data from those lists based on whatever criteria are relevant.
I am just getting to grips with Pex also... my issues surrounded me wanting to use it with Moq ;)
Anyway...
I have some methods similar to yours that had the same problem. When I increased the max, the issues went away; presumably Pex was satisfied that it had sufficiently explored the branches. I have methods where I have had to increase the timeout on the code contract validation as well.
One thing that you should probably be doing, though, is passing in all the dependent objects as parameters, i.e. don't instantiate the repo in the method but pass it in.
A general problem you have is that you are instantiating big objects in your method. I do the same in my DAL classes, but then I am not trying to unit test them in isolation. I build up datasets and use those to test my data access code against.
I use Pex on my business logic and objects.
If I were to try and test my DAL code, I'd have to use IoC to pass the data context into the methods - which would then make testing possible, as you can mock the data context.
You should use Entity Framework Repository Pattern: http://www.codeproject.com/KB/database/ImplRepositoryPatternEF.aspx