I am writing unit tests for my golang code, and there are a couple of methods that I would like to be ignored when coverage is calculated. Is this possible? If so, how?
One way to do it would be to put the functions you don't want tested in a separate Go file and use a build tag to keep that file out of test builds. For example, I sometimes do this with applications where I have a main.go file with the main function, maybe a usage function, etc., that don't get tested. Then you can add a test tag or something, run go test -v -cover -tags test, and the main file might look something like:
// +build !test

package main
func main() {
// do stuff
}
func usage() {
// show some usage info
}
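Note that the constraint line must start with // +build (with the space) and be followed by a blank line, otherwise the toolchain treats it as a package doc comment and ignores it. On Go 1.17 and later the preferred spelling is the //go:build form, and gofmt keeps the two lines in sync:
//go:build !test
// +build !test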
We have a special text file on an HTTP server that contains the filenames and test functions we want to skip when our golang tests run.
I must build something which downloads that test file, parses the file names and test functions that should be skipped, and then finally runs our go tests and properly skips the test functions found in the input file.
What's the right way to make this work in golang?
(I realize this sounds like an unusual way to skip, but we really want to make this work as I have described for reasons which are out of context to this question.)
You can skip test cases with the (*testing.T).Skip() method. You can download the test file in an init function in the Go test file, then parse and load the function names into a map or slice. Before running each case, check whether the test case is in the skip list and skip it if required.
// Pseudo-code -> foo_test.go
var skipCases map[string]bool
func init() {
// download and parse test file.
// add the test case names that need to be skipped to the skipCases map
}
func TestFoo(t *testing.T) {
if skipCases["TestFoo"] {
t.Skip()
}
// else continue testing
}
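A minimal sketch of that init function, assuming the server returns one test-function name per line (the URL and the file format are assumptions):
// foo_test.go (sketch)
package foo

import (
	"bufio"
	"net/http"
)

var skipCases = map[string]bool{}

func init() {
	// download the skip file; the URL here is hypothetical
	resp, err := http.Get("http://example.com/skip-list.txt")
	if err != nil {
		panic(err) // or log the error and skip nothing
	}
	defer resp.Body.Close()

	// record one test-function name per line
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		if name := scanner.Text(); name != "" {
			skipCases[name] = true
		}
	}
}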
I have a data-driven test method that looks something like this:
[TestCaseSource(typeof(TestDataGetter), nameof(TestDataGetter.GetUniqueAccessionNumbersAndLoginsForToolbarItems))]
public void TCID37452(string accessionNumber, string loginId)
{
//run test steps
}
When this gets populated in Visual Studio Test Explorer, I see multiple test methods with the appropriate test data that was retrieved from TestDataGetter.GetUniqueAccessionNumbersAndLoginsForToolbarItems()
They look something like this in the test explorer:
TCID37452("x", "123")
TCID37452("y", "124")
TCID37452("z", "125")
I would like to run one of these methods with the test data using NUnit command line. However, I cannot figure out how to do this. It seems as though I can only run by test method name, but I can't specify the parameter values. This results in all of the test iterations being executed and not just one.
Thanks in advance for your help.
My tests aren't in the same package as my code. I find this a less cluttered way of organising a codebase with a lot of test files, and I've read that it's a good idea because it limits tests to interacting with the package via its public API.
So it looks something like this:
api_client:
Client.go
ArtistService.go
...
api_client_tests
ArtistService.Events_test.go
ArtistService.Info_test.go
UtilityFunction.go
...
I can type go test bandsintown-api/api_client_tests -cover
and see 0.181s coverage: 100.0% of statements. But that's actually just coverage over my UtilityFunction.go (as I saw when I ran go test bandsintown-api/api_client_tests -coverprofile=cover.out and go tool cover -html=cover.out).
Is there any way to get the coverage for the actual api_client package under test, without bringing it all into the same package?
As mentioned in the comments, you can run
go test -cover -coverpkg "api_client" "api_client_tests"
to run the tests with coverage.
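For example, with the layout from the question, a coverage profile can be produced and inspected like this (package paths taken from the question):
go test -coverpkg bandsintown-api/api_client -coverprofile=cover.out bandsintown-api/api_client_tests
go tool cover -html=cover.out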
But splitting code files and test files into different directories isn't the Go way.
I suppose you want black-box testing (no package-private stuff accessible from outside, even in tests).
To accomplish this, Go allows test files in the same directory to declare a separate package, named after the package under test with a _test suffix, without moving the files. Example:
api_client.go
package api_client
// will not be accessible outside of the package
var privateVar = 10
func Method() {
}
api_client_test.go
package api_client_test

import (
	"testing"
	"bandsintown-api/api_client" // import path of the package under test
)

func TestClient(t *testing.T) {
	api_client.Method()
}
Is there an established best practice for separating unit tests and integration tests in Go (using testify)? I have a mix of unit tests (which do not rely on any external resources and thus run really fast) and integration tests (which do rely on external resources and thus run more slowly). So, I want to be able to control whether or not to include the integration tests when I run go test.
The most straightforward technique would seem to be to define an -integration flag:
var runIntegrationTests = flag.Bool("integration", false,
	"Run the integration tests (in addition to the unit tests)")
And then to add an if-statement to the top of every integration test:
if !*runIntegrationTests {
this.T().Skip("To run this test, use: go test -integration")
}
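(For reference, here is a self-contained sketch of this idea with the plain testing package instead of a testify suite; the package and test names are made up:)
package services

import (
	"flag"
	"testing"
)

var runIntegrationTests = flag.Bool("integration", false,
	"Run the integration tests (in addition to the unit tests)")

func TestDatabaseRoundTrip(t *testing.T) {
	if !*runIntegrationTests {
		t.Skip("To run this test, use: go test -integration")
	}
	// ... talk to the real database ...
}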
Is this the best I can do? I searched the testify documentation to see if there is perhaps a naming convention or something that accomplishes this for me, but didn't find anything. Am I missing something?
@Ainar-G suggests several great patterns to separate tests.
This set of Go practices from SoundCloud recommends using build tags (described in the "Build Constraints" section of the build package) to select which tests to run:
Write an integration_test.go, and give it a build tag of integration. Define (global) flags for things like service addresses and connect strings, and use them in your tests.
// +build integration

var fooAddr = flag.String(...)
func TestToo(t *testing.T) {
f, err := foo.Connect(*fooAddr)
// ...
}
go test takes build tags just like go build, so you can call go test -tags=integration. It also synthesizes a package main which calls flag.Parse, so any flags declared and visible will be processed and available to your tests.
As a similar option, you could also have integration tests run by default by using a build condition // +build !unit, and then disable them on demand by running go test -tags=unit.
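A file using that inverse condition would start like this (the package name is an assumption; the //go:build line is the modern spelling of the same constraint):
//go:build !unit
// +build !unit

package services_test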
@adamc comments:
For anyone else attempting to use build tags, it's important that the // +build test comment is the first line in your file, and that you include a blank line after the comment, otherwise the -tags command will ignore the directive.
Also, the tag used in the build comment cannot have a dash, although underscores are allowed. For example, // +build unit-tests will not work, whereas // +build unit_tests will.
To elaborate on my comment to @Ainar-G's excellent answer, over the past year I have been using the combination of -short with an Integration naming convention to achieve the best of both worlds.
Unit and integration tests in harmony, in the same file
Build tags previously forced me to have multiple files (services_test.go, services_integration_test.go, etc).
Instead, take this example below where the first two are unit tests and I have an integration test at the end:
package services
import "testing"
func TestServiceFunc(t *testing.T) {
t.Parallel()
...
}
func TestInvalidServiceFunc3(t *testing.T) {
t.Parallel()
...
}
func TestPostgresVersionIntegration(t *testing.T) {
if testing.Short() {
t.Skip("skipping integration test")
}
...
}
Notice the last test has the convention of:
using Integration in the test name.
checking if running under -short flag directive.
Basically, the spec goes: "write all tests normally; if it is a long-running test or an integration test, follow this naming convention and check for -short to be nice to your peers."
Run only Unit tests:
go test -v -short
this provides you with a nice set of messages like:
=== RUN TestPostgresVersionIntegration
--- SKIP: TestPostgresVersionIntegration (0.00s)
service_test.go:138: skipping integration test
Run Integration Tests only:
go test -run Integration
This runs only the integration tests. Useful for smoke testing canaries in production.
Obviously the downside to this approach is that if anyone runs go test without the -short flag, it will default to running all tests, unit and integration alike.
In reality, if your project is large enough to have both unit and integration tests, then you are most likely using a Makefile, where you can have simple directives that use go test -short, as in the sketch below. Or just put the commands in your README.md file and call it a day.
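Such a Makefile can be as small as this sketch (target names are assumptions; recipe lines must be indented with tabs):
test:
	go test -v -short ./...

test-integration:
	go test -v -run Integration ./...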
I see three possible solutions. The first is to use the short mode for unit tests. So you would use go test -short with unit tests and the same but without the -short flag to run your integration tests as well. The standard library uses the short mode to either skip long-running tests, or make them run faster by providing simpler data.
The second is to use a convention and call your tests either TestUnitFoo or TestIntegrationFoo and then use the -run testing flag to denote which tests to run. So you would use go test -run 'Unit' for unit tests and go test -run 'Integration' for integration tests.
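A self-contained sketch of that convention (package and test names are hypothetical):
package services

import "testing"

// Selected with: go test -run Unit
func TestUnitParseConfig(t *testing.T) { /* fast, no external dependencies */ }

// Selected with: go test -run Integration
func TestIntegrationPostgres(t *testing.T) { /* talks to a real database */ }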
The third option is to use an environment variable, and get it in your tests setup with os.Getenv. Then you would use simple go test for unit tests and FOO_TEST_INTEGRATION=true go test for integration tests.
I personally would prefer the -short solution, since it's simpler and is used in the standard library, so it seems to be the de facto way of separating/simplifying long-running tests. But the -run and os.Getenv solutions offer more flexibility (and require more caution, since regexps are involved with -run).
I was recently trying to find a solution to the same problem.
These were my criteria:
The solution must be universal
No separate package for integration tests
The separation should be complete (I should be able to run integration tests only)
No special naming convention for integration tests
It should work well without additional tooling
The aforementioned solutions (custom flag, custom build tag, environment variables) did not really satisfy all the above criteria, so after a little digging and playing I came up with this solution:
package main
import (
"flag"
"regexp"
"testing"
)
func TestIntegration(t *testing.T) {
if m := flag.Lookup("test.run").Value.String(); m == "" || !regexp.MustCompile(m).MatchString(t.Name()) {
t.Skip("skipping as execution was not requested explicitly using go test -run")
}
t.Parallel()
t.Run("HelloWorld", testHelloWorld)
t.Run("SayHello", testSayHello)
}
The implementation is straightforward and minimal. Although it requires a simple naming convention for tests, it's less error-prone. A further improvement could be extracting the check into a helper function, as sketched below.
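That helper, added to the same file, might look like this (the function name is an assumption):
// skipUnlessRequested skips the calling test unless its name was
// explicitly matched via go test -run.
func skipUnlessRequested(t *testing.T) {
	t.Helper()
	m := flag.Lookup("test.run").Value.String()
	if m == "" || !regexp.MustCompile(m).MatchString(t.Name()) {
		t.Skip("skipping as execution was not requested explicitly using go test -run")
	}
}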
Usage
Run integration tests only across all packages in a project:
go test -v ./... -run ^TestIntegration$
Run all tests (regular and integration):
go test -v ./... -run .\*
Run only regular tests:
go test -v ./...
This solution works well without tooling, but a Makefile or some aliases can make it easier to use. It can also be easily integrated into any IDE that supports running Go tests.
The full example can be found here: https://github.com/sagikazarmark/modern-go-application
I encourage you to look at Peter Bourgon's approach; it is simple and avoids some problems with the advice in the other answers: https://peter.bourgon.org/blog/2021/04/02/dont-use-build-tags-for-integration-tests.html
There are many downsides to using build tags, short mode, or flags; see the post linked above.
I would recommend using environment variables with a test helper that can be imported into individual packages:
func IntegrationTest(t *testing.T) {
t.Helper()
if os.Getenv("INTEGRATION") == "" {
t.Skip("skipping integration tests, set environment variable INTEGRATION")
}
}
In your tests you can now easily call this at the start of your test function:
func TestPostgresQuery(t *testing.T) {
IntegrationTest(t)
// ...
}
Why I would not recommend using either -short or flags:
Someone who checks out your repository for the first time should be able to run go test ./... and have all tests pass, which is often not the case if they rely on external dependencies.
The problem with the flag package is that it works until you have integration tests across different packages, where some of them run flag.Parse() and some do not, which leads to an error like this:
go test ./... -integration
flag provided but not defined: -integration
Usage of /tmp/go-build3903398677/b001/foo.test:
Environment variables appear to be the most flexible and robust option: they require the least amount of code and have no visible downsides.
I am writing a component that, given a ZIP file, needs to:
Unzip the file.
Find a specific dll among the unzipped files.
Load that dll through reflection and invoke a method on it.
I'd like to unit test this component.
I'm tempted to write code that deals directly with the file system:
void DoIt()
{
Zip.Unzip(theZipFile, "C:\\foo\\Unzipped");
var myDll = Assembly.LoadFrom("C:\\foo\\Unzipped\\SuperSecret.bar");
myDll.InvokeSomeSpecialMethod();
}
But folks often say, "Don't write unit tests that rely on the file system, database, network, etc."
If I were to write this in a unit-test friendly way, I suppose it would look like this:
void DoIt(IZipper zipper, IFileSystem fileSystem, IDllRunner runner)
{
string path = zipper.Unzip(theZipFile);
IFakeFile file = fileSystem.Open(path);
runner.Run(file);
}
Yay! Now it's testable; I can feed in test doubles (mocks) to the DoIt method. But at what cost? I've now had to define 3 new interfaces just to make this testable. And what, exactly, am I testing? I'm testing that my DoIt function properly interacts with its dependencies. It doesn't test that the zip file was unzipped properly, etc.
It doesn't feel like I'm testing functionality anymore. It feels like I'm just testing class interactions.
My question is this: what's the proper way to unit test something that is dependent on the file system?
edit I'm using .NET, but the concept could apply to Java or native code too.
Yay! Now it's testable; I can feed in test doubles (mocks) to the DoIt method. But at what cost? I've now had to define 3 new interfaces just to make this testable. And what, exactly, am I testing? I'm testing that my DoIt function properly interacts with its dependencies. It doesn't test that the zip file was unzipped properly, etc.
You have hit the nail right on the head. What you want to test is the logic of your method, not necessarily whether a real file can be addressed. You don't need to test (in this unit test) whether a file is correctly unzipped; your method takes that for granted. The interfaces are valuable in themselves because they provide abstractions that you can program against, rather than implicitly or explicitly relying on one concrete implementation.
Your question exposes one of the hardest parts of testing for developers just getting into it:
"What the hell do I test?"
Your example isn't very interesting because it just glues some API calls together, so a unit test for it would end up just asserting that methods were called. Tests like that tightly couple your implementation details to the test. This is bad, because now every change to the method's implementation details breaks your test(s), and you have to update the test each time.
Having bad tests is actually worse than having no tests at all.
In your example:
void DoIt(IZipper zipper, IFileSystem fileSystem, IDllRunner runner)
{
string path = zipper.Unzip(theZipFile);
IFakeFile file = fileSystem.Open(path);
runner.Run(file);
}
While you can pass in mocks, there's no logic in the method to test. If you were to attempt a unit test for this it might look something like this:
// Assuming that zipper, fileSystem, and runner are mocks
void testDoIt()
{
// mock behavior of the mock objects
when(zipper.Unzip(any(File.class))).thenReturn("some path");
when(fileSystem.Open("some path")).thenReturn(mock(IFakeFile.class));
// run the test
someObject.DoIt(zipper, fileSystem, runner);
// verify things were called
verify(zipper).Unzip(any(File.class));
verify(fileSystem).Open("some path");
verify(runner).Run(any(IFakeFile.class));
}
Congratulations, you basically copy-pasted the implementation details of your DoIt() method into a test. Happy maintaining.
When you write tests you want to test the WHAT and not the HOW.
See Black Box Testing for more.
The WHAT is the name of your method (or at least it should be). The HOW are all the little implementation details that live inside your method. Good tests allow you to swap out the HOW without breaking the WHAT.
Think about it this way, ask yourself:
"If I change the implementation details of this method (without altering the public contract) will it break my test(s)?"
If the answer is yes, you are testing the HOW and not the WHAT.
To answer your specific question about testing code with file system dependencies, let's say you had something a bit more interesting going on with a file and you wanted to save the Base64 encoded contents of a byte[] to a file. You can use streams for this to test that your code does the right thing without having to check how it does it. One example might be something like this (in Java):
interface StreamFactory {
OutputStream outStream();
InputStream inStream();
}
class Base64FileWriter {
public void write(byte[] contents, StreamFactory streamFactory) throws IOException {
OutputStream outputStream = streamFactory.outStream();
outputStream.write(Base64.encodeBase64(contents));
}
}
@Test
public void save_shouldBase64EncodeContents() throws IOException {
OutputStream outputStream = new ByteArrayOutputStream();
StreamFactory streamFactory = mock(StreamFactory.class);
when(streamFactory.outStream()).thenReturn(outputStream);
// Run the method under test
Base64FileWriter fileWriter = new Base64FileWriter();
fileWriter.write("Man".getBytes(), streamFactory);
// Assert we saved the base64 encoded contents
assertThat(outputStream.toString()).isEqualTo("TWFu");
}
The test uses a ByteArrayOutputStream, but in the application (using dependency injection) the real StreamFactory (perhaps called FileStreamFactory) would return a FileOutputStream from outStream() and would write to a File.
What was interesting about the write method here is that it was writing the contents out Base64 encoded, so that's what we tested for. For your DoIt() method, this would be more appropriately tested with an integration test.
There's really nothing wrong with this; it's just a question of whether you call it a unit test or an integration test. You just have to make sure that if you do interact with the file system, there are no unintended side effects. Specifically, make sure that you clean up after yourself: delete any temporary files you created, and don't accidentally overwrite an existing file that happened to have the same filename as a temporary file you were using. Always use relative paths and not absolute paths.
It would also be a good idea to chdir() into a temporary directory before running your test, and chdir() back afterwards.
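As a concrete illustration in Go (the question notes the concept applies beyond .NET), the testing package bakes this isolation in: t.TempDir() hands each test a fresh directory and deletes it when the test finishes, so there is nothing to clean up by hand. A sketch:
package example

import (
	"os"
	"path/filepath"
	"testing"
)

func TestWritesOutputFile(t *testing.T) {
	dir := t.TempDir() // created fresh, removed automatically after the test
	path := filepath.Join(dir, "out.txt")
	if err := os.WriteFile(path, []byte("data"), 0o644); err != nil {
		t.Fatal(err)
	}
	// ... assert on the file's contents ...
}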
I am reluctant to pollute my code with types and concepts that exist only to facilitate unit testing. Sure, if it makes the design cleaner and better then great, but I think that is often not the case.
My take on this is that your unit tests should do as much as they can, which may not be 100% coverage. In fact, it may only be 10%. The point is, your unit tests should be fast and have no external dependencies. They might test cases like "this method throws an ArgumentNullException when you pass in null for this parameter".
I would then add integration tests (also automated and probably using the same unit testing framework) that can have external dependencies and test end-to-end scenarios such as these.
When measuring code coverage, I measure both unit and integration tests.
There's nothing wrong with hitting the file system; just consider it an integration test rather than a unit test. I'd swap the hard-coded path for a relative path and create a TestData subfolder to contain the zips for the tests.
If your integration tests take too long to run then separate them out so they aren't running as often as your quick unit tests.
I agree. Sometimes interaction-based testing can cause too much coupling, and it often ends up not providing enough value. You really want to test unzipping the file here, not just verify that you are calling the right methods.
One way would be to write the unzip method to take InputStreams. Then the unit test could construct such an InputStream from a byte array using ByteArrayInputStream. The contents of that byte array could be a constant in the unit test code.
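The same idea translated to Go, as a rough sketch: archive/zip can read from any in-memory io.ReaderAt, so the test never has to touch the disk (the helper name is made up):
package ziputil

import (
	"archive/zip"
	"io"
)

// ListNames reads a zip archive from r and returns the names of the
// files it contains.
func ListNames(r io.ReaderAt, size int64) ([]string, error) {
	zr, err := zip.NewReader(r, size)
	if err != nil {
		return nil, err
	}
	var names []string
	for _, f := range zr.File {
		names = append(names, f.Name)
	}
	return names, nil
}
A test can then build the archive bytes in memory with zip.NewWriter over a bytes.Buffer and pass bytes.NewReader(buf.Bytes()) along with int64(buf.Len()) to ListNames.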
This seems to be more of an integration test as you are depending on a specific detail (the file system) that could change, in theory.
I would abstract the code that deals with the OS into its own module (class, assembly, jar, whatever). In your case you want to load a specific DLL if found, so make an IDllLoader interface and a DllLoader class. Have your app acquire the DLL from the DllLoader using the interface, and test that. You're not responsible for the unzip code after all, right?
Assuming that "file system interactions" are well tested in the framework itself, create your method to work with streams, and test this. Opening a FileStream and passing it to the method can be left out of your tests, as FileStream.Open is well tested by the framework creators.
You should not test class interaction and function calling; instead you should consider integration testing. Test the required result, not the file-loading operation.
As others have said, the first is fine as an integration test. The second tests only what the function is supposed to actually do, which is all a unit test should do.
As shown, the second example looks a little pointless, but it does give you the opportunity to test how the function responds to errors in any of the steps. You don't have any error checking in the example, but in the real system you may have, and the dependency injection would let you test all the responses to any errors. Then the cost will have been worth it.
For unit tests, I would suggest including the test file in your project (EAR file or equivalent) and then using a relative path in the tests, i.e. "../testdata/testfile".
As long as your project is correctly exported/imported, your unit tests should work.