I want to execute all tests from my application; currently I do it with this command:
go test ./app/...
Unfortunately it takes quite a long time, even though individual tests run quite fast. I think the problem is that Go needs to compile every package (along with its dependencies) before it runs the tests.
I tried the -i flag; it helps a bit, but I'm still not satisfied with the testing time.
go test -i ./app/...
go test ./app/...
Do you have any better idea of how to efficiently test multiple packages?
This is the nature of go test: it builds a special test runtime with additional code to execute (this is how it tracks code coverage).
If that isn't fast enough, you have two options:
1) Use bash tooling to compile a list of packages (e.g. using ls), and then execute go test on each of them individually, in parallel. There are many ways to do this in bash.
The problem with this approach is that the output will be interleaved, making it difficult to track down failures.
2) Call t.Parallel() in each of your tests to allow the test runtime to execute them in parallel. Since Go 1.5, go test runs with GOMAXPROCS set to the number of cores on your CPU, which allows tests to run concurrently. Tests still run sequentially by default, though; you have to call t.Parallel() in each test to tell the runtime it is OK to execute that test in parallel.
The problem with this approach is that it assumes you followed best practices and have used SoC/decoupling, don't have global state that could mutate in the middle of another test, have no mutex locks (or very few of them), no race conditions (use -race), etc. (See the sketch below.)
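A minimal sketch of option 2, with a hypothetical package app and hypothetical test names:
package app

import "testing"

func TestAlpha(t *testing.T) {
    t.Parallel() // may run concurrently with other tests that also call t.Parallel()
    // ... assertions ...
}

func TestBeta(t *testing.T) {
    t.Parallel()
    // ... assertions ...
}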
--
Opinion: Personally, I set up my IDE to run gofmt and go test -cover -short on every save. That way, my code is always formatted and my tests are run, but only within the package I am in, telling me if something failed. The -cover flag works with my IDE to show me the lines of code that have been tested versus not tested. The -short flag allows me to write tests that I know will take a while to run; within those tests I can check testing.Short() to decide whether to t.Skip() that test. There should be packages available for your favorite IDE to set this up (I did it in Sublime, Vim and now Atom).
That way, I have instant feedback within the package I'm editing.
Before I commit the code, I then run all tests across all packages. Or, I can just have the C.I. server do it.
Alternatively, you can make use of the -short flag and build tags (e.g. go test -tags integration) to refactor your tests and separate your unit tests from your integration tests. This is how I write my tests:
tests that are fast and can run in parallel <- I make these run by default with go test and go test -short.
slow tests, or tests that require external components, require additional input to run, e.g. go test -tags integration. This pattern means a plain go test does not run the integration tests; you have to specify the additional tag. I don't run the integration tests across the board either; that's what my CI servers are for. (See the sketch below.)
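Here is a hedged sketch of that layout (the file contents, names, and the tag name integration are one common convention, not necessarily the exact setup described above; both the old and new build-constraint syntax are shown):
//go:build integration
// +build integration

package app

import "testing"

func TestDatabaseIntegration(t *testing.T) {
    if testing.Short() {
        t.Skip("skipping integration test in -short mode")
    }
    // ... exercise the real external dependency here ...
}
Then a plain run and a tagged run look like this:
go test ./...                     # unit tests only; the tagged file is not even built
go test -tags integration ./...   # unit tests plus the tagged integration tests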
If you follow a consistent naming scheme for your tests, you can easily reduce the number of tests you execute by using the -run flag.
Quoting from go help testflag:
-run regexp
Run only those tests and examples matching the regular expression.
So let's say you have 3 packages: foo, bar and qux. Tests of those packages are all named like TestFoo.., TestBar.. and TestQux.. respectively.
You could do go test -run '^Test(Foo|Bar)' ./... to run only the tests from the foo and bar packages.
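If you want to narrow it down to a single test function (TestFooParse below is just a hypothetical name), you can anchor the pattern more tightly:
go test -run '^TestFooParse$' ./foo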
Related
Is there a way to executes only those tests which are affected by recent changes in Go? We have a large unit test suite and now it is starting to take a while before it finishes. We are thinking that we only run those tests which are affected by the code changes in the first pass.
Python has something like this: https://github.com/tarpas/pytest-testmon
Is there a way to do this in Go?
No, there is no way to do that in Go. All you can do is split your code into packages and test one package at a time
go test some/thing
Instead of all of them
go test ./...
go test in Go 1.10 and newer does this automatically at the package level; any packages with no changes will return cached test results, while packages with changes will be re-tested.
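As an illustration (the exact output wording can vary between Go versions), running the same command twice shows the effect, and the cache can be cleared explicitly:
go test ./...          # first run: compiles and executes the tests
go test ./...          # second run: unchanged packages are reported as "(cached)"
go clean -testcache    # expires all cached test results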
If a single package's tests are still taking too long, that points to a problem with your tests; good tests in Go generally execute extremely quickly, which means you probably need to review the tests themselves, and do some combination of the following:
Isolate integration tests using build tags. Tests that hit external resources tend to be slower, so making them optional will help speed up runs where you just want unit test results.
Make use of short tests so that you have the option of a quick pass you can do more frequently.
Review your unit tests - do you have unnecessary tests or test cases? Are your tests unnecessarily complex? Are you reading golden files that could be kept in constants instead? Are you deserializing static JSON into objects when you could create the object programmatically?
Optimize your unit tests. Tests are still code and poor-performing code can be optimized for performance. There are many cases in unit tests where we're happy to opt for convenience over performance in ways we wouldn't with production code, but if performance is a problem, that choice must be reconsidered.
Review your test execution - are you using uncacheable parameters to go test that are preventing it from caching results? Are you engaging the race detector, profiler, or code coverage reporting out of habit in cases where it's unnecessary? (See the example below.)
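For instance, assuming a Go 1.10+ toolchain with test caching, -count is the usual way the cache gets bypassed:
go test ./...            # eligible for per-package result caching
go test -count=1 ./...   # -count=1 forces every test to re-run, bypassing the cache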
Nabaz may be what you are looking for.
The example from their README.md is
export CMDLINE="go test"
export PKGS="./..." # IMPORTANT make sure packages are written SEPARATELY
nabaz test --cmdline $CMDLINE --pkgs $PKGS .
You cannot rerun tests only for the last edited files. But there are a few ways of optimizing running tests.
Firstly, you have to split your project into logically separated packages. That way, in most cases, a single change will only require rerunning the tests in that package.
Secondly, you can run the tests only for the package you're changing by typing
go test mypkg
or... you can use build tags. The last way of optimizing is to use the short test functionality.
How do I make use of the -short flag given in go test -short?
Is it possible to combine the -short and -bench flags?
I am quite new to the Go language, but I am trying to adapt myself to some of its common practices. Part of this is trying to ensure my code not only has unit tests that work with the go test system, but that go test -bench also functions in a useful manner.
At the moment I have a benchmark test that includes a series of sub-tests based on varying sizes of input data. Running the 15 permutations takes a long time, so it would be nice to have an option for shortening that test time.
The next set of tests I plan to write could include a series of data input examples. I expect running a single one of these could work as a sanity check for a short test but having the option to run several on longer (or normal) test runs would be good.
When I look at the GoLang documentation for the testing flags it says "Tell long-running tests to shorten their run time." which sounds like what I want, but I cannot work out how to pick up this flag in the test code.
How do I make use of the -short flag given in go test -short?
Using the short flag on the command line causes the testing.Short() function to return true. You can use this to either add test cases or skip them:
if !testing.Short() {
    // Extra test code here
}
The above is perhaps a little unusual; it is more common to see:
func TestThatIsLong(t *testing.T) {
    if testing.Short() {
        t.Skip()
    }
}
Be sure to have enough test cases for your -short runs to do at least a bare minimal check. Some people suggest using the -short run for the main continuous integration and pre-commit checks while keeping the longer test runs for scheduled daily or weekly builds or merges.
The Documents section of the Go Programming Language site briefly mentions how to write test code, but the bulk of the information on this topic is in the package documentation for the Go testing package. For most topics, the majority of the documentation is found in the package docs rather than in separate guides. This may be quite different from other languages, where the class and package documents are often poor.
Is it possible to combine the -short and -bench flags?
It is possible, as testing.Short() is global in scope. However, it is recommended that benchmark tests do not make extensive use of the -short flag to control their behaviour. It is more common for the person running the benchmarks to alter the -benchtime permitted for each benchmark test case.
By default, the benchtime is set to one second. If you have 60 benchmark test cases, it will take at least sixty seconds to complete the run (setup time + execution time). If the benchtime is set to less:
go test -benchmem -benchtime 0.5s -bench=. <package_names>
the overall execution time will go down proportionally.
The various testing flags are described in the go command documentation's Testing Flags section (not the package documentation, which does not mention benchtime). You do not need to do anything different in your benchmark code for benchtime to be effective; just use the standard for i := 0; i < b.N; i++ { loop and the framework will adjust the value of b.N as needed.
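For reference, a minimal self-contained benchmark sketch (the benchmarked call is arbitrary); the framework grows b.N until the loop has run for roughly -benchtime:
package foo

import (
    "strings"
    "testing"
)

func BenchmarkFields(b *testing.B) {
    input := strings.Repeat("a b c ", 100)
    for i := 0; i < b.N; i++ {
        strings.Fields(input)
    }
}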
Although it is not recommended, I have used -short within a benchmark to reduce the number of test cases when varying the input to a function, to get an indication of its big-O behaviour. For all benchmark runs (-short and normal), I keep a representative data size for the input to track long-term trends. For the longer runs, I include several smaller and larger data sets to allow an approximation of the function's resource requirements. As with the unit test cases, I choose to run the -short version always in CI and the longer version on a schedule.
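A hedged sketch of that pattern (the sizes and the strings.Join call are placeholders, not the actual code referred to above): sub-benchmarks cover several input sizes, and -short trims the list down to the representative one.
package foo

import (
    "fmt"
    "strings"
    "testing"
)

func BenchmarkJoinSizes(b *testing.B) {
    sizes := []int{100, 10000, 1000000}
    if testing.Short() {
        sizes = sizes[:1] // keep only the representative size on -short runs
    }
    for _, n := range sizes {
        parts := make([]string, n)
        b.Run(fmt.Sprintf("n=%d", n), func(b *testing.B) {
            for i := 0; i < b.N; i++ {
                strings.Join(parts, ",")
            }
        })
    }
}
Run it with go test -bench=. -short for the quick pass, or go test -bench=. for the full sweep.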
Whatever your questions are with Go, it is strongly recommended that you read both https://golang.org/doc/ and the relevant https://golang.org/pkg/ documents thoroughly. Frequently the most useful documentation is in the package documentation.
When using WebStorm as a test runner, every unit test is run. Is there a way to specify running only one test? Even running only one test file would be better than the current situation of running all of them at once. Is there a way to do this?
I'm using Mocha.
Not currently possible; please vote for WEB-10067.
You can double up the i on it or the d on describe (i.e. iit / ddescribe) and the runner will run only that test/suite. If you prefix it with x (xit / xdescribe), it will be excluded.
There is a plugin called ddescribe that gives you a GUI for this.
You can use the --grep <pattern> command-line option in the Extra Mocha options box on the Mocha "Run/Debug Configurations" screen. For example, my Extra Mocha options line says:
--timeout 5000 --grep findRow
All of your test *.js files, and the files they require, still get loaded, but the only tests that get run are the ones that match that pattern. So if the parts you don't want to execute are tests, this helps you a lot. If the slow parts of your process automatically get executed when your other modules get loaded with require, this won't solve that problem. You also need to go into the configuration options to change the pattern every time you want to run tests matching a different pattern, but this is quick enough that it definitely saves me time versus letting all my passing tests run every time I want to debug one failing test.
You can run only the tests within a scope when you have a Mocha run configuration set up, by using .only either on the describe or on the it clauses.
I had some problems getting it to work consistently. When it went haywire and kept running all my tests, ignoring the .only or .skip, I added to the Extra Mocha options the path to one of the files containing unit tests (just like in the example for the Node setup), and suddenly the .only feature started working again, regardless of which file the tests were in.
Well, Maven is not too good when it comes to speed.
But I want something that is more acceptable.
Imagine I wrote a test org.fun.AbcTestCase
In that test case, I include some JUnit / TestNG tests.
Now I want to run only this test case, say org.fun.AbcTestCase, from the command line.
How can I do that?
I know it's easy to do within Eclipse or IDEA. However, I am learning Scala and the IDE support is currently terrible, especially when it comes to running unit tests.
Here is why I find it difficult:
The project involves many dependencies. When I run my tests as a Maven goal, Surefire takes care of that; mimicking it with reasonable manual effort is important.
The test process needs to be fast enough, ideally with incremental compilation (recompiling the whole bunch of Scala code is a terrible nightmare).
Use the test parameter in the surefire:test mojo
mvn test -Dtest=MyTest
will run only the test MyTest.class, recompiling only if necessary (if changes are found).
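Depending on your Surefire version (this is an assumption about your setup; method selection requires a reasonably recent Surefire), you can narrow it down further to a single test method, where someMethod is a placeholder:
mvn test -Dtest=AbcTestCase#someMethod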
If you are free to switch (as I imagine you might be if you have a toy project you're using to learn Scala), you might consider using SBT instead of Maven. Its IDE integration is only rudimentary, but it is quite handy for running tests (it can watch the source tree and re-run tests when you save, or even re-run just the tests that failed during the last run). Check out the website at http://simple-build-tool.googlecode.com/ .
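For example (command names differ between SBT versions, so treat these as a sketch; newer SBT spells the second command testOnly): inside the SBT shell,
test-only org.fun.AbcTestCase
runs just that suite, and prefixing it with ~ (i.e. ~ test-only org.fun.AbcTestCase) re-runs it whenever a source file changes.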
I am using the Boost 1.34.1 unit test framework. (I know the version is ancient, but right now updating or switching frameworks is not an option for technical reasons.)
I have a single test module (#define BOOST_TEST_MODULE UnitTests) that consists of three test suites (BOOST_AUTO_TEST_SUITE( Suite1 );) which in turn consist of several BOOST_AUTO_TEST_CASE()s.
My question:
Is it possible to run only a subset of the test module, i.e. limit the test run to only one test suite, or even only one test case?
Reasoning:
I integrated the unit tests into our automake framework, so that the whole module is run on make check. I wouldn't want to split it up into multiple modules, because our application generates lots of output and it is nice to see the test summary at the bottom ("X of Y tests failed") instead of spread across several thousand lines of output.
But a full test run is also time consuming, and the output of the test you're looking for is likewise drowned; thus, it would be nice if I could somehow limit the scope of the tests being run.
The Boost documentation left me pretty confused and none the wiser; anyone around who might have a suggestion? (Some trickery allowing to split up the test module while still receiving a usable test summary would also be welcome.)
Take a look at the --run_test parameter - it should provide what you're after.
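For example (the executable name unit_tests and the test case name are placeholders), assuming your test module builds into a binary called unit_tests:
./unit_tests --run_test=Suite1
./unit_tests --run_test=Suite1/MyTestCase
The first invocation runs everything in Suite1; the second narrows it to a single test case within that suite.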