Only running tests affected by recent changes - unit-testing

Is there a way to execute only those tests which are affected by recent changes in Go? We have a large unit test suite, and it is starting to take a while to finish. We are thinking of running only those tests which are affected by the code changes in a first pass.
Python has something like this: https://github.com/tarpas/pytest-testmon
Is there a way to do this in Go?

No, there is no way to do it in Go. All you can do is split your code into packages and test one package at a time:
go test some/thing
instead of running all of them:
go test ./...

go test in Go 1.10 and newer does this automatically at the package level; any packages with no changes will return cached test results, while packages with changes will be re-tested.
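For illustration (the module path and timings below are made up), an unchanged package reports a cached result instead of re-running its tests:
go test ./...
ok      example.com/app/unchanged    (cached)
ok      example.com/app/changed      0.41s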
If a single package's tests are still taking too long, that points to a problem with your tests; good tests in Go generally execute extremely quickly, which means you probably need to review the tests themselves, and do some combination of the following:
Isolate integration tests using build tags (see the sketch after this list). Tests that hit external resources tend to be slower, so making them optional will help speed up runs where you just want unit test results.
Make use of short tests so that you have the option of a quick pass you can do more frequently.
Review your unit tests - do you have unnecessary tests or test cases? Are your tests unnecessarily complex? Are you reading golden files that could be kept in constants instead? Are you deserializing static JSON into objects when you could create the object programmatically?
Optimize your unit tests. Tests are still code and poor-performing code can be optimized for performance. There are many cases in unit tests where we're happy to opt for convenience over performance in ways we wouldn't with production code, but if performance is a problem, that choice must be reconsidered.
Review your test execution - are you using uncacheable parameters to go test that are preventing it from caching results? Are you engaging the race detector, profiler, or code coverage reporting out of habit in cases where it's unnecessary?
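As a minimal sketch of the build-tag approach (the file, package, and test names here are hypothetical), an integration test file can be excluded from normal runs and only compiled when the tag is supplied:
//go:build integration

package storage_test

import "testing"

// TestDatabaseRoundTrip talks to a real database, so it is only
// compiled and run when the "integration" tag is supplied:
// go test -tags integration ./...
func TestDatabaseRoundTrip(t *testing.T) {
	// ... exercise the external resource here ...
}
An ordinary go test ./... then skips this file entirely, while go test -tags integration ./... includes it.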

Nabaz may be what you are looking for.
The example from their README.md is
export CMDLINE="go test"
export PKGS="./..." # IMPORTANT make sure packages are written SEPARATELY
nabaz test --cmdline $CMDLINE --pkgs $PKGS .

You cannot rerun tests only for the last edited files, but there are a few ways of optimizing test runs.
Firstly, split your project into logically separated packages. In most cases a change will then require rerunning the tests of only one package.
Secondly, you can run the tests only for the package you're changing:
go test mypkg
You can also use build tags. The last way of optimizing is to use the short test functionality (sketched below).
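A minimal sketch of a short-aware test (the package and test names are made up); the slow path is skipped whenever go test -short is used:
package mypkg_test

import "testing"

// TestExpensiveScenario is skipped when go test -short is used,
// so quick iterations stay fast.
func TestExpensiveScenario(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping expensive scenario in short mode")
	}
	// ... long-running assertions here ...
}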

Related

Technique for TDD testing cycles differentiating types of test?

A newbie in this art... but so far, from my reading, I understand that there are broadly 3 categories: unit tests, acceptance/integration tests (not the same) and end-to-end tests.
The thing is, of these 3, it appears that only unit tests are meant to run lightning-fast. It seems perfectly reasonable to be running ALL the unit tests for the entire project, all the time during development. But the same, it seems, can't be said of the other types.
It seems to me, therefore, that you'd want to be running a single acceptance test (or maybe a group of related ones) at each test run, while running all the unit tests for the whole project.
As for the latest end-to-end test that is in the "red" state, given that these can be even slower than acceptance tests, mightn't you want to run that only intermittently? And the entire end-to-end collection maybe only when you're doing something else, or at night or something?
I'm using Gradle, and I'm aware you can create a special test task to only run, for example, all the unit tests under a tests\unittests directory... but, if my thinking is valid, is there a habitual way of skipping, or selecting, particular acceptance tests, other than by constantly editing the code - which can get pretty tiresome?
For example, by somehow tagging particular acceptance or end-to-end tests as a certain "category", or maybe by arranging these tests in a hierarchical folder structure?
I have not used Gradle, but in Python I regularly use both of the ways you described:
tagging specific classes of functional tests (a subset is usually tagged as "smoke" tests, to be run on each deploy)
representing tests in hierarchies:
small/unit
integration
functional (the smoke tests are usually a tagged subset of these)
ui
e2e
it appears that only unit tests are meant to run lightning-fast. It seems perfectly reasonable to be running ALL the unit tests for the entire project,
This is the goal: all unit tests are encouraged to be IO-free so they run lightning-fast on every single commit. This process is usually codified with CI build jobs that trigger on every commit to a repo.
But the same, it seems, can't be said of the other types.
It really depends on what an acceptable build time is, and the size of your projects. I have found that most projects don't actually have that many integration points, and if they do have an excessive number of them, it is usually a good indication that the service should be rethought. For every integration, how many tests are necessary to protect against difficult-to-reproduce error cases and to make sure there are checks that will break on interface changes? In my experience, not many. I have recently started to use docker-compose for integration tests, which allows many tests (20-30) to be executed very quickly for every commit.
docker-compose also allows for a clean e2e environment to be brought up to have acceptance/functional tests executed against it.
It is also my experience that the higher level tests are executed less frequently, but should be executed as frequently as they can be. For example I work with an API, with 300 functional tests covering every method on every endpoint. Because they don't interact with a UI and only use HTTP, they take about a minute to execute. They are executed on every deploy to an environment and at regular intervals.

Golang - Effective test of multiple packages

I want to execute all tests from my application; currently I do it with the command:
go test ./app/...
Unfortunately it takes quite a long time, even though individual tests run quite fast. I think the problem is that go needs to compile every package (with its dependencies) before it runs the tests.
I tried the -i flag; it helps a bit, but I'm still not satisfied with the testing time.
go test -i ./app/...
go test ./app/...
Do you have any better ideas for how to efficiently test multiple packages?
This is the nature of go test: it builds a special runtime with additional code to execute (this is how it tracks code coverage).
If it isn't fast enough, you have two options:
1) use bash tooling to compile a list of packages (e.g. using ls), and then execute them each individually in parallel. There exist many ways to do this in bash.
The problem with this approach is that the output will be interleaved, making failures difficult to track down.
2) use t.Parallel() in each of your tests to allow the test runtime to execute them in parallel. Since Go 1.5, go test runs with GOMAXPROCS set to the number of cores on your CPU, which allows tests to run concurrently. Tests are still run sequentially by default; you have to call t.Parallel() in each test, telling the runtime it is OK to execute that test in parallel.
The problem with this approach is that it assumes you followed best practices: you have used SoC/decoupling, don't have global state that mutates in the middle of another test, have no mutex locks (or very few of them), no race conditions (use -race), etc.
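A minimal sketch of opting tests into parallel execution (the package and test names are made up):
package mypkg_test

import "testing"

// Tests that call t.Parallel() are paused, then resumed and run
// concurrently with the other parallel tests in the same package.
func TestThingA(t *testing.T) {
	t.Parallel()
	// ... assertions that don't touch shared mutable state ...
}

func TestThingB(t *testing.T) {
	t.Parallel()
	// ... assertions that don't touch shared mutable state ...
}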
--
Opinion: Personally, I set up my IDE to run gofmt and go test -cover -short on every save. That way, my code is always formatted and my tests are run, only within the package I am in, telling me if something failed. The -cover flag works with my IDE to show me the lines of code that have been tested versus not tested. The -short flag allows me to write tests that I know will take a while to run, and within those tests I can check testing.Short() to see if I should t.Skip() that test. There should be packages available for your favorite IDE to set this up (I did it in Sublime, Vim and now Atom).
That way, I have instant feedback within the package I'm editing.
Before I commit the code, I then run all tests across all packages. Or I can just have the CI server do it.
Alternatively, you can make use of the -short flag and build tags (e.g. go test -tags integration) to refactor your tests and separate your unit tests from integration tests. This is how I write my tests:
tests that are fast and can run in parallel <- I make these run by default with go test and go test -short.
slow tests or tests that require external components <- these require additional input to run, e.g. go test -tags integration. This pattern does not run the integration tests with a normal go test; you have to specify the additional tag. I don't run the integration tests across the board either. That's what my CI servers are for.
If you follow a consistent name scheme for your tests you can easily reduce the number of them you execute by using the -run flag.
Quoting from go help testflag:
-run regexp
Run only those tests and examples matching the regular expression.
So let's say you have 3 packages: foo, bar and qux. Tests of those packages are all named like TestFoo.., TestBar.. and TestQux.. respectively.
You could do go test -run '^Test(Foo|Bar)' ./... to run only the tests from the foo and bar packages.

Junit: changing sequence of test running

I have a big mess with 100 tests in one class, and I run all of them by clicking "Test project (...)". They run in a random order, and I would like them to run in a specific order - from beginning to end, in the same order that I wrote them. In Eclipse it's not a problem because Eclipse just works like that; how do I do it in NetBeans?
Any help will be appreciated.
Edit (due to answers): The test order is required only for the clarity of the log. The tests are independent.
If your tests need to run in a specific order, something is wrong with your design.
Two tests that need to run one after another are really one test. Consider this before searching for a solution.
check this https://blogs.oracle.com/mindless/entry/controlling_the_order_of_junit
Having tests depend on other tests is, 99.9% of the time, a very bad idea. Unit tests should be independent of each other; otherwise you might have a cascade of errors, or (even worse) one test failing because of something another test did earlier.
If you still want to go through this pain, you'll need to use a different unit testing framework (such as TestNG - see dependsOnMethods) which supports test dependencies.
JUnit doesn't support this feature because it's seen by many as bad practice (for very good reasons).
The next JUnit release will support ordering of test methods. The standard Maven Surefire Plugin supports ordering of test methods already.
NetBeans has good integration with Ant build files. You could write a specific Ant target that executes each test in order.

How can I efficiently unit test when using dependency resolution via BuildConfig.groovy in Grails?

I want to follow TDD, but the command grails test-app CUT needs almost a minute to run due to Resolving dependencies... and Resolving new plugins. Please wait... ...
Each of those two stages takes about 20 seconds to complete while the tests only take up some seconds.
(I am unsure if this has any effect on the performance, but I am using dependency resolution via BuildConfig.groovy - and want to stick with it.)
How can I have Grails only execute the tests and maybe skip the resolving process?
How else could I speed up the process? (Note that grails interactive is unable to influence the speed of resolving.)
I had a similar issue and solved it by not using *-SNAPSHOT versions of any plugins. I downgraded to the latest non-SNAPSHOT release and cut "resolving dependencies" from 10 seconds to 1 second.
Ideas:
Try removing (or to be safe moving) the directory /.ivy2/cache. The next time you do a 'run-app' all the dependencies will be downloaded again from scratch. After doing this I got my 'Resolving Dependencies...' time down by about 5 seconds.
There are some more tips on how to fully clean your directories here. A full clean may help if you have some inconsistent files, etc.
Try turning the logging on in BuildConfig.groovy by setting log to "info" in the grails.project.dependency.resolution section. This can give you a better idea of which dependencies are taking the longest.
Make sure your .ivy2 directory is on your local machine. See here for more info
In Grails 2 there's a new variant of the old (now deprecated) 'interactive' command. In order to start it, one must start grails without any arguments (i.e. grails <ENTER>).
Running test-app from there seems to skip dependency resolution, which ultimately makes tests run much faster (~40 seconds less in the case mentioned).
You should write your unit tests in a way that you can run them directly from the IDE. I like looking at the green bar. For example in STS/Eclipse, just do "Run As->JUnit Test". If the test requires Grails to be running, it's not a unit test anymore (it's an integration test).
I am going to have to back up FlareCoder on this. Too many Grails developers get lazy using Grails-specific unit tests or, worse, make everything an integration test. This is fine if your project is relatively small and your team does not mind Grails starting up every time, but it does kind of fly in the face of true TDD.
Once you understand the full power of Groovy outside of Grails, you should try to write unit tests without depending on Grails. The true spirit of a unit test is not requiring a framework. Groovy on its own has many ways to stub/mock classes that don't require a long startup time. Then your unit tests can run individually and as a whole very fast. I do TDD this way in IntelliJ IDEA on a method level that is very fast.
It is NOT true that mocking in Grails requires Grails mocking ALL the time. Sometimes it is harder than other times to achieve this but remember, Grails is simply an abstraction of many cool technologies using some Groovy metaprogramming that allow quick development. If they aren't running like you expect, dig in and understand them so you can remove anything Grails is doing that you don't need.

Run scala unit test from command line separately with Maven

Well, Maven is not too good when it comes to speed, and I want something that is more acceptable.
Imagine I wrote a test org.fun.AbcTestCase
In that test case, I include some JUnit / TestNG tests.
Now I want to run only this test case, say org.fun.AbcTestCase, from the command line.
How can I do that?
I know it's easy to do within Eclipse or IDEA. However, I am learning Scala and IDE support is currently terrible, especially when it comes to running unit tests.
Here is why I find it difficult:
The project involves many dependencies. When I test my project as a Maven goal, surefire takes care of that. Mimicking that with reasonable manual effort is important.
The test process needs to be fast enough, with a real-time compiler (recompiling the whole bunch of Scala code is a terrible nightmare).
Use the test parameter in the surefire:test mojo
mvn test -Dtest=MyTest
will run only the test MyTest.class, recompiling only if necessary (if changes are found).
If you are free to switch (as I imagine you might be if you have a toy project you're using to learn Scala), you might consider using SBT instead of Maven. Its IDE integration is only rudimentary, but it is quite handy for running tests (it can watch the source tree and re-run tests when you save, or even just the tests that failed during the last run). Check out the website at http://simple-build-tool.googlecode.com/ .