Setting up the priority in bazel test

I have a certain number of end-to-end integration tests that I'd like to run before other, deeper end-to-end tests. As long as I am not using the --keep_going flag, the first test failure should quit a bazel test //... session. I'd like the shallower end-to-end tests to run before the deeper ones; is there a way to govern test execution order from Bazel?
I suppose I could do this in the shell by marking the tests as manual and then piecewise invoking the relevant tests in the order I want, but it would be great if there were some built-in way to accomplish the above.

There's no direct support for this in Bazel, AFAIK.
You could set user-specified tags and then use the --test_tag_filters flag to run tests in batches. For example, you could have shallow and deep tags, then first run the tests with the shallow tag, then the tests with neither tag, and finally the tests with the deep tag, as sketched below.
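A minimal sketch of that approach, assuming your test targets carry tags = ["shallow"] or tags = ["deep"] in their BUILD rules (the tag names are just examples):
bazel test --test_tag_filters=shallow //...
bazel test --test_tag_filters=-shallow,-deep //...
bazel test --test_tag_filters=deep //...
Each invocation stops at the first failure as long as --keep_going is not set, so the shallow batch gates the deeper ones.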

Related

What is the difference between test cases coverage and just a console application coverage?

I am having a hard time running code coverage, since most of the tools (including the Visual Studio one) require unit tests.
I don't understand why I need to create unit tests; why can't I just run code coverage with my own console .exe application?
Just press F5 and get the report, without putting effort into creating unit tests or whatever.
Thanks.
In general, with a good test coverage tool, coverage data is collected for whatever causes execution of the code. You should be able to exercise the code by hand and get coverage data, and/or execute the code via a test case and get coverage data.
There's a question of how you track the coverage information back to a test. Most coverage tools just collect coverage data; some other mechanism is needed to associate that coverage data with a specific test or set of tests. [Note that none of this discussion cares whether your tests pass. It's up to you whether you want to track coverage data for failed tests or not; this information is often useful when trying to find out why a test failed.]
You can associate coverage data with tests by convention; Semantic Designs' test coverage tools (from my company) do this. After setting up the coverage collection, you run any set of tests, by any means you desire. Each program execution produces a test coverage vector with a date stamp. You can combine the coverage vectors for such a set into a single set (the UI helps you do this, or you can do it in a batch script by invoking the UI as a command-line tool), and then you associate the set of tests you ran with the combined coverage data. Some people associate individual tests with the coverage data collected by that individual test execution. (Doing this allows you to later discover whether one test covers everything another test covers, implying you don't need the second one.)
You can be more organized: you can configure your automated tests to each exercise the code and automatically store the generated vector away in a place unique to the test. This is just a matter of adding a bit of scripting to each automated test. Some test coverage tools come with unit test mechanisms that have a way to do this. (Our tools don't insist on a specific test framework, so they work with any framework.)

Only running tests affected by recent changes

Is there a way to execute only those tests that are affected by recent changes in Go? We have a large unit test suite, and it is now starting to take a while to finish. We are thinking of running only the tests affected by the code changes in a first pass.
Python has something like this: https://github.com/tarpas/pytest-testmon
Is there a way to do this in Go?
No, there is no way to do it in Go. All you can do is split your code into packages and test one package at a time:
go test some/thing
Instead of all of them
go test ./...
go test in Go 1.10 and newer does this automatically at the package level; any packages with no changes will return cached test results, while packages with changes will be re-tested.
If a single package's tests are still taking too long, that points to a problem with your tests; good tests in Go generally execute extremely quickly, which means you probably need to review the tests themselves, and do some combination of the following:
Isolate integration tests using build tags (see the sketch after this list). Tests that hit external resources tend to be slower, so making them optional will help speed up runs where you just want unit test results.
Make use of short tests so that you have the option of a quick pass you can do more frequently.
Review your unit tests - do you have unnecessary tests or test cases? Are your tests unnecessarily complex? Are you reading golden files that could be kept in constants instead? Are you deserializing static JSON into objects when you could create the object programmatically?
Optimize your unit tests. Tests are still code and poor-performing code can be optimized for performance. There are many cases in unit tests where we're happy to opt for convenience over performance in ways we wouldn't with production code, but if performance is a problem, that choice must be reconsidered.
Review your test execution - are you using uncacheable parameters to go test that are preventing it from caching results? Are you engaging the race detector, profiler, or code coverage reporting out of habit in cases where it's unnecessary?
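A rough sketch of the build-tag approach mentioned above (the package name mypkg is made up): put integration tests in a file guarded by a build tag, so a plain go test never compiles or runs them.
//go:build integration

package mypkg_test

import "testing"

// This file is only compiled and run when the tag is supplied:
//   go test -tags integration ./...
func TestDatabaseIntegration(t *testing.T) {
    // ... exercise the external resource here ...
}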
Nabaz may be what you are looking for.
The example from their README.md is
export CMDLINE="go test"
export PKGS="./..." # IMPORTANT make sure packages are written SEPERATLY
nabaz test --cmdline $CMDLINE --pkgs $PKGS .
You cannot rerun tests only for the last edited files. But there are a few ways of optimizing running tests.
Firstly, you have to split your project into logically separated packages. In most cases this means that one change will require rerunning tests only in the affected package.
Secondly, you can run the tests only for the package you're changing by typing
go test mypkg
or... you can use build tags. The last way of optimizing is to use the short-test functionality, as sketched below.
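A minimal sketch of the short-test idea (package and test names are made up): mark slow tests so they are skipped when -short is passed.
package mypkg_test

import "testing"

func TestSlowEndToEnd(t *testing.T) {
    if testing.Short() {
        t.Skip("skipping slow test in -short mode")
    }
    // ... the slow part of the test goes here ...
}
Run go test -short ./... during development to skip these, and a plain go test to include them.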

How to Force a Google Test Case to Run Last

Our team has a very mature suite of Google Test (GTest) test cases. The test cases, via a custom test environment, build up a test report in addition to the standard JUnit XML output that GTest produces on its own.
I would like to add one final test that ensures that the Google Test suite produced its test report after all other tests in the suite execute. In other words, I would like to force which test executes last so it can write the custom output and then verify that it was properly written, failing if it was not.
The solution should work even if Google Test is executing tests in random order. Can I force one test to run last? Can I write a test that GTest won't automatically discover, call it myself from my "main", and have its results rolled into the rest of them, or ??
I see no way to do this with the current GTest API, but thought it was worth asking.
This is probably closest to what you're looking for.
https://github.com/google/googletest/blob/master/docs/advanced.md#sharing-resources-between-tests-in-the-same-test-suite
Perhaps you can use the destruction of the static object to collect information about all tests that ran.
However, beware of forks.
I really would suggest writing your own main(): fork the test process and wait for the child to finish so you can collect data from it.

Golang - Effective test of multiple packages

I want to execute all tests from my application, now I do it with command:
go test ./app/...
Unfortunately it takes quite a long time, even though the individual tests run quite fast. I think the problem is that Go needs to compile every package (with its dependencies) before it runs the tests.
I tried to use the -i flag; it helps a bit, but I'm still not satisfied with the testing time.
go test -i ./app/...
go test ./app/...
Do you have any better ideas for how to efficiently test multiple packages?
This is the nature of go test: it builds a special test binary with additional code to execute (this is how it tracks code coverage, for example).
If it isn't fast enough, you have two options:
1) Use bash tooling to compile a list of packages (e.g. using ls), and then execute them each individually in parallel. There are many ways to do this in bash.
The problem with this approach is that the output will be interleaved and difficult to track down failures.
2) Use t.Parallel() in each of your tests to allow the test runtime to execute them in parallel. Since Go 1.5, go test runs with GOMAXPROCS set to the number of cores on your CPU, which allows tests to run concurrently. Tests are still run synchronously by default; you have to call t.Parallel() in each test to tell the runtime it is OK to execute that test in parallel (see the sketch after this list).
The problem with this approach is that it assumes you followed best practices and used SoC/decoupling, that you don't have global state that mutates in the middle of another test, that you have no mutex locks (or very few of them), no race condition issues (use -race), etc.
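A minimal sketch of option 2 (test names are made up); each test that is safe to run concurrently opts in by calling t.Parallel():
package mypkg_test

import "testing"

func TestAlpha(t *testing.T) {
    t.Parallel() // may run concurrently with other parallel tests in this package
    // ... assertions that touch no shared state ...
}

func TestBeta(t *testing.T) {
    t.Parallel()
    // ... assertions that touch no shared state ...
}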
--
Opinion: personally, I set up my IDE to run gofmt and go test -cover -short on every save. That way, my code is always formatted and my tests are run, only within the package I am in, telling me if something failed. The -cover flag works with my IDE to show me the lines of code that have been tested versus not tested. The -short flag allows me to write tests that I know will take a while to run; within those tests I can check testing.Short() to decide whether to t.Skip() that test. There should be packages available for your favorite IDE to set this up (I did it in Sublime, Vim and now Atom).
That way, I have instant feedback within the package I'm editing.
Before I commit the code, I then run all tests across all packages. Or I can just have the CI server do it.
Alternatively, you can make use of the -short flag and build tags (e.g. go test -tags integration) to refactor your tests and separate your unit tests from integration tests. This is how I write my tests:
Tests that are fast and can run in parallel: I make these run by default with go test and go test -short.
Slow tests, or tests that require external components, require additional input to run, e.g. go test -tags integration. This pattern means a plain go test does not run the integration tests; you have to specify the additional tag. I don't run the integration tests across the board either; that's what my CI servers are for. (The commands below sketch this workflow.)
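For example, under this pattern a day-to-day workflow might look like the following commands (the integration tag name is just a convention):
go test -short ./...            # quick pass: skips tests guarded by testing.Short()
go test ./...                   # default pass: all unit tests, no integration-tagged files
go test -tags integration ./... # full pass: also compiles and runs the tagged integration tests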
If you follow a consistent naming scheme for your tests, you can easily reduce the number of them you execute by using the -run flag.
Quoting from go help testflag:
-run regexp
Run only those tests and examples matching the regular expression.
So let's say you have 3 packages: foo, bar and qux. Tests of those packages are all named like TestFoo.., TestBar.. and TestQux.. respectively.
You could do go test -run '^Test(Foo|Bar)' ./... to run only the tests from the foo and bar packages. (Note that adding a trailing * would make the group optional, so the pattern would then match every test that starts with Test.)

unit testing failing when run in batch

I am new to unit testing. I have created various tests, and when I run each test one by one, all tests pass. However, when I run them as a whole batch, some tests fail. Why is that, and how can I correct it?
To resolve this issue it is important to follow certain rules when writing unit tests. Some of the rules are easy to follow and apply, while others may need further consideration depending on your circumstances.
Each test should set up a unique set of data. This is particularly important when you work with persistent data, e.g. in a database. If the test creates a user with a particular user ID, write the test so that it uses a different user ID each time. For example (C#):
var user = new User(Guid.NewGuid());
At the end of each test, clean up the data that the test has created. For example, in the teardown method remove the data that you created (C#, NUnit):
[TearDown]
public void TheTearDownMethod() {
_repository.Delete(_user);
}
Variations are possible; e.g. when you test against a database you may choose to load a backup just before you run the test suite. If you craft your tests carefully, you don't need to clean up the database after each test (or subset of tests).
To get from where you are now (each test passes when run in isolation) to where you would like to be, start by running the first two tests in sequence and make them pass. Then run three in sequence, make them pass, and so on. In each iteration, identify which previous test causes the added test to fail, and resolve that dependency. This way you learn a lot about your tests, but also about how to avoid writing tests that depend on each other.
Once the suite passes in one batch, run it frequently so that you detect dependencies as early as possible.
This doesn't cover all scenarios and variations but hopefully gives you a guideline for building on what you already have.
Probably some of your tests are dependent on the prior state of the machine. Unit tests should not depend on the previous state of the process/machine, so you should look at the failing tests and work out what they are depending on.
Sometimes final conditions of one test have an impact on initial conditions of the next.
Manual run and batch run may have different behaviour regarding how initial conditions of each test are set.
You obviously have side effects from some tests that create unintended dependencies. Debug.
Good unit tests are atomic and have zero dependencies on other tests (important). A good practice is that each test creates (and removes) everything it depends on before running. Cleaning up afterwards is also good practice; it really helps and is recommended, but is not 100% necessary. See the sketch below.
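As an illustration of that idea in Go (the repository and names here are made up), each test can create its own data and register its own cleanup, so that no state leaks into the next test:
package mypkg_test

import (
    "fmt"
    "testing"
)

// fakeRepo stands in for whatever shared resource the tests touch.
var fakeRepo = map[string]string{}

func TestCreateUser(t *testing.T) {
    // Use a test-specific key so this test never collides with another one.
    id := fmt.Sprintf("user-%s", t.Name())
    fakeRepo[id] = "alice"

    // t.Cleanup removes the data this test created, even if the test fails.
    t.Cleanup(func() { delete(fakeRepo, id) })

    if fakeRepo[id] != "alice" {
        t.Fatalf("expected alice, got %q", fakeRepo[id])
    }
}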