Our team has a very mature suite of Google Test (GTest) test cases. The test cases, via a custom test environment, build up a test report in addition to the standard JUnit XML output that GTest produces on its own.
I would like to add one final test that ensures that the Google Test suite produced its test report after all other tests in the suite execute. In other words, I would like to force which test executes last so it can write the custom output and then verify that it was properly written, failing if it was not.
The solution should work even if Google Test is executing tests in random order. Can I force one test to run last? Or can I write a test that GTest won't automatically discover, call it myself from my main(), and have its results rolled into the rest of them?
I see no way to do this with the current GTest API, but thought it was worth asking.
This is probably closest to what you're looking for.
https://github.com/google/googletest/blob/master/docs/advanced.md#sharing-resources-between-tests-in-the-same-test-suite
Perhaps you can use the destruction of the static object to collect information about all tests that ran.
However, beware of forks.
I would really write your own main(), fork the test process, and wait for the child to finish so you can collect data from it.
I am having a hard time running code coverage, since most of the tools (including the Visual Studio one) require unit tests.
I don't understand why I need to create unit tests. Why can't I simply run code coverage against my own console .exe application?
Just press F5 and get the report, without putting effort into creating unit tests or whatever.
Thanks.
In general, with a good test coverage tool, coverage data is collected for whatever causes execution of the code. You should be able to exercise the code by hand and get coverage data, and/or execute the code via a test case and get coverage data.
There's a question of how you track the coverage information back to a test. Most coverage tools just collect coverage data; some other mechanism is needed to associate that coverage data with a set of tests or a specific test. [Note that none of this discussion cares whether your tests pass. It's up to you whether you want to track coverage data for failed tests or not; this information is often useful when trying to find out why a test failed.]
You can associate coverage data with tests by convention; Semantic Designs' test coverage tools (from my company) do this. After setting up the coverage collection, you run any set of tests, by any means you desire. Each program execution produces a test coverage vector with a date stamp. You can combine the coverage vectors for such a set into a single set (the UI helps you do this, or you can do it in a batch script by invoking the UI as a command-line tool), and then you associate the set of tests you ran with the combined coverage data. Some people associate individual tests with the coverage data collected by that test's individual execution. (Doing this allows you to later discover whether one test covers everything another test covers, implying you don't need the second one.)
You can be more organized: you can configure your automated tests to each exercise the code and automatically store the generated vector away in a place unique to the test. This is just a matter of adding a bit of scripting to each automated test. Some test coverage tools come with unit-test mechanisms that have a way to do this. (Our tools don't insist on a specific test framework, so they work with any framework.)
Suppose I have two system tests, A and B, where A writes a record to a database and B tries to modify it. When A fails, B will fail as well. A and B are written as "unit tests" (test methods); A and B are also test cases in TFS, automated and linked to these "unit tests". I have put both of them on a test plan, in a test suite. I want to execute them with the "Run Functional Tests" step.
How can I tell TFS to execute them in the right order?
What is the best practice for developing tests like these?
You could create an ordered test, which is a container that holds other tests and guarantees that the tests run in a specific order.
For how to create an ordered test, you can refer to this tutorial.
In TFS, you can follow the steps below to run an ordered test:
1) Add an ordered test file to your test project and use it to define the testing order.
2) In your build definition, add a Run Functional Tests task and change the Test Assembly setting so it picks up the ordered test.
3) In the Test Drop location I have the complete project, and in the Executions folder I have the ordered test.
Hope it helps.
Update
Ordering is possible for manual tests, but not for automated tests. If you need ordering support for automated tests, please vote for this user voice item:
enable changing the order of test cases on the web gui and let them be tested in this order for automated tests
https://visualstudio.uservoice.com/forums/330519-team-services/suggestions/13489221-enable-changing-the-order-of-test-cases-on-the-web
I want to execute all the tests from my application. Right now I do it with the command:
go test ./app/...
Unfortunately it takes quite a long time, even though individual tests run quite fast. I think the problem is that go needs to compile every package (with its dependencies) before it runs the tests.
I tried using the -i flag; it helps a bit, but I'm still not satisfied with the testing time.
go test -i ./app/...
go test ./app/...
Do you have any better ideas for how to efficiently test multiple packages?
This is the nature of go test: it builds a special test binary with additional code to execute (this is, for example, how it tracks code coverage).
If that isn't fast enough, you have two options:
1) Use bash tooling to compile a list of packages (e.g. using ls), and then execute each of them individually in parallel. There are many ways to do this in bash.
The problem with this approach is that the output will be interleaved, making it difficult to track down failures.
2) Call t.Parallel() in each of your tests to allow the test runtime to execute them in parallel. Since Go 1.5, go test runs with GOMAXPROCS set to the number of cores on your CPU, which allows tests to run concurrently. Tests are still run sequentially by default, though: you have to call t.Parallel() in each test to tell the runtime it is OK to execute that test in parallel.
The problem with this approach is that it assumes you followed best practices and used separation of concerns and decoupling, have no global state that mutates in the middle of another test, no mutex locks (or very few of them), no race conditions (use -race), and so on. A sketch follows below.
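As a minimal sketch of option 2, assuming a hypothetical package with two independent tests (the names and the arithmetic are invented for illustration):

package mathutil

import "testing"

// Both tests call t.Parallel(), so the test runner may execute them
// concurrently (up to the -parallel limit, which defaults to GOMAXPROCS).
func TestAddPositive(t *testing.T) {
    t.Parallel() // safe: no shared state with other tests
    if got := 2 + 3; got != 5 {
        t.Fatalf("expected 5, got %d", got)
    }
}

func TestAddNegative(t *testing.T) {
    t.Parallel()
    if got := -2 + -3; got != -5 {
        t.Fatalf("expected -5, got %d", got)
    }
}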
--
Opinion: Personally, I set up my IDE to run gofmt and go test -cover -short on every save. That way, my code is always formatted and my tests are run, only within the package I am in, telling me if something failed. The -cover flag works with my IDE to show me the lines of code that have been tested versus not tested. The -short flag allows me to write tests that I know will take a while to run; within those tests I can check testing.Short() to see if I should t.Skip() that test (sketched below). There should be packages available for your favorite IDE to set this up (I did it in Sublime, Vim and now Atom).
That way, I have instant feedback within the package I'm editing.
Before I commit the code, I then run all tests across all packages. Or I can just have the CI server do it.
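A small sketch of that -short pattern, with an invented test name and an artificial delay standing in for genuinely slow work:

package app

import (
    "testing"
    "time"
)

// TestSlowScenario is skipped under go test -short, which keeps the
// on-save feedback loop in the editor fast.
func TestSlowScenario(t *testing.T) {
    if testing.Short() {
        t.Skip("skipping slow test in -short mode")
    }
    time.Sleep(2 * time.Second) // stand-in for genuinely slow work
    // real assertions would go here
}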
Alternatively, you can make use of the -short flag and build tags (e.g. go test -tags integration) to refactor your tests and separate your unit tests from your integration tests. This is how I write my tests (a sketch of the build-tag setup follows this list):
Tests that are fast and can run in parallel: I make these run by default with go test and go test -short.
Slow tests, or tests that require external components: these require additional input to run, e.g. go test -tags integration. With this pattern a normal go test does not run the integration tests; you have to specify the additional tag. I don't run the integration tests across the board either; that's what my CI servers are for.
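A hedged sketch of that build-tag separation; the file contents, the package name, and the SERVICE_ADDR environment-variable convention are assumptions made for illustration:

//go:build integration
// +build integration

// This file is only compiled when the "integration" tag is supplied,
// e.g. go test -tags integration ./...
package app

import (
    "os"
    "testing"
)

// TestExternalService exercises a component outside the process. The
// SERVICE_ADDR variable is a made-up convention for this sketch.
func TestExternalService(t *testing.T) {
    addr := os.Getenv("SERVICE_ADDR")
    if addr == "" {
        t.Skip("SERVICE_ADDR not set; skipping integration test")
    }
    // dial addr and exercise the external component here
}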
If you follow a consistent naming scheme for your tests, you can easily reduce the number of tests you execute by using the -run flag.
Quoting from go help testflag:
-run regexp
Run only those tests and examples matching the regular expression.
So let's say you have 3 packages: foo, bar and qux. The tests in those packages are named like TestFoo.., TestBar.. and TestQux.. respectively.
You could run go test -run '^Test(Foo|Bar)' ./... to run only the tests from the foo and bar packages.
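A tiny, hypothetical test file showing how such a naming scheme lines up with that -run expression:

// foo/foo_test.go
package foo

import "testing"

// TestFooParse starts with "TestFoo", so go test -run '^Test(Foo|Bar)' ./...
// selects it, while a TestQuxParse in the qux package would be filtered out.
func TestFooParse(t *testing.T) {
    if got := len("foo"); got != 3 {
        t.Fatalf("expected 3, got %d", got)
    }
}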
I have never written ordered tests, as I am of the belief that it's not good practice.
Where I work I am told to do them, so let's cast aside what's good or bad practice.
I am new to MSTest, so could you help me here?
I have 10 tests that have to run in a particular order or some of them will fail.
I have created a basic test class and added all 10 tests.
I have created an ordered test and moved the tests to the right in the order I want to execute them. All fine.
I run the tests, but MSTest runs them twice. Once as the ordered test, where they all succeed, but it also runs the same tests again in no particular order.
Am I missing the obvious? If I have a set of tests that are in an ordered test, shouldn't those be removed from the normal tests and only run as part of the ordered test?
How can I make a set of tests only run as ordered tests?
Any suggestions?
I too struggled with this one, but then I found the following documentation on MSDN:
Ordered Test Overview
Apparently you don't get a list of the tests in the right order in the Test View.
Instead the ordered test appears as a single test.
To me this was not very good news, as my tests will be run twice when I choose "Run All Tests In Solution" (and fail the second time, when they run in the wrong order), but at least I got an explanation of why it behaves this way.
In VSTS, when you create an ordered test, it actually creates a separate file for that test. So when executing, you need to run only that ordered test file. It includes all the tests in a particular order, and during execution they will run according to that order only.
This is a popular question (though I agree, it's very bad practice). Check out this SO question:
How Does MSTEST/Visual Studio 2008 Team Test Decide Test Method Execution Order?
I've not done this myself, so I cannot guarantee that any of the answers in the above question work, but it's worth a shot.
This may be an old topic to answer, but this question does come up on the first page when searching on Google. I think what you are looking for is a Playlist. Create a new test playlist and then add only the tests you want to run.
I am new to unit testing. I have created various tests, and when I run each test one by one, all the tests pass. However, when I run them as a whole batch, some tests fail. Why is that? How can I correct it?
To resolve this issue it is important to follow certain rules when writing unit tests. Some of the rules are easy to follow and apply, while others may need further consideration depending on your circumstances.
Each test should set up a unique set of data. This is particularly important when you work with persistent data, e.g. in a database. If the test creates a user with a particular user ID, write the test so that it uses a different user ID each time. For example (C#):
var user = new User(Guid.NewGuid());
At the end of each test, clean up the data that the test has created. For example, in the tear-down method remove the data that you created (C#, NUnit):
[TearDown]
public void TheTearDownMethod() {
    // Remove the user that this test created so later tests start clean
    _repository.Delete(_user);
}
Variations are possible; e.g. when you test against a database, you may choose to load a backup just before you run the test suite. If you craft your tests carefully, you don't need to clean up the database after each test (or subset of tests).
To get from where you are now (each test passes when run in isolation) to where you would like to be, start by running the first two tests in sequence and make them pass. Then run three in sequence, make them pass, and so on. In each iteration, identify which previous test causes the added test to fail and resolve that dependency. This way you learn a lot about your tests, and also about how to avoid writing tests that depend on each other.
Once the suite passes in one batch, run it frequently so that you detect dependencies as early as possible.
This doesn't cover all scenarios and variations, but hopefully it gives you a guideline for building on what you already have.
Probably some of your tests are dependent on the prior state of the machine. Unit tests should not depend on the previous state of the process/machine, so you should look at the failing tests and work out what they are depending on.
Sometimes final conditions of one test have an impact on initial conditions of the next.
Manual run and batch run may have different behaviour regarding how initial conditions of each test are set.
You obviously have side effects from some tests that create unintended dependencies. Debug.
Good unit tests are atomic and have zero dependencies on other tests (important). A good practice is that each test creates (or removes) everything it depends on before running. Cleaning up afterwards is also good practice; it really helps and is recommended, but it is not 100% necessary.