Is it possible to create a GoogleTest filter without a wildcard in it?
I have a test suite TestSuiteA
TEST_P(TestSuiteA, Blabla) {
// ...
}
INSTANTIATE_TEST_CASE_P(Apple, TestSuiteA, testing::Combine(...));
INSTANTIATE_TEST_CASE_P(Ananas, TestSuiteA, testing::Combine(...));
And a second test suite TestSuiteB
TEST_P(TestSuiteB, Kokoko) {
// ...
}
INSTANTIATE_TEST_CASE_P(Apple, TestSuiteB, testing::Combine(...));
INSTANTIATE_TEST_CASE_P(Ananas, TestSuiteB, testing::Combine(...));
I know precisely which tests I want to launch, so I don't want to use wildcards.
My filter:
Apple/TestSuiteA:Apple/TestSuiteB
But it doesn't work: 0 tests to launch.
But with this
Apple/TestSuiteA.*:Apple/TestSuiteB.*
It works
What does this .* mean, and why doesn't the filter work without it?
Thanks
Each test needs to have a unique name, and you haven't provided the exact, correct names. That's why no tests were run. Make sure you know exactly how the tests are named; you can use --gtest_list_tests to list all the tests defined in a given test executable. Then pass the exact names to --gtest_filter, separated by :. To read more about running a subset of tests, see the gtest advanced documentation.
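To expand on that: for a parameterized test, the full name has the form InstantiationName/SuiteName.TestName/ParamIndex, so Apple/TestSuiteA on its own is only a prefix of a full name and matches nothing, while the * in Apple/TestSuiteA.* is a wildcard matching any sequence of characters after the dot, i.e. every test in that instantiation. Listing the tests shows the exact names (the binary name my_tests is a placeholder, and the indices depend on your generators):
./my_tests --gtest_list_tests
Apple/TestSuiteA.
  Blabla/0
  Blabla/1
A wildcard-free filter then spells out each full name, for example:
./my_tests --gtest_filter=Apple/TestSuiteA.Blabla/0:Apple/TestSuiteB.Kokoko/0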
I have a large suite of tests that are currently running in series. I would like to make the suite parallelizable as much as possible. One big problem I have is that some tests require certain global application state to be set one way, and some require it to be set another way. Is there a way I can make the NUnit test runner group my tests based on which global state they require, and then run the tests within each group in parallel with each other, and modify the global state between groups?
For example, let's say there is a global setting Foo and a custom attribute [RequiresFoo(X)] that can be used to annotate which Foo value a test requires. At runtime I want NUnit to group all tests by their argument to RequiresFoo, counting unmarked tests as having some default Foo value.
Then for each group, I want it to:
1. Set Foo = N, where N is the Foo value for that group.
2. Run all tests in that group in parallel.
In an ideal world I would have a mock system for this global state, but that would take a lot of time that I don't have right now. Can I get NUnit to do this or something like it?
Note that I need to be able to execute any method between groups, not just set a variable. The global context I'm actually dealing with can involve starting or stopping microservices, updating configuration files, etc. I can serialize any of these requirements into a string to pass to a custom attribute, but at runtime I need to be able to run arbitrary code to parse the requirements and reconfigure the environment.
Here is a pseudo-code example.
By default tests execute in series like this:
foreach (var t in allTests)
{
Run(t);
}
NUnit's basic parallel behavior is like this:
Parallel.ForEach(allTests, t => Run(t));
I want something like this:
var grouped = allTests
.GroupBy(t => GetRequirementsFromCustomAttributes(t));
foreach (var g in grouped)
{
SetGlobalState(g.Key);
Parallel.ForEach(g, t => Run(t));
}
As you state the problem, it's not yet possible in NUnit. There is a feature planned but not yet implemented that would allow arbitrary grouping of tests that may not run together.
Your workaround is to make each "group" a test, since NUnit only allows parallelization to be specified on tests. Note that by "test" we mean either a test case or a group of tests, i.e. a fixture or a namespace suite.
Putting [Parallelizable] on a test anywhere in the hierarchy causes that test to run in parallel with other tests at the same level. Putting [NonParallelizable] on it causes that same test to run in isolation.
Let's say you have five fixtures that require a value of Foo. You would make each of those fixtures non-parallelizable. In that way, none of them could run at the same time and interfere with the others.
If you want to allow those fixtures to run in parallel with other, non-Foo fixtures, simply put them all in the same namespace, like Tests.FooRequired.
Create a SetUpFixture in that namespace - possibly a dummy without any actual setup action - and put the [Parallelizable] attribute on it.
The group of all Foo tests will then run in parallel with other tests, while the individual fixtures will not run in parallel with one another.
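A minimal sketch of that arrangement, assuming a hypothetical GlobalState holder and placeholder fixture names:
using NUnit.Framework;

namespace Tests.FooRequired
{
    // Hypothetical global state that the grouped tests depend on.
    public static class GlobalState
    {
        public static int Foo { get; set; }
    }

    // Dummy setup fixture: [Parallelizable] here lets the whole
    // Tests.FooRequired namespace run in parallel with other namespaces.
    [SetUpFixture, Parallelizable]
    public class FooGroupSetup
    {
        [OneTimeSetUp]
        public void SetFoo() => GlobalState.Foo = 42;
    }

    // Each fixture in the group is non-parallelizable, so fixtures that
    // share the Foo state never run at the same time as one another.
    [TestFixture, NonParallelizable]
    public class FirstFooTests
    {
        [Test]
        public void UsesFoo() => Assert.That(GlobalState.Foo, Is.EqualTo(42));
    }
}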
I have a unit test with two test cases (I use NUnit). The test case argument is a long string condition.
For example:
[TestCase("condition1 AND condition2 AND ... condition3")]
[TestCase("condition1 AND condition2 AND ... condition4")]
In the TeamCity Build -> Tests output I can see that only 1 test passed with status OK (2 runs). But it should show that 2 tests passed, because they are independent test cases.
As I understand it, the problem is that TeamCity identifies tests by their names but crops long names. So in my case the name is the same for both tests: it is cropped at the word "AND" and contains no last condition. It looks like:
TestClass.TestName("condition1 AND condition2 AND
I think it's a TeamCity bug/feature, but maybe someone knows whether it can be configured or fixed somehow? I want to see 2 passed tests instead of 1 in the results. Of course I can set a custom name for my test cases in code, but I don't want to fix it that way.
TeamCity version 9.1.30.476.
Thanks in advance for any help.
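For reference, the in-code workaround mentioned above would use the TestName property of NUnit's TestCase attribute (the custom names here are made up):
[TestCase("condition1 AND condition2 AND ... condition3", TestName = "CompareWithCondition3")]
[TestCase("condition1 AND condition2 AND ... condition4", TestName = "CompareWithCondition4")]
This gives each case a short, unique name that TeamCity will not crop.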
I have a big set of unit tests and some integration tests implemented with the Google Test framework (gtest).
Since there is no tagging, I am separating tests into groups using the DISABLED_ convention or by prefixing them with GROUPA_, GROUPB_, etc.
This works well: I can filter the different groups to run in different situations.
The problem I have is with typed tests that belong to different groups. Since the name of the test is fixed no matter what arguments I pass to the test fixture, I cannot assign the same test to more than one group.
My question is: can I control the name of a test somehow at runtime, before the runner starts? Is there any other way to control the name of a typed test?
As a workaround, you can include a typed test in different groups, but only for all the types at once. You can use as many prefixes as needed:
TYPED_TEST(FooTest, GROUPA_GROUPB_Bar)
{
}
Then use filter strings like FooTest*.*GROUPA*_Bar (typed test suites are instantiated as FooTest/0, FooTest/1, etc., so the wildcard after FooTest is needed to match the type index).
I cannot think of a way to map each type the test is instantiated for to a different group.
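A self-contained sketch of that workaround (the fixture and types are placeholders; older GoogleTest versions spell the registration macro TYPED_TEST_CASE instead of TYPED_TEST_SUITE):
#include <gtest/gtest.h>

template <typename T>
class FooTest : public testing::Test {};

using MyTypes = testing::Types<int, double>;
TYPED_TEST_SUITE(FooTest, MyTypes);

// The combined prefix places this test in both GROUPA and GROUPB.
TYPED_TEST(FooTest, GROUPA_GROUPB_Bar) {
  SUCCEED();
}
Running ./tests --gtest_filter='FooTest*.*GROUPA*' then selects the GROUPA instances across all types.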
I sometimes see that tests are tagged as :pending:
ExUnit.start
ExUnit.configure(exclude: :pending)
defmodule SublistTest do
use ExUnit.Case, async: true
test "empty equals empty" do
assert Sublist.compare([], []) == :equal
end
@tag :pending
test "empty is a sublist of anything" do
assert Sublist.compare([], [nil]) == :sublist
end
end
Obviously these are excluded from execution when you run the tests from the shell:
elixir sublist_test.exs
Is there a way to include :pending tests when running them from the command line?
And a second question: why do people tag tests as :pending?
You can do this with the mix test task inside a Mix project. Mix projects are super simple to set up:
$ mix new sublist
You can specify your default exclusions in test/test_helper.exs:
ExUnit.start()
ExUnit.configure(exclude: :pending)
then you can write your test in test/sublist_test.exs. To run the tests, do
$ mix test
and to include pending tests as well do
$ mix test --include pending
Now for your second question: people usually mark tests as pending because they are not implemented yet, but they don't want to forget about them. For example you might be on a tight deadline but want to make sure that the tests will eventually be completed. Or maybe the test is not working yet because you need to implement other things first.
If the tests were not excluded by default, they would convey the wrong message: that the affected tests are red. But they are to-do items rather than actual tests, so they should not fail by default.
I am trying to group individual CTest tests together but have so far been unsuccessful. For example, if I have the following tests:
# (COMMAND values are placeholders for your test executables)
add_test(NAME test1 COMMAND test1)
add_test(NAME test2 COMMAND test2)
add_test(NAME test3 COMMAND test3)
I would like to group them into a test suite that will upload to a dashboard as one test.
Maybe helpful for you is the possibility to set different labels on tests. Look into the LABELS test property (set via set_tests_properties).
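A minimal sketch of that approach, continuing the example above (the label name suite1 is arbitrary):
# Attach the same label to all three tests so they can be selected as a group.
set_tests_properties(test1 test2 test3 PROPERTIES LABELS "suite1")
Running ctest -L suite1 then executes exactly that group, and dashboards such as CDash can aggregate results per label; note, however, that the tests still appear individually rather than as a single merged test.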