I want to disable INSTANTIATE_TEST_CASE_P at run time. Something like:
if(condition)
INSTANTIATE_TEST_CASE_P(DISABLE_TEST, xxx) // disable
else
INSTANTIATE_TEST_CASE_P(TEST, xxx) // do not disable
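There is no conditional form of the macro itself, but here is a minimal sketch of one common workaround, assuming the condition can be evaluated during static initialization (for example from an environment variable); the names ShouldRunXxx, XxxParams and RUN_XXX_TESTS are illustrative, not part of gtest:

#include <gtest/gtest.h>
#include <cstdlib>
#include <vector>

// Illustrative runtime condition; here it is read from an environment variable.
bool ShouldRunXxx() { return std::getenv("RUN_XXX_TESTS") != nullptr; }

std::vector<int> XxxParams() {
    // An empty parameter list generates no test cases,
    // which effectively disables the instantiation.
    return ShouldRunXxx() ? std::vector<int>{1, 2, 3} : std::vector<int>{};
}

class XxxTest : public ::testing::TestWithParam<int> {};

TEST_P(XxxTest, DoesSomething) { /* ... */ }

INSTANTIATE_TEST_CASE_P(Runtime, XxxTest, ::testing::ValuesIn(XxxParams()));

Note that the generator is evaluated before main() runs, so command-line flags parsed in main() are not available at that point, and very recent gtest versions may warn about instantiations that generate no tests.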
In a test case I would like to test a function which, in debug mode, generates an assertion for invalid input. This unfortunately stops the Catch test runner. Is there any way to bypass this assertion so that the test runner keeps going?
Here is my test case:
SCENARIO("Simple test case", "[tag]") {
GIVEN("some object") {
MyObject myobject;
WHEN("object is initialized with invalid data") {
// method init generates an assertion when parameters are invalid
bool result = myObject.init(nullptr, nullptr, nullptr, nullptr);
REQUIRE(false == result);
THEN("data processing can't be started") {
}
}
}
}
Usually assert is a macro doing something like
#define assert(e) \
    ((void) ((e) \
        ? 0 \
        : ((void) printf("%s:%u: failed assertion `%s'\n", __FILE__, __LINE__, #e), \
           abort(), /* <-- this aborts your program after the printf above */ \
           0)))
and this macro is enabled in debug builds (i.e. when NDEBUG is not defined). For more specifics, look into your standard library's assert.h.
So, if you have a binary library against which you link your test cases, you will need to tell the dev team that, unless they gave you a release build with no asserts enabled, you won't be able to unit-test their API for negative test cases.
If you need to test a header-only library or you compile against the source code to be tested, you'll need to
compile your test cases/suite with -DNDEBUG; and/or
define your own assert macro (e.g. one that throws an exception which you'll catch) and hope that your specific assert.h will test whether the macro is already defined and won't try to define it again (again, look for specifics in your compiler/std lib's assert.h header); see the sketch below.
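For the second option, here is a minimal sketch, assuming you can arrange for this override to take effect after <assert.h>/<cassert> has been included by the code under test (the header name and exception type are illustrative):

// throwing_assert.h -- illustrative header name
#include <stdexcept>
#include <string>

// Replace the standard assert with one that throws instead of aborting,
// so the test framework can intercept the failure.
#undef assert
#define assert(e) \
    ((e) ? (void)0 \
         : throw std::logic_error(std::string("assertion failed: ") + #e))

With assert throwing instead of aborting, the Catch test can check the negative case with REQUIRE_THROWS(myObject.init(nullptr, nullptr, nullptr, nullptr)); instead of relying on a return value.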
I'm testing some code that uses CHECK from glog and I'd like to test that this check fails in certain scenarios. My code looks like:
void MyClass::foo() {
// stuff...
// It's actually important that the binary gets aborted if this flag is false
CHECK(some_flag) << "flag must be true";
// more stuff...
}
I've done some research into gtest and how I might be able to test for this. I found EXPECT_FATAL_FAILURE, EXPECT_NONFATAL_FAILURE, and HAS_FATAL_FAILURE, but I haven't managed to figure out how to use them. I'm fairly confident that if I change CHECK(some_flag) to EXPECT_TRUE(some_flag) then EXPECT_FATAL_FAILURE will work correctly, but then I'm introducing test dependencies in non-test files, which is... icky.
Is there a way for gtest to catch the abort signal (or whatever CHECK raises) and expect it?
aaaand I found an answer 5 minutes after posting this question. Typical.
This can be done using Death tests from gtest. Here's how my test looks:
TEST(MyClassTest, foo_death_test) {
    MyClass clazz(false);  // make some_flag false so the CHECK fails
    ASSERT_DEATH({ clazz.foo(); }, "must be true");
}
This passes. Woohoo!
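Two details worth adding, not from the original answer: the second argument to ASSERT_DEATH is a regular expression matched against the child process's stderr, and gtest recommends the "threadsafe" death-test style when the program may have started threads. A minimal sketch of setting that in main():

#include <gtest/gtest.h>

int main(int argc, char** argv) {
    ::testing::InitGoogleTest(&argc, argv);
    // "fast" is the default; "threadsafe" re-executes the test binary for
    // each death test, which is safer in multithreaded programs.
    // (Very recent gtest releases also offer GTEST_FLAG_SET for this.)
    ::testing::FLAGS_gtest_death_test_style = "threadsafe";
    return RUN_ALL_TESTS();
}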
Is there a legitimate way to write down a test case for which I intend to write the full test function later on, like the pending tests of mochajs?
The package docs describe such an example with testing.(*T).Skip:
Tests and benchmarks may be skipped if not applicable with a call to the Skip method of *T and *B:
func TestTimeConsuming(t *testing.T) {
if testing.Short() {
t.Skip("skipping test in short mode.")
}
...
}
The message you provide to Skip will be printed if you run go test with the -v flag (in this example you'll also need to pass the -short flag to see the skip message).
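For a mochajs-style pending test, a minimal sketch (the test name is illustrative) is a test whose body does nothing but skip until the real implementation is written:

func TestNotImplementedYet(t *testing.T) {
    // Placeholder ("pending") test: always skipped.
    t.Skip("pending: implement me")
}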
Does anyone know how to fail only one step in a test and let the test finish all the steps, using the Allure framework?
For example, I have one test which consists of 3 test steps, and each of the steps has its own assertion. It can look like this:
@Test
public void test() {
    step1();
    step2();
    step3();
}

@Step
public void step1() {
    Assert.assertEquals(1, 0);
}

@Step
public void step2() {
    Assert.assertEquals(1, 1);
}

@Step
public void step3() {
    Assert.assertEquals(2, 2);
}
When step1 fails, the test method fails too. Is there a possibility to finish the other two steps with their own assertions and not fail the test, like TestNG does with SoftAssert (org.testng.asserts.SoftAssert)?
As a result, I would like to see a report showing all broken and passed test steps (in one test method), like in the Allure 1.4.9 release https://github.com/allure-framework/allure-core/releases/tag/allure-core-1.4.9 (see the report screenshot there).
Maybe you can, but you shouldn't. You're breaking the concept of a test. A test is something that either passes or fails with a description of a failure. It is not something that can partially fail.
When you write a test you should include only those assertions that are bound to each other. Like if the first assertion fails, then the second is not needed by your functionality at all. That means if you have assertions that are not dependent on each other – you better make a couple of test methods and they will be completely separated and will fail separately.
In short, the test should not continue after a failed step and that's it. Otherwise – it's a bad test.
P.S. That's why JUnit does not allow soft assertions.
P.P.S. If you really, really, really need to check all three things – a possible workaround is using an ErrorCollector (sketch below).
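Here is a minimal JUnit 4 sketch of the ErrorCollector workaround (the class name and values are illustrative, not from the original answer):

import static org.hamcrest.CoreMatchers.equalTo;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class SoftStepsTest {
    // ErrorCollector records failures but lets the method keep running;
    // all collected failures are reported when the test finishes.
    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void test() {
        collector.checkThat(0, equalTo(1)); // recorded, execution continues
        collector.checkThat(1, equalTo(1));
        collector.checkThat(2, equalTo(2));
    }
}

All three checks run; the test is still marked failed at the end because a failure was collected.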
With respect to NUnit:
Is there a mechanism to conditionally ignore a specific test case?
Something along the lines of:
[TestCase(1,2)]
[TestCase(3,4, Ignore=true, IgnoreReason="Doesn't meet conditionA", Condition=IsConditionA())]
public void TestA(int a, int b)
So is there any such mechanism, or is the only way to create a separate test for each case and call Assert.Ignore in the test body?
You could add the following to the body of the test:
if (a == 3 && b == 4 && !IsConditionA()) { Assert.Ignore(); }
You would have to do this for every test case you want to ignore. You would not replicate the test body in this case, but you would add to it for every ignored test case.
I think it helps test readability to minimize the conditional logic inside the test body. But you can definitely generate the test cases dynamically: put the TestCaseSource attribute on the test and, in a separate method, build the list of cases to run using the NUnit TestCaseData object.
That way only the cases that are valid to execute are run, but you still have a chance to log etc. the skipped cases (see the sketch after the link below).
http://www.nunit.org/index.php?p=testCaseSource&r=2.6.4
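A minimal sketch of that approach in NUnit 3 syntax (the condition, class name and values are illustrative, not from the original answer):

using System.Collections.Generic;
using NUnit.Framework;

public class ConditionalCaseTests
{
    // Stand-in for whatever runtime condition needs to be evaluated.
    static bool IsConditionA() { return false; }

    // Cases are built when the tests are loaded, so the condition is
    // evaluated at run time instead of being hard-coded in an attribute.
    public static IEnumerable<TestCaseData> Cases()
    {
        yield return new TestCaseData(1, 2);

        if (IsConditionA())
            yield return new TestCaseData(3, 4);
        else
            yield return new TestCaseData(3, 4).Ignore("Doesn't meet conditionA");
    }

    [TestCaseSource(nameof(Cases))]
    public void TestA(int a, int b)
    {
        Assert.That(a, Is.LessThan(b));
    }
}

The ignored case still shows up in the test results with its reason, so nothing is silently dropped.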