I'm using the Google Test and Google Mock frameworks for a project's unit tests. I have several unit test projects and want to automate my build so that it runs all of them.
I was expecting the unit test executables to return 0 on success and 1 (or some other non-zero value) on any test failure, but I'm getting 1 even when all tests pass. I'm getting some GMock warnings, but I couldn't find any documentation about warnings affecting the return value.
I tried running the tests with a filter so that only one test case runs, one that triggers no GMock warnings, and I still get 1 as the return value.
I had a couple of DISABLED test cases, so I commented them out. I'm still getting 1 as the return value.
According to the documentation and the code comments for the RUN_ALL_TESTS macro, the return value should be 0.
I can't think of anything else that could cause a return value of 1. Am I missing anything?
If you look at the definition of the RUN_ALL_TESTS() macro in gtest.h, it clearly states that 0 is returned when there are no failures:
// Use this macro in main() to run all tests. It returns 0 if all
// tests are successful, or 1 otherwise.
//
// RUN_ALL_TESTS() should be invoked after the command line has been
// parsed by InitGoogleTest().
#define RUN_ALL_TESTS()\
(::testing::UnitTest::GetInstance()->Run())
Apparently even warnings (from GMock) may result in a return value of 1. Try what happens if you get rid of the GMock warnings (e.g. by using something like NiceMock<> to wrap your mock class instance).
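As a rough sketch (assuming a recent GMock with the MOCK_METHOD macro; the Turtle/MockTurtle names are invented for this example, not taken from the question): wrapping the mock in NiceMock<> suppresses "uninteresting mock function call" warnings, and main() has to forward the value of RUN_ALL_TESTS() so that the process exit code actually reflects the test results.
#include "gmock/gmock.h"
#include "gtest/gtest.h"

// Hypothetical interface and mock, just to illustrate NiceMock<>.
class Turtle {
 public:
  virtual ~Turtle() = default;
  virtual void PenDown() = 0;
  virtual void Forward(int distance) = 0;
};

class MockTurtle : public Turtle {
 public:
  MOCK_METHOD(void, PenDown, (), (override));
  MOCK_METHOD(void, Forward, (int distance), (override));
};

TEST(PainterTest, DrawsLine) {
  // NiceMock<> silences "uninteresting call" warnings for calls
  // that have no matching EXPECT_CALL.
  ::testing::NiceMock<MockTurtle> turtle;
  EXPECT_CALL(turtle, Forward(10));

  turtle.PenDown();    // uninteresting call: no warning thanks to NiceMock
  turtle.Forward(10);  // satisfies the expectation
}

int main(int argc, char** argv) {
  ::testing::InitGoogleMock(&argc, argv);  // also initializes Google Test
  // Forward the result so the process exit code is 0 on success, 1 otherwise.
  return RUN_ALL_TESTS();
}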
I have a unit test, written with JUnit 5 (Jupiter), that is failing. I do not currently have time to fix the problem, so I would like to mark the test as an expected failure. Is there a way to do that?
I see @Disabled, which causes the test not to be run. I would like the test to still run (and ideally fail the build if it starts to work), so that I remember that the test is there.
Is there such an annotation in JUnit 5? I could use assertThrows to catch the error, but I would like the build output to indicate that this is not a totally normal test.
You can disable the failing test with the @Disabled annotation. You can then add another test that asserts that the first one does indeed fail:
@Test
@Disabled
void fixMe() {
    Assertions.fail();
}

@Test
void fixMeShouldFail() {
    assertThrows(AssertionError.class, this::fixMe);
}
I'm writing a simple Postman test that even checks if true == false but it always passes. What am I doing wrong? You can see the green light here:
Just a single test on its own, without the wrapper function, will fail [good!], but that doesn't seem like a scalable way to write a lot of tests.
So wrapping assertions in pm.test() with either a function() or an () => arrow function means everything false passes... ???
If I use a test runner, or check the test results below, I can see the failures. So maybe that little happy green light in the test authoring panel is just buggy and should be ignored? Or maybe it indicates a syntax error rather than a failed result? Confusing.
I think there is a misunderstanding here.
pm.expect(true).to.eql(false); throws an error.
If it is wrapped in a test, this error is caught.
If there is no test wrapper, it is not caught.
The red/green dot next to "Tests" just indicates whether the JavaScript was executed without problems.
So if you execute this as a test, the JavaScript went through without errors, hence the green dot, because the error was caught by the test function.
If you only execute the .expect() without a test, the error is not caught, so the JavaScript fails, hence the red dot.
Did you check the Test Results area at the bottom?
There you can clearly see that a test which expects true to equal false is failing.
When I create a TEST or TEST_F test, how can I know that my assertion is actually executing?
The problem I have is that when I have an empty TEST_F, for example,
TEST_F(myFixture, test1) {}
When it runs, gtest says this test passes. I would have expected the test to fail until I write test code. Anyway.
So my problem is that when gtest says a test is "OK" or that it passed, I can't trust it, because a test can "pass" even when there is no test code.
It would be nice to print what my EXPECT_ or ASSERT calls are doing and then see that they pass. The problem is that any std::cout calls I add seem to be out of sync with the test results at the end; the output messages are not interleaved with my own std::cout calls.
Is there a verbose option for Google Test? How can I be sure the EXPECT that I coded is actually running?
You might consider looking at TDD (Test-Driven Development): https://en.wikipedia.org/wiki/Test-driven_development
write one test => it will fail
write code to make the test pass => the test passes
Rinse and repeat: express each requirement as a test that initially fails, then write code to make that test pass (see the sketch below).
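As a minimal sketch of that cycle with Google Test (the Add() function here is purely hypothetical, not from the question):
#include "gtest/gtest.h"

// Hypothetical function under test; deliberately stubbed out so that the
// test below fails first (the "red" step).
int Add(int a, int b) {
  return 0;
}

// Step 1: write the test. While Add() is just a stub it fails, which
// proves the EXPECT_EQ is really being executed.
TEST(AddTest, AddsTwoNumbers) {
  EXPECT_EQ(5, Add(2, 3));
}

// Step 2: change Add() to "return a + b;" and re-run; the same test now
// passes (the "green" step), and you know the assertion ran because you
// saw it fail before.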
I want to add a test case for functionality that is not yet implemented and mark this test case as "it's OK that it fails".
Is there a way to do this?
EDIT:
I want the test to be executed, and the framework should verify that it is failing for as long as the test case is in the "expected fail" state.
EDIT2:
It seems that the feature I am interested in does not exist in Google Test, but it does exist in the Boost Unit Test Framework and in LIT.
EXPECT_NONFATAL_FAILURE is what you want to wrap around the code that you expect to fail. Note that you will have to include the gtest-spi.h header file:
#include "gtest-spi.h"
// ...
TEST_F( testclass, testname )
{
EXPECT_NONFATAL_FAILURE(
// your code here, or just call:
FAIL()
,"Some optional text that would be associated with"
" the particular failure you were expecting, if you"
" wanted to be sure to catch the correct failure mode" );
}
Link to docs: https://github.com/google/googletest/blob/955c7f837efad184ec63e771c42542d37545eaef/docs/advanced.md#catching-failures
You can prefix the test name with DISABLED_.
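For example (MyFixture and FeatureUnderConstruction() are invented placeholders): a test whose name starts with DISABLED_ is skipped by default, reported as DISABLED in the summary, and can still be run on demand with --gtest_also_run_disabled_tests.
#include "gtest/gtest.h"

// Hypothetical fixture and feature, just to show the naming convention.
class MyFixture : public ::testing::Test {};

bool FeatureUnderConstruction() { return false; }  // placeholder for the unfinished code

// Skipped by default; run it explicitly with --gtest_also_run_disabled_tests.
TEST_F(MyFixture, DISABLED_NotImplementedYet) {
  EXPECT_TRUE(FeatureUnderConstruction());
}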
I'm not aware of a direct way to do this, but you can fake it with something like this:
try {
    // do something that should fail and throw an exception
    ...
    EXPECT_TRUE(false); // this should not be reached!
} catch (...) {
    // return or print a message, etc.
}
Basically, the test will fail if it reaches the contradictory expectation.
It would be unusual to have a unit test in an expected-to-fail state. Unit tests can test for positive conditions ("expect x to equal 2") or negative conditions ("expect save to throw an exception if name is null"), and they can be flagged not to run at all (if the feature is pending and you don't want the noise in your test output). But what you seem to be asking for is a way to negate a feature's test while you're working on it. This goes against the tenets of Test-Driven Development.
In TDD, what you should do is write tests that accurately describe what a feature should do. If that feature isn't written yet then, by definition, those tests will and should fail. Then you implement the feature until, one by one, all those tests pass. You want all the tests to start as failing and then move to passing. That's how you know when your feature is complete.
Think of how it would look if you could mark failing tests as passing, as you suggest: all the tests would pass and everything would look complete even though the feature didn't work. Then, once you were done and the feature worked as expected, your tests would suddenly start to fail until you went in and unflagged them. Beyond being a strange way to work, this workflow would be very prone to error and false positives.
I'm testing a set of classes, and my unit tests so far are along these lines:
1. read in some data from file X
2. create new object Y
3. sanity assert some basic properties of Y
4. assert advanced properties of Y
There are about 30 of these tests, which differ in the input and in the properties of Y that are checked. However, at the current state of the project, it sometimes crashes at #2 or already fails at #3. It should never crash at #1. For the time being, I'm accepting all failures at #4.
I'd like to, for example, see a list of the unit tests that fail at #3, but for now ignore all those that fail at #4. What's the standard approach/terminology for setting this up? I'm using JUnit for Java with Eclipse.
You need reporting/filtering on your unit test results.
JUnit itself wants your tests to pass, fail, or not run - nothing in between.
However, it doesn't care much about how those results are tied to passing or failing the build, or how they are reported.
Using tools like Maven (the Surefire plugin) and some custom code, you can categorize your tests to distinguish between 'hard failures', 'bad, but let's go on', etc. But that's build validation or reporting based on test results, rather than testing itself.
(Currently, our build process relies on annotations such as #Category(WorkInProgress.class) for each test method to decide what's critical and what's not).
What I can think of is to create assert methods that check a system property to decide whether to execute the assert:
public static void assertTrue(boolean assertion, int assertionLevel) {
    int pro = getSystemProperty(...);
    // only run the assertion if the configured level is at least assertionLevel
    if (pro >= assertionLevel) {
        Assert.assertTrue(assertion);
    }
}