When I write a test fixture to test some C code, I use the same setup as in
https://github.com/google/googletest/blob/master/googletest/docs/primer.md#test-fixtures-using-the-same-data-configuration-for-multiple-tests. The C code to be tested looks like this:
static struct {
uint32_t success;
uint32_t errors;
}stats;
uint32_t get_errors(void)
{
return stats.errors;
}
uint32_t get_success(void)
{
return stats.success;
}
void increment_errors(void)
{
stats.errors++;
}
void increment_success(void)
{
stats.success++;
}
void main_function(int arg)
{
if (arg >=0)
increment_success();
else
increment_errors();
}
Now I write unit tests for this as follows:
class MyTest : public ::testing::Test
{
protected:
void SetUp(void)
{
}
void TearDown(void)
{
}
};
TEST_F(MyTest, Test1)
{
main_function(1);
EXPECT_EQ(1, get_success());
EXPECT_EQ(0, get_errors());
}
TEST_F(MyTest, Test2)
{
main_function(40);
EXPECT_EQ(1, get_success()); // This fails: the value ends up being 2, not 1, because it includes the previous test's result
EXPECT_EQ(0, get_errors());
}
Now I have noticed that when I write different tests for this code, the values of the stats struct members do not reset, i.e. if the first test increments the success counter, then when the second test starts the success count is 1, not 0, and so on. I find this behavior odd, as I assumed the framework is supposed to reset everything for each test.
To solve this issue I ended up adding the following function to the C code:
void stats_init(void)
{
stats.errors = 0;
stats.success = 0;
}
and add this to the TearDown():
void TearDown(void)
{
stats_init();
}
This makes sure that everything gets reset. The question is: is this the correct behavior for gtest when using test fixtures? Am I wrong to assume that it should not require me to define the init() function and call it from TearDown()?
The correct behavior of gtest is to create a new, fresh MyTest instance for each TEST_F you define.
So, by defining a member attribute in the test fixture, you are sure that you will have a different instance of that member attribute in each TEST_F.
Unfortunately, you are testing a static variable, which is instantiated once. gtest does not magically know about it. So yes, you have to reset the value of your static structure between each TEST_F.
Personally, I would use SetUp() instead of TearDown() to call stats_init.
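For example, a minimal sketch of that suggestion (assuming stats_init(), get_success() and get_errors() are visible to the test file, e.g. declared in a header with C linkage):

class MyTest : public ::testing::Test
{
protected:
    void SetUp() override
    {
        stats_init(); // reset the static counters before every test
    }
};

TEST_F(MyTest, Test2)
{
    main_function(40);
    EXPECT_EQ(1, get_success()); // now passes: the counter starts from 0
    EXPECT_EQ(0, get_errors());
}

Resetting in SetUp() rather than TearDown() also protects the first test of a run from any state left behind by code that ran before the test suite started.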
Given these interfaces:
class ITemperature
{
public:
virtual ~ITemperature() = default;
virtual int get_temp() const = 0;
};
class IHumidity
{
public:
virtual ~IHumidity() = default;
virtual int get_humidity() const = 0;
};
And this SUT:
class SoftwareUnderTest
{
public:
SoftwareUnderTest(std::unique_ptr<ITemperature> p_temp,
std::unique_ptr<IHumidity> p_humidity)
: m_temp{std::move(p_temp)}, m_humidity{std::move(p_humidity)}
{}
bool checker()
{
assert(m_temp && "No temperature!");
if (m_temp->get_temp() < 50)
{
return true;
}
assert(m_humidity && "No humidity");
if (m_humidity->get_humidity() < 50)
{
return true;
}
return false;
}
private:
std::unique_ptr<ITemperature> m_temp;
std::unique_ptr<IHumidity> m_humidity;
};
And these mocks:
class MockITemperature : public ITemperature
{
public:
MOCK_METHOD(int, get_temp, (), (const override));
};
class MockIHumidity : public IHumidity
{
public:
MOCK_METHOD(int, get_humidity, (), (const override));
};
I want to write a test that checks that get_temp is called and that the second assert (the one checking that the humidity pointer is not null) fires. But when I run this test, I do get the assert, yet the expectation tells me that get_temp is never called (even though it is actually called once).
This is the test:
class Fixture : public testing::Test
{
protected:
void SetUp() override
{
m_sut = std::make_unique<SoftwareUnderTest>(m_mock_temperature, m_mock_humidity);
}
std::unique_ptr<testing::StrictMock<MockITemperature>> m_mock_temperature = std::make_unique<testing::StrictMock<MockITemperature>>();
std::unique_ptr<testing::StrictMock<MockIHumidity>> m_mock_humidity;
std::unique_ptr<SoftwareUnderTest> m_sut;
};
TEST_F(Fixture, GIVEN_AnInvalidHumidityInjection_THEN_TestMustDie)
{
EXPECT_CALL(*m_mock_temperature, get_temp).Times(1);
ASSERT_DEATH(m_sut->checker(), "No humidity");
}
Apparently, this is a known limitation, see here and here.
From what I have managed to discover by experimentation so far:
If you can live with the error message about leaking mocks (I haven't checked whether it's true or a false positive; suppressing it with AllowLeak triggers the actual crash), it can be done by making the mocks outlive the test suite and then wrapping references/pointers to them in one more interface implementation.
//mocks and SUT as they were
namespace
{
std::unique_ptr<testing::StrictMock<MockIHumidity>> mock_humidity;
std::unique_ptr<testing::StrictMock<MockITemperature>> mock_temperature;
}
struct MockITemperatureWrapper : MockITemperature
{
MockITemperatureWrapper(MockITemperature* ptr_) : ptr{ptr_} {assert(ptr);}
int get_temp() const override { return ptr->get_temp(); }
MockITemperature* ptr;
};
struct Fixture : testing::Test
{
void SetUp() override
{
mock_temperature
= std::make_unique<testing::StrictMock<MockITemperature>>();
m_mock_temperature = mock_temperature.get();
// testing::Mock::AllowLeak(m_mock_temperature);
m_sut = std::make_unique<SoftwareUnderTest>(
std::make_unique<MockITemperatureWrapper>(m_mock_temperature), nullptr);
}
testing::StrictMock<MockITemperature>* m_mock_temperature;
std::unique_ptr<SoftwareUnderTest> m_sut;
};
TEST_F(Fixture, GIVEN_AnInvalidHumidityInjection_THEN_TestMustDie)
{
EXPECT_CALL(*m_mock_temperature, get_temp).WillOnce(testing::Return(60));
ASSERT_DEATH(m_sut->checker(), "No humidity");
}
https://godbolt.org/z/vKnP7TsrW
Another option would be passing a lambda containing the whole body to ASSERT_DEATH:
TEST_F(Fixture, GIVEN_AnInvalidHumidityInjection_THEN_TestMustDie)
{
ASSERT_DEATH(
[this] {
EXPECT_CALL(*m_mock_temperature, get_temp)
.WillOnce(testing::Return(60));
m_sut->checker();
}(), "No humidity");
}
Works, but looks ugly, see here.
Last but not least: one can use a custom assert, or replace the __assert_fail function and throw from it (possibly some custom exception), then use ASSERT_THROW instead of ASSERT_DEATH. While I'm not sure replacing __assert_fail is legal standard-wise (probably not), it works in practice:
#include <sstream>
#include <stdexcept>

struct AssertFailed : std::runtime_error
{
using runtime_error::runtime_error;
};
void __assert_fail(
const char* expr,
const char *filename,
unsigned int line,
const char *assert_func )
{
std::stringstream conv;
conv << expr << ' ' << filename << ' ' << line << ' ' << assert_func;
throw AssertFailed(conv.str());
}
Example: https://godbolt.org/z/Tszv6Echj
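For completeness, a sketch (my own, not part of the original answer) of what the test itself could then look like, assuming a fixture that injects the temperature mock into the SUT and exposes it as m_mock_temperature, like the wrapper example above. Since ASSERT_THROW does not fork a child process, the expectation is verified in the same process that runs checker():

TEST_F(Fixture, GIVEN_AnInvalidHumidityInjection_THEN_AssertThrows)
{
    EXPECT_CALL(*m_mock_temperature, get_temp).WillOnce(testing::Return(60));
    // assert() has been rerouted to throw AssertFailed, so the "death" is now an exception
    ASSERT_THROW(m_sut->checker(), AssertFailed);
}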
I want to write a test that checks that get_temp is called and that the second assert (the one checking that the humidity pointer is not null) fires. But when I run this test, I do get the assert, yet the expectation tells me that get_temp is never called (even though it is actually called once).
First, you have to understand how death tests work.
Before executing the code inside the ASSERT_DEATH macro, gtest forks the test process, so the test can continue after the death happens.
The forked process then executes the code that should lead to the process's death.
The test process then joins the forked process to see the result.
The outcome is that checker() is executed and the mock is invoked in the forked process, while in the test process checker() is not invoked, so the mock is not invoked either. That is why you get an error saying the mock's expectation was not satisfied.
Now, the answer from alager makes the mock eternal, so the missing expected call is not reported. And since the code uses global state, adding other tests will lead to problems. So I would not recommend this approach.
After the edit, he moved EXPECT_CALL inside ASSERT_DEATH, so now only the forked process expects the call, but this is never verified since the process dies before the mock is checked. So I would not recommend this approach either.
So, what should you do? IMO your problem is that you are testing too much of the implementation details. You should loosen the test requirements (drop StrictMock, or even make it a NiceMock); a sketch of this is given below. Still, I find this a bit clunky. Live demo
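A minimal sketch of that idea (my own illustration, reusing SoftwareUnderTest and MockITemperature from the question): drop the expectation on get_temp entirely and replace it with a default action on a NiceMock, so nothing is left unsatisfied in the parent process and the call in the forked child is just an uninteresting call:

struct Fixture : testing::Test
{
    void SetUp() override
    {
        auto temp = std::make_unique<testing::NiceMock<MockITemperature>>();
        // Default action instead of an expectation: forces the humidity branch
        // without requiring the call to be verified.
        ON_CALL(*temp, get_temp()).WillByDefault(testing::Return(60));
        m_sut = std::make_unique<SoftwareUnderTest>(std::move(temp), nullptr);
    }
    std::unique_ptr<SoftwareUnderTest> m_sut;
};

TEST_F(Fixture, GIVEN_AnInvalidHumidityInjection_THEN_TestMustDie)
{
    ASSERT_DEATH(m_sut->checker(), "No humidity");
}

The trade-off is exactly the loosening described above: the test no longer verifies that get_temp was called.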
I would change the code in such a way that it is impossible to construct SoftwareUnderTest with nullptr dependencies. You can use gsl::not_null for that (see the sketch below).
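A rough illustration of that direction (my own sketch, not code from the question), assuming the Microsoft GSL is available and that switching the dependencies to non-owning pointers is acceptable:

#include <gsl/pointers> // gsl::not_null

class SoftwareUnderTest
{
public:
    // Passing nullptr now fails at construction time (per the GSL contract
    // policy), so checker() no longer needs the asserts at all.
    SoftwareUnderTest(gsl::not_null<ITemperature*> p_temp,
                      gsl::not_null<IHumidity*> p_humidity)
        : m_temp{p_temp}, m_humidity{p_humidity}
    {}

    bool checker()
    {
        if (m_temp->get_temp() < 50)
            return true;
        return m_humidity->get_humidity() < 50;
    }

private:
    gsl::not_null<ITemperature*> m_temp;
    gsl::not_null<IHumidity*> m_humidity;
};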
It seems to be due to the tricky mechanism that googletest uses to assert death (they mention creating a child process). I did not find a way to fix it correctly, but I found one (not so great) workaround:
SoftwareUnderTest(ITemperature* p_temp, IHumidity* p_humidity) // changed signature to allow leaks; I guess you cannot really do it in production
and then:
class Fixture : public testing::Test
{
public:
Fixture(): m_mock_temperature(new MockITemperature), m_mock_humidity(nullptr) {}
~Fixture() {
// Mock::VerifyAndClearExpectations(m_mock_temperature); // if I uncomment that, for some reason the test will fail anyway
std::cout << "Dtor" << std::endl;
// delete m_mock_temperature; // if I delete the mock correctly, the test will fail
}
protected:
void SetUp() override
{
// m_sut.reset(new SoftwareUnderTest(m_mock_temperature.get(), m_mock_humidity.get()));
m_sut.reset(new SoftwareUnderTest(m_mock_temperature, m_mock_humidity));
}
// std::unique_ptr<MockITemperature> m_mock_temperature; // if I use smart pointers, the test will fail
// std::unique_ptr<MockIHumidity> m_mock_humidity;
MockITemperature* m_mock_temperature;
MockIHumidity* m_mock_humidity;
std::unique_ptr<SoftwareUnderTest> m_sut;
};
TEST_F(Fixture, GIVEN_AnInvalidHumidityInjection_THEN_TestMustDie)
{
EXPECT_CALL(*m_mock_temperature, get_temp).Times(1).WillOnce(Return(60)); // this is needed to get past the first check; it seems you forgot it
ASSERT_DEATH(m_sut->checker(), "No humidity");
std::cout << "after checks" << std::endl;
}
Sorry, that's all I could figure out for the moment. Maybe you can submit a new issue on the gtest GitHub while waiting for a better answer.
I am new to the Google Test environment. I have some sample code written in C and want to unit test it with the Google Test framework.
Below is the sample code:
// My module function (test.c)
void Update_user_value(void)
{
int user_value;
user_value = get_val_module(); /* returns an integer value */
if(user_value == 0x1)
update_user_flag(true);
else
update_user_flag(false);
}
// This function is in the other module (stub.c), so we can use a mock function for it
static struct {
int userflag;
} user_data;

void update_user_flag(bool val)
{
if (val == true)
{
user_data.userflag = 1;
}
else
{
user_data.userflag = 0;
}
}
I want to write a test case for the Update_user_value function (only for test.c). Through this function, I am sending a flag value to the other module (update_user_flag) to set the flag.
I have written a simple Google Test like this:
TEST_F(sampletest, setuser_flag_true_testcase)
{
//Get value from module
ON_CALL(*ptr1, get_val_module()).WillByDefault(Return(0x1)); // Mock function
EXPECT_CALL(*ptr1, get_val_module()).Times(1); // Mock function
Update_user_value();
}
TEST_F(sampletest, setuser_flag_false_testcase)
{
//Get value from module
ON_CALL(*ptr1, get_val_module()).WillByDefault(Return(0x0)); // Mock function
EXPECT_CALL(*ptr1, get_val_module()).Times(1); // Mock function
Update_user_value();
}
My question: is this test case enough to validate the Update_user_value function?
Also, I want to know whether EXPECT_CALL() is appropriate for checking that a value is passed to the other module.
If my understanding is wrong, please suggest a better test case.
ON_CALL and EXPECT_CALL are macros designed to be used on mock objects. Usually the use case is as follows:
You create an interface to derive from (it will be mocked in your test).
You pass the mock object (to a method or via dependency injection).
You make expectations on this object.
See example:
class Foo {
public:
virtual ~Foo() = default;
virtual int bar() = 0;
};
class FooMock : public Foo {
public:
MOCK_METHOD0(bar, int());
};
bool check_bar_over_42(Foo& foo) {
if (foo.bar() > 42) {
return true;
}
return false;
}
TEST(check_bar_over_42_test, bar_below_42) {
FooMock fooMock{};
EXPECT_CALL(fooMock, bar()).WillOnce(testing::Return(41));
ASSERT_FALSE(check_bar_over_42(fooMock));
}
TEST(check_bar_over_42_test, bar_above_42) {
FooMock fooMock{};
EXPECT_CALL(fooMock, bar()).WillOnce(testing::Return(43));
ASSERT_TRUE(check_bar_over_42(fooMock));
}
AFAIK there is no way of using EXPECT_CALLs on C-style functions. One approach to your problem would be link-time mocking: given that get_val_module is defined in a separate library, you can create a test-only library with a get_val_module that lets you return the expected values, as sketched below. In tests you would link against the test library; in production, against the real one.
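For illustration, a minimal sketch of such a test-only fake (the file and helper names are my own, not from the question):

// fake_stub.cpp - linked into the test binary instead of the real stub.c
static int g_fake_module_val = 0;

// Test helper to control what the fake returns.
void set_fake_val_module(int v) { g_fake_module_val = v; }

// Same symbol the production code calls; extern "C" matches the C linkage.
extern "C" int get_val_module(void)
{
    return g_fake_module_val;
}

A test would then call set_fake_val_module(0x1), call Update_user_value(), and check the observable result (for example the flag set via update_user_flag, which can be faked the same way).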
I have a project_file.cpp file like:
//Private Function definition
void my_fun(int &value)
{
//Do something
}
//Function Calling
int value = any_other_function();
myclass.my_fun(value);
Now I need to fetch the value parameter through the mocked my_fun() method, as below:
utc_file.cpp
namespace TEST
{
int my_fun_val; //global variable
class myTest : public Test
{
//Do something
};
ACTION(getMyFunctionVal)
{
my_fun_val = arg0; //copy val value
}
TEST_F(myTest, readValueOfFunction)
{
EXPECT_CALL(m_MyFunMock, my_fun(_)).WillOnce(::testing::DoAll(getMyFunctionVal(),
Return(true)));
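// ... exercise the production code that ends up calling my_fun() here ...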
EXPECT_EQ(my_fun_val, 5);
}
}
The above code works fine with the global variable my_fun_val, but I don't want to use a global variable.
And we cannot make the function my_fun() public. Please suggest another way to get the parameter value in the unit test file.
Another way to solve the above problem is to compare the value inside the ACTION method.
utc_file.cpp
namespace TEST
{
class myTest : public Test
{
//Do something
};
ACTION_P(getMyFunctionVal, value)
{
EXPECT_EQ(arg0, value); // Here we can compare the value, but we cannot copy it into a reference variable and pass it back to the caller.
}
TEST_F(myTest, readValueOfFunction)
{
int expectedValue = 5;
EXPECT_CALL(m_MyFunMock, my_fun(_)).WillOnce(::testing::DoAll(getMyFunctionVal(expectedValue),
Return(true)));
}
}
Note: here we cannot use SaveArg() or SaveArgPointee() because of the pointer/reference usage. Currently gmock does not provide a mechanism to read a parameter value when it is passed as a reference.
I am trying to test a void method such as the following:
@Override
public void onApplicationEvent(ApplicationEvent myEvent) {
if (myEvent instanceof ApplicationEnvironmentPreparedEvent) {
ConfigurableEnvironment myEnv= ((ApplicationEnvironmentPreparedEvent) myEvent).getEnvironment();
setSystemVariables(myEnv);
}
}
I am using Matchers and here is the unit test (which obviously is not testing anything)
@Test
public void testOnApplicationEvent() {
loggingListener.onApplicationEvent(any(ApplicationEnvironmentPreparedEvent.class));
}
Two issues:
1. The error I get from the build is "Invalid use of Matchers", and the test doesn't pass in my Jenkins build (but it passes in the IDEA IDE).
2. How do I test these methods to keep the test coverage percentage at the desired level?
1 - This issue occurs because any is used incorrectly. Refer to the Mockito guide for details. My example below does not use any, so the problem goes away.
2 - To cover the two branches of the if, I would recommend the test cases below.
@Test
public void onApplicationEventShouldSetEnvironmentWhenApplicationEnvironmentPreparedEvent() {
ConfigurableEnvironment actualEnvironment = null;
// Given a listener with overridden setSystemVariables() to store passed env.
LoggingListener loggingListener = new LoggingListener() {
@Override
void setSystemVariables(ConfigurableEnvironment var){
actualEnvironment = var;
}
};
// Given some dummy environment which is delivered by an event.
ConfigurableEnvironment expectedEnvironment = new StandardEnvironment();
// Given a mocked event with above dummy environment.
ApplicationEnvironmentPreparedEvent mockedEvent = Mockito.mock(ApplicationEnvironmentPreparedEvent.class);
Mockito.when(mockedEvent.getEnvironment()).thenReturn(expectedEnvironment);
// When calling the method under test
loggingListener.onApplicationEvent(mockedEvent);
// Then make sure the given environment was passed and set correctly
assertSame(expectedEnvironment, actualEnvironment);
}
@Test
public void onApplicationEventShouldSkipNotApplicationEnvironmentPreparedEvent() {
// Given a listener with overridden setSystemVariables() to fail the test if called.
LoggingListener loggingListener = new LoggingListener() {
@Override
void setSystemVariables(ConfigurableEnvironment var){
fail("This method should not be called");
}
};
// Given a mocked other (not ApplicationEnvironmentPreparedEvent) event.
ApplicationEvent mockedEvent = Mockito.mock(UnknownEvent.class);
// When calling the method under test
loggingListener.onApplicationEvent(mockedEvent);
// Then make sure the environment was never asked for.
Mockito.verifyNoInteractions(mockedEvent);
}
Note: this is not compilable code, because I don't know your full production code, so treat this as an idea to apply to your real code with corresponding modifications.
In GMock, is it possible to replace a previously set expectation?
Assume a test suite has a default expectation for a specific method call, which is what most test cases want:
class MyClass {
public:
virtual int foo() = 0;
};
class MyMock {
public:
MOCK_METHOD0(foo, int());
};
class MyTest: public Test {
protected:
void SetUp() {
EXPECT_CALL(m_mock, foo()).WillOnce(Return(1));
}
MyMock m_mock;
};
TEST_F(MyTest, myTestCaseA) {
EXPECT_EQ(1, m_mock.foo());
}
This is working fine. Some of the test cases, however, have different expectations. If I add a new expectation, as shown below, it does not work.
TEST_F(MyTest, myTestCaseB) {
EXPECT_CALL(m_mock, foo()).WillOnce(Return(2));
EXPECT_EQ(2, m_mock.foo());
};
I get this message:
[ RUN ] MyTest.myTestCaseB
/home/.../MyTest.cpp:94: Failure
Actual function call count doesn't match EXPECT_CALL(m_mock, foo())...
Expected: to be called once
Actual: never called - unsatisfied and active
[ FAILED ] MyTest.myTestCaseB (0 ms)
I understand why I am getting this. The question is how to cancel the default expectation, if a test case specifies its own? Does GMock allow it or what approaches can I use to achieve the intended behaviour?
No, there's no way to clear an arbitrary expectation. You can use VerifyAndClearExpectations to clear all of them, but that's probably more than you want. I can think of several alternatives that avoid the issue:
You could work around your problem by simply calling m_mock.foo() once in advance, thus fulfilling the initial expectation. Do this before setting the test's own expectation: gMock matches calls against the newest matching expectation, so the extra call has to happen while the SetUp() expectation is still the only one.
TEST_F(MyTest, myTestCaseB) {
(void)m_mock.foo();
EXPECT_CALL(m_mock, foo()).WillOnce(Return(2));
EXPECT_EQ(2, m_mock.foo());
}
Another alternative is to change the expectation to return the value of a variable, and then update the variable prior to the test body, as described in the cookbook under Returning Live Values from Mock Methods (for a by-value return type such as int, that means ReturnPointee). For example:
void SetUp() {
m_foo_value = 1;
EXPECT_CALL(m_mock, foo()).WillOnce(ReturnPointee(&m_foo_value));
}
TEST_F(MyTest, myTestCaseB) {
m_foo_value = 2;
EXPECT_EQ(2, m_mock.foo());
}
Yet another alternative is to specify the return value and the expected call count separately.
void SetUp() {
ON_CALL(m_mock, foo()).WillByDefault(Return(1));
EXPECT_CALL(m_mock, foo()).Times(1);
}
Then, you only need to specify a new return value for the special test. When several ON_CALLs match, gMock uses the most recently defined one, so the ON_CALL in the test body overrides the one from SetUp():
TEST_F(MyTest, myTestCaseB) {
ON_CALL(m_mock, foo()).WillByDefault(Return(2));
EXPECT_EQ(2, m_mock.foo());
}