Mixing GoogleTest EXPECT_CALL with EXPECT_DEATH [duplicate]

I have created a mock of an external socket API in the my_inet.cpp file.
The GMock functions for that socket API are in the mock.h file.
I am using my my_inet socket API in server.cpp.
The test is written in gtest.cpp.
The death case written here does execute, according to the output: it says that the process died. But it also says that the socket() call was never made, so the test is reported as failed.
What is the reason for this, and what is the solution?
gtest.cpp
TEST(MyTest, SocketConnectionFail)
{
    MockMyTCPAPI obj_myTCP;
    EXPECT_CALL( obj_myTCP, socket( 1, 0, 0 ))
        .Times( 1 )
        .WillOnce( Return( -1 ));
    Server obj_server( &obj_myTCP );
    EXPECT_DEATH( obj_server.InitializeSocket(), "No socket connection!");
}
server.cpp
int Server::InitializeSocket()
{
    if( ret_val_socket == -1 )
    {
        ret_val_socket = myTCPAPI->socket( PF_INET, SOCK_STREAM, 0 );
        if( ret_val_socket == -1 )
        {
            printf( "\nNo socket connection!" );
            exit(1);
        }
        return ret_val_socket;
    }
    else
    {
        printf( "Warning! Attempting to create socket again. %d" , ret_val_socket);
        return 2;
    }
    return 0;
}
my_inet.cpp
int MyTCPAPI::socket( int arg1, int arg2, int arg3 )
{
    return -1;
}
Output:
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from MyTest
[ RUN ] MyTest.SocketConnectionFail
[WARNING] /usr/src/gtest/src/gtest-death-test.cc:825:: Death tests use fork(), which is unsafe particularly in a threaded context. For this test, Google Test couldn't detect the number of threads.
No socket connection!
/home/../Documents/office/tdd/tcp/server/gtest.cpp:49: ERROR: this mock object (used in test MyTest.SocketConnectionFail) should be deleted but never is. Its address is #0x7fff2e94a890.
ERROR: 1 leaked mock object found at program exit.
/home/../Documents/office/tdd/tcp/server/gtest.cpp:56: Failure
Death test: obj_server.InitializeSocket()
Result: died but not with expected error.
Expected: No socket connection!
Actual msg:
[ DEATH ]
/home/../Documents/office/tdd/tcp/server/gtest.cpp:49: Failure
Actual function call count doesn't match EXPECT_CALL(obj_myTCP, socket( 1, 0, 0 ))...
Expected: to be called once
Actual: never called - unsatisfied and active
[ FAILED ] MyTest.SocketConnectionFail (3 ms)
[----------] 1 test from MyTest (3 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (3 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] MyTest.SocketConnectionFail
1 FAILED TEST

According to the docs, EXPECT_DEATH and ASSERT_DEATH spawn a child process which executes the death test statement (in this case obj_server.InitializeSocket()).
After the child process terminates, they check its exit status and its stderr output. If the exit status is 0, or the message written to stderr does not match the expected regular expression, the test fails. stdout, on the other hand, is not checked.
Therefore, printf( "\nNo socket connection!" ) has to be replaced with fprintf( stderr, "\nNo socket connection!" ) right before the application exits:
int Server::InitializeSocket()
{
    if( ret_val_socket == -1 )
    {
        ret_val_socket = myTCPAPI->socket( PF_INET, SOCK_STREAM, 0 );
        if( ret_val_socket == -1 )
        {
            fprintf( stderr, "\nNo socket connection!" ); // print to stderr here
            exit(1);
        }
        return ret_val_socket;
    }
    else
    {
        printf( "Warning! Attempting to create socket again. %d" , ret_val_socket);
        return 2;
    }
    return 0;
}
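Since the server calls exit(1), the same check can also be written with EXPECT_EXIT and ::testing::ExitedWithCode, which makes the expected exit status explicit (a sketch, not part of the original answer; it assumes the same obj_server as in the test above):

// ExitedWithCode(1) verifies the child's exit status explicitly, in addition
// to matching "No socket connection!" against the child's stderr output.
EXPECT_EXIT( obj_server.InitializeSocket(),
             ::testing::ExitedWithCode( 1 ),
             "No socket connection!" );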
As pointed out in the comments, the EXPECT_CALL will still fail: the death test statement runs in a forked child process, so socket() is only ever called there, and the parent's expectation of exactly one call is never satisfied. Using Times( testing::AnyNumber() ) relaxes the call count (note that only the first call will return -1; any further call would fall back to the default value, so the test could still fail in that case):
TEST(MyTest, SocketConnectionFail)
{
    MockMyTCPAPI obj_myTCP;
    EXPECT_CALL( obj_myTCP, socket( 1, 0, 0 ))
        .Times( testing::AnyNumber())
        .WillOnce( Return( -1 ));
    Server obj_server( &obj_myTCP );
    EXPECT_DEATH( obj_server.InitializeSocket(), "No socket connection!");
}
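Another option (a sketch, not from the original answer, assuming the same MockMyTCPAPI and Server as above) is to install only a default action with ON_CALL on a NiceMock, so that no call count is enforced in the parent process at all:

TEST(MyTest, SocketConnectionFailNiceMock)
{
    // NiceMock suppresses "uninteresting call" warnings; ON_CALL installs a
    // default action without any call-count expectation that the parent
    // process (which never executes InitializeSocket()) could violate.
    testing::NiceMock<MockMyTCPAPI> obj_myTCP;
    ON_CALL( obj_myTCP, socket( 1, 0, 0 ))
        .WillByDefault( testing::Return( -1 ));
    Server obj_server( &obj_myTCP );
    EXPECT_DEATH( obj_server.InitializeSocket(), "No socket connection!");
}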

Related

Inconsistent move in debug and release configurations while passing unique_ptr around?

I have some code handling simple TCP sockets using the SFML library. A socket is created using SFML facilities and returned from a function as an rvalue reference.
An organizing function then passes this socket on (currently only to be stored) and signals to its caller whether a socket was processed or not. This, however, does not work as expected.
struct TcpSocket : public ::sf::TcpSocket {};

unique_ptr<TcpSocket>&& TcpListener::nonBlockingNext()
{
    unique_ptr<TcpSocket> new_socket (new TcpSocket) ;
    listener.setBlocking(false);
    if( listener.accept(*new_socket) == ::sf::Socket::Status::Done)
    {
        new_socket->setBlocking(false);
        std::cout << "Connection established! " << new_socket.get() << "\n";
        return std::move(new_socket);
    }
    return std::move( unique_ptr<TcpSocket>(nullptr) );
}

bool ConnectionReception::processNextIncoming()
{
    unique_ptr<TcpSocket> new_socket (listener.nonBlockingNext());
    std::cout << " and then " << new_socket.get() << "\n";
    if( !new_socket ) return false;
    processNewTcpConnection( ::std::move(new_socket) );
    return true;
}
The TcpListener class used above encapsulates an sf::TcpListener by composition and simply forwards its usage.
I have a simple test that attempts a connection.
TEST(test_NetworkConnection, single_connection)
{
    ConnectionReception reception;
    reception.listen( 55555 );
    std::this_thread::sleep_for( 50ms );

    TcpSocket remote_socket;
    remote_socket.connect( "127.0.0.1", 55555 );
    std::this_thread::sleep_for( 10ms );

    EXPECT_TRUE( reception.processNextIncoming() );
}
This test fails differently in the two configurations I compile it with.
In debug (g++ -g3), the test fails unexpectedly.
[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from test_NetworkConnection
[ RUN ] test_NetworkConnection.single_connection
Connection established! 0x6cf7ff0
and then 0
test\test_NetworkConnection.cpp:24: Failure
Value of: reception.processNextIncoming()
Actual: false
Expected: true
[ FAILED ] test_NetworkConnection.single_connection (76 ms)
[----------] 1 test from test_NetworkConnection (78 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test suite ran. (87 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] test_NetworkConnection.single_connection
1 FAILED TEST
Debugging and output show that the first return in nonBlockingNext(), the one that returns a socket accepted by the listener, is reached, but in the enclosing function processNextIncoming the value of new_socket is not set/is nullptr.
In release, that is with g++ -O3, the output looks promising, but the test itself crashes with a segfault, seemingly during test teardown, maybe when freeing sockets. I determined this through additional output statements, since debugging optimized code is not very fruitful.
[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from test_NetworkConnection
[ RUN ] test_NetworkConnection.single_connection
Connection established! 0xfe7ff0
and then 0xfe7ff0
I have further noticed, while debugging the -g3 build, that the construction of new_socket in nonBlockingNext() is seemingly reached again before returning:
Thread 1 hit Breakpoint 1, network::TcpListener::nonBlockingNext (this=0x640f840)
at test/../src/NetworkConnection.hpp:40
40 unique_ptr<TcpSocket> new_socket (new TcpSocket) ;
(gdb) n
41 listener.setBlocking(false);
(gdb)
42 if( listener.accept(*new_socket) == ::sf::Socket::Status::Done)
(gdb)
44 new_socket->setBlocking(false);
(gdb)
45 std::cout << "Connection established! " << new_socket.get() << "\n";
(gdb)
Connection established! 0x6526340
46 return std::move(new_socket);
(gdb)
40 unique_ptr<TcpSocket> new_socket (new TcpSocket) ; <<<<<<--------- here
(gdb)
49 }
(gdb)
network::ConnectionReception::processNextIncoming (this=0x640f840) at test/../src/NetworkConnection.hpp:79
79 std::cout << " and then " << new_socket.get() << "\n";
(gdb)
and then 0
80 if( !new_socket ) return false;
(gdb)
This step is most likely optimized away in a release configuration, or it could just be gdb being weird.
What is going wrong? How do I proceed and get this to work? Did I make any mistakes with the rvalues and moves?
You have undefined behavior here:
unique_ptr<TcpSocket>&& TcpListener::nonBlockingNext()
{
    unique_ptr<TcpSocket> new_socket (new TcpSocket) ;
    //...
    if( /*...*/)
    {
        //...
        return std::move(new_socket);
    }
    //...
}
The problem is that you are returning a reference to a local variable (new_socket). Don't be distracted by it being an rvalue reference; it is still a reference, and it dangles as soon as the function returns. You should return the unique_ptr by value instead. And even though it is legal to std::move() the value you are returning, doing so is useless at best and inhibits copy elision at worst, so just return new_socket.
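A minimal sketch of the corrected function, assuming the same TcpListener and TcpSocket from the question (the declaration in the class has to be changed accordingly):

// Returning by value: the unique_ptr is moved (or the move is elided) out of the
// function, so the caller receives a valid owner instead of a dangling reference.
std::unique_ptr<TcpSocket> TcpListener::nonBlockingNext()
{
    std::unique_ptr<TcpSocket> new_socket( new TcpSocket );
    listener.setBlocking( false );
    if( listener.accept( *new_socket ) == ::sf::Socket::Status::Done )
    {
        new_socket->setBlocking( false );
        return new_socket;   // implicit move, no std::move needed
    }
    return nullptr;          // empty unique_ptr signals "no connection"
}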

EXPECT_DEATH with GMock - failed to die

I have created a mock of an external socket API in the my_inet.cpp file.
The GMock functions for that socket API are in the mock.h file.
I am using my my_inet socket API in server.cpp.
The test is written in gtest.cpp.
I want a death case to execute successfully through exit(1), but GMock says "failed to die".
Why?
gtest.cpp
TEST(MyTest, SocketConnectionFail)
{
    MockMyTCPAPI obj_myTCP;
    Server obj_server( &obj_myTCP );
    EXPECT_DEATH( obj_server.InitializeSocket(), "No socket connection!");
}
server.cpp
int Server::InitializeSocket()
{
    if( ret_val_socket == -1 )
    {
        ret_val_socket = myTCPAPI->socket( PF_INET, SOCK_STREAM, 0 );
        if( ret_val_socket == -1 )
        {
            printf( "\nNo socket connection!" );
            exit(1);
        }
        return ret_val_socket;
    }
    else
    {
        printf( "Warning! Attempting to create socket again. %d" , ret_val_socket);
        return 2;
    }
    return 0;
}
my_inet.cpp
int MyTCPAPI::socket( int arg1, int arg2, int arg3 )
{
    return -1;
}
Output:
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from MyTest
[ RUN ] MyTest.SocketConnectionFail
[WARNING] /usr/src/gtest/src/gtest-death-test.cc:825:: Death tests use fork(), which is unsafe particularly in a threaded context. For this test, Google Test couldn't detect the number of threads.
GMOCK WARNING:
Uninteresting mock function call - returning default value.
Function call: socket(1, 0, 0)
Returns: 0
Stack trace:
/home/anisha/Documents/office/tdd/tcp/server/gtest.cpp:56: Failure
Death test: obj_server.InitializeSocket()
Result: failed to die.
Error msg:
[ DEATH ]
[ FAILED ] MyTest.SocketConnectionFail (3 ms)
[----------] 1 test from MyTest (3 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (3 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] MyTest.SocketConnectionFail
1 FAILED TEST
The output explains the issue:
GMOCK WARNING:
Uninteresting mock function call - returning default value.
Function call: socket(1, 0, 0)
Returns: 0
Which means myTCPAPI->socket(PF_INET, SOCK_STREAM, 0) returns 0, not -1.
Since obj_myTCP is a mock object (a MockMyTCPAPI, not a MyTCPAPI), it won't run MyTCPAPI::socket(); an uninteresting call simply returns the default value for int, which is 0. You need to specify its return value. Something like the following should help:
EXPECT_CALL(obj_myTCP, socket(_, _, _))
    .WillRepeatedly(Return(-1));
Or use MyTCPAPI instead of MockMyTCPAPI in your test.
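Putting it together, a sketch of the complete test (assuming the same Server and MockMyTCPAPI as above; note that the death message also has to be printed to stderr, as discussed in the related question, for the regular expression to match):

TEST(MyTest, SocketConnectionFail)
{
    MockMyTCPAPI obj_myTCP;
    // Make the mocked socket() report failure so that InitializeSocket()
    // takes the exit(1) path that EXPECT_DEATH is waiting for.
    EXPECT_CALL( obj_myTCP, socket( testing::_, testing::_, testing::_ ))
        .WillRepeatedly( testing::Return( -1 ));
    Server obj_server( &obj_myTCP );
    EXPECT_DEATH( obj_server.InitializeSocket(), "No socket connection!");
}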

GMock's `WillOnce` with `Return` does not fail on wrong return value

I have created a mock of an external socket API in the my_inet.cpp file.
The GMock functions for that socket API are in the mock.h file.
I am using my my_inet socket API in server.cpp.
The test is written in gtest.cpp.
The problem is that my_inet.cpp returns 1000, while in gtest.cpp I have written .WillOnce( Return( 10 )), and the test does NOT fail.
Why?
gtest.cpp
TEST(HelloTest, HelloReturnsOne)
{
    MockMyTCPAPI obj_myTCP;
    EXPECT_CALL( obj_myTCP, hello())
        .Times( 2 )
        .WillOnce( Return( -100 ))
        .WillOnce( Return( 10 ));
    Server obj_server( &obj_myTCP );
    EXPECT_EQ( obj_server.hi(), -100 );
    EXPECT_EQ( obj_server.hi(), 10 );
}
mock.h
#include "my_inet.h"
#include <gmock/gmock.h>
class MockMyTCPAPI : public MyTCPAPI {
public:
    MOCK_METHOD0( hello, int());
    MOCK_METHOD3( socket, int(int arg1, int arg2, int arg3));
    MOCK_METHOD3( bind, int(int arg1, int arg2, int arg3));
    MOCK_METHOD2( listen, int(int arg1, int arg2));
    MOCK_METHOD3( accept, int(int arg1, int arg2, int arg3));
    MOCK_METHOD2( send, int(int arg1, int arg4));
};
my_inet.cpp
int MyTCPAPI::hello()
{
    return 1000;
}
server.cpp
int Server::hi()
{
    return myTCPAPI->hello();
}
Output:
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from HelloTest
[ RUN ] HelloTest.HelloReturnsOne
[ OK ] HelloTest.HelloReturnsOne (0 ms)
[----------] 1 test from HelloTest (0 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (0 ms total)
[ PASSED ] 1 test.
In server.cpp, myTCPAPI->hello() will return -100 and 10, but Server::hi is not returning them; it always returns 1.
Could you try:
int Server::hi() {
    return myTCPAPI->hello();
}
Updated answer on mocking
By mocking, we create an object whose return values we control (rather than test). For example, the EXPECT_CALL statement says: "the hello method of obj_myTCP will be called twice; for the first call, return -100, and for the second call, return 10." In your example, the first call returns -100 and the second call returns 10, which matches the expectations. The my_inet.cpp implementation is overridden.
A mock object is used to inject a return value, not to test one. Its benefit is more obvious if you imagine mocking a timing object: you can control the time it returns instead of relying on a real clock.
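To illustrate the difference, a test of the real return value would use the concrete class instead of the mock (a sketch, assuming Server's constructor accepts a MyTCPAPI*):

TEST(HelloTest, RealHelloReturns1000)
{
    MyTCPAPI real_api;                    // real implementation from my_inet.cpp
    Server obj_server( &real_api );
    EXPECT_EQ( obj_server.hi(), 1000 );   // now the real return value is what gets tested
}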
For more information on mocking, please refer to What is the purpose of mock objects? and What is Mocking?.

Using pipes in Boost unit tests

I'm trying to write Boost unit tests for my server.
I want to launch my server (start() runs the server in a thread), open a client, connect it, and try to download a file.
I'm doing it this way:
tftp_server* my_test_server;

BOOST_AUTO_TEST_SUITE (tftptest) // name of the test suite is tftptest

BOOST_AUTO_TEST_CASE (test1)
{
    my_test_server = new tftp_server(69);
    my_test_server->start();

    FILE *in;
    char buff[512];
    if(!(in = popen("tftp", "w"))){
        exit(1);
    }
    fputs ( "connect xx.xx.xx.xx\n", in );
    fputs ( "mode binary\n", in );
    fputs ( "mode\n", in );
    fputs ( "get truc.txt\n", in );
    while(fgets(buff, sizeof(buff), in)!=NULL)
        printf("%s", buff);
    sleep(10);
    BOOST_CHECK(true);
}
1) Creating and launching the server,
2) using the system TFTP client (I'm on OSX),
3) waiting 10 seconds.
It doesn't work: the client only executes after the test is done.
Running 1 test case...
Server started on port 69
Server running.
*** No errors detected
$ Using octet mode to transfer files.
Transfer timed out.
Any idea how I could solve my problem?
Thanks!
EDIT
void tftp_server::start()
{
    LOG_INFO("Server running.", 0);
    boost::thread bt(boost::bind(&boost::asio::io_service::run, &_io_service));
}
I've added a close, and it works: without it, the commands written with fputs() stay in the stdio buffer until the process exits, so the tftp client only sees them after the test is done; pclose() flushes and closes the pipe and waits for the client to finish.
Thanks to Arne Mertz.
if(!(in = popen("tftp", "w"))){
    exit(1);
}
//...
while(fgets(buff, sizeof(buff), in)!=NULL)
    printf("%s", buff);
pclose(in);
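As a side note (not part of the original answer), the commands can also be pushed through the pipe right away with fflush(), without waiting until the stream is closed; a minimal sketch using the same popen stream:

FILE *in = popen("tftp", "w");      // write end of a pipe to the tftp client's stdin
if( in == NULL ){
    exit(1);
}
fputs( "connect xx.xx.xx.xx\n", in );
fputs( "get truc.txt\n", in );
fflush( in );   // pipe streams are fully buffered; flush so tftp sees the commands now
// ... wait for the transfer ...
pclose( in );   // flushes any remaining output, closes the pipe, and waits for the client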