I'm working on a library that defines a client interface for some service. Under the hood I have to validate the data provided by users and then pass it to an "engine" process using the Connection class from another library (note: the Connection class isn't known to the users of our library). One of my colleagues proposed using PIMPL:
class Client {
public:
    Client();
    void sendStuff(const Stuff &stuff) { _pimpl->sendStuff(stuff); }
    Stuff getStuff(const StuffId &id) { return _pimpl->getStuff(id); }
private:
    ClientImpl *_pimpl;
};
class ClientImpl { // not exported
public:
    void sendStuff(const Stuff &stuff);
    Stuff getStuff(const StuffId &id);
private:
    Connection _connection;
};
However, I find this very hard to test: even if I link my tests against a mocked implementation of Connection, I don't have easy access to it to set and verify expectations. Am I missing something, or is the much cleaner and more testable solution to use an interface + factory:
class ClientInterface {
public:
    virtual ~ClientInterface() = default;
    virtual void sendStuff(const Stuff &stuff) = 0;
    virtual Stuff getStuff(const StuffId &id) = 0;
};
class ClientImplementation : public ClientInterface { // not exported
public:
    ClientImplementation(Connection *connection);
    // +implementation of ClientInterface
};
class ClientFactory {
public:
    static ClientInterface *create();
};
Are there any reasons to go with PIMPL in this situation?
AFAIK the usual reason to use the Pimpl idiom is to reduce compile/link-time dependencies on the implementation of the class (by removing the implementation details from the public header file altogether). Another reason may be to enable the class to change its behaviour dynamically (aka the State pattern).
The second does not seem to be the case here, and the first can be achieved with inheritance + a factory as well. However, as you noted, the latter solution is much easier to unit test, so I would prefer it.
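To make the testing benefit concrete, here is a minimal sketch. It assumes you introduce a thin interface over the third-party Connection (called ConnectionInterface below, an invented name) and that ClientImplementation's constructor accepts that interface instead of the concrete Connection; both are adaptations, not part of the original code.
#include <cassert>
#include <vector>

// Hypothetical thin interface over the third-party Connection.
struct ConnectionInterface {
    virtual ~ConnectionInterface() = default;
    virtual void send(const Stuff &stuff) = 0;
    virtual Stuff receive(const StuffId &id) = 0;
};

// Test double that records what was sent and returns canned data.
class FakeConnection : public ConnectionInterface {
public:
    void send(const Stuff &stuff) override { sent.push_back(stuff); }
    Stuff receive(const StuffId &) override { return canned; }

    std::vector<Stuff> sent;
    Stuff canned;
};

// The test constructs the (non-exported) implementation directly and injects
// the fake, so expectations can be set and verified on it.
void testSendStuffForwardsToConnection() {
    FakeConnection fake;
    ClientImplementation client(&fake); // assumes the ctor takes ConnectionInterface*
    client.sendStuff(Stuff{});
    assert(fake.sent.size() == 1);
}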
GotW #15 and GotW #28, from Herb Sutter, are good pointers to get you started.
Yes, this is a good place to use the Pimpl idiom, and yes, it will be difficult to test as is.
The problem is that the two concepts oppose one another:
Pimpl is about hiding dependencies from the client: this reduces compile/link time and is better from an ABI stability point of view.
Unit testing is usually about surgical intervention in the dependencies (using mock-ups, for example)
However, that does not mean you should sacrifice one for the other. It merely means that you should adapt your code.
Now, what if Connection were implemented with the same idiom?
class Connection
{
private:
ConnectionImpl* mImpl;
};
And delivered through a Factory:
// Production code:
Client client = factory.GetClient();
// Test code:
MyTestConnectionImpl impl;
Client client = factory.GetClient(impl);
This way, you can access the nitty-gritty details of your test implementation of Connection while testing Client, without exposing the implementation to library users or breaking the ABI.
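For illustration, a rough sketch of what that factory could look like. The overloads, the ownership handling, and the name RealConnectionImpl are assumptions made for this sketch; it also assumes Connection gets a constructor that adopts a ConnectionImpl* and that Client can be built from a Connection.
class ClientFactory {
public:
    // Production: the factory wires up the real implementation internally.
    Client GetClient() {
        return Client(Connection(new RealConnectionImpl(/* real endpoint */)));
    }

    // Test seam: the caller supplies its own ConnectionImpl (e.g. a mock),
    // keeps a reference to it, and can set and verify expectations on it.
    Client GetClient(ConnectionImpl &impl) {
        return Client(Connection(&impl)); // non-owning in this overload
    }
};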
I'm very curious about people's opinions on how a software application that interfaces with hardware should be unit tested.
For example, the main class of the application, "Connection", constructs a handle to a USB device.
I want to test the "Connection" class's base function, say "OpenConnection", which attempts to connect to a USB hardware device.
So far I have constructed a MOCK hardware device and have included a compiler flag in my connection class, so that if it is built in unit-test mode it will use the mock object, otherwise it will use the actual hardware interface.
See the example below:
class TConnection
{
public:
    static TConnection* GetConnection();
    static void Shutdown();

    bool DidInitialise();
    bool Write(uint8_t* _pu8_buffer);
    bool Read(uint8_t* _pu8_buffer);

protected:
    TConnection();
    virtual ~TConnection();

    bool init();

private:
    static TConnection* mp_padConnection;
    static bool mb_DidInitialise;

#ifdef _UNIT_TEST_BUILD
    static mock_device* mp_handle;
#else
    static device* mp_handle;
#endif
};
then in the source file I include something like
#include "connection.h"
#ifdef _UNIT_TEST_BUILD
mock_device* TConnection::mp_handle = nullptr;
#else
device* TConnection::mp_handle = nullptr;
#endif // _UNIT_TEST_BUILD
TConnection::TConnection()
{
...
init();
...
}
bool TConnection::init()
{
    mp_handle = hid_open( _VENDOR_ID, _PRODUCT_ID, nullptr );
    if (mp_handle == nullptr) {
        return false;
    }
    if (hid_set_nonblocking(mp_handle, _DISABLE_NB) == _ERROR_CODE) {
        return false;
    }
    return true;
}
The only thing I really dislike about my code is that my actual connection class contains test code. I would much prefer them to be separate.
That said, I also don't agree with having an entirely new mocked connection class written solely for the purpose of unit testing; it makes me feel like I'm just writing something designed to work as expected.
So I ask, what would be a better approach to testing such a class?
Thank you in advance for your time and advice
You can avoid adding test code to your class by using dependency injection. Create an interface IDevice and make class Device implement that interface. Then, in class TConnection, use pointers to this interface instead of a member of type Device. Also create a helper method that allows you to set a new device, something like:
void setDevice(IDevice *device);
Now, for your production code simply use an instance of class Device, while in your test code use setDevice to swap the implementation of the device for a mock object. This mock object will be an instance of class MockDevice, which will also implement interface IDevice. That way you can change the implementation in tests and use the mock class. Since you are using gtest already, I would suggest you do not write the mock class yourself but use the C++ mocking framework gmock (which is fully compatible with gtest) instead. You will still need to create a separate class, but almost everything will be handled by the mocking framework; all you need to do is define the mocked methods. Creating an additional interface and mock class seems like overkill at first, but it definitely pays off in the long run. If you want to do any serious test-driving of code, learning to use interfaces, dependency injection and mock classes is essential. Check the documentation for more details:
https://github.com/google/googlemock/blob/master/googlemock/docs/CheatSheet.md
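For illustration, a minimal sketch of how that could look with gmock. The IDevice methods are invented here to mirror TConnection's API, and it assumes the setDevice method suggested above has been added and that TConnection forwards Write() to the injected device.
#include <cstdint>
#include <gmock/gmock.h>
#include <gtest/gtest.h>

// Hypothetical device interface extracted from TConnection's needs.
class IDevice {
public:
    virtual ~IDevice() = default;
    virtual bool open() = 0;
    virtual bool write(uint8_t *buffer) = 0;
    virtual bool read(uint8_t *buffer) = 0;
};

// gmock generates the mock methods; no hand-written fake needed.
class MockDevice : public IDevice {
public:
    MOCK_METHOD(bool, open, (), (override));
    MOCK_METHOD(bool, write, (uint8_t *), (override));
    MOCK_METHOD(bool, read, (uint8_t *), (override));
};

TEST(TConnectionTest, WriteForwardsToDevice) {
    MockDevice device;
    EXPECT_CALL(device, write(testing::_)).WillOnce(testing::Return(true));

    // Assumes TConnection was refactored to use the injected IDevice.
    TConnection::GetConnection()->setDevice(&device);
    uint8_t buffer[64] = {};
    EXPECT_TRUE(TConnection::GetConnection()->Write(buffer));
}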
Personally, I would have the mock either as a separate class or as part of the test code. To switch between the mock and the actual library I would make the change in the build script, which would mean either including the test file (with the mock) or linking against the real library.
Creating a separate class is not wasted effort. It should behave as the real device would, but it can be simplified to the bare minimum needed for the test. A more interesting option is to make the mock generate error events, to make sure your code handles those events correctly; the alternative is sometimes to wait for the real error to occur, which I wouldn't recommend.
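As an illustration of that error-injection idea, here is a bare-bones hand-rolled fake; the class and all of its members are invented for this sketch, and it would be compiled only into the test target by the build script.
#include <algorithm>
#include <cstdint>
#include <vector>

// Test-only fake device; adapt the method names to the real device API.
class FakeDevice {
public:
    bool open() { return !failOpen; }

    bool write(const uint8_t *) { ++writeCount; return !failWrite; }

    // Caller must supply a buffer at least cannedResponse.size() bytes long.
    bool read(uint8_t *buffer) {
        if (failRead) return false;
        std::copy(cannedResponse.begin(), cannedResponse.end(), buffer);
        return true;
    }

    // Knobs the test flips to exercise error-handling paths without
    // waiting for real hardware to misbehave.
    bool failOpen = false;
    bool failWrite = false;
    bool failRead = false;
    int writeCount = 0;
    std::vector<uint8_t> cannedResponse;
};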
Two highly recommended books on the subject:
Test-Driven Development for Embedded C: Part 2 gives a good overview of test doubles (mocks being one example) and how to use them.
Modern C++ Programming with Test-Driven Development: Less on test doubles, but more geared towards C++.
Suppose I have code like below. http_client is an external dependency (a 3rd party API) I don't have control over. transaction_handler is a class I control and would like to write unit tests for.
// 3rd party
class http_client
{
public:
    std::string get(std::string url)
    {
        // makes an HTTP request and returns response content as string
        // throws if status code is not 200
    }
};
//code to be tested
enum class transaction_kind { sell, buy };
enum class status { ok, error };
class transaction_handler
{
private:
    http_client client;
public:
    status issue_transaction(transaction_kind action)
    {
        try
        {
            auto response =
                client.get(std::string("http://fake.uri/") +
                           (action == transaction_kind::buy ? "buy" : "sell"));
            return response == "OK" ? status::ok : status::error;
        }
        catch (const std::exception &)
        {
            return status::error;
        }
    }
};
Because http_client makes network calls, I would like to be able to substitute it in my tests with a mock implementation which cuts off the network and allows testing different conditions (ok/error/exception). Because transaction_handler is supposed to be internal, I can modify it to make it testable, but I wouldn't want to go overboard (i.e. I would like to avoid pointers or dynamic polymorphism if possible).
Ideally I would like to use a kind of dependency injection where my tests would inject a mock http_client. I don't think I can (or want to) use a 'poor man's DI' where I would create an http_client instance in the caller and pass it to the transaction_handler (by const reference? std::shared_ptr?): because I don't control http_client, I would have to come up with an interface and - in the production code - wrap http_client in a wrapper class that implements this interface and forwards the calls to the actual/wrapped http_client instance. In the test code I would create a mock implementation of that interface. The interface would have to be a pure abstract class, which entails the runtime polymorphism I wanted to avoid.
Another option is to use templates. If I changed the transaction_handler class to look as follows:
template <typename T = http_client>
class transaction_handler
{
private:
    T client;
public:
    transaction_handler(const std::function<T()> &create) : client(create())
    {}

    status issue_transaction(transaction_kind action)
    {
        // same as above, omitted for brevity
    }
};
I could now create a mock http_client class:
class http_client_mock
{
public:
std::string get(std::string url)
{
return std::string("OK");
}
};
and create the transaction_handler object in my tests like this:
transaction_handler<http_client_mock> t(
[]() -> http_client_mock { return http_client_mock(); });
while I could use the following in my product code:
transaction_handler<> t1(
[]() -> http_client { return http_client(); });
While it seems to work and fulfills most of my requirements (even though I don't like the fact that the code instantiating transaction_handler needs to be aware of the http_client type - maybe it could be hidden behind a factory class), does it make sense at all? Or maybe there are better ways of doing this kind of thing? I spent a considerable amount of time looking for simple DI patterns to make unit testing easier but had a hard time finding something that would suit my needs. Also, my background is mostly C, so maybe I'm approaching the problem from the wrong angle?
I'm maintaining a DI library, and your case is really interesting for me.
Mocking in C++ means either dynamic polymorphism or compile-time mocking. You pay one indirection to gain the ability to inject what you want (depending only on one interface).
If you want to make code testable, the best way is to use interfaces (pure virtual classes, since C++ does not have interfaces as such) and to inject dependencies only through the constructor.
If you really want to avoid polymorphism (or you can't change the external API), you may have to accept that some code is not fully testable.
Conventional way of doing things:
class ConcreteHttpClient : public virtual AbstractHttpClient { /*...*/}
class MockHttpClient : public virtual AbstractHttpClient{ /*...*/ }
You just choose what to inject based on your needs (I intentionally use "new" instead of showing some DI framework at work).
Production code:
new TransactionHandler(static_cast<AbstractHttpClient*>(new ConcreteHttpClient));
Unit testing the transaction handler:
new TransactionHandler(static_cast<AbstractHttpClient*>(new MockHttpClient));
If you later need to test some class that uses the transaction handler, and the transaction handler implements an interface
class TransactionHandler: public virtual AbstractTransactionHandler { /*...*/}
you just have to create a TransactionHandlerMock inheriting from AbstractTransactionHandler.
When you use interfaces the advantage is that you can use a Dependency Injection framework to avoid poor man's injection.
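For illustration, a minimal sketch of that conventional setup applied to the question's code. The class names follow the snippet above; the exact constructor signature and the non-owning raw pointer are assumptions made to keep the sketch close to the "new" examples.
#include <exception>
#include <string>

// Pure virtual interface covering only the part of http_client that is used.
class AbstractHttpClient {
public:
    virtual ~AbstractHttpClient() = default;
    virtual std::string get(const std::string &url) = 0;
};

// Thin production wrapper forwarding to the real, third-party http_client.
class ConcreteHttpClient : public AbstractHttpClient {
public:
    std::string get(const std::string &url) override { return real.get(url); }
private:
    http_client real;
};

// The handler depends only on the interface, injected through the constructor.
class TransactionHandler {
public:
    explicit TransactionHandler(AbstractHttpClient *client) : client(client) {}

    status issue_transaction(transaction_kind action) {
        try {
            auto response = client->get(std::string("http://fake.uri/") +
                                        (action == transaction_kind::buy ? "buy" : "sell"));
            return response == "OK" ? status::ok : status::error;
        } catch (const std::exception &) {
            return status::error;
        }
    }

private:
    AbstractHttpClient *client; // non-owning in this sketch
};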
Compile time mocking.
What you proposed is a viable alternative; basically you get "static polymorphism" thanks to templates.
Production code:
template <typename T = http_client>
class transaction_handler{/*...*/};
new transaction_handler<>( /*...*/ );
Unit test code:
using transaction_handler_mocked = transaction_handler< http_client_mock>;
new transaction_handler_mocked( /*...*/ );
However, this has a few issues:
Every part of your production code that uses "transaction_handler" depends on the template, so if you change it you have to recompile every file depending on it.
You can't inject a mocked handler itself, unless you change all classes depending on it to become templates accepting the handler type.
Point 2 means that in a complex project you end up with each class depending on every other, increasing compile times and forcing whole recompiles of your project for little changes.
You are not using an interface (pure virtual class), which means you can still accidentally access fields or members of the template parameter that were not intended to be accessed, making debugging harder.
Other alternatives
Provide the mock at link time: you don't have to mock a whole 3rd-party library, just the classes you are testing against. Most IDEs are not designed to work that way, but you can probably work around that with a shell script or custom makefile. (For example, I do that for testing code dependent on C functions, in particular OpenGL.)
Wrap the interesting functionality of the library behind a class implementing a pure virtual class (I don't know why you want to avoid that). Chances are good that you don't need to wrap all its methods, and you'll end up with a smaller API to test against (just wrap the parts you need to use; don't start out wrapping the whole library up front).
Use a mocking framework, and possibly a dependency Injection framework.
Just write test routines that match whichever exports of http_client you're using. Your source will be linked in preference to any library.
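For example, a sketch of such a link seam. It assumes http_client::get is declared in a vendor header (called http_client.h here, a hypothetical name) but defined in the vendor library, i.e. not inline as in the simplified snippet at the top of this question; the linker then resolves the symbol to this stub instead of the library version.
// test_http_client_stub.cpp - compiled only into the test binary.
#include <string>
#include "http_client.h"   // hypothetical header declaring http_client

std::string http_client::get(std::string url)
{
    // Canned behaviour; a fuller stub could switch on `url` or a global flag
    // to simulate the ok / error / exception cases.
    return url.find("buy") != std::string::npos ? "OK" : "FAIL";
}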
I think it's a good idea to mock it. Since http_client is an external dependency, I chose Typemock Isolator++ to handle it. Look at the code below:
TEST_METHOD(FakeHttpClient)
{
    //Arrange
    string okStr = "OK";
    http_client* mock_client = FAKE_ALL<http_client>();
    WHEN_CALLED(mock_client->get(ANY_VAL(std::string))).Return(&okStr);

    //Act
    transaction_handler my_handler;
    status result = my_handler.issue_transaction(transaction_kind::buy);

    //Assert
    Assert::AreEqual((int)status::ok, (int)result);
}
Method FAKE_ALL<> allows me to set the behavior of all http_client instances, so no injection needed. Simple API, the code looks accurate, and you don't need to change the production code.
Hope it helps!
Suppose I have classes
class Inner {
public:
void doSomething();
};
class Outer {
public:
Outer(Inner *inner); // Dependency injection.
void callInner();
};
Proper unit-testing says I should have tests for Inner. Then, I should have tests for Outer that uses not a real Inner but rather a MockInner so that I would be performing unit-tests on the functionality added by just Outer instead of the full stack Outer/Inner.
To do so, Googletest seems to suggest turning Inner into a pure abstract class (interface) like this:
// Introduced merely for the sake of unit-testing.
struct InnerInterface {
virtual void doSomething() = 0;
};
// Used in production.
class Inner : public InnerInterface {
public:
/* override */ void doSomething();
};
// Used in unit-tests.
class MockInner : public InnerInterface {
public:
/* override */ void doSomething();
};
class Outer {
public:
Outer(InnerInterface *inner); // Dependency injection.
void callInner();
};
So, in production code, I would use Outer(new Inner); while in test, Outer(new MockInner).
OK. It seems nice in theory, but when I started using this idea throughout the code, I found myself creating a pure abstract class for every freaking class. It's a lot of boilerplate typing, even if you can ignore the slight run-time performance degradation due to the unnecessary virtual dispatch.
An alternative approach is to use templates as in the following:
class Inner {
public:
void doSomething();
};
class MockInner {
public:
void doSomething();
};
template<class I>
class Outer {
public:
Outer(I *inner);
void callInner();
};
// In production, use
Outer<Inner> obj;
// In test, use
Outer<MockInner> test_obj;
This avoids the boilerplate and the unnecessary virtual dispatch, but now my entire codebase is in the freaking header files, which makes it impossible to hide the implementation (not to mention the frustrating template compilation errors and the long build times).
Are those two methods, virtuals and templates, the only ways to do proper unit-testing? Are there better ways to do proper unit-testing?
By proper unit-testing, I mean each unit-test tests only the functionalities introduced by that unit but not the unit's dependencies also.
I don't think you must mock out every dependency of your tested class in practice. If it is complicated to create, use or sense through, then yes. Also if it directly depends on some unneeded external resource such as a DB, network or filesystem.
But if none of these is an issue, IMO it is OK to just use an instance of it directly. As you already unit tested it, you can be reasonably sure that it works as expected and doesn't interfere with higher-level unit tests.
I personally prefer working unit tests and simple, clean, maintainable design over adhering to some ideal set up by unit test purists.
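For example, a minimal gtest sketch of that approach; the test name and the assertion are placeholders for whatever callInner() is actually specified to do.
#include <gtest/gtest.h>

// Using the real Inner directly in Outer's test is fine when Inner is cheap
// to construct and has no external side effects.
TEST(OuterTest, CallInnerWorksWithRealInner) {
    Inner inner;              // the real, already unit-tested dependency
    Outer outer(&inner);
    EXPECT_NO_THROW(outer.callInner());
}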
each unit-test tests only the functionalities introduced by that unit but not the unit's dependencies also.
Using a functionality and testing a functionality are two very different things.
I also think that using an instance of Inner directly is OK.
My problem is mocking external objects that are not part of my code (provided through static libraries or DLLs, sometimes 3rd party).
I am inclined to write a mock DLL or library with the same class names, and then link differently for the tests. Modifying the header file of the external dependency to add "virtual"s seems unacceptable to me.
Does anyone have a better solution?
I'm writing an RPC middleware in C++. I have a class named RPCClientProxy that contains a socket client inside:
class RPCClientProxy {
    ...
private:
    Socket* pSocket;
    ...
};
The constructor:
RPCClientProxy::RPCClientProxy(const std::string &host, int port) {
    pSocket = new Socket(host, port);
}
As you can see, I don't need to tell the user that I have a socket inside.
However, to write unit tests for my proxies I would need to create mock sockets and pass them to the proxies, and to do so I must either use a setter or pass a socket factory to the proxies' constructors.
My question: according to TDD, is it acceptable to do this ONLY because of the tests? As you can see, these changes would alter the way a programmer uses the library.
I don't adhere to any particular canon; I would say that if you think you would benefit from testing through a mock socket, then do it. You could implement a parallel constructor:
RPCClientProxy::RPCClientProxy(Socket* socket)
{
    pSocket = socket;
}
Another option would be to implement a test host to connect to, one that you can configure to expect certain messages.
What you describe is a perfectly normal situation, and there are established patterns that can help you implement your tests in a way that won't affect your production code.
One way to solve this is to use a Test-Specific Subclass, where you add a setter for the socket member and use a mock socket in the case of a test. Of course, you would need to make the variable protected rather than private, but that's probably no biggie. For example:
class RPCClientProxy
{
...
protected:
Socket* pSocket;
...
};
class TestableClientProxy : public RPCClientProxy
{
public:
    TestableClientProxy(Socket *pSocket)
    {
        this->pSocket = pSocket;
    }
};
void SomeTest()
{
MockSocket *pMockSocket = new MockSocket(); // or however you do this in your world.
TestableClientProxy proxy(pMockSocket);
    // ...
    assert(pMockSocket->foo);
}
In the end it comes down to the fact that you often (more often than not in C++) have to design your code in a way that makes it testable, and there is nothing wrong with that. It may sometimes be better if you can keep these decisions from leaking into the public interface, but in other cases it is better to choose, for example, dependency injection through constructor parameters over, say, a singleton providing access to a specific instance.
Side note: It's probably worth taking a look through the rest of the xunitpatterns.com site: there are a whole load of well established unit-testing patterns to understand and hopefully you can gain from the knowledge of those who have been there before you :)
Your issue is more a problem of design.
If you ever wish to implement another behavior for Socket, you're toast, as it involves rewriting all the code that creates sockets.
The usual idea is to use an abstract base class (interface) Socket and then use an Abstract Factory to create the socket you wish depending on the circumstances. The factory itself could either be a Singleton (though I prefer Monostate) or be passed down as an argument (according to the tenets of Dependency Injection). Note that the latter means no global variable, which is much better for testing, of course.
So I would advise something along the lines of:
int main(int argc, char* argv[])
{
    SocketsFactoryMock sf;

    std::string host, port;
    // initialize them

    std::unique_ptr<Socket> socket = sf.create(host, port);

    RPCClientProxy rpc(std::move(socket));
}
It has an impact on the client: you no longer hide the fact that you use sockets behind the scenes. On the other hand, it gives control to the client who may wish to develop some custom sockets (to log, to trigger actions, etc..)
So it IS a design change, but it is not caused by TDD itself. TDD just takes advantage of the higher degree of control.
Also note the clear resource ownership expressed by the use of unique_ptr.
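For completeness, a sketch of the interfaces implied by that design; the method names and the TcpSocket/TcpSocketFactory types are invented here, and SocketsFactoryMock from the snippet above would implement the same factory interface.
#include <memory>
#include <string>

class Socket {
public:
    virtual ~Socket() = default;
    virtual void send(const std::string &data) = 0;
    virtual std::string receive() = 0;
};

class SocketFactory {
public:
    virtual ~SocketFactory() = default;
    virtual std::unique_ptr<Socket> create(const std::string &host,
                                           const std::string &port) = 0;
};

// The production factory returns real TCP sockets; the test factory returns
// mock sockets from the same create() call.
class TcpSocketFactory : public SocketFactory {
public:
    std::unique_ptr<Socket> create(const std::string &host,
                                   const std::string &port) override {
        return std::make_unique<TcpSocket>(host, port); // TcpSocket: hypothetical concrete type
    }
};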
As others have pointed out, a factory architecture or a test-specific subclass are both good options in this situation. For completeness, one other possibility is to use a default argument:
RPCClientProxy::RPCClientProxy(Socket *socket = NULL)
{
if(socket == NULL) {
socket = new Socket();
}
//...
}
This is perhaps somewhere between the factory paradigm (which is ultimately the most flexible, but more painful for the user) and newing up a socket inside your constructor. It has the benefit that existing client code doesn't need to be modified.
I have a probably quite simple problem, but I have not found a proper design solution yet.
Basically, I have 4 different classes and each of those classes has more than 10 methods.
Each of those classes should make use of the same TCP socket; this object keeps a socket open to the server throughout program execution. My idea was to have the TCP object declared as "global" so that all other classes can use it:
classTCP TCPSocket;
class classA
{
private:
public:
classA();
...
};
class classB
{
private:
public:
classB();
...
};
Unfortunately, when I declare it like this, my C++ compiler gives me an error message that some initialized data is written into the executable (???). So I am wondering whether there is any other way I could declare this TCP object so that it is available to ALL the other classes and their methods.
Many thanks!
I'd suggest you keep the instance in your initialization code and pass it into each of the classes that needs it. That way, it's much easier to substitute a mock implementation for testing.
This sounds like a job for the Singleton design pattern.
To me this sounds more like the right time to use Dependency Injection, as I tend to avoid Singletons as much as I can (Singletons are just another way of accessing globals, and that's something to be avoided).
Singleton vs. Dependency Injection has already been discussed on SO; check the "dependency injection" tag (sorry for not posting links, but SO doesn't allow me to post more than one link as a new user).
Wikipedia: Dependency Injection
Your current code example should be modified to allow injecting the socket through the constructor of each class:
class classA
{
private:
public:
    classA(classTCP &socket);
    ...
};

class classB
{
private:
public:
    classB(classTCP &socket);
    ...
};
Pass the socket into the constructor of each object. Then create a separate factory class which creates them and passes in the appropriate socket. All code that uses the set of objects required to share the same socket should then create them via an instance of this factory. This decouples the classes that use the single socket while still enforcing the shared-socket rule.
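A minimal sketch of such a factory, assuming the constructor-injection form shown in the previous answer (the socket taken by reference); SessionFactory and its methods are names invented for this example.
// The factory holds a reference to the single shared socket and hands that
// same reference to every object it creates.
class SessionFactory {
public:
    explicit SessionFactory(classTCP &socket) : socket(socket) {}

    classA makeA() { return classA(socket); }
    classB makeB() { return classB(socket); }

private:
    classTCP &socket;   // the one socket shared by everything the factory builds
};

// Usage: one socket, one factory, and every consumer gets the same socket.
// classTCP sharedSocket;               // opened once at startup
// SessionFactory factory(sharedSocket);
// classA a = factory.makeA();
// classB b = factory.makeB();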
The best way to go about doing this is with a Singleton. Here is its implementation in Java:
Singleton Class:
public class SingletonTCPSocket {
    private SingletonTCPSocket() {
        // Private constructor
    }

    private static class SingletonTCPSocketHolder {
        private static final SingletonTCPSocket INSTANCE = new SingletonTCPSocket();
    }

    public static SingletonTCPSocket getInstance() {
        return SingletonTCPSocketHolder.INSTANCE;
    }

    // Your socket-specific code here
    private TCPSocket mySocket;
    public void OpenSocket() { /* ... */ }
}
The class that needs the socket:
public class ClassA {
    public ClassA() {
        SingletonTCPSocket.getInstance().OpenSocket();
    }
}
When you have an object which is unique in your program and used in a lot of places, you have several options:
pass a reference to the object everywhere
use a global more or less well hidden (singleton, mono-state, ...)
Each approach has its drawbacks. They are quite well documented, and some people have very strong opinions on these issues (do a search for "singleton anti-pattern"). I'll just give some of them, and not try to be complete.
Passing a reference along is tedious and clutters the code, so you end up keeping these references in some long-lived objects as well, to reduce the number of parameters. When the time comes that the "unique" object is no longer unique, are you ready? No: you have several paths to the unique object, and you'll see that they now refer to different objects and are used inconsistently. Debugging this can be a nightmare, worse than migrating the code from a global approach to a passed-along approach, and worse, it wasn't planned in the schedule because the code was "ready".
The problems of global-like approaches are even better known. They introduce hidden dependencies (so reusing components is more difficult), unexpected side effects (getting the right behaviour is more difficult; fixing a bug somewhere triggers a bug in another component), more complicated testing, ...
In your case, a socket is not intrinsically unique. The possibility of having to use another one in your program, or of reusing the components somewhere where that socket is no longer unique, seems quite high. I'd not go for a global approach but a parameterized one. Note that if your socket is intrinsically unique -- say it is for over-the-network logging -- you'd be better off encapsulating it in an object designed for that purpose (logging, for instance), and then it could make sense to use a global-like feature.
As everyone has mentioned, globals are bad etc.
But to actually address the compile error that you have, I'm pretty sure it's because you're defining the global in a header file, which is being included in multiple files. What you want is this:
something.h
extern classTCP TCPSocket; //global is DECLARED here
class classA
{
private:
public:
classA();
...
};
something.cpp
classTCP TCPSocket; //global is DEFINED here