How to unit test a piece of code that relies on Asio? - c++

I have a class that wraps Asio. It is meant to simulate communication over domain and TCP sockets, but I'm at a loss as to how to automate unit tests. I looked at FakeIt, but it can only mock virtual methods; Google Mock suggests templating my code so I can pass a MockAsio implementation for unit tests and the real Asio in production.
Are there any other ways to unit test network code? Fake a domain and TCP socket instead of running the whole stack? And if I go with Google Mock, why use a class that uses Google Mock and not my own implementation that does whatever I need?

I recently ran into the same problem. Since an Asio service's methods (a socket's read_some, for example) are generally not virtual, simple dependency injection is out of the question. As far as I understand, there are two possible approaches, both of which are discussed in the Google Mock Cook Book:
High performance dependency injection (duck typing)
Discussed here.
This is the option @ruipacheco had already mentioned in his question.
This option requires templatizing your class, but it introduces the least code overhead.
If, for example, your class uses an Asio tcp socket, constructing an instance of it would look something like:
asio::io_context io_context;
MyClass<asio::ip::tcp::socket> my_class(io_context);
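For illustration, here is a minimal sketch of what the templated class and a hand-written test socket might look like (MyClass's members and FakeSocket are my own illustrative names, not from the original post):
template <typename SocketT>
class MyClass {
public:
    explicit MyClass(asio::io_context& io_context) : socket_(io_context) {}

    std::size_t read_into(std::vector<char>& buffer) {
        return socket_.read_some(asio::buffer(buffer)); // only requires SocketT to provide read_some
    }

private:
    SocketT socket_;
};

// In tests, any type with the same member functions will do:
class FakeSocket {
public:
    explicit FakeSocket(asio::io_context&) {}
    template <typename MutableBufferSequence>
    std::size_t read_some(const MutableBufferSequence&) { return 0; } // canned behavior
};

// Production: MyClass<asio::ip::tcp::socket> real(io_context);
// Test:       MyClass<FakeSocket> under_test(io_context);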
Code to an interface
Discussed here.
This is more or less what @NiladriBose had suggested.
This option requires writing an Asio interface and a concrete Asio adapter for every service type (socket, timer, etc.)!
Nevertheless, it's the most generic and robust option (and it does not require templatizing your class, as the previous option did).
If, for example, your class uses an Asio tcp socket, constructing an instance of it would look something like:
asio::io_context io_context;
AsioSocket socket(io_context);
MyClass my_class(socket);
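For example, a rough sketch of such an interface and adapter for a TCP socket might look like the following (the ISocket name and the exact member functions you expose are illustrative choices, not part of the original answer):
class ISocket {
public:
    virtual ~ISocket() = default;
    virtual std::size_t read_some(void* data, std::size_t size) = 0;
    virtual std::size_t write_some(const void* data, std::size_t size) = 0;
};

// Concrete adapter that forwards to the real Asio socket.
class AsioSocket : public ISocket {
public:
    explicit AsioSocket(asio::io_context& io_context) : socket_(io_context) {}

    std::size_t read_some(void* data, std::size_t size) override {
        return socket_.read_some(asio::buffer(data, size));
    }
    std::size_t write_some(const void* data, std::size_t size) override {
        return socket_.write_some(asio::buffer(data, size));
    }

    asio::ip::tcp::socket& raw() { return socket_; } // for connect/accept, etc.

private:
    asio::ip::tcp::socket socket_;
};

// MyClass now depends only on ISocket, so a mock socket can be injected in tests.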
Factory enhancement
If your class uses multiple instances of Asio services (multiple sockets, timers, etc.), it would probably be better to create an abstract Asio services factory.
This factory would receive an io_context in its constructor and expose make_socket, make_timer, etc. methods.
Then, constructing an instance of your class would look something like:
AsioFactory asio_factory(io_context);
MyClass my_class(asio_factory);
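A sketch of what the abstract factory could look like, building on the illustrative ISocket/AsioSocket types from the previous option (names are mine, not from the original answer):
class IAsioFactory {
public:
    virtual ~IAsioFactory() = default;
    virtual std::unique_ptr<ISocket> make_socket() = 0;
    // make_timer(), etc. would follow the same pattern.
};

class AsioFactory : public IAsioFactory {
public:
    explicit AsioFactory(asio::io_context& io_context) : io_context_(io_context) {}

    std::unique_ptr<ISocket> make_socket() override {
        return std::make_unique<AsioSocket>(io_context_);
    }

private:
    asio::io_context& io_context_;
};

// In tests, MyClass receives a factory that hands out mock sockets instead.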
Finally, in regard to:
And if I go with GoogleMock, why use a class that uses GoogleMock and not my own implementation that does whatever I need?
See this to understand the difference between mock and fake objects. Generally speaking, I think that mock objects require much less effort.
Also see this to figure out how to incorporate a Google mock class with a fake class.
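For instance, the cook book's "delegate to fake" pattern lets a gmock mock reuse a working fake while still allowing expectations; a hedged sketch against the illustrative ISocket interface from above might look like this (FakeSocket here is likewise illustrative):
#include <gmock/gmock.h>

class FakeSocket : public ISocket {
public:
    std::size_t read_some(void* data, std::size_t size) override { return 0; }           // canned data
    std::size_t write_some(const void* data, std::size_t size) override { return size; } // pretend success
};

class MockSocket : public ISocket {
public:
    MOCK_METHOD(std::size_t, read_some, (void* data, std::size_t size), (override));
    MOCK_METHOD(std::size_t, write_some, (const void* data, std::size_t size), (override));

    // Forward calls to the fake by default; EXPECT_CALL can still verify interactions.
    void DelegateToFake() {
        ON_CALL(*this, read_some(::testing::_, ::testing::_))
            .WillByDefault([this](void* d, std::size_t n) { return fake_.read_some(d, n); });
        ON_CALL(*this, write_some(::testing::_, ::testing::_))
            .WillByDefault([this](const void* d, std::size_t n) { return fake_.write_some(d, n); });
    }

private:
    FakeSocket fake_;
};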

I am assuming you want to mock out the Asio wrapper class that is used by your application. If my assumption is correct, then say the wrapper has an interface (oversimplified, but most mock frameworks, including gmock, require a pure abstract class):
class Iasio
{
public:
    virtual ~Iasio()
    {
    }

    virtual void send(std::vector<unsigned char> dataToSend) = 0;
    virtual std::vector<unsigned char> rcv() = 0;
};
Then you have two options:
1) Mock using a mocking framework, and in your unit test use the mock (inject it into the classes that use it via constructor or accessor injection). For each unit test scenario you would then need to set up the mock object to return your expected data (a gmock sketch follows after this list).
2) The first option can sometimes be more cumbersome than writing your own test mock; in such circumstances it is perfectly acceptable to write your own, giving you more control. I say more control because mock frameworks are general purpose: they help in the most common scenarios, but complex scenarios can demand a bespoke test dummy/mock.
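For option 1, a minimal gmock sketch against the Iasio interface above could look like this (MyTransportUser and its read_all method are hypothetical stand-ins for whatever class consumes the wrapper):
#include <gmock/gmock.h>
#include <gtest/gtest.h>

class MockAsio : public Iasio {
public:
    MOCK_METHOD(void, send, (std::vector<unsigned char> dataToSend), (override));
    MOCK_METHOD(std::vector<unsigned char>, rcv, (), (override));
};

TEST(MyTransportUserTest, ReturnsBytesFromWrapper) {
    MockAsio mock;
    EXPECT_CALL(mock, rcv())
        .WillOnce(::testing::Return(std::vector<unsigned char>{0x01, 0x02}));

    MyTransportUser user(mock);             // constructor injection of the wrapper
    EXPECT_EQ(user.read_all().size(), 2u);  // hypothetical method that calls rcv()
}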

Related

Unit Testing application interface to hardware - to mock or not

I'm very curious about people's opinions on how a software application that interfaces with hardware should be unit tested.
For example, the main class of the software application, "Connection", constructs a handle to a USB device.
I want to test the "Connection" class's base function, say "OpenConnection", which attempts to connect to a USB hardware device.
So far I have constructed a MOCK hardware device and have added a compiler flag to my connection class, so that if it's built in unit test mode it uses a mock object, otherwise it uses the actual hardware interface.
See example below
class TConnection
{
public:
    static TConnection* GetConnection();
    static void Shutdown();

    bool DidInitialise();
    bool Write(uint8_t* _pu8_buffer);
    bool Read(uint8_t* _pu8_buffer);

protected:
    TConnection();
    virtual ~TConnection();

    bool init();

private:
    static TConnection* mp_padConnection;
    static bool mb_DidInitialise;

#ifdef _UNIT_TEST_BUILD
    static mock_device* mp_handle;
#else
    static device* mp_handle;
#endif
};
then in the source file I include something like
#include "connection.h"
#ifdef _UNIT_TEST_BUILD
mock_device* TConnection::mp_handle = nullptr;
#else
device* TConnection::mp_handle = nullptr;
#endif // _UNIT_TEST_BUILD
TConnection::TConnection()
{
...
init();
...
}
bool TConnection::init()
{
mp_handle = hid_open( _VENDOR_ID, _PRODUCT_ID, nullptr );
if (mp_hidHandle == nullptr) {
return false;
}
if (hid_set_nonblocking(mp_hidHandle, _DISABLE_NB) == _ERROR_CODE) {
return false;
}
return true;
}
The only thing I really dislike about my code is that my actual connection class contains test code. I would much prefer them to be separate.
That said, I also don't agree with writing an entirely new mocked connection class solely for the purpose of unit testing; it makes me feel like I'm just writing something designed to work as expected.
So I ask: what would be a better approach to testing such a class?
Thank you in advance for your time and advice.
You can avoid adding test code to your class by using dependency injection. Create an interface IDevice and make class Device implement that interface. Then, in class TConnection, use pointers to this interface instead of a member of type Device. Also create a helper method that allows you to set a new device, something like:
void setDevice(IDevice *device);
Now, for your production code simply use an instance of class Device, while in your test code use setDevice to swap the device implementation with a mock object. This mock object will be an instance of class MockDevice, which will also implement interface IDevice. That way you can change the implementation in tests and use the mock class. Since you are using gtest already, I would suggest you do not write the mock class yourself but use the C++ mocking framework gmock (which is fully compatible with gtest) instead. You will still need to create a separate class, but almost everything will be handled by the mocking framework; all you need to do is define the mocked methods. Creating an additional interface and mock class seems like overkill at first, but it definitely pays off in the long run. If you want to do any serious test-driving of code, learning to use interfaces, dependency injection and mock classes is essential. Check the documentation for more details:
https://github.com/google/googlemock/blob/master/googlemock/docs/CheatSheet.md
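To make the above concrete, here is a rough sketch of what the IDevice seam and a gmock MockDevice might look like (the member functions are illustrative; model them on whatever hidapi calls TConnection actually makes):
#include <cstddef>
#include <cstdint>
#include <gmock/gmock.h>

class IDevice {
public:
    virtual ~IDevice() = default;
    virtual bool open(unsigned short vendor_id, unsigned short product_id) = 0;
    virtual bool write(const uint8_t* buffer, std::size_t length) = 0;
    virtual bool read(uint8_t* buffer, std::size_t length) = 0;
};

class MockDevice : public IDevice {
public:
    MOCK_METHOD(bool, open, (unsigned short vendor_id, unsigned short product_id), (override));
    MOCK_METHOD(bool, write, (const uint8_t* buffer, std::size_t length), (override));
    MOCK_METHOD(bool, read, (uint8_t* buffer, std::size_t length), (override));
};

// In a test (assuming TConnection gains the setDevice seam described above):
//   MockDevice mock;
//   EXPECT_CALL(mock, open(testing::_, testing::_)).WillOnce(testing::Return(true));
//   connection.setDevice(&mock);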
Personally I would have the mock as either a separate class or part of the test code. To differentiate between the mock and the actual library I would make the change in the build script, which I assume would mean either including the test file (and mock) or linking to a library.
Creating a separate class is not wasted effort. It should behave as expected, but this can be simplified to the bare minimum needed for the test. A more interesting thing is to make the class generate error events, to make sure your code handles these events correctly. The alternative is sometimes to wait for the error to occur, which I wouldn't recommend.
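As a sketch of that idea, a hand-written fake (reusing the illustrative IDevice interface sketched above) can be told to fail on demand, so the error-handling paths are exercised without waiting for real hardware to misbehave:
class FlakyFakeDevice : public IDevice {
public:
    void failNextRead() { fail_next_read_ = true; }

    bool open(unsigned short, unsigned short) override { return true; }
    bool write(const uint8_t*, std::size_t) override { return true; }
    bool read(uint8_t*, std::size_t) override {
        if (fail_next_read_) {
            fail_next_read_ = false;
            return false; // simulate a device error
        }
        return true;
    }

private:
    bool fail_next_read_ = false;
};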
Two highly recommended books on the subject:
Test-Driven Development for Embedded C: Part 2 gives a good overview of test doubles (mocks being one example) and how to use them.
Modern C++ Programming with Test-Driven Development: Less on test doubles, but more geared towards C++.

How to write effective unit tests when class has external dependencies?

I am very new to Test Driven Development and cannot figure out how to write effective tests for a class I wrote. The class is as follows (Java):
public class MyServiceClassImpl implements MyService {
    private someExternalClient client;
    private anotherExternalClient anotherClient;

    public MyServiceClassImpl() {
        client = someExternalClient.getInstance();
        anotherClient = new anotherExternalClient(client);
    }

    public String methodWhichDoesSomething(String query) {
        return anotherClient.getResponse(query);
    }
}
For the test, I try a few queries and compare the response I get with the response I expect (I expect it because I know what anotherClient will return). It works alright but this is technically an integration test since I am calling an external dependency. I do not understand how to write "unit" tests in this case. More specifically, I don't know how to mock the dependencies since the fields are private, there are no setters and the constructor doesn't take any parameters. How would I "supply" the instance of the class with my mocks even if I created them? I wrote the class myself too so please let me know if I should re-design the class, maybe provide getters and setters?
This is a very common situation that most developers fall into. The question is how to make the code testable. Rule of thumb: if you don't have any security concerns, don't be afraid to change the design so your routines are testable. This is actually a very good thing, because your SUT (System Under Test) API becomes more appealing to its clients and easier to change and extend.
In your case, leave your integration test as it is, because it tests the whole system with database interaction/config etc.
Generally what is important is the unit test. But looking at your code, the method
methodWhichDoesSomething(String query)
hardly has any behavior at all. It only calls another client to return a response.
So you need to decide whether you really need to write a unit test for this. I would not recommend it, as there is hardly any behavior to unit test.
But if you really want to unit test it, verifying that the getResponse(..) method has been called with the expected parameter is a candidate.
To do that, inject your dependency AnotherExternalClient into your SUT (System Under Test):
public MyServiceClassImpl(AnotherExternalClient externalClient) {
    this.anotherClient = externalClient;
}
In your test, set up a mock of AnotherExternalClient and verify that the method has been invoked. Use constructor injection if the parameter is mandatory for your MyServiceClassImpl; if it is optional, simply use property injection.
UPDATE
Reg. "Inject your dependency"
The instance you returning from anotherExternalClient(clent);, which is type of anotherExternalClient can be injected into your SUT (System Under Test) MyServiceClassImpl. The way you inject is either with a property or via constructor. I will explains this a bit later.
You don't have to worry about writing code like
client = someExternalClient.getInstance();
as this can be externalized and return the client which then used to return the anotherExternalClient.
In otherwords your SUT (System Under Test) MyServiceClassImpl should only care about anotherExternalClient not someExternalClient. Having less dependency like this simplifies your design and make it easier to Unit test.
Reg. "Property Injection vs Ctro Injection"
I would not repeat my self, here is another SO question has some information on this.
Hope this helps.
This is critical because when it comes to Unit testing you can easily provide you with the mock/fake implementation for testing.

How to Unit Test Existing Code that Relies on HttpWebRequest and HttpWebResponse?

I have seen the following questions already:
"Is it possible to mock out a .NET HttpWebResponse?"
"Mocking WebResponse's from a WebRequest"
These do not help me because I need:
To test existing code which uses WebRequest/WebResponse directly, with no interfaces
The existing code depends on HttpWebRequest / HttpWebResponse, so I can't use my own derived WebRequest and WebResponse classes.
I know about WebRequest.RegisterPrefix, and have a successful unit test which proves that it works (once you get the prefix right). But that test simply tests WebRequest and WebResponse.
I tried the following, but it gives a compile error (not a warning):
public class MockWebRequest : HttpWebRequest
{
    public MockWebRequest()
        // : base(new SerializationInfo(typeof(HttpWebRequest), new FormatterConverter()),
        //        new StreamingContext())
    {
    }
}
With the two lines commented-out, I get a compile error:
error CS0619: 'System.Net.HttpWebRequest.HttpWebRequest()' is obsolete: 'This API supports the .NET Framework infrastructure and is not intended to be used directly from your code.'
When I uncomment those lines, I get a CS0618 ("obsolete") warning, but the code crashes at runtime:
System.Runtime.Serialization.SerializationException: Member '_HttpRequestHeaders' was not found.
at System.Runtime.Serialization.SerializationInfo.GetElement(String name, Type& foundType)
at System.Runtime.Serialization.SerializationInfo.GetValue(String name, Type type)
at System.Net.HttpWebRequest..ctor(SerializationInfo serializationInfo, StreamingContext streamingContext)
at UnitTests.MockWebRequest..ctor(String responseJson)
at UnitTests.MockWebRequestCreator.Create(Uri uri)
at System.Net.WebRequest.Create(Uri requestUri, Boolean useUriBase)
at System.Net.WebRequest.Create(Uri requestUri)
at code under test
I'm at a loss about how to proceed with this (other than the default, which is to just punt on these unit tests and wish the code had been implemented with testing in mind).
I could probably get "down and dirty" and do something nasty with ISerializable, or something like that, but this brings up the third requirement:
3. The code must be maintainable, and must not be too much more complex than the code being tested!
OBTW, I can't use TypeMock for this, and in fact there's a bit of a time limit. The project is almost done, so a new, purchased mocking framework is out of the question.
I just ran into the same problem. Frankly, the way Microsoft has set this up doesn't make much sense - in WebRequest.Create they have an abstract factory, but they neither provide interfaces for HttpWebRequest and HttpWebResponse that we can implement, nor do they provide a protected constructor that we can use to inherit from these classes.
The solution I have come up with requires some minor changes to the main program, but it works. Basically, I am doing what Microsoft should have done.
1) Create interfaces IHttpWebRequest and IHttpWebResponse
2) Create mock classes that implement the interfaces
3) Create wrappers for HttpWebRequest and HttpWebResponse that implement the interfaces
4) Replace calls to WebRequest.Create with my own factory method that returns the wrappers or the mocks, as needed
Depending on how your code is structured, the above may be relatively trivial to implement, or it may require making changes throughout your project. In my case, we have all of our HTTP requests going through a common service, so it hasn't been too bad.
Anyway, probably too late for your project, but maybe this will help someone else down the road.
I generally prefer to use a mocking framework like Moq when mocking ASP.NET singletons like HttpContext.
There are other mocking frameworks out there, but I find Moq to be very flexible and intuitive:
https://github.com/Moq/moq4/wiki/Quickstart
You can do something similar to this
http://www.syntaxsuccess.com/viewarticle/how-to-mock-httpcontext
Another approach is to abstract the call to HttpContext behind a virtual method, similar to this technique:
http://unit-testing.net/CurrentArticle/How-To-Remove-Data-Dependencies-In-Unit-Tests.html

How can I refactor and unit test complex legacy Java EE5 EJB methods?

My colleagues and I are currently introducing unit tests to our legacy Java EE5 codebase. We use mostly JUnit and Mockito. In the process of writing tests, we have noticed that several methods in our EJBs were hard to test because they did a lot of things at once.
I'm fairly new to the whole testing business, and so I'm looking for insight in how to better structure the code or the tests. My goal is to write good tests without a headache.
This is an example of one of our methods and its logical steps in a service that manages a message queue:
consumeMessages
acknowledgePreviouslyDownloadedMessages
getNewUnreadMessages
addExtraMessages (depending on somewhat complex conditions)
markMessagesAsDownloaded
serializeMessageObjects
The top-level method is currently exposed in the interface, while all sub-methods are private. As far as I understand it, it would be bad practice to just start testing private methods, as only the public interface should matter.
My first reaction was to just make all the sub-methods public and test them in isolation, then in the top-level method just make sure that it calls the sub-methods. But then a colleague mentioned that it might not be a good idea to expose all those low-level methods at the same level as the top-level one, as it might cause confusion, and other developers might start using them when they should be using the top-level one. I can't fault his argument.
So here I am.
How do you reconcile exposing easily testable low-level methods with avoiding cluttering the interfaces? In our case, the EJB interfaces.
I've read in other unit test questions that one should use dependency injection or follow the single responsibility principle, but I'm having trouble applying it in practice. Would anyone have pointers on how to apply that kind of pattern to the example method above?
Would you recommend other general OO patterns or Java EE patterns?
At first glance, I would say that we probably need to introduce a new class, which would 1) expose public methods that can be unit tested but 2) not be exposed in the public interface of your API.
As an example, let's imagine that you are designing an API for a car. To implement the API, you will need an engine (with complex behavior). You want to fully test your engine, but you don't want to expose details to the clients of the car API (all I know about my car is how to push the start button and how to switch the radio channel).
In that case, what I would do is something like that:
public class Engine {
    public void doActionOnEngine() {}
    public void doOtherActionOnEngine() {}
}

public class Car {
    private Engine engine;

    // the setter is used for dependency injection
    public void setEngine(Engine engine) {
        this.engine = engine;
    }

    // notice that there is no getter for engine
    public void doActionOnCar() {
        engine.doActionOnEngine();
    }

    public void doOtherActionOnCar() {
        engine.doActionOnEngine();
        engine.doOtherActionOnEngine();
    }
}
For the people using the Car API, there is no way to access the engine directly, so there is no risk to do harm. On the other hand, it is possible to fully unit test the engine.
Dependency Injection (DI) and Single Responsibility Principle (SRP) are highly related.
SRP basically states that each class should only do one thing and delegate all other matters to separate classes. For instance, your serializeMessageObjects method should be extracted into its own class -- let's call it MessageObjectSerializer.
DI means injecting (passing) the MessageObjectSerializer object as an argument to your MessageQueue object -- either in the constructor or in the call to the consumeMessages method. You can use DI frameworks to do this for you, but I recommend doing it manually to get the concept.
Now, if you create an interface for the MessageObjectSerializer, you can pass that to the MessageQueue, and then you get the full value of the pattern, as you can create mocks/stubs for easy testing. Suddenly, consumeMessages doesn't have to pay attention to how serializeMessageObjects behaves.
Below, I have tried to illustrate the pattern. Note that when you want to test consumeMessages, you don't have to use the MessageObjectSerializer object. You can make a mock or stub that does exactly what you want it to do and pass it instead of the concrete class. This really makes testing so much easier. Please forgive syntax errors; I did not have access to Visual Studio, so it is written in a text editor.
// THE MAIN CLASS
public class MyMessageQueue
{
    IMessageObjectSerializer _serializer;

    // Constructor that gets the serialization logic injected
    public MyMessageQueue(IMessageObjectSerializer serializer)
    {
        _serializer = serializer;
        // Also a lot of other injection
    }

    // Your main method. Now it calls an external object to serialize
    public void consumeMessages()
    {
        // Do all the other stuff
        _serializer.serializeMessageObjects();
    }
}

// THE SERIALIZER CLASS
public class MessageObjectSerializer : IMessageObjectSerializer
{
    public List<MessageObject> serializeMessageObjects()
    {
        // DO THE SERIALIZATION LOGIC HERE
    }
}

// THE INTERFACE FOR THE SERIALIZER
public interface IMessageObjectSerializer
{
    List<MessageObject> serializeMessageObjects();
}
EDIT: Sorry, my example is in C#. I hope you can use it anyway :-)
Well, as you have noticed, it's very hard to unit test a concrete, high-level program. You have also identified the two most common issues:
Usually the program is configured to use specific resources, such as a specific file, IP address, hostname etc. To counter this, you need to refactor the program to use dependency injection. This is usually done by adding parameters to the constructor that replace the hardcoded values.
It's also very hard to test large classes and methods. This is usually due to the combinatorial explosion in the number of tests required to test a complex piece of logic. To counter this, you will usually refactor first to get lots more (but shorter) methods, then try to make the code more generic and testable by extracting several classes from your original class that each have a single entry method (public) and several utility methods (private). This is essentially the single responsibility principle.
Now you can start working your way "up" by testing the new classes. This will be a lot easier, as the combinatorics are much easier to handle at this point.
At some point along the way you will probably find that you can simplify your code greatly by using these design patterns: Command, Composite, Adaptor, Factory, Builder and Facade. These are the most common patterns that cut down on clutter.
Some parts of the old program will probably be largely untestable, either because they are just too crufty, or because it's not worth the trouble. Here you can settle for a simple test that just checks that the output from known input has not changed. Essentially a regression test.

TDD, Unit Test and architectural changes

I'm writing an RPC middleware in C++. I have a class named RPCClientProxy that contains a socket client inside:
class RPCClientProxy {
    ...
private:
    Socket* pSocket;
    ...
};
The constructor:
RPCClientProxy::RPCClientProxy(host, port) {
    pSocket = new Socket(host, port);
}
As you can see, I don't need to tell the user that I have a socket inside.
However, to write unit tests for my proxies it would be necessary to create mocks for the sockets and pass them to the proxies, and to do so I must add a setter or pass a socket factory to the proxies' constructors.
My question: according to TDD, is it acceptable to do this ONLY because of the tests? As you can see, these changes would alter the way the library is used by a programmer.
I don't adhere to a particular canon. I would say that if you think you would benefit from testing through a mock socket, then do it; you could implement a parallel constructor:
RPCClientProxy::RPCClientProxy(Socket* socket)
{
    pSocket = socket;
}
Another option would be to implement a host for the test to connect to, which you can configure to expect certain messages.
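A rough sketch of that idea, assuming standalone Asio, blocking reads, and a fixed-size message (the port, messages, and function names below are illustrative):
#include <asio.hpp>
#include <string>
#include <thread>
#include <vector>

// Accepts one connection on a local port, reads a fixed-size message and replies.
void run_fake_peer(unsigned short port, const std::string& expected, const std::string& reply)
{
    asio::io_context io_context;
    asio::ip::tcp::acceptor acceptor(io_context,
        asio::ip::tcp::endpoint(asio::ip::tcp::v4(), port));

    asio::ip::tcp::socket socket(io_context);
    acceptor.accept(socket);

    std::vector<char> buffer(expected.size());
    asio::read(socket, asio::buffer(buffer)); // block until the full message arrives
    if (std::string(buffer.begin(), buffer.end()) == expected)
        asio::write(socket, asio::buffer(reply));
}

// In the test: run the fake peer in a background thread and point the proxy at it.
//   std::thread peer([] { run_fake_peer(5555, "ping", "pong"); });
//   RPCClientProxy proxy("127.0.0.1", 5555);
//   ... exercise the proxy ...
//   peer.join();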
What you describe is a perfectly normal situation, and there are established patterns that can help you implement your tests in a way that won't affect your production code.
One way to solve this is to use a Test Specific Subclass, where you could add a setter for the socket member and use a mock socket in the case of a test. Of course, you would need to make the variable protected rather than private, but that's probably no biggie. For example:
class RPCClientProxy
{
    ...
protected:
    Socket* pSocket;
    ...
};

class TestableClientProxy : public RPCClientProxy
{
public:
    TestableClientProxy(Socket* pSocket)
    {
        this->pSocket = pSocket;
    }
};

void SomeTest()
{
    MockSocket* pMockSocket = new MockSocket(); // or however you do this in your world.
    TestableClientProxy proxy(pMockSocket);
    ....
    assert(pMockSocket->foo);
}
In the end it comes down to the fact that you often (more often than not in C++) have to design your code in such a way as to make it testable, and there is nothing wrong with that. If you can avoid these decisions leaking out into the public interfaces, that may sometimes be better, but in other cases it can be better to choose, for example, dependency injection through constructor parameters over, say, using a singleton to provide access to a specific instance.
Side note: It's probably worth taking a look through the rest of the xunitpatterns.com site: there are a whole load of well established unit-testing patterns to understand and hopefully you can gain from the knowledge of those who have been there before you :)
Your issue is more a problem of design.
If you ever wish to implement another behavior for Socket, you're toast, as it involves rewriting all the code that created sockets.
The usual idea is to use an abstract base class (interface) Socket and then use an Abstract Factory to create the socket you wish depending on the circumstances. The factory itself could be either a Singleton (though I prefer Monoid) or passed down as an argument (according to the tenets of Dependency Injection). Note that the latter means no global variable, which is much better for testing, of course.
So I would advise something along the lines of:
int main(int argc, char* argv[])
{
    SocketsFactoryMock sf;

    std::string host, port;
    // initialize them

    std::unique_ptr<Socket> socket = sf.create(host, port);

    RPCClientProxy rpc(std::move(socket));
}
It has an impact on the client: you no longer hide the fact that you use sockets behind the scenes. On the other hand, it gives control to the client who may wish to develop some custom sockets (to log, to trigger actions, etc..)
So it IS a design change, but it is not caused by TDD itself. TDD just takes advantage of the higher degree of control.
Also note the clear resource ownership expressed by the use of unique_ptr.
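To illustrate, the abstract base class and factory could be shaped roughly like this (the member functions are illustrative, and MockSocket stands in for whatever test double you prefer):
#include <memory>
#include <string>
#include <vector>

class Socket {
public:
    virtual ~Socket() = default;
    virtual void send(const std::vector<char>& data) = 0;
    virtual std::vector<char> receive() = 0;
};

class SocketsFactory {
public:
    virtual ~SocketsFactory() = default;
    virtual std::unique_ptr<Socket> create(const std::string& host, const std::string& port) = 0;
};

// Hand-written test double handed out by the mock factory.
class MockSocket : public Socket {
public:
    void send(const std::vector<char>&) override {}
    std::vector<char> receive() override { return {}; }
};

class SocketsFactoryMock : public SocketsFactory {
public:
    std::unique_ptr<Socket> create(const std::string&, const std::string&) override {
        return std::make_unique<MockSocket>();
    }
};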
As others have pointed out, a factory architecture or a test-specific subclass are both good options in this situation. For completeness, one other possibility is to use a default argument:
RPCClientProxy::RPCClientProxy(Socket *socket = NULL)
{
    if (socket == NULL) {
        socket = new Socket();
    }
    //...
}
This is perhaps somewhere between the factory paradigm (which is ultimately the most flexible, but more painful for the user) and newing up a socket inside your constructor. It has the benefit that existing client code doesn't need to be modified.