How do you mock classes that use RAII in C++?

Here's my issue: I'd like to mock a class that creates a thread at initialization and closes it at destruction. There's no reason for my mock class to actually create and close threads. But to mock a class, I have to inherit from it. When I create a new instance of my mock class, the base class's constructor is called, creating the thread. When my mock object is destroyed, the base class's destructor is called, attempting to close the thread.
How does one mock an RAII class without having to deal with the actual resource?

You instead make an interface that describes the type, and have both the real class and the mock class inherit from that. So if you had:
class RAIIClass {
public:
    RAIIClass(Foo* f);
    ~RAIIClass();
    bool DoOperation();
private:
    ...
};
You would make an interface like:
class MockableInterface {
public:
    MockableInterface(Foo* f);
    virtual ~MockableInterface();
    virtual bool DoOperation() = 0;
};
And go from there.
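For example, a minimal sketch of the two derived classes (the concrete class names here are illustrative, not from the question):

class RealRAIIClass : public MockableInterface {
public:
    RealRAIIClass(Foo* f) : MockableInterface(f) {} // starts the real thread
    ~RealRAIIClass() {}                             // joins/closes the real thread
    bool DoOperation() { return true; }             // does the real work
};

class MockRAIIClass : public MockableInterface {
public:
    MockRAIIClass() : MockableInterface(0) {} // no Foo, no thread at all
    bool DoOperation() { return true; }       // canned result for tests
};

Code under test accepts a MockableInterface& (or pointer), so either class can be supplied.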

First of all, it is not necessarily unreasonable that your classes are well designed for their use but poorly designed for testing. Not everything is easy to test.
Presumably you want to test some other function or class which makes use of the class you want to mock (otherwise the solution is trivial). Let's call the former "User" and the latter "Mocked". Here are some possibilities:
Change User to use an abstract version of Mocked (you get to choose what kind of abstraction to use: inheritance, callback, templates, etc.); see the sketch after this list.
Compile a different version of Mocked for your testing code (for example, #ifdef out the RAII code when you compile your tests).
Have Mocked accept a constructor flag to turn off its behavior. I personally would avoid doing this.
Just suck up the cost of allocating the resource.
Skip the test.
The last two may be your only recourse if you cannot modify User or Mocked. If you can modify User and you believe that designing your code to be testable is important, then you should explore the first option before any of the others. Note that there can be a trade-off between making your code generic/flexible and keeping it simple, both of which are admirable qualities.
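As a sketch of the first option, User can be templated on the mocked type (all names here are illustrative):

template <typename MockedT>
class User {
public:
    explicit User(MockedT& m) : mocked(m) {}
    bool Run() { return mocked.DoOperation(); } // delegates to real or fake alike
private:
    MockedT& mocked;
};

// Production:  User<Mocked> u(realObject);
// Tests:       User<FakeMocked> u(fake); // FakeMocked never touches the resource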

The pimpl idiom might suit you as well: give your Thread class a concrete implementation class that it brings in underneath. With the right #defines and #ifdefs, the implementation can change when you enable unit testing, which means you can switch between a real implementation and a mocked one depending on what you are trying to accomplish.
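A rough sketch of that idea, assuming a header that never exposes the implementation:

// Thread.h
class Thread {
public:
    Thread();
    ~Thread();
private:
    struct Impl;  // defined in Thread.cpp for real builds, or in a
    Impl* impl;   // ThreadTestImpl.cpp compiled only for unit tests
};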

One technique I've used is a form of decorator. Your final class has a method which creates the resource on the stack and then calls the same method on a member which is a pointer to your base class. When that call returns, your method returns, destroying the resource you created.
At test time, you swap in a mock which doesn't create any threads, but just forwards to the method you want to test.
class Base{
protected:
    Base* decorated; // the wrapped object that does the actual work
public:
    virtual ~Base() {}
    virtual void method(void)=0;
};

class Final: public Base{
public:
    void method(void) { Thread athread; decorated->method(); } // I expect Final to do something with athread
};

class TestBase: public Base{
public:
    void method(void) { decorated->method(); } // no thread is created; just forwards
};

Related

how to add a function to a lib class without overriding it

I have a case in which I need to add some functions to a game engine class I'm using for a VR project, without overriding the class itself.
The engine class is named AnnwaynPlayer and contains many useful methods to control the player. Now that I'm in the networking phase, I need to add 2 extra methods to this lib class: setActive() and setConnected(). What is the best way to do this?
If you can't touch the class itself then you probably want to use inheritance. This is one of the main goals of object-oriented programming -- to be able to add/change the behavior of an existing class without altering it. So you want something like:
class MyAnnwaynPlayer : public AnnwaynPlayer {
public:
    void setActive();
    void setConnected();
    // ...
};
Now, things will be fine if AnnwaynPlayer has a virtual destructor. If it doesn't, and your MyAnnwaynPlayer class has a non-trivial destructor, then you have to be wary of using an instance of MyAnnwaynPlayer through a pointer (be it raw or smart) to the base class AnnwaynPlayer. When such a pointer is deleted, it will not chain through a call to your MyAnnwaynPlayer destructor.
Also consider ADL if you only need access to the public API of the base class. It's safer than inheritance, because you don't necessarily know the right class to inherit from in cases where the implementation returns something ultimately unspecified (like an internal derived class).
In essence, this would look like this:
namespace AnnwaynNamespace {
    // ADL requires AnnwaynPlayer itself to be declared inside this namespace
    void setActive(AnnwaynPlayer& p);
    void setConnected(AnnwaynPlayer& p);
}
And you can call them without qualifying the function names (or naming the namespace), because of ADL.
void wherever(AnnwaynNamespace::AnnwaynPlayer& p) {
    setActive(p);
}
So setActive, etc, become part of the actual public API of the class, without involving any inheritance.

Is it okay when a base class has only one derived class?

I am creating a password module using OOD and design patterns. The module will keep a log of recordable events and read/write to a file. I created the interface in the base class and the implementation in a derived class. Now I am wondering if it is a sort of bad smell if a base class has only one derived class. Is this kind of class hierarchy unnecessary? To eliminate the hierarchy I could of course just do everything in one class and not derive at all. Here is my code:
class CLogFile
{
public:
    CLogFile(void);
    virtual ~CLogFile(void);
    virtual void Read(CString strLog) = 0;
    virtual void Write(CString strNewMsg) = 0;
};
The derived class is:
class CLogFileImpl :
    public CLogFile
{
public:
    CLogFileImpl(CString strLogFileName, CString & strLog);
    virtual ~CLogFileImpl(void);
    virtual void Read(CString strLog);
    virtual void Write(CString strNewMsg);
protected:
    CString & m_strLog; // the log file data
    CString m_strLogFileName; // file name
};
Now in the code
CLogFile * m_LogFile = new CLogFileImpl( m_strLogPath, m_strLog );
m_LogFile->Write("Log file created");
My question is that on one hand I am following the OOD principle of creating the interface first and the implementation in a derived class. On the other hand, is it overkill, and does it complicate things? My code is simple enough not to use any design patterns, but it does take cues from them in terms of general data encapsulation through a derived class.
Ultimately is the above class hierarchy good or should it be done in one class instead?
No, in fact I believe your design is good. You may later need to add a mock or test implementation for your class and your design makes this easier.
The answer depends on how likely it is that you'll have more than one behavior for that interface.
Read and write operations for a file system might make perfect sense now. What if you decide to write to something remote, like a database? In that case, a new implementation still works perfectly without affecting clients.
I'd say this is a fine example of how to do an interface.
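For instance, a sketch of what a second implementation might look like (the class name and constructor argument are illustrative, not from the question):

class CLogDatabaseImpl : public CLogFile
{
public:
    CLogDatabaseImpl(CString strConnectionString);
    virtual ~CLogDatabaseImpl(void);
    virtual void Read(CString strLog);      // reads from a database table
    virtual void Write(CString strNewMsg);  // writes a row instead of a line
};

Code that holds a CLogFile* keeps working unchanged.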
Shouldn't you make the destructor pure virtual? If I recall correctly, that's the recommended idiom for creating a C++ interface, according to Scott Meyers.
Yes, this is acceptable, even with only one implementation of your interface, but it may be slightly slower at run time than a single class (virtual dispatch has roughly the cost of following 1-2 function pointers).
This can be used as a way of preventing clients from depending on the implementation details. For example, under the above pattern, clients of your interface do not need to be recompiled just because your implementation gains a new data field.
You can also look at the pImpl pattern, which is a way to hide implementation details without using inheritance.
Your model also works well with the factory model, where you work with a lot of shared pointers and call some factory method to "get you" a shared pointer to an abstract interface.
The downside of using pImpl is managing the pointer itself. With C++11, however, pImpl works well with move semantics, so it becomes more workable. At present, though, if you want to return an instance of your class from a "factory" function, it has copy-semantics issues with its internal pointer.
This leads implementers to return a shared pointer to the outer class, which is made non-copyable. That means you have a shared pointer to one class holding a pointer to an inner class, so function calls go through that extra level of indirection and you get two "new"s per construction. If you have only a small number of these objects that isn't a major concern, but it can be a bit clumsy.
C++11 has the advantage of unique_ptr, which supports both forward declaration of its underlying type and move semantics. Thus pImpl becomes more feasible where you really do know you are going to have just one implementation.
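A minimal C++11 sketch of a movable pImpl class (all names are illustrative):

#include <memory>
#include <string>

class LogFile {
public:
    explicit LogFile(const std::string& fileName);
    ~LogFile();                       // defined in the .cpp, where Impl is complete
    LogFile(LogFile&& other);         // movable, so a factory can return it by value
    LogFile& operator=(LogFile&& other);
    void Write(const std::string& msg);
private:
    struct Impl;                      // forward declaration is enough for unique_ptr
    std::unique_ptr<Impl> impl;
};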
Incidentally I would get rid of those CStrings and replace them with std::string, and not put C as a prefix to every class. I would also make the data members of the implementation private, not protected.
Alternatively, following Composition over Inheritance and the Single Responsibility Principle (both referenced by Stephane Rolland), you could implement the following model.
First, you need three different classes:
class CLog {
    CLogReader* m_Reader; // assumed to be set via setReader before Read is called
    CLogWriter* m_Writer; // assumed to be set via setWriter before Write is called
public:
    void Read(CString& strLog) {
        m_Reader->Read(strLog);
    }
    void Write(const CString& strNewMsg) {
        m_Writer->Write(strNewMsg);
    }
    void setReader(CLogReader* reader) {
        m_Reader = reader;
    }
    void setWriter(CLogWriter* writer) {
        m_Writer = writer;
    }
};
CLogReader handles the Single Responsibility of reading logs:
class CLogReader {
public:
    virtual void Read(CString& strLog) {
        // read into the string
    }
};
CLogWriter handles the Single Responsibility of writing logs:
class CLogWriter {
public:
    virtual void Write(const CString& strNewMsg) {
        // write the string
    }
};
Then, if you wanted your CLog to, say, write to a socket, you would derive CLogWriter:
class CLogSocketWriter : public CLogWriter {
public:
    void Write(const CString& strNewMsg) {
        // write to the socket
    }
};
And then set your CLog instance's Writer to an instance of CLogSocketWriter:
CLog* log = new CLog();
log->setWriter(new CLogSocketWriter());
log->Write("Write something to a socket");
Pros
The pros of this method are that you follow the Single Responsibility Principle: every class has a single purpose. It gives you the ability to extend a single purpose without having to drag along code you would not modify anyway. It also allows you to swap out components as you see fit, without having to create an entire new CLog class for that purpose. For instance, you could have a writer that writes to a socket but a reader that reads a local file, and so on.
Cons
Memory management becomes a huge concern here. You have to keep track of when to delete your pointers. In this case, you'd need to delete them on destruction of CLog, as well as when setting a different Writer or Reader. Doing this while references are stored elsewhere could lead to dangling pointers. This is a great opportunity to learn about strong and weak references (std::shared_ptr and std::weak_ptr in C++11), reference-counting containers which automatically delete their pointer when all references to it are lost.
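A minimal sketch, assuming C++11, of letting CLog own its components through std::unique_ptr so the delete bookkeeping above disappears:

#include <memory>
#include <utility>

class CLog {
    std::unique_ptr<CLogReader> m_Reader; // CLogReader/CLogWriter as above,
    std::unique_ptr<CLogWriter> m_Writer; // but give them virtual destructors
public:
    void setReader(std::unique_ptr<CLogReader> reader) {
        m_Reader = std::move(reader); // the old reader, if any, is deleted here
    }
    void setWriter(std::unique_ptr<CLogWriter> writer) {
        m_Writer = std::move(writer); // likewise for the old writer
    }
    // Read/Write forward to m_Reader/m_Writer exactly as before
};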
No. If there's no polymorphism in action there's no reason for inheritance, and you should apply the refactoring rule of merging the two classes into one: "Prefer composition over inheritance".
Edit: as @crush commented, "prefer composition over inheritance" may not be the appropriate quotation here. So let's say: if you think you need to use inheritance, think twice. And if you are really sure you need it, think about it once again.

Dependency injection: good practices to reduce boilerplate code

I have a simple question, and I'm not even sure it has an answer but let's try.
I'm coding in C++ and using dependency injection to avoid global state. This works quite well, and I don't run into unexpected/undefined behaviour very often.
However, I realise that, as my project grows, I'm writing a lot of code which I consider boilerplate. Worse: the fact that there is more boilerplate code than actual code sometimes makes it hard to understand.
Nothing beats a good example, so let's go:
I have a class called TimeFactory which creates Time objects.
For more details (not sure it's relevant): Time objects are quite complex because a Time can have different formats, and conversion between them is neither linear nor straightforward. Each Time contains a Synchronizer to handle conversions, and to make sure they all have the same, properly initialized synchronizer, I use a TimeFactory. The TimeFactory has only one instance and is application-wide, so it would qualify as a singleton, but because it's mutable I don't want to make it one.
In my app, a lot of classes need to create Time objects. Sometimes those classes are deeply nested.
Let's say I have a class A which contains instances of class B, and so on down to class D. Class D needs to create Time objects.
In my naive implementation, I pass the TimeFactory to the constructor of class A, which passes it to the constructor of class B and so on until class D.
Now, imagine I have a couple of classes like TimeFactory and a couple of class hierarchies like the one above: I lose all the flexibility and readability I'm supposed to gain from dependency injection.
I'm starting to wonder if there isn't a major design flaw in my app ...
Or is this a necessary evil of using dependency injection ?
What do you think ?
In my naive implementation, I pass the TimeFactory to the constructor
of class A, which passes it to the constructor of class B and so on
until class D.
This is a common misapplication of dependency injection. Unless class A directly uses the TimeFactory, it should not ever see, know about, or have access to the TimeFactory. The D instance should be constructed with the TimeFactory. Then the C instance should be constructed with the D instance you just constructed. Then the B with the C, and finally the A with the B. Now you have an A instance which, indirectly, owns a D instance with access to a TimeFactory, and the A instance never saw the TimeFactory directly passed to it.
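In code, the wiring described above looks roughly like this (the constructor signatures are assumed for illustration):

TimeFactory timeFactory;
D d(&timeFactory); // only D ever sees the factory
C c(&d);
B b(&c);
A a(&b);           // A holds a B, and only indirectly a D with its TimeFactory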
Miško Hevery talks about this in this video.
Global state isn't always evil, if it's truly global. There are often engineering trade-offs, and your use of dependency injection already introduces more coupling than using a singleton interface or a global variable would: class A now knows that class B requires a TimeFactory, which is often more detail about class B than class A needs. The same goes for classes B and C, and for classes C and D.
Consider the following solution that uses a singleton pattern:
Have the TimeFactory (abstract) base class provide singleton access to the application's TimeFactory.
Set that singleton once, to a concrete subclass of TimeFactory.
Have all of your accesses to TimeFactory go through that singleton.
This creates global state, but decouples clients of that global state from any knowledge of its implementation.
Here's a sketch of a potential implementation:
#include <cstddef>   // for NULL
#include <exception>

class TimeFactory
{
public:
    // ...
    static TimeFactory* getSingleton(void) { return singleton; }
    // ...
protected:
    void setAsSingleton(void)
    {
        if (singleton != NULL) {
            // handle the case where multiple TimeFactory
            // implementations are created, possibly by throwing
            throw std::exception();
        }
        singleton = this;
    }
private:
    static TimeFactory* singleton; // can't be initialized in-class;
};                                 // it needs one out-of-class definition:

TimeFactory* TimeFactory::singleton = NULL;
Each time a subclass of TimeFactory is instantiated, you can have it call setAsSingleton, either in its constructor or elsewhere.
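For example, a concrete subclass (the name is illustrative) could register itself in its constructor:

class SystemClockTimeFactory : public TimeFactory
{
public:
    SystemClockTimeFactory() { setAsSingleton(); }
};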

Factory Pattern in C++ -- doing this correctly?

I am relatively new to "design patterns" as they are referred to in a formal sense. I've not been a professional for very long, so I'm pretty new to this.
We've got a pure virtual interface base class. This interface class obviously exists to provide the definition of the functionality its derived children are supposed to implement. The current use and situation in the software dictates what type of derived child we want to use, so I recommended creating a wrapper that communicates which type of derived child we want and returns a Base pointer to a new derived object. This wrapper, to my understanding, is a factory.
Well, a colleague of mine created a static function in the Base class to act as the factory. This causes me trouble for two reasons. First, it seems to break the interface nature of the Base class. It feels wrong to me that the interface would itself need to have knowledge of the children derived from it.
Secondly, it causes more problems when I try to re-use the Base class across two different Qt projects. One project is where I am implementing the first (and probably only) real derived class for this one case, though I want to use the same method for two other features that will have several different derived classes; the second is the actual application where my code will eventually be used. My colleague has created a derived class to act as a tester for the real application while I code my part. This means that I've got to add his headers and .cpp files to my project, and that just seems wrong since I'm not even using his code for the project while I implement my part (but he will use mine when it is finished).
Am I correct in thinking that the factory really needs to be a wrapper around the Base class rather than the Base acting as the factory?
You do NOT want to use your interface class as the factory class. For one, if it is a true interface class, there is no implementation. Second, if the interface class does have some implementation defined (in addition to the pure virtual functions), making a static factory method now forces the base class to be recompiled every time you add a child class implementation.
The best way to implement the factory pattern is to have your interface class separate from your factory.
A very simple (and incomplete) example is below:
class MyInterface
{
public:
    virtual void MyFunc() = 0;
};

class MyImplementation : public MyInterface
{
public:
    virtual void MyFunc() {}
};

class MyFactory
{
public:
    static MyInterface* CreateImplementation(...);
};
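For illustration, a definition and use of the factory might look like this (the answer elides the argument list, so a no-argument version is assumed here):

MyInterface* MyFactory::CreateImplementation()
{
    return new MyImplementation(); // the concrete type is named in exactly one place
}

// Client code depends only on the interface:
MyInterface* obj = MyFactory::CreateImplementation();
obj->MyFunc();
delete obj; // or hold it in a smart pointer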
I'd have to agree with you. Probably one of the most important principles of object-oriented programming is that each piece of code (whether a method, class or namespace) should have a single responsibility. In your case, your base class serves the purpose of defining an interface. Adding a factory method to that class violates that principle, opening the door to a world of shi... trouble.
Yes, a static factory method in the interface (base class) requires it to have knowledge of all possible instantiations. That way, you don't get any of the flexibility the Factory Method pattern is intended to bring.
The Factory should be an independent piece of code, used by client code to create instances. You have to decide somewhere in your program what concrete instance to create. Factory Method allows you to avoid having the same decision spread out through your client code. If later you want to change the implementation (or e.g. for testing), you have just one place to edit: this may be e.g. a simple global change, through conditional compilation (usually for tests), or even via a dependency injection configuration file.
Be careful about how client code communicates what kind of implementation it wants: that's not an uncommon way of reintroducing the dependencies factories are meant to hide.
It's not uncommon to see factory member functions in a class, but it makes my eyes bleed. Often their use has been mixed up with the functionality of the named constructor idiom. Moving the creation function(s) to a separate factory class will also buy you more flexibility to swap factories during testing.
When the interface is just for hiding the implementation details and there will only ever be one implementation of the Base interface, it can be OK to couple them. In that case, the factory function is just a new name for the constructor of the actual implementation.
However, that case is rare. Unless explicitly designed to have only one implementation ever, you are better off assuming that multiple implementations will exist at some point in time, if only for testing (as you discovered).
So usually it is better to split the Factory part into a separate class.

How do I add code automatically to a derived function in C++

I have code that's meant to manage operations on both a networked client and a server, since there is significant overlap between the two. However, there are a few functions here and there that are meant to be exclusively called by the client or server, and accidentally calling a client function on the server (or vice versa) is a significant source of bugs.
To reduce these sorts of programming errors, I'm trying to tag functions so that they'll raise a ruckus if they're misused. My current solution is a simple macro at the start of each function that calls an assert if the client or server accesses members they shouldn't. However, this runs into problems when there are multiple derived instances of classes, in that I have to tag the implementation as client or server side in EVERY child class.
What I'd like to be able to do is put a tag in the virtual member's signature in the base class, so that I only have to tag it once and not run into errors by forgetting to do it repeatedly. I've considered putting a check in a base class implementation and then referring to it with something like base::functionName, but that runs into the same issue as far as needing to manually add the function call to every implementation. Ideally, I'd be able to have parent versions of the function called automatically like default constructors do.
Does anybody know how to achieve something like this in C++? Is there an alternate approach I should be considering?
Thanks!
Another approach might be to override a different method than the one your callers actually call:
class Base {
public:
    void doit(const Something &);
protected:
    virtual void real_doit(const Something &);
};

class Derived : public Base {
protected:
    virtual void real_doit(const Something &);
};
The implementation of Base::doit() could do the check to make sure that it's being called in the right environment, and then call the virtual real_doit() function. Derived classes would override the protected virtual function, and users of either class wouldn't be able to call the protected function.
The Base::doit() function is not virtual so that derived classes can't accidentally override the wrong one. (People can try, but hopefully they'll notice soon enough when it's not called.)
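A sketch of what the non-virtual entry point could look like; running_on_server() is a hypothetical check standing in for however the codebase distinguishes the two environments:

#include <cassert>

bool running_on_server(); // hypothetical environment query

void Base::doit(const Something &s)
{
    assert(running_on_server()); // raise a ruckus on misuse
    real_doit(s);                // then dispatch to the derived implementation
}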
What you've proposed is incredibly complex. It sounds like a simpler solution would be
class CommonStuff {
    // all common code that anybody can safely call
};

class ServerBase : public CommonStuff {
    // only what the server is allowed to call; can safely be overridden
};

class ClientBase : public CommonStuff {
    // only what the client is allowed to call; can safely be overridden
};
Compile-time enforcements are much better than any sort of runtime enforcement.
There's not a way within the language (that I know of) to do what you're asking without redesigning your classes. The simplest solution may be to have a Client interface (pure virtual) class that does not declare server functions, and a Server interface class that doesn't declare client functions, and have your consolidated code inherit (publicly) from both interfaces. Then in your client program, use a reference (or pointer) to the Client interface, which does not allow access to any methods not declared in the Client interface. On the server, use the Server interface.
This will also allow you to use derived classes as Server or Client as well.
I would consider splitting this library into three libraries: A base library that has most everything, a server-only library, and a client-only library. As long as the client doesn't use the server library, you're good. You may end up adding a few extra classes (class Processor might split into BaseProcessor, ClientProcessor, and ServerProcessor, where each subclass has one additional function that the base doesn't.)
If that won't work, could you put the server/client check in the class constructor, and call the assertion there? (That would only work if the server-only or client-only is granular to the class, not to the method.)
If that won't work, would it make any sense to actually compile different versions of your library, based on whether it's a server or client build? Surround the methods, and their declarations, with #ifdef SERVERBUILD and #ifdef CLIENTBUILD, and include some checks to make sure they aren't both defined (#if defined(SERVERBUILD) && defined(CLIENTBUILD), #error Can't define both!).
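A sketch of that conditional-compilation scheme, using the macro names from the answer (the class is illustrative):

#if defined(SERVERBUILD) && defined(CLIENTBUILD)
#error "Can't define both SERVERBUILD and CLIENTBUILD!"
#endif

class Processor {
public:
    void commonOperation();      // always available
#ifdef SERVERBUILD
    void serverOnlyOperation();  // compiled only into the server build
#endif
#ifdef CLIENTBUILD
    void clientOnlyOperation();  // compiled only into the client build
#endif
};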
I voted up Greg Hewgill's answer, but it got me thinking about ways to add "aspects" such as you request. I used his naming convention here (class Base and method doit):
#include <iostream>

struct Something {}; // stand-in for the real argument type
inline std::ostream& operator<<(std::ostream& os, const Something&) {
    return os << "something";
}

class Base {
protected:
    class Aspect {
    public:
        Aspect(int x) {
            std::cout << "aspect" << std::endl; // runs before doit's body
        }
    };
public:
    virtual void doit(const Something &arg, const Aspect hook = 0)
    {
        std::cout << "doit(" << arg << ")" << std::endl;
    }
};
Callers can just say base.doit(arg) since Aspect is a default argument. Its constructor runs before doit and its destructor (not pictured) runs after. Sadly my first idea to make the default argument hook = this is not allowed.
Children can override doit with the same signature and get the same effect.