I'm comparing GoogleMock and FakeIt for writing unit tests. I prefer FakeIt over GoogleMock because I come from a Java background and FakeIt stays close to the Mockito/JMock syntax, which makes tests much easier to write and maintain.
But the FakeIt GitHub page (https://github.com/eranpeer/FakeIt) says it doesn't support multiple inheritance, and the application I'm testing has code with multiple inheritance. I don't need to support diamond inheritance, so I would like to know: is it just that aspect of multiple inheritance that isn't supported, or are there other aspects that aren't supported as well?
Unfortunately it seems that any type of multiple inheritance is not supported, even if it's just an "interface" that unifies several other "interfaces", e.g.:
struct IA { virtual void a() = 0; };
struct IB { virtual void b() = 0; };
struct IC : public IA, public IB {};
fakeit::Mock<IC> mock; // error :(
(The check is done using std::is_simple_inheritance_layout<T>)
I did, however, find a little workaround for this problem, at least for simple scenarios:
class MockC : public IC {
public:
MockC(IA& a, IB& b) : m_a(a), m_b(b) {}
void a() override { return m_a.a(); };
void b() override { return m_b.b(); };
private:
IA& m_a;
IB& m_b;
};
fakeit::Mock<IA> mockA;
fakeit::Mock<IB> mockB;
MockC mockC(mockA.get(), mockB.get());
// Use mockA and mockB to set up the mock behavior the way you want it.
// Just make sure not to use mockC after they go out of scope!
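For instance, continuing the snippet above, setting up the underlying mocks might look roughly like this (a sketch using FakeIt's Fake and Verify helpers for the void methods; adapt it to whatever behavior your test actually needs):
// Stub the void methods on the individual mocks; MockC simply forwards to them.
fakeit::Fake(Method(mockA, a));
fakeit::Fake(Method(mockB, b));

IC& c = mockC; // hand mockC to code that expects an IC&
c.a();
c.b();

// The forwarded calls are recorded on the underlying mocks.
fakeit::Verify(Method(mockA, a));
fakeit::Verify(Method(mockB, b));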
Here's another workaround that doesn't require you to write a special mock class. Mock each of the base classes, then apply each mock to an instance of the deriving class by casting that instance to a reference of the corresponding interface. Note that you do need an instance of the deriving class to apply the mocks to. Here is an example:
class I1
{
public:
virtual int one() = 0;
};
class I2
{
public:
virtual int two() = 0;
};
class Both : public I1, public I2
{
public:
virtual int one()
{
return 0;
}
virtual int two()
{
return 0;
}
virtual int three()
{
return one() + two();
}
};
We have pure interfaces I1 and I2 with pure virtual methods one() and two() respectively, all implemented by Both. As you might guess, Both is deliberately designed to produce an incorrect answer to demonstrate the mock. Here is the mock inside a Google Test test:
TEST(both_mock, three)
{
Both both;
Mock<I1> mock1((I1&)both);
Mock<I2> mock2((I2&)both);
When(Method(mock1, one)).Return(1);
When(Method(mock2, two)).Return(2);
ASSERT_EQ(both.three(), 3);
}
And this works and passes. The advantage of this is that you don't need to create a special mock class, and you can use the actual class that inherits multiple classes. The disadvantages are...
The deriving class (Both in this case) must be instantiable (e.g., you can't do this with an abstract class or an interface that inherits from other abstract classes or interfaces).
If you further subclass the subclass of both interfaces (e.g., class More : public Both), you still need one mock for each interface/base class, and you cannot mock any member declared by Both, More, or any further deriving class.
Related
Let's assume you have a base class for the different states of a state machine, with methods for different inputs like mouse, keyboard, joystick, etc. Not every derived state is going to use all possible types of input. If the base class methods are pure virtual, every derived state class always needs to implement every single one of them. To avoid this, I declared them with an empty body in the base class and only override the ones that are used by the particular derived class. If a class doesn't use a certain input type, the empty base class method gets called. I store the currentState in a base class pointer and just feed it the input, without having to know which particular derived state it actually is, to avoid unnecessary casts.
class Base
{
public:
virtual void keyboardInput() {}
virtual void mouseInput() {}
};
class Derived : public Base
{
public:
void keyboardInput()
{
// do something
}
// Derived doesn't use mouseInput, so it doesn't implement it
};
void foo(Base& base)
{
base.keyboardInput();
base.mouseInput();
}
int main()
{
Derived der;
foo(der);
}
Is this considered a good practice?
Your question is opinion-based, but I'd rather follow this approach and use an interface:
#include <memory>

struct IBase {
virtual void keyboardInput() = 0;
virtual void mouseInput() = 0;
virtual ~IBase() {}
};
class Base : public IBase {
public:
virtual void keyboardInput() override {}
virtual void mouseInput() override {}
};
class Derived : public Base {
public:
void keyboardInput() override {
// do something
}
// Derived doesn't use mouseInput, so it doesn't implement it
};
int main() {
std::unique_ptr<IBase> foo = std::make_unique<Derived>();
foo->keyboardInput();
foo->mouseInput();
return 0;
}
Some arguments from the comments on why that's the better practice:
The idea is that an interface should contain as few assertions as possible, making it less likely to change and more dependable for those who inherit from it. Implementing the methods, albeit with empty bodies, is already an assertion, however small.
It also makes later refactorings that introduce more interfaces through multiple inheritance less painful.
It really depends on what you want from the methods. When declaring an interface, usually the methods are left pure virtual because they are required to be implemented for the class to work at all. Marking them pure virtual signals "You have to implement this.".
However, sometimes there are methods that may do nothing and it's valid for all possible implementations for them to do nothing. It is not very common, but it is possible.
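A hedged example of such a case (the class name is invented): an optional lifecycle hook where doing nothing is a perfectly reasonable implementation for many users.
class ILifecycleHook {
public:
    // An empty default body is arguably fine here, because "do nothing"
    // is a valid implementation for most users of this hook.
    virtual void onFrameStart() {}
    virtual ~ILifecycleHook() = default;
};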
I don't think that your interface is such a case, though, and you should follow #πάντα ῥεῖ's answer. Or do it through multiple inheritance:
class MouseInput {
public:
virtual void mouseInput() = 0;
};
class KeyboardInput {
public:
virtual void keyboardInput() = 0;
};
class Derived : public KeyboardInput
{
public:
virtual void keyboardInput() override
{
// do something
}
};
class AllInput : public KeyboardInput, public MouseInput
{
public:
virtual void keyboardInput() override
{
// do something
}
virtual void mouseInput() override
{
// do something
}
};
That has the benefit that you can have methods that explicitly say that they work with one kind of input:
void doSomethingMouseIsh(MouseInput* input);
The disadvantage is that methods that combine mouse and keyboard input get awkward unless you have AllInput as the interface and use it for all such "all input" methods.
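For instance (these function signatures are made up purely to illustrate the point):
// Without a unifying interface, a combined handler needs one parameter per interface...
void handleCombinedInput(KeyboardInput& keyboard, MouseInput& mouse);

// ...whereas with AllInput a single parameter expresses the same requirement:
void handleCombinedInput(AllInput& input);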
Final note: as long as you try to write clean code, considering each use case is more important than some best practices.
If you're going to be strict about it, this does violate the ISP (https://en.wikipedia.org/wiki/Interface_segregation_principle), since you're forcing a subclass to depend on a method it doesn't use. In practice, though, it's generally not too bad if the alternative adds more complexity.
In C++, is it reasonable for an interface class to inherit from a concrete class? Is this a bad design smell?
I have an interface which defines a behaviour, and an inherited class which defines an implementation.
Now I have another class, and I want to state: "Whoever inherits from this class needs to specify behavior X, and also gets behavior Y, which already has a valid default implementation."
Is this kind of implementation correct practice? I'm confused about this abstract/concrete mixing.
class IBase
{
public:
virtual ~IBase(){}
virtual void method1() = 0;
};
class ConcreteBase : public IBase
{
public:
ConcreteBase(){}
void method1() override { /* some impl. */ }
};
class ISpecialized : public ConcreteBase
{
public:
// Here I don't need a virtual destructor
virtual void method2() = 0;
};
class ConcreteSpecialized : public ISpecialized
{
public:
ConcreteSpecialized(){}
void method2() override { /* some impl. */ }
};
Instead of having ISpecialized extend IBase, you should consider having it stand on its own. Then you can use multiple inheritance to make your concrete class derive from both interfaces.
class ISpecialized
{
public:
virtual ~ISpecialized(){}
virtual void method2() = 0;
};
class ConcreteSpecialized : public ConcreteBase, public ISpecialized
{
public:
ConcreteSpecialized(){}
void method2() override { /* some impl. */ }
};
The language allows it, it's clear and to the point, and you can use the override annotation which will help you should you ever refactor to another approach. I'd consider renaming ISpecialized though, if you're using I to specify an interface.
So other than the unnecessary ~IBase(){} there's nothing wrong at all with your approach.
If you want a virtual destructor then use
virtual ~IBase() = default;
i.e. don't define it explicitly.
Strictly from a software design point of view, I would not recommend on doing so.
There is no added benefit to ISpecialized inheriting from the ConcreteBase rather than the IBase class, since in both cases the interface will define the exact same set of methods. An interface is something you'd expect to be abstract.
If there is no direct relationship between the two interfaces, I would also recommend considering Mark's answer. In such case, it's better to separate them, thus allowing oneself to inherit a concrete implementation of the first one, while using only the definitions from the other.
I stumbled upon the following problem: I have two packages, A and B, each working fine on its own. Each has its own interface and its own implementation. Now I made a package C combining an adapter of A with a concrete implementation of B. C actually only implements the interface of A and, for now, only inherits and uses the interface of B internally. Most of the time it was enough to have access only to interface A from a container, but now I need the methods from B to be accessible too. Here is a simple example:
#include <vector>

//----Package A----
struct IA
{virtual void foo() = 0;};
// I can't simply add bar() here, it would make no sense at all here...
struct A : public IA
{virtual void foo() {/*doBasicWork();*/} };
//----Package B----
struct IB
{virtual void bar() = 0;};
struct B1 : public IB
{
//Some special implementation
virtual void bar() {}
};
struct B2 : public IB
{
//Some special implementation
virtual void bar() {}
};
// + several additional B classes, each with a different implementation of bar()
//---- Mixed Classes
class AB1 : public B1, public A
{
void foo() {A::foo(); B1::bar();}
};
class AB2 : public B2, public A
{
void foo() {A::foo(); B2::bar();}
};
// One Container to rule them all:
std::vector<IA*> aVec;
AB1 obj1;
AB2 obj2;
int main(){
aVec.push_back(&obj1);
aVec.push_back(&obj2);
for (std::vector<IA*>::iterator it = aVec.begin(); it != aVec.end(); it++)
{
(*it)->foo(); // That one is okay, works fine so far, but I also want:
// (*it)->bar(); // This one is not accessible because the interface IA
// doesn't know it.
}
return 0;
}
/* I thought about this solution: to inherit from IAB instead of A for the mixed
classes, but it doesn't compile,
stating "the following virtual functions are pure within AB1: virtual void IB::bar()"
which is inherited through B1 though, and I can't figure out where to add the virtual
inheritance. Example:
class IAB : public A, public IB
{
// virtual void foo () = 0; // I actually don't need them to be declared here again,
// virtual void bar () = 0; // do I?
};
class AB1 : public B1, public IAB
{
void foo() {A::foo(); B1::bar();}
};
*/
The question is: how do I combine packages A and B so that both interfaces are accessible from one container, while all the implementation details from A and B still get inherited?
The obvious solution is to create a combined interface:
class IAB : public virtual IA, public virtual IB
{
};
Then have your AB1 and AB2 derive from it (in addition to their current derivations), and keep IAB* in the vector.
This means that B1 and B2 must also derive virtually from IB; given the direction things seem to be going, A should probably also derive virtually from IA.
There are strong arguments that inheritance of an interface should always be virtual. Without going that far: if a class is designed to be derived from, and it has bases, those bases should be virtual (and arguably, if a class is not designed to be derived from, you shouldn't derive from it). In your case, you're using the classic mixin technique, and generally, the simplest solution is for all inheritance in a mixin to be virtual.
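Putting that together, here is a minimal sketch of what this answer describes, reusing the question's names (the method bodies are placeholders):
#include <vector>

struct IA { virtual void foo() = 0; virtual ~IA() = default; };
struct IB { virtual void bar() = 0; virtual ~IB() = default; };

// The combined interface derives virtually, so the IA/IB subobjects are shared.
struct IAB : public virtual IA, public virtual IB {};

// The implementation classes must then also derive virtually from their interfaces.
struct A : public virtual IA { void foo() override { /* doBasicWork(); */ } };
struct B1 : public virtual IB { void bar() override { /* special impl */ } };

struct AB1 : public B1, public A, public IAB {
    void foo() override { A::foo(); B1::bar(); }
};

int main() {
    std::vector<IAB*> aVec;
    AB1 obj1;
    aVec.push_back(&obj1);
    for (IAB* p : aVec) {
        p->foo(); // dispatches to AB1::foo
        p->bar(); // dispatches to B1::bar
    }
}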
Let's say I have pure abstract class IHandler and my class that derives from it:
class IHandler
{
public:
virtual int process_input(char input) = 0;
};
class MyEngine : protected IHandler
{
public:
virtual int process_input(char input) { /* implementation */ }
};
I want to inherit that class in my MyEngine so that I can pass MyEngine* to anyone expecting IHandler* and for them to be able to use process_input.
However I don't want to allow access through MyEngine* as I don't want to expose implementation details.
MyEngine* ptr = new MyEngine();
ptr->process_input('a'); //NOT POSSIBLE
static_cast<IHandler*>(ptr)->process_input('a'); //OK
IHandler* ptr2 = ptr; //OK
ptr2->process_input('a'); //OK
Can this be done via protected inheritance and implicit casting?
I only managed to get:
conversion from 'MyEngine *' to 'IHandler *' exists, but is inaccessible
Since I come from a C# background, this is basically explicit interface implementation in C#.
Is this a valid approach in C++?
Additional:
To give a better idea why I want to do this, consider following:
The class TcpConnection implements communication over TCP, and its constructor expects a pointer to the interface ITcpEventHandler.
When TcpConnection gets some data on a socket, it passes that data to its ITcpEventHandler using ITcpEventHandler::incomingData, and when it polls for outgoing data it uses ITcpEventHandler::getOutgoingData.
My class HttpClient uses TcpConnection (aggregation), passes itself to the TcpConnection constructor, and does its processing in those interface methods.
So HttpClient has to implement those methods, but I don't want users of HttpClient to have direct access to the ITcpEventHandler methods (incomingData, getOutgoingData). They should not be able to call incomingData or getOutgoingData directly.
Hope this clarifies my use case.
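To make the use case concrete, here is a rough sketch; the signatures and members are my assumptions, not the actual code, and it uses public inheritance with non-public overrides, which is essentially what the answer below suggests:
#include <cstddef>

class ITcpEventHandler {
public:
    virtual void incomingData(const char* data, std::size_t len) = 0;
    virtual std::size_t getOutgoingData(char* buffer, std::size_t maxLen) = 0;
    virtual ~ITcpEventHandler() = default;
};

class TcpConnection {
public:
    explicit TcpConnection(ITcpEventHandler* handler) : m_handler(handler) {}
    // ... the socket code calls m_handler->incomingData(...) when data arrives
    //     and m_handler->getOutgoingData(...) when polling for output.
private:
    ITcpEventHandler* m_handler;
};

class HttpClient : public ITcpEventHandler {
public:
    HttpClient() : m_connection(this) {}
private:
    // Non-public overrides: users holding an HttpClient* cannot call these,
    // but TcpConnection can, through its ITcpEventHandler*.
    void incomingData(const char* data, std::size_t len) override { /* parse HTTP */ }
    std::size_t getOutgoingData(char* buffer, std::size_t maxLen) override { return 0; }

    TcpConnection m_connection; // aggregation, as described above
};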
Deriving with protected makes the members of the base class inaccessible through a pointer to the derived class, and disallows the implicit conversion.
It seems to me that what you want is not to forbid access through the base class (interface), but rather through the derived class (concrete implementation):
class IHandler
{
public:
virtual int process_input(char input) = 0; //pure virtual
virtual std::string name() { return "IHandler"; } //simple implementation
};
class MyEngine : public IHandler
// ^^^^^^
{
protected: // <== Make the functions inaccessible from a pointer
// or reference to `MyEngine`.
virtual int process_input(char input) { return 0; } //override pure virtual
using IHandler::name; //use IHandler version
};
Here, in the derived class you basically override the visibility of the process_input function, so that clients can only call them through a pointer or reference to the base class.
This way you will make this impossible:
MyEngine* ptr = new MyEngine();
ptr->process_input('a'); // ERROR!
std::cout << ptr->name(); // ERROR!
But this will be possible:
IHandler* ptr = new MyEngine();
ptr->process_input('a'); // OK
std::cout << ptr->name(); // OK
In C++, protected and private inheritance serve inheritance of implementation: you define a class with methods (a template class, for example), and when you want to use its functionality but not its interface, you inherit protected or private. So your base class would actually need to define the methods you want to use in the sub-class.
Here is a link on this topic. It really is difficult, I agree.
It's slightly hard to understand the real goal you hope to achieve here, because whether you call the method on the parent or child, as long as it's virtual the same one will be called.
That said you have a couple options.
You could make it so the user can't get a pointer (or object) of the child type by forcing a create call that returns an interface. Then you don't have to worry about artificial restrictions, they just can't get a child at all:
class Concrete : public Interface
{
public:
static Interface* create() { return new Concrete; }
private:
Concrete() { }
};
You could override the interface as protected as shown in a different answer.
You could utilize the non-virtual interface pattern to make the entire accessible public interface defined in the parent. Then it doesn't matter what object they have, they always get the public API from the interface class:
class Interface
{
public:
void foo() { foo_impl(); }
private:
virtual void foo_impl() = 0;
};
class Concrete : public Interface
{
private:
virtual void foo_impl() { }
};
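A brief usage sketch of the effect: no matter which type the client holds, the only callable foo() is the one defined by Interface.
Concrete c;
Interface& handler = c;
handler.foo(); // runs Interface::foo(), which dispatches to Concrete::foo_impl()
c.foo();       // same path: the public foo() comes from Interface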
Suppose this construct:
struct InterfaceForFoo
{
virtual void GetItDone() = 0;
};
class APoliticallyCorrectImplementationOfFooRelatedThings : private InterfaceForFoo
{
public:
void GetItDone() { /*do the thing already*/ };
};
Now, I'm wondering whether inheriting privately from an interface in this way has any useful scenarios.
Huh, everyone here says "no". I say "yes, it does make sense."
class VirtualBase {
public:
virtual void vmethod() = 0;
// If "global" is an instance of Concrete, then you can still access
// VirtualBase's public members, even though they're private members for Concrete
static VirtualBase *global;
};
// This can also access all of VirtualBase's public members,
// even if an instance of Concrete is passed in,
void someComplicatedFunction(VirtualBase &obj, ...);
class Concrete : private VirtualBase {
private:
virtual void vmethod();
public:
void cmethod() {
// This assignment can only be done by Concrete or friends of Concrete
VirtualBase::global = this;
// This can also only be done by Concrete and friends
someComplicatedFunction(*this);
}
};
Making the inheritance private doesn't mean that you can't access the members of VirtualBase from outside the class; it only means that you can't access those members through a reference to Concrete. However, Concrete and its friends can cast instances of Concrete to VirtualBase, and then anybody can access the public members. Simply,
Concrete *obj = new Concrete;
obj->vmethod(); // error, vmethod is private in Concrete
VirtualBase *base = VirtualBase::global;
base->vmethod(); // OK, even if "base" really points to an instance of Concrete
The question is why should it matter that the base class has only pure virtual methods?
The two things are almost unrelated. Private means that it is an implementation detail of your class and not part of the public interface, but you might want to implement an interface as an implementation detail. Consider that you write a class and decide to implement its functionality by means of a library that requires you to implement an interface. That is an implementation detail; there is no need to make the inheritance public just because the interface has only pure virtual functions.
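A hedged illustration of that scenario (IListener and Widget are invented names): the class implements a library's callback interface purely as an internal detail.
// Imagine this interface comes from a third-party library.
struct IListener {
    virtual void onEvent(int code) = 0;
    virtual ~IListener() = default;
};

// Widget uses the library internally; nobody outside needs to know that,
// so the inheritance is private even though IListener is a pure interface.
class Widget : private IListener {
public:
    void doWork() { /* may register *this with the library internally */ }
private:
    void onEvent(int code) override { /* react to the library's callback */ }
};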
From an object-oriented point of view there is no use case for such private inheritance from an abstract class.
However, if you want to mandate that your child class must implement certain methods, then you can use this. For example:
struct implement_size
{
virtual size_t size () = 0;
};
class MyVector : private implement_size
{
public:
size_t size () { ... } // mandatory to implement size()
};
class MyString : private implement_size
{
public:
size_t size () { ... } // mandatory to implement size()
};
So it mainly helps maintain personal coding discipline. The message of this example is that inheritance is not meant only for object-oriented purposes; you can even use inheritance to stop an inheritance chain (something like Java's final).
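As for the parenthetical about stopping an inheritance chain: since C++11 you would simply mark the class final, but a classic pre-C++11 trick along those lines uses a virtual base with a private constructor (class names invented):
class MakeFinal {
    MakeFinal() {}        // private constructor
    friend class Sealed;  // only Sealed may construct this virtual base
};

class Sealed : public virtual MakeFinal {
public:
    Sealed() {}           // OK: Sealed is a friend of MakeFinal
};

// class Broken : public Sealed {};
// Broken b; // error: as the most derived class, Broken must construct the
//           // virtual base MakeFinal, whose constructor it cannot access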
Eh? No, that makes absolutely no sense, since the reason you provide an interface is that you want others to use your class through that interface. How would that work if they don't know you implement it?
#include <vector>
class Fooable{
public:
virtual void foo() = 0;
};
class DoesFoo
: private Fooable
{
void foo();
};
int main(){
std::vector<Fooable*> vf;
vf.push_back(new DoesFoo()); // nope, doesn't work
vf[0]->foo();
}
The above example doesn't work because the outside world doesn't know that DoesFoo is a Fooable; as such, you cannot new an instance of it and assign it to a Fooable*.
Not really. If you need a function, you implement it. It makes no sense to force a function that cannot be used by other classes.
Why you would inherit privately from an interface, I don't know; that kind of defeats the purpose of interfaces.
If it's not an interface, but instead a class, it makes sense:
class A {
virtual void foo() = 0;
protected:
// bar() is the reusable logic that builds on foo()
void bar() {
foo();
}
};
class B : private A {
virtual void foo() override {
}
// B can call bar() internally to reuse A's logic without exposing A
};