I'm trying to move some common code from a few classes that implement an interface* into an abstract base class. However, the abstract base class needs to know a little about how the derived classes want to do things in order to determine exactly what to do, and I'm not entirely sure whether I should implement this using a pure virtual function or a protected member variable.
I'll give a simple example describing what I'm trying to do.
The interface:
class SomeInterface
{
public:
virtual void DoSomething() = 0;
// ...
};
The abstract base class I'm trying to implement using pure virtual function:
class AbstractBase : public SomeInterface
{
public:
virtual void DoSomething()
{
for (int i = 0; i < GetNumIterations(); i++)
{
// Call implementation in derived classes, for example
DoSomethingImpl();
}
}
protected:
virtual void DoSomethingImpl() = 0;
virtual int GetNumIterations() = 0;
};
A derived class:
class Derived1 : public AbstractBase
{
protected:
virtual void DoSomethingImpl()
{
// Do actual work.
}
virtual int GetNumIterations()
{
return 5;
}
};
Another derived class:
class Derived2 : public AbstractBase
{
protected:
virtual void DoSomethingImpl()
{
// Do actual work.
}
virtual int GetNumIterations()
{
return 1;
}
};
Or the other way would be using a protected variable:
class AbstractBase : public SomeInterface
{
public:
virtual void DoSomething()
{
for (int i = 0; i < numIterations; i++)
{
// Call implementation in derived classes, for example
DoSomethingImpl();
}
}
protected:
virtual void DoSomethingImpl() = 0;
int numIterations;
};
And the derived would be like:
class Derived1 : public AbstractBase
{
public:
Derived1()
{
// numIterations is a member of the base class, so it cannot appear in
// Derived1's initializer list; it has to be assigned in the body instead.
numIterations = 5;
}
protected:
virtual void DoSomethingImpl()
{
// Do actual work.
}
};
Same thing for Derived2.
I know there's some overhead related to virtual methods (probably insignificant, but still), and that the protected variable might not be great for encapsulation, and that it could be forgotten and left uninitialized. So my question is basically: which of these is preferable and why, or should I avoid this scenario altogether and handle it differently?
Note: my actual code is a bit more complicated. I haven't actually tested whether this code works, so forgive me if it's incorrect.
*When I say interface, I mean a class containing only pure virtual functions.
In fact, what you are trying to do is very common. However, there is one more approach, which explicitly defines the contract for derivatives of AbstractBase. Modifying your example, it would look as follows:
class AbstractBase : public SomeInterface
{
public:
explicit AbstractBase(int numIterations) : numIterations(numIterations) {}
virtual void DoSomething()
{
for (int i = 0; i < numIterations; i++)
{
// Call implementation in derived classes, for example
DoSomethingImpl();
}
}
protected:
virtual void DoSomethingImpl() = 0;
// Can omit it, if not needed by derivatives
int GetNumIterations() { return numIterations; }
private:
int numIterations;
};
class Derived1 : public AbstractBase
{
public:
Derived1() : AbstractBase(5) {}
protected:
virtual void DoSomethingImpl()
{
// Do actual work.
}
};
class Derived2 : public AbstractBase
{
public:
Derived2() : AbstractBase(1) {}
protected:
virtual void DoSomethingImpl()
{
// Do actual work.
}
};
As you probably understand now, by contract I meant the constructor, which now explicitly forces derivatives of AbstractBase to initialize it properly, so you can never get it wrong. The downside of this approach is that it introduces an additional field, which would be duplicated among numerous instances of, say, Derived1 if that 5 never changes in your situation. So if you care about memory footprint, I would not go for this one. However, if numIterations can change, then this approach is certainly the best of the three proposed ones; all you would have to do is add a proper setter for it to AbstractBase (a sketch follows at the end of this answer).
NOTE: My approach is a safer alternative to your second one, as it directly addresses the issues you mentioned, namely the encapsulation hole (redundant exposure of implementation details) and the weak contract (you might forget to initialize numIterations because nothing forces you to). Therefore, you do not want to use your second approach in your current situation.
The first approach that you've proposed is good too. Its advantage over mine is that it does not introduce any memory overhead: as long as the "number of iterations" does not change, you don't need a field to store it. As a result, you have to override GetNumIterations in every derivative, but that's fine, since it is part of a strong contract (a pure virtual method) and you can never get it wrong either.
To conclude, as you can see, these two approaches are mutually exclusive, and it is easy to decide which one to use by simply applying their pros and cons to your particular situation.
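For completeness, here is a minimal sketch of such a setter, assuming the AbstractBase shown above; the method name is just an illustration:
// Hypothetical setter added to AbstractBase, for the case where the
// iteration count may change after construction.
void SetNumIterations(int n) { numIterations = n; }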
I am sorry but I have to ask a stupid question.
I understand the benefit of implementing an abstract class as such. If I have a virtual function with a basic implementation that is called whenever a derived class doesn't provide its own, there is definitely a benefit, e.g.
virtual void ImplementedVirtFunc() { /* do something basic */ }
What I don't quite get is the benefit of implementing a pure virtual function such as
virtual void VirtFunc() = 0;
In this case my derived classes need to implement the specialized function anyway if they need it. But I could just implement it there directly and omit the virtual void VirtFunc() = 0 line in my abstract class.
So is there a specific benefit to declaring virtual void VirtFunc() = 0 that I don't see?
Please forgive me this stupid question. I started learning C++ this January and I still have a long way to go to understand all the subtleties...
But I could just implement it there directly and omit the virtual void VirtFunc() = 0 line in my abstract class.
Sure, you could. But you wouldn't be able to call that method from your base class, since your base class doesn't know anything about its existence at all.
Consider the following example. Every Shape definitely has an area, even though there is no way to compute it for a general shape. And every subclass of Shape inherits the Print() method.
#include <iostream>

class Shape {
// ...
public:
virtual int Area() = 0; // there is no formula for the area of a "general" shape, but it definitely has one ...
virtual void Print() {
std::cout << "Area: " << Area() << std::endl;
}
};
class Circle : public Shape {
// ...
public:
virtual int Area() {
// calculate and return circle area
}
};
class Square : public Shape {
// ...
public:
virtual int Area() {
// calculate and return square area
}
};
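A minimal usage sketch, assuming the Area() bodies above are actually filled in: callers only talk to Shape, and Print() dispatches to the right Area() at runtime.
int main() {
    Circle c;
    Square s;
    Shape* shapes[] = { &c, &s };
    for (Shape* shape : shapes)
        shape->Print(); // calls Circle::Area() or Square::Area() through Shape
    return 0;
}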
virtual void f(); // virtual member function
virtual void g() = 0; // pure abstract member function
A class with at least one pure virtual member function is an abstract class, and cannot be constructed itself, which is often desired (only non-abstract, "concrete" if you will, derived classes should be able to be constructed):
struct Abstract {
virtual void g() = 0;
};
struct NonAbstract {
virtual void f() {}
};
int main() {
NonAbstract na{}; // OK
Abstract a{}; // Error: cannot declare variable 'a'
// to be of abstract type 'Abstract'
}
Abstract classes are typically used polymorphically, to allow dynamic dispatch to derived object methods:
struct Derived : public Abstract {
void g() override {} // #1
};
void h(Abstract& obj) { // non-const reference: g() is not declared const
obj.g(); // dynamic dispatch
}
int main() {
Derived d{};
h(d); // Will result in invoke #1
}
There are two reasons.
One is to force the derived concrete class to implement your virtual function.
The second is to make your class an abstract class, which cannot be instantiated by itself.
Let's assume you have a base class for the different states of a state machine, with methods for the different input types: mouse, keyboard, joystick, etc. Not every derived state is going to use every possible type of input, yet if the base class methods are pure virtual, every derived state class needs to implement every single one of them. To avoid this I declared them with an empty body in the base class and only override the ones a particular derived class actually uses; if a class doesn't use a certain input type, the empty base class method gets called. I store the currentState in a base class pointer and just feed it the input without having to know which particular derived state it actually is, to avoid unnecessary casts.
class Base
{
public:
virtual void keyboardInput() {}
virtual void mouseInput() {}
};
class Derived : public Base
{
public:
void keyboardInput()
{
// do something
}
// Derived doesn't use mouseInput, so it doesn't implement it
};
void foo(Base& base)
{
base.keyboardInput();
base.mouseInput();
}
int main()
{
Derived der;
foo(der);
}
Is this considered a good practice?
Your question is somewhat opinion-based, but I'd rather follow this approach and use an interface:
#include <memory>

struct IBase {
virtual void keyboardInput() = 0;
virtual void mouseInput() = 0;
virtual ~IBase() {}
};
class Base : public IBase {
public:
virtual void keyboardInput() override {}
virtual void mouseInput() override {}
};
class Derived : public Base {
public:
void keyboardInput() override {
// do something
}
// Derived doesn't use mouseInput, so it doesn't implement it
};
int main() {
std::unique_ptr<IBase> foo = std::make_unique<Derived>();
foo->keyboardInput();
foo->mouseInput();
return 0;
}
Some arguments from the comments on why that's the better practice:
The idea is that an interface should contain as few assertions as possible, making it less likely to change and more dependable for those who inherit from it. Implementing the methods, albeit with empty bodies, is already an assertion, however small.
It would make later refactorings less painful, e.g. ones that introduce more interfaces via multiple inheritance.
It really depends on what you want from the methods. When declaring an interface, usually the methods are left pure virtual because they are required to be implemented for the class to work at all. Marking them pure virtual signals "You have to implement this.".
However, sometimes there are methods that may do nothing and it's valid for all possible implementations for them to do nothing. It is not very common, but it is possible.
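For instance (a hypothetical illustration, not taken from the question), an optional notification hook where doing nothing is a perfectly valid reaction:
struct EventHandler {
    virtual ~EventHandler() = default;
    // Doing nothing on idle is legitimate for most handlers, so an empty
    // default implementation is reasonable here rather than a pure virtual.
    virtual void onIdle() {}
};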
I don't think that your interface is the case though, and you should follow #πάντα ῥεῖ's answer. Or do it through multiple inheritance:
class MouseInput {
public:
virtual void mouseInput() = 0;
};
class KeyboardInput {
public:
virtual void keyboardInput() = 0;
};
class Derived : public KeyboardInput
{
public:
virtual void keyboardInput() override
{
// do something
}
};
class AllInput : public KeyboardInput, public MouseInput
{
public:
virtual void keyboardInput() override
{
// do something
}
virtual void mouseInput() override
{
// do something
}
};
That has the benefit that you can have methods that explicitly say that they work with one kind of input:
void doSomethingMouseIsh(MouseInput* input);
The disadvantage is that methods that combine mouse and keyboard input get awkward unless you use an interface like the AllInput above for all such "all input" methods.
Final note: as long as you try to write clean code, considering each use case is more important than some best practices.
If you're going to be strict about it, this does violate the ISP (https://en.wikipedia.org/wiki/Interface_segregation_principle), as you're forcing a subclass to depend on a method it doesn't use - but in practice it's generally not too bad if the alternative adds more complexity.
I have an abstract base class which declares a pure virtual function (virtual void method() = 0;). Some of the derived classes specialize and use this method, but there's one derived class in which I don't want this method to be usable. How do I do that? Is making it private the only choice?
Well, you could throw, which will make tracking down where it is called easier:
void method() override { throw /* whatever */ ; }
Dynamic polymorphism is a runtime property, hence a runtime error. If you are after something that will trigger at compile time, you need static polymorphism (CRTP):
template<typename Child>
struct Parent {
void callMe() {
static_cast<Child*>(this)->callMeImpl();
}
};
struct SomeChild : Parent<SomeChild> {
};
Now, if you try to call callMe on the Parent that is extended by SomeChild, it will be a compile-time error, because SomeChild does not define callMeImpl.
You can also hold a pointer to the parent, just like with dynamic polymorphism, as the parent will call the child function (see the sketch below).
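A minimal sketch of that usage, repeating the Parent template from above and adding a hypothetical AnotherChild that does provide the implementation:
#include <iostream>

template<typename Child>
struct Parent {
    void callMe() {
        static_cast<Child*>(this)->callMeImpl();
    }
};

// Hypothetical child that provides the required implementation.
struct AnotherChild : Parent<AnotherChild> {
    void callMeImpl() { std::cout << "AnotherChild::callMeImpl\n"; }
};

int main() {
    AnotherChild child;
    Parent<AnotherChild>* p = &child; // "base" pointer, resolved at compile time
    p->callMe();                      // calls AnotherChild::callMeImpl
    return 0;
}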
Is making it private the only choice?
No, that's not a choice at all since you can still access the method if it's public or protected in the base classes.
Other than implementing the method in the class and resorting to run-time failures, there's not a lot you can do. You could port the whole thing to templates and use static polymorphism, with which, with further trickery, you could contrive a compile-time failure in certain instances, but that could be design overkill.
I guess you could make it a normal virtual function instead of a pure virtual function like this:
virtual void method() { /* code */ }
If this function is then called on a class that isn't supposed to use it, you will be able to catch that. For example, you could warn yourself:
virtual void method() { error = true; } //or whatever
As others have said, there is no way of enforcing this at compile time. If you are working through a pointer to the base class, there is no way the compiler can know whether that pointer refers to one of the derived classes that implements this method or one that doesn't.
So the case will have to be handled at runtime. One option is to just throw an exception. Another option is to introduce a level of indirection so that you can ask your base class if it implements a certain function before you call it.
Say you have a Base class with three methods, foo, bar and doit, and some derived classes do not want to implement foo. Then you could split up the Base class into two base classes:
#include <memory>
#include <vector>

class Base1 {
public:
virtual void foo() = 0;
};
class Base2 {
public:
virtual void bar() = 0;
virtual void doit() = 0;
};
Then in places where you are currently using Base you instead use a BaseSource:
class BaseSource {
public:
virtual Base1* getBase1() = 0;
virtual Base2* getBase2() = 0;
};
where getBase1 and getBase2 can return nullptr if a BaseSource does not offer that interface:
class Derived : public BaseSource, public Base2 {
public:
// don't implement foo();
// Implementation of Base2 (bodies omitted here)
void bar() override { /* ... */ }
void doit() override { /* ... */ }
Base1* getBase1() override { return nullptr; } // Doesn't implement Base1
Base2* getBase2() override { return this; }
};
int main() {
std::vector<std::unique_ptr<BaseSource>> objects;
objects.push_back(std::make_unique<Derived>());
for (auto& o : objects) {
auto b1 = o->getBase1();
if (b1)
b1->foo();
auto b2 = o->getBase2();
if (b2)
b2->bar();
}
}
Please look at this code. It just reflects the basic concept of what I want to do:
#include <iostream>
#include <cstdlib> // for system("pause")
using namespace std;
class Base
{
public:
Base()
{
/* Some code I want to reuse */
Redefined();
}
virtual ~Base() {}
void Redefined() { val = 10; }
int val;
};
class Derived : public Base
{
public:
Derived() : Base() {}
~Derived() {}
void Redefined() { val = 25; }
};
int main()
{
Base* check = new Derived();
cout << check->val << endl;
system("pause");
return 0;
}
I want the val property of check object to be 25 instead of 10.
As you can see, I have two classes. The Base class constructor has some complex functionality, which I want the Derived class to have in its constructor as well. How can I change the derived function Redefined so that I won't have to rewrite the Derived constructor completely (in fact, copy-pasting the whole base class constructor code and replacing one single line - the updated version of the Redefined function)?
You can't really override a function that way. Normally you could use a virtual function, but virtual dispatch doesn't work the way you want inside a constructor.
A better way is to pass the value you want to the Base constructor:
class Base
{
public:
Base(int init_val = 10)
{
/* Some code I want to reuse */
val = init_val;
}
virtual ~Base() {}
int val;
};
class Derived : public Base
{
public:
Derived() : Base(25) {}
~Derived() {}
};
That way any derived class can pass its choice of value to the base class.
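A minimal usage sketch, assuming the two classes above:
#include <iostream>

int main() {
    Base b;
    Derived d;
    std::cout << b.val << std::endl; // 10 (the default)
    std::cout << d.val << std::endl; // 25 (passed up from Derived's constructor)
    return 0;
}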
Based on comments above:
I would actually think that the correct solution is to have an "interface"-type base class (that is, a base class with pure virtual functions, where the derived classes implement the correct behaviour), and then let each class deal with constructing its own DirectX buffers. You may find that you need, say, 2-3 different derived classes that construct buffers in different ways, and can then derive from those the classes that actually do the real work. I hope that makes sense.
Alternatively, you could pass enough parameters to the base class that the buffers can be constructed there. But I think the first suggestion is the better choice.
I'm trying to solve a problem where I have some classes in which I need to do some common work, then a bunch of problem-specific work, and, when that is finished, do some more processing common to all these classes.
I have a Base and a Derived class that both have a function called Execute. When I call the derived version of this function, I'd like to be able to do some processing common to all my derived classes in the Base, then continue executing in my Derived::Execute, and finally go back to Base::Execute to finish off with some common work.
Is this possible in C++ and how would one best go about doing that?
This is the idea, however it's probably not very workable like this:
class Base
{
public:
virtual void Execute();
};
void Base::Execute() {
// do some pre work
Derived::Execute(); //Possible????
// do some more common work...
}
class Derived : public Base
{
public:
void Execute();
};
void Derived::Execute()
{
Base::Execute();
//Do some derived specific work...
}
int main()
{
Base * b = new Derived();
b->Execute(); // Call derived, to call into base and back into derived, then back into base
}
Use a pure virtual function in the base:
class Base
{
public:
void Execute();
private:
virtual void _exec() = 0;
};
void Base::Execute() {
// do some common pre work
// do derived specific work
_exec();
// do some more common work...
}
class Derived : public Base
{
private:
void _exec() {
// do stuff
}
};
int main()
{
Base * b = new Derived();
b->Execute();
}
EDIT: changed the flow slightly after reading the question some more.. :) The above mechanism should match exactly what you require now -
i.e.
Base Common Stuff
Derived specific stuff
Base Common stuff again
This is called the NVI (Non-Virtual Interface) idiom in C++, described by Herb Sutter, and it basically says that you should not have public virtual functions, but rather protected/private virtual functions. User code has to call the public non-virtual function in the base class, and that dispatches through to the protected/private virtual method.
From a design perspective, the rationale is that a base class has two different interfaces: on one side the user interface, determined by the public subset of the class, and on the other the extensibility interface, i.e. how the class can be extended. By using NVI you decouple the two and allow greater control in the base class.
class base {
virtual void _foo(); // interface to extensions
public:
void foo() { // interface to users
// do some ops
_foo();
}
};
Turn the problem from its head to its feet. What you actually want to have is a base class algorithm that derived classes can plug into:
class Base {
public:
void Execute()
{
// do something
execute();
// do some more things
}
private:
virtual void execute() = 0;
};
class Derived : public Base {
public:
// whatever
private:
virtual void execute()
{
//do some fancy stuff
}
};
Letting derived classes plug into base class algorithms is often called the "template method" pattern (which has nothing to do with templates). Having no public virtual functions in the base class interface is often called the "non-virtual interface" pattern.
I'm sure google can find you a lot on those two.
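A minimal usage sketch, assuming the Base/Derived pair above: the caller sees only the non-virtual Execute(), and the private virtual is filled in by the derived class.
int main() {
    Derived d;
    Base& b = d;
    b.Execute(); // "do something" -> Derived::execute() -> "do some more things"
    return 0;
}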
Split Base::Execute internally into two functions and then use RAII to implement that easily.
class Base{
// ScopedBaseExecute (defined below) calls the protected hooks, so it needs friendship.
friend struct ScopedBaseExecute;
protected:
void PreExecute(){
// stuff before Derived::Execute
}
void PostExecute(){
// stuff after Derived::Execute
}
public:
virtual void Execute() = 0;
};
struct ScopedBaseExecute{
typedef void(Base::*base_func)();
ScopedBaseExecute(Base* p)
: ptr_(p)
{ ptr_->PreExecute(); }
~ScopedBaseExecute()
{ ptr_->PostExecute(); }
Base* ptr_;
};
class Derived : public Base{
public:
void Execute() {
ScopedBaseExecute exec(this);
// do whatever you want...
}
};
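A minimal usage sketch, assuming the classes above: PreExecute runs when the guard is constructed and PostExecute runs when Execute() returns.
int main() {
    Derived d;
    Base* b = &d;
    b->Execute(); // PreExecute -> "do whatever you want..." -> PostExecute
    return 0;
}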