In Java, we can define several interfaces and later implement multiple interfaces in a single concrete class.
// Simulate Java Interface in C++
/*
interface IOne {
void MethodOne(int i);
.... more functions
}
interface ITwo {
double MethodTwo();
... more functions
}
class ABC implements IOne, ITwo {
// implement MethodOne and MethodTwo
}
*/
In C++, generally speaking, we should avoid the usage of multiple inheritance, although multiple inheritance does have advantages in some situations.
class ABC {
public:
virtual void MethodOne(int /*i*/) = 0 {}
virtual double MethodTwo() = 0 {}
virtual ~ABC() = 0 {}
protected:
ABC() {} // ONLY ABC or subclass can access it
};
Question1> Based on the design of ABC, should I improve any other things in order to make it a decent ABC?
Question2> Is it true that a good ABC should not contain member variables and instead variables should be kept in the subclasses?
Question3> As I indicated in the comments, what if ABC has to contain too many pure functions? Is there a better way?
Do not provide an implementation for pure virtual methods unless it is necessary.
Do not make your destructor pure virtual.
Do not make your constructor protected. You cannot create an instance of an abstract class.
Better to hide the implementations of the constructor and destructor in a source file, so they are not emitted into every object file that includes the header.
Make your interface non-copyable.
If this is an interface, it is better not to have any variables in it. Otherwise it would be an abstract base class rather than an interface.
Having many pure virtual functions is fine, unless the same contract can be expressed with fewer of them.
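Putting those points together, here is a minimal sketch of what ABC could look like (the ABC.h/ABC.cpp split is illustrative):
// ABC.h
class ABC {
public:
    virtual void MethodOne(int i) = 0;  // declarations only, no inline bodies
    virtual double MethodTwo() = 0;
    virtual ~ABC();                     // virtual, but not pure; body hidden in ABC.cpp
    ABC();                              // no need for protected: the class is abstract anyway
private:
    ABC(const ABC&);                    // non-copyable: declared but never defined
    ABC& operator=(const ABC&);
};
// ABC.cpp
ABC::ABC() {}
ABC::~ABC() {}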
In C++, generally speaking, we should avoid the usage of multiple inheritance
Like any other language feature, you should use multiple inheritance wherever it is appropriate. Interfaces are generally considered an appropriate use of multiple inheritance (see, for example, COM).
The constructor of ABC need not be protected--it cannot be constructed directly anyway because the class is abstract.
The ABC destructor should not be declared as pure virtual (it should be declared as virtual, of course). You should not require derived classes to implement a user-declared destructor if they do not need one.
An interface should not have any state, and thus should not have any member variables, because an interface only defines how something is to be used, not how it is to be implemented.
ABC should never have too many member functions; it should have exactly the number that are required. If there are too many, you should obviously remove the ones that are not used or not needed, or refactor the interface into several more specific interfaces.
Based on the design of ABC, should I improve any other things in order to make it a decent ABC?
You've got a couple of syntax errors. For some reason, you're not allowed to put a definition of a pure virtual function inside a class definition; and in any case, you almost certainly don't want to define them in the ABC. So the declarations would usually be:
virtual void MethodOne(int /*i*/) = 0; // ";" not "{}" - just a declaration
There's not really any point in making the destructor pure, although it should be virtual (or, in some cases, non-virtual and protected - but it's safest to make it virtual).
virtual ~ABC() {} // no "= 0"
There's no need for the protected constructor - the fact that it is abstract already prevents instantiation except as a base class.
Is it true that a good ABC should not contain member variables and instead variables should be kept in the subclasses?
Usually, yes. That gives a clean separation between interface and implementation.
As I indicated in the comments, what if ABC has to contain too many pure functions? Is there a better way?
The interface should be as complex as it needs to be, and no more. There are only "too many" functions if some are unnecessary; in which case, get rid of them. If the interface looks too complicated, it may be trying to do more than one thing; in that case, you should be able to break it up into smaller interfaces, each with a single purpose.
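For instance, if ABC turned out to be doing two unrelated jobs, it could be split back into two small interfaces, mirroring the Java pseudocode from the question (a sketch):
class IOne {
public:
    virtual ~IOne() {}
    virtual void MethodOne(int i) = 0;
};
class ITwo {
public:
    virtual ~ITwo() {}
    virtual double MethodTwo() = 0;
};
class ABC : public IOne, public ITwo {
public:
    virtual void MethodOne(int i) { /* ... */ }
    virtual double MethodTwo() { return 0.0; }
};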
First: why should we avoid multiple inheritance in C++? I've never seen a largish application which didn't use it extensively. Inheriting from multiple interfaces is a good example of where it is used.
Note that Java's interface is broken—as soon as you want to use programming by contract, you're stuck with using abstract classes, and they don't allow multiple inheritance. In C++, however, it's easy:
#include <boost/noncopyable.hpp> // for boost::noncopyable
class One : boost::noncopyable
{
virtual void doFunctionOne( int i ) = 0;
public:
virtual ~One() {}
void functionOne( int i )
{
// assert pre-conditions...
doFunctionOne( i );
// assert post-conditions...
}
};
class Two : boost::noncopyable
{
virtual double doFunctionTwo() = 0;
public:
virtual ~Two() {}
double functionTwo()
{
// assert pre-conditions...
double results = doFunctionTwo();
// assert post-conditions...
return results;
}
};
class ImplementsOneAndTwo : public One, public Two
{
virtual void doFunctionOne( int i );
virtual double doFunctionTwo();
public:
};
Alternatively, you could have a compound interface:
class OneAndTwo : public One, public Two
{
};
class ImplementsOneAndTwo : public OneAndTwo
{
virtual void doFunctionOne( int i );
virtual double doFunctionTwo();
public:
};
and inherit from it, whichever makes the most sense.
This is the more or less standard idiom; in cases where there cannot conceivably be any pre- or post-conditions in the interface (typically call inversion), the virtual functions may be public, but in general, they will be private, so that you can enforce the pre- and post-conditions.
Finally, note that in a lot of cases (especially if the class represents a value), you will just implement it directly, without the interface. Unlike Java, you don't need a separate interface to maintain the implementation in a different file from the class definition—that's the way C++ works by default (with the class definition in a header, but the implementation code in a source file).
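For example, a plain value-like class keeps its definition in a header and its implementation in a source file, with no interface involved (file and class names are just for illustration):
// Counter.h -- class definition only
class Counter {
public:
    Counter();
    void increment();
    int value() const;
private:
    int count_;
};
// Counter.cpp -- implementation lives in a separate translation unit
#include "Counter.h"
Counter::Counter() : count_(0) {}
void Counter::increment() { ++count_; }
int Counter::value() const { return count_; }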
Related
By an interface (C# terminology) I mean an abstract class with no data members. Thus, such a class only specifies a contract (a set of methods) that sub-classes must implement. My question is: How to implement such a class correctly in modern C++?
The C++ core guidelines [1] encourage the use of abstract class with no data members as interfaces [I.25 and C.121]. Interfaces should normally be composed entirely of public pure virtual functions and a default/empty virtual destructor [from C.121]. Hence I guess it should be declared with the struct keyword, since it only contains public members anyway.
To enable use and deletion of sub-class objects via pointers to the abstract class, the abstract class needs a public default virtual destructor [C.127]. "A polymorphic class should suppress copying" [C.67] by deleting the copy operations (copy assignment operator, copy constructor) to prevent slicing. I assume that this also extends to the move constructor and the move assignment operator, since those can also be used for slicing. For actual cloning, the abstract class may define a virtual clone method. (It's not completely clear how this should be done. Via smart pointers or owner<T*> from the Guidelines Support Library. The method using owner<T> makes no sense to me, since the examples should not compile: the derived function still does not override anything!?).
In C.129, the example uses interfaces with virtual inheritance only. If I understand correctly, it makes no difference if interfaces are derived (perhaps better: "implemented"?) using class Impl : public Interface {...}; or class Impl : public virtual Interface {...};, since they have no data that could be duplicated. The diamond problem (and related problems) don't exist for interfaces (which, I think, is the reason why languages such as C# don't allow/need multiple inheritance for classes). Is the virtual inheritance here done just for clarity? Is it good practice?
In summary, it seems that:
An interface should consist only of public methods. It should declare a public defaulted virtual destructor. It should explicitly delete copy assignment, copy construction, move assignment and move construction. It may define a polymorphic clone method. It should be derived from using public virtual inheritance.
One more thing that confuses me:
An apparent contradiction: "An abstract class typically doesn't need a constructor" [C.126]. However, if one implements the rule of five by deleting all copy operations (following [C.67]), the class no longer has a default constructor. Hence sub-classes can never be instantiated (since sub-class constructors call base-class constructors) and thus the abstract base-class always needs to declare a default constructor?! Am I misunderstanding something?
Below is an example. Do you agree with this way to define and use an abstract class without members (interface)?
// C++17
/// An interface describing a source of random bits.
// The type `BitVector` could be something like std::vector<bool>.
#include <memory>
struct RandomSource { // `struct` is used for interfaces throughout core guidelines (e.g. C.122)
virtual BitVector get_random_bits(std::size_t num_bits) = 0; // interface is just one method
// rule of 5 (or 6?):
RandomSource() = default; // needed to instantiate sub-classes !?
virtual ~RandomSource() = default; // Needed to delete polymorphic objects (C.127)
// Copy operations deleted to avoid slicing. (C.67)
RandomSource(const RandomSource &) = delete;
RandomSource &operator=(const RandomSource &) = delete;
RandomSource(RandomSource &&) = delete;
RandomSource &operator=(RandomSource &&) = delete;
// To implement copying, would need to implement a virtual clone method:
// Either return a smart pointer to base class in all cases:
virtual std::unique_ptr<RandomSource> clone() = 0;
// or use `owner`, an alias for raw pointer from the Guidelines Support Library (GSL):
// virtual owner<RandomSource*> clone() = 0;
// Since GSL is not in the standard library, I wouldn't use it right now.
};
// Example use (class implementing the interface)
class PRNG : public virtual RandomSource { // virtual inheritance just for clarity?
// ...
BitVector get_random_bits(std::size_t num_bits) override;
// may the subclass ever define copy operations? I guess no.
// implemented clone method:
// owner<PRNG*> clone() override; // for the alternative owner method...
// Problem: multiple identical methods if several interfaces are inherited,
// each of which requires a `clone` method?
//Maybe the std. library should provide an interface
// (e.g. `Clonable`) to unify this requirement?
std::unique_ptr<RandomSource> clone() override;
//
// ... private data members, more methods, etc...
};
[1]: https://github.com/isocpp/CppCoreGuidelines, commit 2c95a33fefae87c2222f7ce49923e7841faca482
You ask a lot of questions, but I'll give it a shot.
By an interface (C# terminology) I mean an abstract class with no data members.
Nothing specifically like a C# interface exists. A C++ abstract base class comes the closest, but there are differences (for example, you will need to define a body for the virtual destructor).
Thus, such a class only specifies a contract (a set of methods) that sub-classes must implement. My question is: How to implement such a class correctly in modern C++?
As a virtual base class.
Example:
class OutputSink
{
public:
virtual ~OutputSink() = 0;
// contract:
virtual void put(std::vector<std::byte> const& bytes) = 0;
};
OutputSink::~OutputSink() = default;
Hence I guess it should be declared with the struct keyword, since it only contains public members anyway.
There are multiple conventions for when to use a structure versus a class. The guideline I recommend (hey, you asked for opinions :D) is to use structures when you have no invariants on their data. For a base class, please use the class keyword.
"A polymorphic class should suppress copying"
Mostly true. I have written code where the client code didn't perform copies of the inherited classes, and the code worked just fine (without prohibiting them). The base classes didn't forbid it explicitly, but that was code I was writing in my own hobby project. When working in a team, it is good practice to specifically restrict copying.
As a rule, don't bother with cloning, until you find an actual use case for it in your code. Then, implement cloning with the following signature (example for my class above):
virtual std::unique_ptr<OutputSink> clone() = 0; // declared inside class OutputSink
If this doesn't work for some reason, use another signature (return a shared_ptr for example). owner<T> is a useful abstraction, but that should be used only in corner cases (when you have a code base that imposes on you the use of raw pointers).
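A derived class might then implement it roughly like this (FileSink and its member are hypothetical, and clone() is assumed to have been added to OutputSink with the signature above; needs <memory>, <string>, <vector>, <cstddef>):
class FileSink : public OutputSink
{
public:
    explicit FileSink(std::string path) : path_(std::move(path)) {}
    void put(std::vector<std::byte> const& bytes) override { /* write bytes to the file */ }
    std::unique_ptr<OutputSink> clone() override
    {
        return std::make_unique<FileSink>(path_); // copy the concrete object, return it as the interface
    }
private:
    std::string path_;
};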
An interface should consist only of public methods. It should declare [...]. It should [...]. It should be derived using public virtual.
Don't try to represent the perfect C# interface in C++. C++ is more flexible than that, and rarely will you need to add a 1-to-1 implementation of a C# concept in C++.
For example, in base classes in C++ I sometimes add public non-virtual functions that are implemented in terms of protected virtual functions:
class OutputSink
{
public:
void put(const ObjWithHeaderAndData& o) // non-virtual
{
put(o.header());
put(o.data());
}
protected:
virtual void put(ObjectHeader const& h) = 0; // specialize in implementations
virtual void put(ObjectData const& d) = 0; // specialize in implementations
};
thus the abstract base-class always needs to declare a default constructor?! Am I misunderstanding something?
Define the rule of 5 as needed. If code doesn't compile because you are missing a default constructor, then add a default constructor (use the guidelines only when they make sense).
Edit: (addressing comment)
as soon as you declare a virtual destructor, you have to declare some constructor for the class to be usable in any way
Not necessarily. It is better (but actually "better" depends on what you agree with your team) to understand the defaults the compiler adds for you and only add construction code when it differs from that. For example, in modern C++ you can initialize members inline, often removing the need for a default constructor completely.
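For example (a small sketch, not from the original answer), in-class initializers mean the compiler-generated default constructor already does the right thing:
#include <string>
class Widget
{
public:
    int value() const { return count; }
private:
    int count = 0;               // in-class initializers...
    std::string name = "unset";  // ...so no constructor needs to be written
};
Widget w; // the implicitly generated default constructor uses the initializers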
While the majority of the question has been answered, I thought I'd share some thoughts on the default constructor and the virtual inheritance.
The class must always have a public (or at least protected) constructor to ensure that sub-classes can still call the super-constructor. Even though there is nothing to construct in the base class, this is a necessity of C++ syntax and conceptually makes no real difference.
I like Java as an example for interfaces and super-classes. People often wonder why Java separates abstract classes and interfaces into different syntactical types. As you probably already know, this is due to the diamond inheritance problem, where two super-classes both derive from the same base class and therefore each carry a copy of its data. Java makes this impossible by forcing data-carrying types to be classes, not interfaces, and by allowing sub-classes to inherit from only one class (interfaces, which don't carry data, are not restricted in this way).
We have the following situation:
struct A {
int someData;
A(): someData(0) {}
};
struct B : public A {
virtual void modifyData() = 0;
};
struct C : public A {
virtual void alsoModifyData() = 0;
};
struct D : public B, public C {
virtual void modifyData() { someData += 10; }
virtual void alsoModifyData() { someData -= 10; }
};
When modifyData and alsoModifyData are called on an instance of D, they will not modify the same variable as one might expect, because the compiler creates two copies of someData, one in the B subobject and one in the C subobject.
To counter this problem, the concept of virtual inheritance was introduced. This means the compiler will not just recursively build up a derived class from the super-classes' members, but will instead check whether the virtual super-classes derive from a common ancestor and, if so, share a single copy of it. Very similarly, Java has the concept of an interface, which is not allowed to own data, just functions.
But interfaces can only inherit from other interfaces, which rules out the diamond problem to begin with. This is where Java differs from C++: these C++ "interfaces" are still allowed to inherit from data-owning classes, whereas that is impossible in Java.
The idea of "virtual inheritance", which signals that the class is meant to be sub-classed and that data from common ancestors is to be merged in case of diamond inheritance, makes the necessity (or at least the idiom) of using virtual inheritance on such "interfaces" clear.
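Applied to the C++ example above, making the inheritance of B and C virtual merges the two copies of A back into one (a sketch of the fix):
struct B : public virtual A {
    virtual void modifyData() = 0;
};
struct C : public virtual A {
    virtual void alsoModifyData() = 0;
};
struct D : public B, public C {
    virtual void modifyData() { someData += 10; }
    virtual void alsoModifyData() { someData -= 10; }
};
// D now contains a single shared A subobject, so both methods touch the same someData.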
I hope this answer was (although more conceptual) helpful to you!
I have a base class Base that I declare several polymorphic subclasses of. Some of the base class's functions are pure virtual while others are used directly by the subclass.
(This is all in C++)
So for instance:
class Base
{
protected:
float my_float;
public:
virtual void Function() = 0;
void SetFloat(float value){ my_float = value; }
};
class subclass : public Base
{
void Function(){ std::cout<<"Hello, world!"<<std::endl; }
};
class subclass2 : public Base
{
void Function(){ std::cout<<"Hello, mars!"<<std::endl; }
};
So as you can see, the subclasses would rely on the base class for the function that sets "my_float", but would be polymorphic with regards to the other function.
So I'm wondering if this is good practice. If you have an abstract base class, should you make it completely abstract or is it okay to do this sort of hybrid approach?
This is a common practice. In fact, some well-known design patterns rely on this, such as the Template Method Pattern. In a nutshell, this allows you to specify some aspects of the behavior you're describing through your class hierarchy as invariant, while letting other aspects of that behavior vary based on the specific type of instance you are referring to at a given point.
Whether or not it is a good idea depends on your precise use case: does it make sense for you to share the implementation of your float member's storage among all your subclasses? This is a bit hard to answer with the example you posted, as the derived classes do not rely on my_float in any way, but there are tons of cases where this makes sense and is a good way to split the responsibilities of your class hierarchy.
Even in cases where it does make sense to share implementation of details across classes, you have several other options, such as using composition to share functionality. Sharing functionality through a base class often allows you to be less verbose compared to sharing this functionality via composition, because it allows you to share both the implementation and the interface. To illustrate, your solution has less duplicated code than this alternative that uses composition:
class DataStorage {
private:
float data_;
public:
DataStorage()
: data_(0.f) {
}
void setFloat(float data) {
data_ = data;
}
};
class NotASubclass1 {
private:
DataStorage data_;
public:
void SetFloat(float value){ data_.setFloat(value); }
...
};
class NotASubclass2 {
private:
DataStorage data_;
public:
void SetFloat(float value){ data_.setFloat(value); }
...
};
Being able to have some functions non-virtual has certain benefits, many strongly related:
you can modify them, knowing invocations via a Base*/Base& will use your modified code regardless of what actual derived type the Base* points to
for example, you can collect performance measurements for all Base*/&s, regardless of their derivation
the Non-Virtual Interface (NVI) approach aims for "best of both worlds" - non-virtual functions call non-public virtual functions, giving you a single place to intercept calls via a Base*/& in Base as well as customisability
calls to the non-virtual functions will likely be faster - if inline, up to around an order of magnitude faster for trivial functions like get/set for few-byte fields
you can ensure invariants for all objects derived from Base, selectively encapsulating some private data and the functions that affect it (the final keyword introduced in C++11 lets you do this further down the hierarchy)
having data/functionality "finalised" in the Base class aids understanding and reasoning about class behaviour, and the factoring makes for more concise code overall, but necessarily at the cost of frustrating flexibility and unforeseen reuse - tune to taste
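To make the invariants point concrete for the question's Base class, here is a sketch (the 0..1 clamping rule is invented purely for illustration):
class Base
{
public:
    virtual void Function() = 0;
    virtual ~Base() {}
    // Non-virtual: every call, even through a Base*/Base&, goes through this checked path,
    // so the invariant holds for every object derived from Base.
    void SetFloat(float value)
    {
        if (value < 0.0f) value = 0.0f;
        if (value > 1.0f) value = 1.0f;
        my_float = value;
    }
protected:
    float GetFloat() const { return my_float; } // derived classes can read, but not break, the invariant
private:
    float my_float = 0.0f;
};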
Is it good practice to make a base class constructor protected if I want to avoid instances of it? I know that I could as well have a pure virtual dummy method, but that seems odd...
Please consider the following code:
#include <iostream>
using std::cout;
using std::endl;
class A
{
protected:
A(){};
public:
virtual void foo(){cout << "A\n";};
};
class B : public A
{
public:
void foo(){cout << "B\n";}
};
int main()
{
B b;
b.foo();
A *pa = new B;
pa->foo();
// this does (and should) not compile:
//A a;
//a.foo();
return 0;
}
Is there a disadvantage or side-effect, that I don't see?
It is common practice to make base class constructors protected. When you have a pure-virtual function in your base class, this is not required, as you wouldn't be able to instantiate it.
However, defining a non-pure virtual function in a base class is not considered good practice in general, but it depends heavily on your use case and does no harm here.
There isn't any disadvantage or side-effect. With a protected constructor you just tell other developers that your class is only intended to be used as a base.
What you want to achieve is normally done via the destructor instead of the constructors, simply because you can steer the behavior you need with that one function instead of having to deal with it in every new constructor you write. It's a common coding style/guideline to:
make the destructor public, if you want to allow instances of the class. Often those classes are not meant to be inherited from.
make the destructor pure virtual and public, if you want to use and delete the class in a polymorphic context but don't want to allow instances of the class. In other words, for base classes that are deleted polymorphically.
make the destructor nonvirtual and protected, if you don't want to allow instances of the class and don't want to delete its derivates polymorphically. Normally, you then don't want to use them polymorphically at all, i.e. they have no virtual functions.
Which of the latter two you chose is a design decision and cannot be answered from your question.
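In code, the latter two options look roughly like this (a sketch):
// Meant to be used and deleted polymorphically, but never instantiated itself:
class PolymorphicBase {
public:
    virtual ~PolymorphicBase() = 0;      // pure virtual, public
};
inline PolymorphicBase::~PolymorphicBase() {} // a pure virtual destructor still needs a body
// Not meant to be deleted (or used) polymorphically:
class NonPolymorphicBase {
protected:
    ~NonPolymorphicBase() {}             // non-virtual, protected
};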
It does what you're asking.
However I'm not sure what you are gaining with it. Someone could just write
struct B : A {
// Just to workaround limitation imposed by A's author
};
Normally it's not that one adds nonsense pure-virtual functions to the base class... it's that there are pure virtual functions for which no meaningful implementation can be provided at the base level and that's why they end up being pure virtual.
Not being able to instantiate that class comes as a nice extra property.
Make the destructor pure virtual. Every class has a destructor, and a base class should usually have a virtual destructor, so you are not adding a useless dummy function.
Take note that a pure virtual destructor must have a function body.
class AbstractBase
{
public:
virtual ~AbstractBase() = 0;
};
inline AbstractBase::~AbstractBase() {}
If you don't wish to put the destructor body in the header file, put it in the source file and remove inline keyword.
Why would I want to define a C++ interface that contains private methods?
Even in the case where the methods in the public scope are technically supposed to act like template methods that use the private methods once the interface is implemented, we're still dictating technical specifics right from the interface.
Isn't this a deviation from the original usage of an interface, ie a public contract between the outside and the interior?
You could also define a friend class, which will make use of some private methods from our class, and so force implementation through the interface. This could be an argument.
What other arguments are for defining a private methods within an interface in C++?
The common OO view is that an interface establishes a single contract that defines how objects that conform to that interface are used and how they behave. The NVI idiom or pattern, I never know when one becomes the other, proposes a change in that mentality by dividing the interface into two separate contracts:
how the interface is to be used
what deriving classes must offer
This is in some sense particular to C++ (in fact to any language with multiple inheritance), where the interface can in fact contain code that adapts from the outer interface --how users see me-- and the inner interface --how I am implemented.
This can be useful in different cases, first when the behavior is common but can be parametrized in only specific ways, with a common algorithm skeleton. Then the algorithm can be implemented in the base class and the extension points in derived elements. In languages without multiple inheritance this has to be implemented by splitting into a class that implements the algorithm based in some parameters that comply with a different 'private' interface. I am using here 'private' in the sense that only your class will use that interface.
The second common usage is that by using the NVI idiom, it is simple to instrument the code by only modifying at the base level:
class Base {
public:
void foo() {
foo_impl();
}
private:
virtual void foo_impl() = 0;
};
The extra cost of having to write the dispatcher foo() { foo_impl(); } is rather small, and it allows you to later add a locking mechanism if you convert the code into a multithreaded application, add logging to each call, or add a timer to verify how long different implementations take in each function... Since the actual method that is implemented in derived classes is private at this level, you are guaranteed that all polymorphic calls can be instrumented at a single point: the base (this does not prevent extending classes from making foo_impl public, though).
void Base::foo() {
scoped_log log( "calling foo" ); // we can add traces
lock l(mutex); // thread safety
foo_impl();
}
If the virtual methods were public, then you could not intercept all calls to the methods and would have to add that logging and thread safety to all the derived classes that implement the interface.
You can declare a private virtual method whose purpose is to be overridden in derived classes. Example:
class CharacterDrawer {
public:
virtual ~CharacterDrawer() = 0;
// draws the character after calling getPosition(), getAnimation(), etc.
void draw(GraphicsContext&);
// other methods
void setLightPosition(const Vector&);
enum Animation {
...
};
private:
virtual Vector getPosition() = 0;
virtual Quaternion getRotation() = 0;
virtual Animation getAnimation() = 0;
virtual float getAnimationPercent() = 0;
};
// Note: the pure virtual destructor above still needs a body somewhere, e.g.:
inline CharacterDrawer::~CharacterDrawer() {}
This class can provide drawing functionality for a character, but has to be derived from by a class that provides movement, animation handling, etc.
The advantage of doing it like this instead of providing "setPosition", "setAnimation", etc. is that you don't have to "push" the values every frame; instead you "pull" them.
I think this can be considered as an interface since these methods have nothing to do with actual implementation of all the drawing-related stuff.
Why would I want to define a C++ interface that contains private methods?
The question is a bit ambiguous/contradictory: if you define (purely) an interface, that means you define the public access of anything that connects to it. In that sense, you do not define an interface that contains private methods.
I think your question comes from confusing an abstract base class with an interface (please correct me if I'm wrong).
An abstract base class can be a partial (or even complete) functionality implementation, that has at least an abstract member. In this case, it makes as much sense to have private members as it makes for any other class.
In practice it is rarely needed to have pure virtual base classes with no implementation at all (i.e. base classes that only define a list of pure virtual functions and nothing else). One case where that is required is COM/DCOM/XPCOM programming (and there are others). In most cases though it makes sense to add some private implementation to your abstract base class.
In a template method implementation, it can be used to add a specialization constraint: you can't call the virtual method of the base class from the derived class (otherwise, the method would be declared as protected in the base class):
class Base
{
private:
virtual void V() { /*some logic here, not accessible directly from Derived*/}
};
class Derived: public Base
{
private:
virtual void V()
{
Base::V(); // Not allowed: Base::V is not visible from Derived
}
};
I know that it's OK for a pure virtual function to have an implementation. However, why it is like this? Is there conflict for the two concepts? What's the usage? Can any one offer any example?
In Effective C++, Scott Meyers gives the example that it is useful when you are reusing code through inheritance. He starts with this:
struct Airplane {
virtual void fly() {
// fly the plane
}
...
};
struct ModelA : Airplane { ... };
struct ModelB : Airplane { ... };
Now, ModelA and ModelB are flown the same way, and that's believed to be a common way to fly a plane, so the code is in the base class. However, not all planes are flown that way, and we intend planes to be polymorphic, so it's virtual.
Now we add ModelC, which must be flown differently, but we make a mistake:
struct ModelC : Airplane { ... (no fly function) };
Oops. ModelC is going to crash. Meyers would prefer the compiler to warn us of our mistake.
So, he makes fly pure virtual in Airplane with an implementation, and then in ModelA and ModelB, put:
void fly() { Airplane::fly(); }
Now unless we explicitly state in our derived class that we want the default flying behaviour, we don't get it. So instead of just the documentation telling us all the things we need to check about our new model of plane, the compiler tells us too.
This does the job, but I think it's a bit weak. Ideally we instead have a BoringlyFlyable mixin containing the default implementation of fly, and reuse code that way, rather than putting code in a base class that assumes certain things about airplanes which are not requirements of airplanes. But that requires CRTP if the fly function actually does anything significant:
#include <iostream>
struct Wings {
void flap() { std::cout << "flapping\n"; }
};
struct Airplane {
Wings wings;
virtual void fly() = 0;
};
template <typename T>
struct BoringlyFlyable {
void fly() {
// planes fly by flapping their wings, right? Same as birds?
// (This code may need tweaking after consulting the domain expert)
static_cast<T*>(this)->wings.flap();
}
};
struct PlaneA : Airplane, BoringlyFlyable<PlaneA> {
void fly() { BoringlyFlyable<PlaneA>::fly(); }
};
int main() {
PlaneA p;
p.fly();
}
When PlaneA declares inheritance from BoringlyFlyable, it is asserting via interface that it is valid to fly it in the default way. Note that BoringlyFlyable could define pure virtual functions of its own: perhaps getWings would be a good abstraction. But since it's a template it doesn't have to.
I've a feeling that this pattern can replace all cases where you would have provided a pure virtual function with an implementation - the implementation can instead go in a mixin, which classes can inherit if they want it. But I can't immediately prove that (for instance if Airplane::fly uses private members then it requires considerable redesign to do it this way), and arguably CRTP is a bit high-powered for the beginner anyway. Also it's slightly more code that doesn't actually add functionality or type safety, it just makes explicit what is already implicit in Meyers's design, that some things can fly just by flapping their wings whereas others need to do other stuff instead. So my version is by no means a total shoo-in.
Was addressed in GotW #31. Summary:
There are three main reasons you might do this. #1 is commonplace, #2 is pretty rare, and #3 is a workaround used occasionally by advanced programmers working with weaker compilers.
Most programmers should only ever use #1.
... Which is for pure virtual destructors.
There is no conflict between the two concepts, although they are rarely used together (as OO purists can't reconcile it, but that's beyond the scope of this question/answer).
The idea is that the pure virtual function is given an implementation while at the same time forcing subclasses to override that implementation. The subclasses may invoke the base class function to provide some default behavior. The base cannot be instantiated (it is "abstract") because the virtual function(s) is pure even though it may have an implementation.
Wikipedia sums this up pretty well:
Although pure virtual methods typically have no implementation in the class that declares them, pure virtual methods in C++ are permitted to contain an implementation in their declaring class, providing fallback or default behaviour that a derived class can delegate to if appropriate.
Typically you don't need to provide base class implementations for pure virtuals. But there is one exception: pure virtual destructors. In fact if your base class has a pure virtual destructor, it must have an implementation. Why would you need a pure virtual destructor instead of just a virtual one? Typically, in order to make a base class abstract without requiring the implementation of any other method. For example, in a class where you might reasonably use the default implementation for any method, but you still don't want people to instantiate the base class, you can mark only the destructor as pure virtual.
EDIT:
Here's some code that illustrates a few ways to call the base implementation:
#include <iostream>
using namespace std;
class Base
{
public:
virtual void DoIt() = 0;
};
class Der : public Base
{
public:
void DoIt();
};
void Base::DoIt()
{
cout << "Base" << endl;
}
void Der::DoIt()
{
cout << "Der" << endl;
Base::DoIt();
}
int main()
{
Der d;
Base* b = &d;
d.DoIt();
b->DoIt(); // note that Der::DoIt is still called
b->Base::DoIt();
return 0;
}
That way you can provide a working implementation but still require the child class implementer to explicitly call that implementation.
Well, we have some great answers already... I'm too slow at writing.
My thought would be, for instance, an init function that needs a try/catch block, meaning that work shouldn't be placed in a constructor:
class A {
public:
    virtual bool init() = 0;
};

// A pure virtual function may still be given an out-of-class definition:
bool A::init() {
    // ... initiate stuff that couldn't be done in the constructor
    return true;
}

class B : public A {
public:
    bool init() {
        // ...
        return A::init();
    }
};