By an interface (C# terminology) I mean an abstract class with no data members. Thus, such a class only specifies a contract (a set of methods) that sub-classes must implement. My question is: How to implement such a class correctly in modern C++?
The C++ Core Guidelines [1] encourage the use of abstract classes with no data members as interfaces [I.25 and C.121]. Interfaces should normally be composed entirely of public pure virtual functions and a default/empty virtual destructor [from C.121]. Hence I guess such a class should be declared with the struct keyword, since it only contains public members anyway.
To enable use and deletion of sub-class objects via pointers to the abstract class, the abstract class needs a public default virtual destructor [C.127]. "A polymorphic class should suppress copying" [C.67] by deleting the copy operations (copy constructor, copy assignment operator) to prevent slicing. I assume this also extends to the move constructor and move assignment operator, since those can be used for slicing as well. For actual cloning, the abstract class may define a virtual clone method. (It's not completely clear to me how this should be done: via smart pointers, or via owner<T*> from the Guidelines Support Library? The owner<T> variant makes no sense to me, since the examples should not even compile: the derived function still does not override anything!?)
In C.129, the example uses interfaces with virtual inheritance only. If I understand correctly, it makes no difference if interfaces are derived (perhaps better: "implemented"?) using class Impl : public Interface {...}; or class Impl : public virtual Interface {...};, since they have no data that could be duplicated. The diamond problem (and related problems) don't exist for interfaces (which, I think, is the reason why languages such as C# don't allow/need multiple inheritance for classes). Is the virtual inheritance here done just for clarity? Is it good practice?
In summary, it seems that:
An interface should consist only of public methods. It should declare a public defaulted virtual destructor. It should explicitly delete copy assignment, copy construction, move assignment and move construction. It may define a polymorphic clone method. It should be derived from using public virtual inheritance.
One more thing that confuses me:
An apparent contradiction: "An abstract class typically doesn't need a constructor" [C.126]. However, if one implements the rule of five by deleting all copy operations (following [C.67]), the class no longer has a default constructor. Hence sub-classes can never be instantiated (since sub-class constructors call base-class constructors) and thus the abstract base-class always needs to declare a default constructor?! Am I misunderstanding something?
Below is an example. Do you agree with this way to define and use an abstract class without members (interface)?
// C++17
#include <cstddef>
#include <memory>
#include <vector>

/// An interface describing a source of random bits.
// The type `BitVector` could be something like std::vector<bool>:
using BitVector = std::vector<bool>;

struct RandomSource { // `struct` is used for interfaces throughout the core guidelines (e.g. C.122)
    virtual BitVector get_random_bits(std::size_t num_bits) = 0; // the interface is just this one method

    // rule of 5 (or 6?):
    RandomSource() = default; // needed to instantiate sub-classes !?
    virtual ~RandomSource() = default; // needed to delete polymorphic objects (C.127)
    // Copy operations deleted to avoid slicing. (C.67)
    RandomSource(const RandomSource&) = delete;
    RandomSource& operator=(const RandomSource&) = delete;
    RandomSource(RandomSource&&) = delete;
    RandomSource& operator=(RandomSource&&) = delete;

    // To implement copying, one would need a virtual clone method.
    // Either return a smart pointer to the base class in all cases:
    virtual std::unique_ptr<RandomSource> clone() = 0;
    // or use `owner`, an alias for a raw pointer from the Guidelines Support Library (GSL):
    // virtual owner<RandomSource*> clone() = 0;
    // Since the GSL is not in the standard library, I wouldn't use it right now.
};
// Example use (class implementing the interface)
class PRNG : public virtual RandomSource { // virtual inheritance just for clarity?
    // ...
    BitVector get_random_bits(std::size_t num_bits) override;

    // may the subclass ever define copy operations? I guess no.

    // implemented clone method:
    // owner<PRNG*> clone() override; // for the alternative owner method...
    // Problem: multiple identical methods if several interfaces are inherited,
    // each of which requires a `clone` method?
    // Maybe the std. library should provide an interface
    // (e.g. `Clonable`) to unify this requirement?
    std::unique_ptr<RandomSource> clone() override;

    // ... private data members, more methods, etc...
};
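For completeness, a minimal sketch of how the interface might be consumed (assuming PRNG's declared members are defined elsewhere; `consume` and `demo` are made-up names):

void consume(RandomSource& source) {
    BitVector bits = source.get_random_bits(128); // dynamic dispatch through the interface
}

void demo() {
    std::unique_ptr<RandomSource> source = std::make_unique<PRNG>();
    consume(*source);
} // `source` is destroyed correctly through the virtual destructor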
[1]: https://github.com/isocpp/CppCoreGuidelines, commit 2c95a33fefae87c2222f7ce49923e7841faca482
You ask a lot of questions, but I'll give it a shot.
By an interface (C# terminology) I mean an abstract class with no data members.
Nothing specifically like a C# interface exists. A C++ abstract base class comes the closest, but there are differences (for example, you will need to define a body for the virtual destructor).
Thus, such a class only specifies a contract (a set of methods) that sub-classes must implement. My question is: How to implement such a class correctly in modern C++?
As a virtual base class.
Example:
#include <cstddef>
#include <vector>

class OutputSink
{
public:
    virtual ~OutputSink() = 0; // must be virtual to allow deletion through the interface

    // contract:
    virtual void put(std::vector<std::byte> const& bytes) = 0;
};

OutputSink::~OutputSink() = default; // a pure virtual destructor still needs a body
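To show the intended use, here is a minimal, hypothetical implementation of that interface (ConsoleSink and its behavior are made up for illustration):

#include <memory>

class ConsoleSink final : public OutputSink
{
public:
    void put(std::vector<std::byte> const& bytes) override
    {
        // write the bytes somewhere; omitted for brevity
        (void)bytes;
    }
};

void demo()
{
    std::unique_ptr<OutputSink> sink = std::make_unique<ConsoleSink>();
    sink->put({std::byte{0x42}});
} // deleting through the base pointer is safe because the destructor is virtual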
Hence I guess it should be declared with the struct keyword, since it only contains public members anyway.
There are multiple conventions for when to use a structure versus a class. The guideline I recommend (hey, you asked for opinions :D) is to use structures when you have no invariants on their data. For a base class, please use the class keyword.
"A polymorphic class should suppress copying"
Mostly true. I have written code where the client code didn't perform copies of the inherited classes, and the code worked just fine (without prohibiting them). The base classes didn't forbid it explicitly, but that was code I was writing in my own hobby project. When working in a team, it is good practice to specifically restrict copying.
As a rule, don't bother with cloning, until you find an actual use case for it in your code. Then, implement cloning with the following signature (example for my class above):
virtual std::unique_ptr<OutputSink> clone() = 0; // declared inside OutputSink
If this doesn't work for some reason, use another signature (return a shared_ptr for example). owner<T> is a useful abstraction, but that should be used only in corner cases (when you have a code base that imposes on you the use of raw pointers).
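As a minimal sketch, assuming the base class declares the pure virtual clone() with the signature above (MemorySink is a hypothetical implementation):

class MemorySink final : public OutputSink
{
public:
    void put(std::vector<std::byte> const& bytes) override { buffer_ = bytes; }

    std::unique_ptr<OutputSink> clone() override
    {
        // copy the concrete object, return it through the interface type
        return std::make_unique<MemorySink>(*this);
    }

private:
    std::vector<std::byte> buffer_;
};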
An interface should consist only of public methods. It should declare [...]. It should [...]. It should be derived using public virtual.
Don't try to represent the perfect C# interface in C++. C++ is more flexible than that, and rarely will you need to add a 1-to-1 implementation of a C# concept in C++.
For example, in base classes in C++ I sometimes add public non-virtual functions that are implemented in terms of protected virtual ones:
class OutputSink
{
public:
    void put(const ObjWithHeaderAndData& o) // non-virtual
    {
        put(o.header());
        put(o.data());
    }

protected:
    virtual void put(ObjectHeader const& h) = 0; // specialize in implementations
    virtual void put(ObjectData const& d) = 0;   // specialize in implementations
};
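An implementation then overrides only the protected hooks, while callers keep using the public non-virtual put (NetworkSink is a made-up example, using the same hypothetical types):

class NetworkSink : public OutputSink
{
protected:
    void put(ObjectHeader const& h) override { /* serialize and send the header */ }
    void put(ObjectData const& d) override { /* serialize and send the data */ }
};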
thus the abstract base-class always needs to declare a default constructor?! Am I misunderstanding something?
Define the rule of 5 as needed. If code doesn't compile because you are missing a default constructor, then add a default constructor (use the guidelines only when they make sense).
Edit: (addressing comment)
as soon as you declare a virtual destructor, you have to declare some constructor for the class to be usable in any way
Not necessarily. It is better (but actually "better" depends on what you agree with your team) to understand the defaults the compiler adds for you and only add construction code when it differs from that. For example, in modern C++ you can initialize members inline, often removing the need for a default constructor completely.
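For example, a minimal sketch (Widget is a made-up name):

#include <string>

class Widget
{
public:
    int count() const { return count_; }

private:
    // in-class initializers: the compiler-generated default
    // constructor already does the right thing
    int count_ = 0;
    std::string name_ = "unnamed";
};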
While the majority of the question has been answered, I thought I'd share some thoughts on the default constructor and the virtual inheritance.
The class must always have a public (or at least protected) constructor to assure that sub-classes can still call the super-constructor. Even though there is nothing to construct in the base class, this is a necessity of the syntax of C++ and conceptually makes no real difference.
I like Java as an example for interfaces and super-classes. People often wonder why Java separated abstract classes and interfaces into different syntactical types. As you probably already know, this is due to the diamond inheritance problem, where two super-classes both have the same base class and therefore duplicate data from that base class. Java makes this impossible by forcing data-carrying classes to be classes, not interfaces, and by forcing sub-classes to inherit from only one class (not interface, which doesn't carry data).
We have the following situation:

struct A {
    int someData;
    A() : someData(0) {}
};

struct B : public A {
    virtual void modifyData() = 0;
};

struct C : public A {
    virtual void alsoModifyData() = 0;
};

struct D : public B, public C {
    // unqualified `someData` would be ambiguous here: D contains two copies,
    // one inherited via B and one via C
    virtual void modifyData() { B::someData += 10; }
    virtual void alsoModifyData() { C::someData -= 10; }
};

When modifyData and alsoModifyData are called on an instance of D, they will not modify the same variable as one might expect, because the compiler creates two separate copies of someData, one for the B sub-object and one for the C sub-object.
To counter this problem, the concept of virtual inheritance was introduced. This means that the compiler will not just brute-force recursively build up a derived class from the super-classes' members, but will instead check whether the virtual super-classes derive from a common ancestor and, if so, share a single copy of it. Very similarly, Java has the concept of an interface, which is not allowed to own data, just functions.
But interfaces can strictly inherit only from other interfaces, excluding the diamond problem to begin with. This is where Java of course differs from C++: these C++ "interfaces" are still allowed to inherit from data-owning classes, whereas this is impossible in Java.
The idea of "virtual inheritance", which signals that the class is meant to be sub-classed and that data from common ancestors is to be merged in case of diamond inheritance, makes the necessity (or at least the idiom) of using virtual inheritance on "interfaces" clear.
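To make that concrete, here is the example above rewritten with virtual inheritance; D now contains a single shared A sub-object, so the unqualified access is unambiguous:

struct A {
    int someData;
    A() : someData(0) {}
};

struct B : public virtual A {
    virtual void modifyData() = 0;
};

struct C : public virtual A {
    virtual void alsoModifyData() = 0;
};

struct D : public B, public C {
    void modifyData() override { someData += 10; }     // modifies the same variable...
    void alsoModifyData() override { someData -= 10; } // ...as this one
};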
I hope this answer, although more conceptual, was helpful to you!
Related
I couldn't find examples of an is-a relationship without virtual functions. Is the following pattern OK?
class Base {
public:
    void doSomethingWithX() { /* implementation here */ }
protected:
    ~Base() {}
private:
    int x_;
};

class Derived : public Base {
    // Add some other functionality but inherit doSomethingWithX and its implementation
public:
    void doSomethingWithY();
    ~Derived(); // And document that nobody should inherit further from this class.
private:
    int y_;
};

void foo(Base* ptr) {
    // Do something via Base interface
}

Derived d;
foo(&d);
Edit: I was asked to clarify what I mean by "is this pattern ok".
Does this kind of inheritance satisfy what is usually needed from an is-a relationship (Liskov substitution principle etc.)?
Is it safe to use Derived objects via a pointer to Base? (Or am I missing some problem here?)
I'm asking this because it is often written that a base class destructor should be either public and virtual or protected and non-virtual. But I have never seen real examples of public inheritance without virtual functions.
It's okay for what you're doing here; you can pass a pointer to Derived and it can bind to a pointer to Base just fine. It's not possible to say whether it satisfies the Liskov substitution principle, because we do not know the invariants of your classes.
Just recognize that without any virtual functions, you cannot use polymorphism. This goes beyond simply overriding function behavior; you'll never be able to perform a dynamic_cast of a pointer to Base to a pointer to Derived.
Additionally, if nobody should derive from Derived, then mark it final, which is available since C++11.
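Using the classes from the question:

class Derived final : public Base { // `final`: no further derivation allowed
    // ...
};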
There are two types of polymorphism, both implementable in C++: static and dynamic. The latter is what you get with virtual functions and pointers to base classes, in which the behaviour is specialized depending on the real type of object pointed to.
The former can be achieved by writing a template, in which your template type is assumed to have certain interfaces. The compiler will then enforce that when instantiating the template, at compile time. You can provide additional enforcement using SFINAE and/or static_asserts to ensure the type used "is a" or rather conforms to the templated interface used by your code. Note there is not really a straightforward way of defining this interface as with a base interface class, aside from the aforementioned methods.
Note that static polymorphism is what you get at compile time. No dynamically chosen types at runtime. You'll need some form of base class for that.
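A minimal sketch of the static flavor (Duck and quack() are made-up names):

// The "interface" is implicit in how the template uses T;
// conformance is checked when the template is instantiated.
template <typename T>
void make_noise(T& t)
{
    t.quack(); // compile-time error if T lacks a suitable quack()
}

struct Duck {
    void quack() { /* ... */ }
};

int main()
{
    Duck d;
    make_noise(d); // OK: Duck conforms to the implicit interface
    // int i; make_noise(i); // would not compile: int has no quack()
}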
I've a question regarding a concept. First, I'm a mechanical engineer and not a programmer, thus I have some C++ knowledge but not much experience. I use the finite element method (FEM) to solve partial differential equations.
I have a base class Solver and two child classes: linSolver, for linear FEM, and nlinSolver, for non-linear FEM. The members and methods that both children share are in the base class. The base class members are all protected. Thus using inheritance makes the child classes "easy to use", as if there were no inheritance or other boundaries. The base class itself, Solver, is incomplete, meaning only the children are of any use to me.
The concept actually works pretty well, but I think that having an unusable class is bad design. In addition, I read that protected inheritance is not preferred and should be avoided if possible. I think the last point doesn't really apply to my specific use, since I will never use the base class alone, and any attempt to do so will fail (since it is incomplete).
The questions are:
Is it common to use inheritance to reduce duplicated code even if the base class will be unusable?
What are alternatives or better solutions to such a problem?
Is protected inheritance really bad?
Thank you for your time.
Dnaiel
Having "unusable" base classes is actually very common. You can have the base class to define a common interface usable by the classes that inherits the base-class. And if you declare those interface-functions virtual you can use e.g. references or pointers to the base-class and the correct function in the inherited class object will be called.
Like this:
class Base
{
public:
    virtual ~Base() {}
    virtual void someFunction() = 0; // declares an abstract (pure virtual) function
};

class ChildA : public Base
{
public:
    void someFunction() override { /* implementation here */ }
};

class ChildB : public Base
{
public:
    void someFunction() override { /* other implementation here */ }
};
With the above classes, you can do
Base* ptr1 = new ChildA;
Base* ptr2 = new ChildB;
ptr1->someFunction(); // Calls `ChildA::someFunction`
ptr2->someFunction(); // Calls `ChildB::someFunction`
However this will not work:
Base baseObject; // Compilation error! Base class is "unusable" by itself
While the (working) example above is simple, think about what you could do when passing the pointers to a function. Instead of having two overloaded functions each taking the actual class, you can have a single function which takes a pointer to the base class, and the compiler and runtime-system will make sure that the correct (virtual) functions are called:
void aGlobalFunction(Base* ptr)
{
    // Will call either `ChildA::someFunction` or `ChildB::someFunction`,
    // depending on which pointer is passed as argument
    ptr->someFunction();
}
...
aGlobalFunction(ptr1);
aGlobalFunction(ptr2);
Even though the base-class is "unusable" directly, it still provides some functionality that is part of the core of how C++ can be (and is) used.
Of course, the base class doesn't have to be all interface; it can contain other common (protected) helper or utility functions that can be used by all classes that inherit from it. Remember that inheritance is an "is-a" relationship between classes. If you have two different classes that both "are-a" something, then using inheritance is probably a very good solution.
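A minimal sketch of that mixed style, reusing the Solver naming from the question (the helper function is made up):

class Solver
{
public:
    virtual ~Solver() {}
    virtual void solve() = 0; // each child must implement this

protected:
    // common helper shared by all children
    double absoluteValue(double r) const { return r < 0 ? -r : r; }
};

class LinSolver : public Solver
{
public:
    void solve() override { /* linear FEM; may call absoluteValue(...) */ }
};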
You should check out the concept of an abstract class.
It's designed to provide a base class that cannot be instantiated.
To do so, you provide at least one pure virtual method in the base class, like this:
virtual void f() = 0;
Each child has to override the f function (and any other pure virtual function from the base class) in order to be instantiable.
Don't think of the BaseClass as a class in its own right, but as an interface contract and some implementation help. Therefore, it should be abstract, if necessary by declaring the dtor pure virtual but providing an implementation anyway. Some OO purists may frown upon any non-private element, but purity is not a good target.
Consider the following code:
#include <iostream>
#include <type_traits>

// Abstract base class
template<class Crtp>
class Base
{
    // Lifecycle
public: // MARKER 1
    Base(const int x) : _x(x) {}
protected: // MARKER 2
    ~Base() {}

    // Functions
public:
    int get() { return _x; }
    Crtp& set(const int x) { _x = x; return static_cast<Crtp&>(*this); }

    // Data members
protected:
    int _x;
};

// Derived class
class Derived
    : public Base<Derived>
{
    // Lifecycle
public:
    Derived(const int x) : Base<Derived>(x) {}
    ~Derived() {}
};

// Main
int main()
{
    Derived d(5);
    std::cout << d.set(42).get() << std::endl;
    return 0;
}
If I want a public inheritance of Derived from Base, and if I don't want a virtual destructor in the base class, what would be the best keywords for the constructor (MARKER 1) and the destructor (MARKER 2) of Base to guarantee that nothing bad can happen ?
Whatever programming style you use, you can always do something bad, even if you follow the best of the bestest guideline practice. There is something physical behind that (related to the impossibility of reducing global entropy).
That said, don't confuse "classic OOP" (a methodology) with C++ (a language), OOP inheritance (a relation) with C++ inheritance (an aggregation mechanism), or OOP polymorphism (a model) with C++ runtime and static polymorphism (dispatching mechanisms).
Although the names sometimes match, the C++ things don't necessarily have to serve the OOP things.
Public inheritance from a base with some non-virtual methods is normal, and the destructor is not special: just don't call delete on the CRTP base.
Unlike with classic OOP, a CRTP base has a different type for each of the deriveds, so having a "pointer to the base" is pointless, since there is no "pointer to a common type". Hence the risk of calling "delete pbase" is very limited.
The "protected-dtor paradigm" is valid only if you are programming OOP inheritance using C++ inheritance for objects managed (and deleted) through pointer-based polymorphism. If you are following other paradigms, those rules should not be taken literally.
In your case, the protected dtor merely prevents you from creating a Base<Derived> on the stack and from calling delete on a Base*, neither of which you will ever do: a Base without a Derived makes no sense, and holding a Base<Derived>* makes no sense when you can simply hold a Derived*. Hence having both a public ctor and dtor makes no particular mess.
But you can even make the opposite choice and have both the ctor and dtor protected, since you will never construct a Base alone; it always needs the Derived type to be known.
Because of the particular construction of CRTP, all the classical OOP advice leads to a sort of "indifferent equilibrium" here, since the "dangerous use case" no longer exists.
You can follow it or not, but no particular bad thing can happen, as long as you use the objects the way they were designed to be used.
While your code works, I find it odd to mark the destructor rather than the constructor as protected. Normally my reasoning would be that you want to prevent the programmer from accidentally creating a CRTP base object. It all comes down to the same thing of course, but this is hardly canonical code.
The only thing that your code prevents is the accidental deletion of a CRTP object via a base pointer – i.e. a case like this:
Base<Derived>* base = new Derived;
delete base;
But that is a highly artificial situation that won’t arise in real code since CRTP simply isn’t supposed to be used that way. The CRTP base is an implementation detail that should be completely hidden from the client code.
So my recipe for this situation would be:
Define a protected constructor.
Don’t define a destructor – or, if required for the CRTP semantic, define it as public (and non-virtual).
There's no problem: since the destructor is protected, client code can't delete a pointer to Base, so there's no issue with Base's destructor being non-virtual.
In Java, we can define different interfaces and then later implement multiple interfaces in a concrete class.
// Simulate Java Interface in C++
/*
interface IOne {
void MethodOne(int i);
.... more functions
}
interface ITwo {
double MethodTwo();
... more functions
}
class ABC implements IOne, ITwo {
// implement MethodOne and MethodTwo
}
*/
In C++, generally speaking, we should avoid the use of multiple inheritance, although multiple inheritance does have its edge in some situations.
class ABC {
public:
virtual void MethodOne(int /*i*/) = 0 {}
virtual double MethodTwo() = 0 {}
virtual ~ABC() = 0 {}
protected:
ABC() {} // ONLY ABC or subclass can access it
};
Question1> Based on the design of ABC, should I improve any other things in order to make it a decent ABC?
Question2> Is it true that a good ABC should not contain member variables and instead variables should be kept in the subclasses?
Question3> As I indicated in the comments, what if ABC has to contain too many pure functions? Is there a better way?
Do not provide an implementation for pure virtual methods unless it is necessary.
Do not make your destructor pure virtual.
Do not make your constructor protected. You cannot create an instance of an abstract class.
Better to hide the implementations of the constructor and destructor inside a source file, so as not to pollute other object files.
Make your interface non-copyable.
If this is an interface, it is better not to have any member variables in it; otherwise it would be an abstract base class rather than an interface.
Too many pure functions is OK, unless you can do it with fewer pure functions.
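Putting those points together, a revised ABC might look like this (a sketch; the out-of-line definitions would go in a single source file):

// header file
class ABC {
public:
    ABC();                             // defined in the source file
    virtual ~ABC();                    // virtual, but not pure

    ABC(const ABC&) = delete;          // non-copyable
    ABC& operator=(const ABC&) = delete;

    virtual void MethodOne(int i) = 0; // pure, with no inline body
    virtual double MethodTwo() = 0;
};

// source file
ABC::ABC() = default;
ABC::~ABC() = default;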
In C++, generally speaking, we should avoid the usage of multiple inheritance
Like any other language feature, you should use multiple inheritance wherever it is appropriate. Interfaces are generally considered an appropriate use of multiple inheritance (see, for example, COM).
The constructor of ABC need not be protected; ABC cannot be constructed directly because it is abstract.
The ABC destructor should not be declared as pure virtual (it should be declared as virtual, of course). You should not require derived classes to implement a user-declared destructor if they do not need one.
An interface should not have any state, and thus should not have any member variables, because an interface only defines how something is to be used, not how it is to be implemented.
ABC should never have too many member functions; it should have exactly the number that are required. If there are too many, you should obviously remove the ones that are not used or not needed, or refactor the interface into several more specific interfaces.
Based on the design of ABC, should I improve any other things in order to make it a decent ABC?
You've got a couple of syntax errors. For some reason, you're not allowed to put a definition of a pure virtual function inside a class definition; and in any case, you almost certainly don't want to define them in the ABC. So the declarations would usually be:
virtual void MethodOne(int /*i*/) = 0; // ";" not "{}" - just a declaration
There's not really any point in making the destructor pure, although it should be virtual (or, in some cases, non-virtual and protected - but it's safest to make it virtual).
virtual ~ABC() {} // no "= 0"
There's no need for the protected constructor - the fact that it is abstract already prevents instantiation except as a base class.
Is it true that a good ABC should not contain member variables and instead variables should be kept in the subclasses?
Usually, yes. That gives a clean separation between interface and implementation.
As I indicated in the comments, what if ABC has to contain too many pure functions? Is there a better way?
The interface should be as complex as it needs to be, and no more. There are only "too many" functions if some are unnecessary; in which case, get rid of them. If the interface looks too complicated, it may be trying to do more than one thing; in that case, you should be able to break it up into smaller interfaces, each with a single purpose.
First: why should we avoid multiple inheritance in C++? I've never seen
a largish application which didn't use it extensively. Inheriting from
multiple interfaces is a good example of where it is used.
Note that Java's interface is broken—as soon as you want to use
programming by contract, you're stuck with using abstract classes, and
they don't allow multiple inheritance. In C++, however, it's easy:
#include <boost/noncopyable.hpp>

class One : boost::noncopyable
{
virtual void doFunctionOne( int i ) = 0;
public:
virtual ~One() {}
void functionOne( int i )
{
// assert pre-conditions...
doFunctionOne( i );
// assert post-conditions...
}
};
class Two : boost::noncopyable
{
virtual double doFunctionTwo() = 0;
public:
virtual ~Two() {}
double functionTwo()
{
// assert pre-conditions...
double results = doFunctionTwo();
// assert post-conditions...
return results;
}
};
class ImplementsOneAndTwo : public One, public Two
{
virtual void doFunctionOne( int i );
virtual double doFunctionTwo();
public:
};
Alternatively, you could have a compound interface:
class OneAndTwo : public One, public Two
{
};
class ImplementsOneAndTwo : public OneAndTwo
{
virtual void doFunctionOne( int i );
virtual double doFunctionTwo();
public:
};
and inherit from it, whichever makes the most sense.
This is the more or less standard idiom; in cases where there cannot
conceivably be any pre- or post-conditions in the interface (typically
call inversion), the virtual functions may be public, but in general,
they will be private, so that you can enforce the pre- and
post-conditions.
Finally, note that in a lot of cases (especially if the class
represents a value), you will just implement it directly, without the
interface. Unlike Java, you don't need a separate interface to maintain
the implementation in a different file from the class
definition—that's the way C++ works by default (with the class
definition in a header, but the implementation code in a source file).
Why would I want to define a C++ interface that contains private methods?
Even in the case where the public methods are technically supposed to act like template methods that use the private methods once the interface is implemented, we are still dictating technical specifics right from the interface.
Isn't this a deviation from the original purpose of an interface, i.e. a public contract between the outside and the interior?
You could also define a friend class, which will make use of some private methods from our class, and so force implementation through the interface. This could be an argument.
What other arguments are there for defining private methods within an interface in C++?
The common OO view is that an interface establishes a single contract that defines how objects that conform to that interface are used and how they behave. The NVI idiom or pattern, I never know when one becomes the other, proposes a change in that mentality by dividing the interface into two separate contracts:
how the interface is to be used
what deriving classes must offer
This is in some sense particular to C++ (in fact to any language with multiple inheritance), where the interface can in fact contain code that adapts from the outer interface --how users see me-- and the inner interface --how I am implemented.
This can be useful in different cases: first, when the behavior is common but can be parametrized only in specific ways, with a common algorithm skeleton. Then the algorithm can be implemented in the base class and the extension points in derived elements. In languages without multiple inheritance this has to be implemented by splitting it into a class that implements the algorithm based on some parameters that comply with a different 'private' interface. I am using 'private' here in the sense that only your class will use that interface.
The second common usage is that by using the NVI idiom, it is simple to instrument the code by only modifying at the base level:
class Base {
public:
    void foo() {
        foo_impl();
    }

private:
    virtual void foo_impl() = 0;
};
The extra cost of having to write the dispatcher foo() { foo_impl(); } is rather small, and it allows you to later add a locking mechanism if you convert the code into a multithreaded application, add logging to each call, or add a timer to verify how long different implementations take in each function... Since the actual method that is implemented in derived classes is private at this level, you are guaranteed that all polymorphic calls can be instrumented at a single point: the base (this does not stop extending classes from making foo_impl public, though):
void Base::foo() {
    scoped_log log( "calling foo" ); // we can add traces
    lock l(mutex);                   // thread safety
    foo_impl();
}
If the virtual methods were public, then you could not intercept all calls to the methods and would have to add that logging and thread safety to all the derived classes that implement the interface.
You can declare a private virtual method whose purpose is to be overridden in derived classes. Example:
class CharacterDrawer {
public:
    virtual ~CharacterDrawer() = 0;

    // draws the character after calling getPosition(), getAnimation(), etc.
    void draw(GraphicsContext&);

    // other methods
    void setLightPosition(const Vector&);

    enum Animation {
        ...
    };

private:
    virtual Vector getPosition() = 0;
    virtual Quaternion getRotation() = 0;
    virtual Animation getAnimation() = 0;
    virtual float getAnimationPercent() = 0;
};

// a pure virtual destructor still needs a definition
CharacterDrawer::~CharacterDrawer() = default;
This object can provide drawing functionality for a character, but it has to be derived from by a class that provides movement, animation handling, etc.
The advantage of doing it like this instead of providing "setPosition", "setAnimation", etc. is that you don't have to "push" the values at each frame; instead, you "pull" them.
I think this can be considered as an interface since these methods have nothing to do with actual implementation of all the drawing-related stuff.
Why would I want to define a C++
interface that contains private
methods?
The question is a bit ambiguous/contradictory: if you define (purely) an interface, that means you define the public access of anything that connects to it. In that sense, you do not define an interface that contains private methods.
I think your question comes from confusing an abstract base class with an interface (please correct me if I'm wrong).
An abstract base class can be a partial (or even complete) functionality implementation, that has at least an abstract member. In this case, it makes as much sense to have private members as it makes for any other class.
In practice it is rarely needed to have pure virtual base classes with no implementation at all (i.e. base classes that only define a list of pure virtual functions and nothing else). One case where that is required is COM/DCOM/XPCOM programming (and there are others). In most cases though it makes sense to add some private implementation to your abstract base class.
In a template method implementation, it can be used to add a specialization constraint: you can't call the virtual method of the base class from the derived class (otherwise, the method would be declared as protected in the base class):
class Base
{
private:
    virtual void V() { /* some logic here, not accessible directly from Derived */ }
};

class Derived : public Base
{
private:
    virtual void V()
    {
        Base::V(); // Not allowed: Base::V is not visible from Derived
    }
};
};