Is it good practice to make a base class constructor protected if I want to avoid instances of it? I know that I could as well have a pure virtual dummy method, but that seems odd...
Please consider the following code:
#include <iostream>
using std::cout;
using std::endl;
class A
{
protected:
A() {}
public:
virtual ~A() {} // virtual destructor, so deleting through an A* below is safe
virtual void foo() { cout << "A\n"; }
};
class B : public A
{
public:
void foo(){cout << "B\n";}
};
int main()
{
B b;
b.foo();
A *pa = new B;
pa->foo();
delete pa; // A's destructor is virtual, so this destroys the B correctly
// this does (and should) not compile:
//A a;
//a.foo();
return 0;
}
Is there a disadvantage or side-effect, that I don't see?
It is common practice to make base class constructors protected. When you have a pure-virtual function in your base class, this is not required, as you wouldn't be able to instantiate it.
However, whether a base class should provide default (non-pure) implementations for its virtual functions is debatable as a matter of style; it depends heavily on your use case and does no harm here.
There isn't any disadvantage or side-effect. With a protected constructor you just tell other developers that your class is only intended to be used as a base.
What you want to achieve is normally done via the destructor instead of the constructors, simply because you can steer the behavior you need with that one function instead of having to deal with it in every new constructor you write. It's a common coding style/guideline to
make the destructor public, if you want to allow instances of the class. Often those classes are not meant to be inherited from.
make the destructor pure virtual and public, if you want to use and delete the class in a polymorphic context but don't want to allow instances of the class. In other words, for base classes that are deleted polymorphically.
make the destructor nonvirtual and protected, if you don't want to allow instances of the class and don't want to delete its derived classes polymorphically. Normally, you then don't want to use them polymorphically at all, i.e. they have no virtual functions.
Which of the latter two you choose is a design decision and cannot be answered from your question.
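In code, the three conventions might look like this (a minimal sketch; the class names are mine):
// 1. Public destructor: a concrete class, instances allowed.
class Value {
public:
    ~Value() {} // often such classes are not meant to be inherited from
};

// 2. Public pure virtual destructor: an abstract base deleted polymorphically.
class PolymorphicBase {
public:
    virtual ~PolymorphicBase() = 0;
};
inline PolymorphicBase::~PolymorphicBase() {} // a pure virtual destructor still needs a body

// 3. Protected nonvirtual destructor: a base not used polymorphically.
class MixinBase {
protected:
    ~MixinBase() {} // derived classes destruct fine, but 'delete (MixinBase*)p;' won't compile
};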
It does what you're asking.
However I'm not sure what you are gaining with it. Someone could just write
struct B : A {
// Just to workaround limitation imposed by A's author
};
Normally it's not that one adds nonsense pure-virtual functions to the base class; it's that there are pure virtual functions for which no meaningful implementation can be provided at the base level, and that's why they end up being pure virtual.
Not being able to instantiate that class comes as a nice extra property.
Make the destructor pure virtual. Every class has a destructor, and a base class should usually have a virtual destructor, so you are not adding a useless dummy function.
Take note that a pure virtual destructor must have a function body.
class AbstractBase
{
public:
virtual ~AbstractBase() = 0;
};
inline AbstractBase::~AbstractBase() {}
If you don't wish to put the destructor body in the header file, put it in the source file and remove the inline keyword.
Consider the following code:
class Base
{
public:
virtual void Foo() {}
};
class Derived : public Base
{
private:
void Foo() {}
};
void func()
{
Base* a = new Derived;
a->Foo(); //fine, calls Derived::Foo()
Derived* b = new Derived;
// b->Foo(); //error
static_cast<Base*>(b)->Foo(); //fine, calls Derived::Foo()
}
I've heard two different schools of thought on the matter:
Leave accessibility the same as the base class, since users could use static_cast<Derived*> to gain access anyway.
Make functions as private as possible. If users require a->Foo() but not b->Foo(), then Derived::Foo should be private. It can always be made public if and when that's required.
Is there a reason to prefer one or the other?
Restricting access to a member in a subtype breaks the Liskov substitution principle (the L in SOLID). I would advise against it in general.
Update: It may "work", as in the code compiles and runs and produces the expected output, but if you are hiding a member, your intention is probably to make the subtype less general than the original. This is what breaks the principle. If instead your intention is to clean up the subtype interface by leaving only what's interesting to the user of the API, go ahead and do it.
Not an answer to your original question, but if we are talking about classes design...
As Alexandrescu and Sutter recommend in Item 39 of their C++ Coding Standards, you should prefer using public non-virtual functions and private/protected virtual ones:
class Base {
public:
void Foo() { FooImpl(); } // fixed API function, delegates to the virtual part
private:
virtual void FooImpl(); // implementation details
};
class Derived : public Base {
private:
virtual void FooImpl(); // special implementation
};
This depends on your design: do you want to call the virtual function through a derived class object or not?
If not, then yes, it's always better to make the overrides private or protected.
There is no code execution difference based on access specifier, but the code becomes cleaner.
Once you have restricted the access to the class's virtual function, the reader of that class can be sure that it is not going to be called through an object or pointer of the derived class.
In C++, why does a pure virtual method mandate compulsory overriding only in its immediate children (for object creation), but not in the grandchildren and so on?
struct B {
virtual void foo () = 0;
};
struct D : B {
virtual void foo () { /* ... */ }
};
struct DD : D {
// ok! if 'foo' is not overridden here, DD uses 'D::foo' implicitly
};
I don't see any big deal in leaving this feature out.
For example, from a language design point of view, it could have been possible that struct DD is allowed to use D::foo only if it has some explicit statement like using D::foo;. Otherwise it would have to override foo.
Is there any practical way of having this effect in C++?
I found one mechanism where at least we are prompted to announce the overriding method explicitly. It's not a perfect way, though.
Suppose we have a few pure virtual methods in the base class B:
class B {
virtual void foo () = 0;
virtual void bar (int) = 0;
};
Among them, suppose we want only foo() to be overridden throughout the whole hierarchy. For simplicity, we use a helper base class (inherited virtually) which contains that particular method. It has a template constructor which simply accepts a pointer to a member function with the same signature as that method.
class Register_foo {
protected:
virtual void foo () = 0; // declare here
template<typename T> // this matches the signature of 'foo'
Register_foo (void (T::*)()) {}
};
class B : public virtual Register_foo { // <---- virtual inheritance
virtual void bar (int) = 0;
protected:
B () : Register_foo(&B::foo) {} // <--- explicitly pass the function name
};
Every subsequent child class in the hierarchy has to register a foo inside each of its constructors explicitly, e.g.:
struct D : B {
D () : Register_foo(&D::foo) {}
virtual void foo () {};
};
This registration mechanism has nothing to do with the business logic. The child class can choose to register its own foo, its parent's foo, or even some other method with a matching signature, but at least the choice is announced explicitly.
In your example, you have not declared D::foo pure; that is why it does not need to be overridden. If you want to require that it be overridden again, then declare it pure.
If you want to be able to instantiate D, but force any further derived classes to override foo, then you can't. However, you could derive yet another class from D that redeclares it pure, and then classes derived from that must override it again.
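A sketch of that approach, reusing B and D from the question (the intermediate class DA is my own illustration):
struct B {
    virtual ~B() {}
    virtual void foo() = 0;
};

struct D : B {
    virtual void foo() { /* ... */ } // concrete: D can be instantiated
};

struct DA : D {
    virtual void foo() = 0; // redeclared pure: DA is abstract again
};

struct DD : DA {
    virtual void foo() { /* ... */ } // required: classes derived from DA must override again
};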
What you're basically asking for is to require that the most derived class implement the function. And my question is: why? About the only time I can imagine this to be relevant is a function like clone() or another(), which returns a new instance of the same type. And that's what you really want to enforce: that the new instance has the same type. Even there, where the function is actually implemented is irrelevant. And you can enforce that:
class Base
{
virtual Base* doClone() const = 0;
public:
Base* clone() const
{
Base* results = doClone();
assert( typeid(*results) == typeid(*this) );
return results;
}
};
(In practice, I've never found people forgetting to override clone to be a real problem, so I've never bothered with something like the above. It's a generally useful technique, however, anytime you want to enforce post-conditions.)
A pure virtual means that to be instantiated, the pure virtual must be overridden in some descendant of the class that declares the pure virtual function. That can be in the class being instantiated or any intermediate class between the base that declares the pure virtual, and the one being instantiated.
It's still possible, however, to have intermediate classes that derive from one with a pure virtual without overriding that pure virtual. Like the class that declares the pure virtual, those classes can only be used as base classes; you can't create instances of those classes, only of classes that derive from them, in which every pure virtual has been implemented.
As far as requiring that a descendant override a virtual even if an intermediate class has already done so, the answer is no, C++ doesn't provide anything intended to do that. It almost seems like you might be able to hack something together using multiple (probably virtual) inheritance, so that the implementation in the intermediate class would be present but ambiguous to use, but I haven't thought that through enough to be sure how (or if) it would work. Even if it did, it would only do its trick when the function in question is called, not when an object is merely instantiated.
Is there any practical way of having this effect in C++?
No, and for good reason. Imagine maintenance in a large project if this were part of the standard. Some base class or intermediate base class needs to add a new pure virtual function to its public interface. Now every single child and grandchild thereof would need to be changed and recompiled (even if it were as simple as adding using D::foo; as you suggested). You probably see where this is heading: hell's kitchen.
If you really want to enforce implementation, you can force implementation of some other pure virtual in the child class(es). This can also be done using the CRTP pattern, as sketched below.
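For illustration, here is a C++11 sketch of the CRTP route; RequireFoo and its static_assert are my own construction, not from the answer. A class opting in via RequireFoo<Self, Parent> fails to compile, at the point where it is first constructed, unless Self declares its own foo():
#include <type_traits>

struct B {
    virtual ~B() {}
    virtual void foo() = 0;
};

// CRTP mix-in: the check runs when Self's constructor is instantiated.
template <typename Self, typename Parent = B>
struct RequireFoo : Parent {
    RequireFoo() {
        static_assert(
            std::is_same<decltype(&Self::foo), void (Self::*)()>::value,
            "Self must declare its own foo()");
    }
};

struct D : RequireFoo<D> {
    void foo() {} // OK: &D::foo has type void (D::*)()
};

struct DD : RequireFoo<DD, D> {
    void foo() {} // remove this line and 'DD dd;' trips the static_assert,
                  // because the inherited &DD::foo has type void (D::*)()
};

int main() {
    D d;
    DD dd;
}
The obvious limitations: classes that don't opt in to the mix-in escape the check entirely, and the check only fires when an object is actually constructed.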
I understand the need for a virtual destructor. But why do we need a pure virtual destructor? In one of the C++ articles, the author has mentioned that we use pure virtual destructor when we want to make a class abstract.
But we can make a class abstract by making any of its member functions pure virtual.
So my questions are
When do we really make a destructor pure virtual? Can anybody give a good real time example?
When we are creating abstract classes, is it good practice to make the destructor pure virtual as well? If yes, then why?
Probably the real reason that pure virtual destructors are allowed is that to prohibit them would mean adding another rule to the language and there's no need for this rule since no ill-effects can come from allowing a pure virtual destructor.
Nope, plain old virtual is enough.
If you have a class with default implementations for all its virtual methods and want to make it abstract without forcing anyone to override any specific method, you can make the destructor pure virtual. I don't see much point in it, but it's possible.
Note that since the compiler will implicitly generate a destructor for a derived class if that class's author does not write one, derived classes will not be abstract. Therefore having the pure virtual destructor in the base class makes no difference for the derived classes; it only makes the base class abstract (thanks to @kappa's comment).
One may also assume that every deriving class would probably need to have specific clean-up code, and use the pure virtual destructor as a reminder to write it, but this seems contrived (and unenforced).
Note: the destructor is the only method which, even if it is pure virtual, has to have an implementation in order to instantiate derived classes. (Yes, pure virtual functions can have implementations; being pure virtual means derived classes must override the method, which is orthogonal to having an implementation.)
struct foo {
virtual void bar() = 0;
};
void foo::bar() { /* default implementation */ }
class foof : public foo {
void bar() { foo::bar(); } // have to explicitly call default implementation.
};
All you need for an abstract class is at least one pure virtual function. Any function will do; but as it happens, the destructor is something that any class will have, so it's always there as a candidate. Furthermore, making the destructor pure virtual (as opposed to just virtual) has no behavioral side effects other than to make the class abstract. As such, a lot of style guides recommend that the pure virtual destructor be used consistently to indicate that a class is abstract, if for no other reason than that it provides a consistent place someone reading the code can look to see if the class is abstract.
If you want to create an abstract base class:
that can't be instantiated (yep, this is redundant with the term "abstract"!)
but needs virtual destructor behavior (you intend to carry around pointers to the ABC rather than pointers to the derived types, and delete through them)
but does not need any other virtual dispatch behavior for other methods (maybe there are no other methods? consider a simple protected "resource" container that needs a constructor/destructor/assignment but not much else)
...it's easiest to make the class abstract by making the destructor pure virtual and providing a definition (method body) for it.
For our hypothetical ABC:
You guarantee that it cannot be instantiated (even from inside the class itself, which is why private constructors may not be enough), you get the virtual behavior you want for the destructor, and you do not have to find and tag another method that doesn't need virtual dispatch as "virtual".
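A minimal sketch of such a class (the name is illustrative):
class ScopedResource {
public:
    virtual ~ScopedResource() = 0; // the only virtual function; alone makes the class abstract
protected:
    ScopedResource() {}            // constructible by derived classes only
};

inline ScopedResource::~ScopedResource() {} // body still required; runs after derived destructors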
Here I want to show when we need a virtual destructor and when we need a pure virtual destructor.
class Base
{
public:
Base();
virtual ~Base() = 0; // Pure virtual, now no one can create the Base Object directly
};
Base::Base() { cout << "Base Constructor" << endl; }
Base::~Base() { cout << "Base Destructor" << endl; }
class Derived : public Base
{
public:
Derived();
~Derived();
};
Derived::Derived() { cout << "Derived Constructor" << endl; }
Derived::~Derived() { cout << "Derived Destructor" << endl; }
int _tmain(int argc, _TCHAR* argv[])
{
Base* pBase = new Derived();
delete pBase;
Base* pBase2 = new Base(); // Error 1 error C2259: 'Base' : cannot instantiate abstract class
}
When you want no one to be able to create an object of the Base class directly, use a pure virtual destructor: virtual ~Base() = 0. Usually at least one pure virtual function is required; let virtual ~Base() = 0 be that function.
When you do not need the above, and only need the safe destruction of a Derived class object through a base pointer,
Base* pBase = new Derived();
delete pBase;
a pure virtual destructor is not required; a plain virtual destructor will do the job.
From the answers I have read to your question, I couldn't deduce a good reason to actually use a pure virtual destructor. For example, the following reason doesn't convince me at all:
Probably the real reason that pure virtual destructors are allowed is that to prohibit them would mean adding another rule to the language and there's no need for this rule since no ill-effects can come from allowing a pure virtual destructor.
In my opinion, pure virtual destructors can be useful. For example, assume you have two classes myClassA and myClassB in your code, and that myClassB inherits from myClassA. For the reasons mentioned by Scott Meyers in his book "More Effective C++", Item 33 "Making non-leaf classes abstract", it is better practice to actually create an abstract class myAbstractClass from which myClassA and myClassB inherit. This provides better abstraction and prevents some problems arising with, for example, object copies.
In the abstraction process (of creating the class myAbstractClass), it can happen that no method of myClassA or myClassB is a good candidate for being a pure virtual method (which is a prerequisite for myAbstractClass to be abstract). In this case, you define the abstract class's destructor as pure virtual.
Here is a concrete example from some code I have written myself. I have two classes, NumericsParams and PhysicsParams, which share common properties. I therefore let them inherit from the abstract class IParams. In this case, I had absolutely no method at hand that could be pure virtual. The setParameter method, for example, must have the same body for every subclass. The only choice I had was to make IParams' destructor pure virtual.
struct IParams
{
IParams(const ModelConfiguration& aModelConf);
virtual ~IParams() = 0;
void setParameter(const N_Configuration::Parameter& aParam);
std::map<std::string, std::string> m_Parameters;
};
struct NumericsParams : IParams
{
NumericsParams(const ModelConfiguration& aNumericsConf);
virtual ~NumericsParams();
double dt() const;
double ti() const;
double tf() const;
};
struct PhysicsParams : IParams
{
PhysicsParams(const N_Configuration::ModelConfiguration& aPhysicsConf);
virtual ~PhysicsParams();
double g() const;
double rho_i() const;
double rho_w() const;
};
If you want to stop instantiation of a base class without making any change in your already implemented and tested derived classes, you implement a pure virtual destructor in your base class.
You are getting into hypotheticals with these answers, so I will try a simpler, more down-to-earth explanation for clarity's sake.
The basic relationships of object-oriented design are two: IS-A and HAS-A. I did not make those up; that is what they are called.
IS-A indicates that a particular object identifies as being of the class that is above it in a class hierarchy. A banana object is a fruit object if it is a subclass of the fruit class. This means that anywhere a fruit class can be used, a banana can be used. The relationship is not symmetric, though: you cannot substitute a base class object where a specific derived class is called for.
HAS-A indicates that an object is part of a composite class and that there is an ownership relationship. In C++ that means it is a member object, and as such the onus is on the owning class to dispose of it or hand ownership off before destructing itself.
These two concepts are easier to realize in single-inheritance languages than in a multiple-inheritance model like C++, but the rules are essentially the same. The complication comes when the class identity is ambiguous, such as passing a Banana class pointer into a function that takes a Fruit class pointer.
Virtual functions are, firstly, a run-time thing. They are part of polymorphism, in that which function to run is decided at the time the call is made in the running program.
The virtual keyword is a compiler directive telling it how to bind member functions when there is ambiguity about the class identity: the subclass function is chosen first and the parent class function after.
A Fruit class could have a virtual function color() that returns "NONE" by default.
The Banana class color() function returns "YELLOW" or "BROWN".
But if the function taking a Fruit pointer calls color() on the Banana object sent to it -- which color() function gets invoked?
The function would normally call Fruit::color() for a Fruit object.
That would 99% of the time not be what was intended.
But if Fruit::color() was declared virtual then Banana:color() would be called for the object because the correct color() function would be bound to the Fruit pointer at the time of the call.
The runtime will check what object the pointer points to because it was marked virtual in the Fruit class definition.
This is different from merely redefining (hiding) a non-virtual function in a subclass. In that case the Fruit pointer will call Fruit::color() if all it knows is that it IS-A pointer to Fruit.
So now the idea of a "pure virtual function" comes up.
It is a rather unfortunate phrase, as purity has nothing to do with it. It means that the base class method is never intended to be called.
Indeed a pure virtual function cannot be called through the base class on its own. It must still be declared, and many coders write an empty implementation {} for completeness, but (destructors aside) a definition is not required, and the compiler will not generate one internally. When color() is then called, even through a Fruit pointer, Banana::color() gets invoked, since dynamic dispatch finds the only implementation of color() there is.
Now the final piece of the puzzle: constructors and destructors.
Pure virtual constructors are illegal, completely. That is just out.
But pure virtual destructors do work when you want to forbid the creation of base class instances: only subclasses can be instantiated if the destructor of the base class is pure virtual.
The syntax is to assign it 0:
virtual ~Fruit() = 0; // pure virtual
Fruit::~Fruit(){} // destructor implementation
You do have to provide an implementation in this case. The compiler knows this is what you are doing and makes sure you do it right, or it complains mightily that it cannot link all the functions it needs. The errors can be confusing if you are not on the right track as to how you are modeling your class hierarchy.
So you are forbidden in this case to create instances of Fruit, but allowed to create instances of Banana.
A call to delete on a Fruit pointer that points to an instance of Banana will call Banana::~Banana() first and then Fruit::~Fruit(), always.
Because no matter what, when you call a subclass destructor, the base class destructor must follow.
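Putting those pieces together, a minimal compilable version of the fruit example might look like this (the std::string return type for color() is my own choice):
#include <iostream>
#include <string>

class Fruit {
public:
    virtual ~Fruit() = 0;                      // pure virtual: Fruit itself can't be instantiated
    virtual std::string color() const { return "NONE"; }
};
Fruit::~Fruit() {}                             // body still required

class Banana : public Fruit {
public:
    std::string color() const { return "YELLOW"; }
};

int main() {
    Fruit* f = new Banana;                     // a Banana IS-A Fruit
    std::cout << f->color() << '\n';           // prints "YELLOW": dispatch on the dynamic type
    delete f;                                  // ~Banana() runs first, then ~Fruit()
}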
Is it a bad model? It is more complicated in the design phase, yes, but it can ensure that the correct binding is performed at run-time, and that a subclass function is invoked where there is ambiguity as to exactly which subclass is being accessed.
If you write C++ so that you only pass around exact class pointers with no generic nor ambiguous pointers, then virtual functions are not really needed.
But if you require run-time flexibility of types (as in Apple Banana Orange ==> Fruit ) functions become easier and more versatile with less redundant code.
You no longer have to write a function for each type of fruit, and you know that every fruit will respond to color() with its own correct function.
I hope this long-winded explanation solidifies the concept rather than confuses things. There are a lot of good examples out there to look at; look at enough of them, actually run them and mess with them, and you will get it.
You asked for an example, and I believe the following provides a reason for a pure virtual destructor. I look forward to replies as to whether this is a good reason...
I do not want anyone to be able to throw the error_base type, but the exception types error_oh_shucks and error_oh_blast have identical functionality and I don't want to write it twice. The pImpl complexity is necessary to avoid exposing std::string to my clients, and the use of std::auto_ptr necessitates the copy constructor.
The public header contains the exception types that will be available to the client to distinguish the different kinds of exception thrown by my library:
// error.h
#include <exception>
#include <memory>
class exception_string;
class error_base : public std::exception {
public:
error_base(const char* error_message);
error_base(const error_base& other);
virtual ~error_base() = 0; // Not directly usable
virtual const char* what() const throw(); // throw() matches std::exception::what
private:
std::auto_ptr<exception_string> error_message_;
};
template<class error_type>
class error : public error_base {
public:
error(const char* error_message) : error_base(error_message) {}
error(const error& other) : error_base(other) {}
~error() {}
};
// Neither should these classes be usable
class error_oh_shucks { virtual ~error_oh_shucks() = 0; };
class error_oh_blast { virtual ~error_oh_blast() = 0; };
And here is the shared implementation:
// error.cpp
#include "error.h"
#include "exception_string.h"
error_base::error_base(const char* error_message)
: error_message_(new exception_string(error_message)) {}
error_base::error_base(const error_base& other)
: error_message_(new exception_string(other.error_message_->get())) {}
error_base::~error_base() {}
const char* error_base::what() const throw() {
return error_message_->get();
}
The exception_string class, kept private, hides std::string from my public interface:
// exception_string.h
#include <string>
class exception_string {
public:
exception_string(const char* message) : message_(message) {}
const char* get() const { return message_.c_str(); }
private:
std::string message_;
};
My code then throws an error as:
#include "error.h"
throw error<error_oh_shucks>("That didn't work");
The use of a template for error is a little gratuitous. It saves a bit of code at the expense of requiring clients to catch errors as:
// client.cpp
#include <error.h>
try {
} catch (const error<error_oh_shucks>&) {
} catch (const error<error_oh_blast>&) {
}
Maybe there is another REAL USE-CASE of pure virtual destructors which I actually can't see in the other answers :)
First, I completely agree with the accepted answer: pure virtual destructors are allowed because forbidding them would need an extra rule in the language specification. But that's still not the use case Mark is calling for :)
First imagine this:
class Printable {
virtual void print() const = 0;
// a virtual destructor should be here, but it is omitted so as not to mix in another issue
};
and something like:
class Printer {
void queDocument(unique_ptr<Printable> doc);
void printAll();
};
Simply put: we have the interface Printable and some "container" holding anything with this interface. I think it is quite clear here why the print() method is pure virtual. It could have a body, but in case there is no default implementation, pure virtual is the ideal "implementation" (= "must be provided by a descendant class").
And now imagine exactly the same except it is not for printing but for destruction:
class Destroyable {
virtual ~Destroyable() = 0;
};
And also there could be a similar container:
class PostponedDestructor {
// Queues an object to be destroyed later.
void queObjectForDestruction(unique_ptr<Destroyable> obj);
// Destroys all already queued objects.
void destroyAll();
};
It's a simplified use-case from my real application. The only difference here is that a "special" method (the destructor) was used instead of a "normal" one like print(). But the reason why it is pure virtual is still the same: there is no default code for the method.
It may be a bit confusing that there must still be some destructor body: even a pure virtual destructor needs a definition, which you must provide yourself (the compiler will not generate one), because every derived destructor implicitly calls it. But from the perspective of a programmer, pure virtuality still means: "I don't have any default code; it must be provided by derived classes."
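For the sketch above, that means providing in some source file:
Destroyable::~Destroyable() {} // required: every derived destructor implicitly calls it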
I think there is no big idea here, just a further illustration that pure virtuality works uniformly, for destructors too.
This is a decade old topic :)
Read the last five paragraphs of Item 7 in the book "Effective C++" for details, starting from "Occasionally it can be convenient to give a class a pure virtual destructor...."
We need to make the destructor virtual because, if we don't, then when deleting a derived object through a base class pointer the compiler will only destruct the contents of the base class; the destructors of the derived classes will never be called.
When I declare a base class, should I declare all the functions in it as virtual, or should I have a set of virtual functions and a set of non-virtual functions which I am sure are not going to be inherited?
A function needs to be virtual only if a derived class will implement it in a different way.
For example:
class Base {
public:
void setI (int i) // No need for it to be virtual
{
m_i = i;
}
virtual ~Base () {} // Almost always a good idea
virtual bool isDerived1 () // Is overridden - so make it virtual
{
return false;
}
private:
int m_i;
};
class Derived1 : public Base {
public:
virtual ~Derived1 () {}
virtual bool isDerived1 () // Is overridden - so make it virtual
{
return true;
}
};
As a result, I would err on the side of not making anything virtual unless you know in advance that you intend to override it, or until you discover that you require the behaviour. The only exception to this is the destructor, for which it's almost always the case that you want it to be virtual in a base class.
You should only make functions you intend and design to be overridden virtual. Making a method virtual is not free in terms of both maintenance and performance (maintenance being the much bigger issue IMHO).
Once a method is virtual, it becomes harder to reason about any code which uses it. Because instead of considering what one method call would do, you must consider what N method calls would do in that scenario, where N is the number of subclasses which override the method.
The one exception to this rule is destructors. They should be virtual in any class which is intended to be derived from. It's the only way to guarantee that the proper destructor is called during deallocation.
The non-virtual interface idiom (C++ Coding Standards item 39) says that a base class should have non-virtual interface methods, allowing the base class to guarantee invariants, and non-public virtual methods for customization of the base class behaviour by derived classes. The non-virtual interface methods call the virtual methods to provide the overridable behaviour.
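A minimal sketch of the idiom (the names are mine):
class Widget {
public:
    void draw() const {   // public, non-virtual: the stable interface
        // invariants, logging, or locking could be enforced here, before and after
        doDraw();         // the customizable step
    }
    virtual ~Widget() {}
private:
    virtual void doDraw() const = 0; // derived classes override this instead
};

class Button : public Widget {
private:
    virtual void doDraw() const { /* draw a button */ }
};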
I tend to make only the things I want to be overridable virtual. If my initial assumptions about what I will want to override turn out to be wrong, I go back and change the base class.
Oh, and obviously always make your destructor virtual if you're working on something that will be inherited from.
If you are creating a base class (you are sure that somebody will derive from it), then you can do the following things:
Make the destructor virtual (a must for a base class).
Define methods which should be overridden and make them virtual.
Define methods which need not be (or should not be) overridden as non-virtual.
If the functions are only for derived classes and not for the base class, mark them as protected.
The compiler doesn't know which actual piece of code will run when a pointer of base type calls a virtual function, so the call has to be resolved at run-time according to which object the base class pointer points to. So avoid virtual functions when the function is not going to be overridden in an inherited class.
TLDR version:
"you should have a set of virtual functions and a set of non-virtual functions which you are sure are not going to be inherited." Because virtual functions causes a performance decrease at run-time.
The interface functions should be, in general, virtual. Functions that provide fixed functionality should not.
Why declare something virtual until you are really overriding it? I believe it's not a question of being sure or not. Follow the facts: is it overridden somewhere? No? Then it should not be virtual.
Let's say we have
class A {
public:
virtual int foo() { cout << "foo!"; return 0; }
};
class B : public A {
public:
virtual int foo() = 0;
};
class C : public B {
public:
virtual int foo() { cout << "moo!"; return 0; }
};
Is this really overriding? I think this is actually overloading.
What is the meaning of making something like this, design-wise?
We got a base class A. Then we got an abstract derived class B which is derived from the concrete class A, and then a realization of B via C.
What are we doing here and does it make any sense?
Overloading would mean that you had two functions of the same name but with different parameters. That's not the case here.
Example:
int functionA() { /*...*/ }
int functionA(int someParameter) { /*...*/ }
Overriding means rewriting a function with the same parameters in a subclass. That is what you presented as an example.
That's the definition part. Now on to the design:
When you have a pure virtual function, concrete subclasses have to override it. So by adding a pure virtual function, you ensure that all subclasses provide the same set of functions (=interface). This seems to be the case in the sample code.
But it is not a very good example, as the concrete superclass already implements a default functionality for foo(). When there is an abstract subclass that redefines it to be purely virtual, it is a sign for me that the class hierarchy is flawed because a subclass (and the callers of foo()) usually should be able to fall back to the default implementation. It's a bit hard to explain with such an abstract example, but a hierarchy like this is a bit suspicious to my eye.
It seems to me that the only effect of the pure virtual method here is to make B an abstract class.
It's still overriding, because when you have a pointer p of type A* to an instance of class C, p->foo() still calls C::foo().
Probably, the designer wanted to insert an abstract class in the hierarchy deriving from some concrete class, and force overriding of the method in subclasses. I don't think it's terribly wise.
You're saying that classes deriving from B must implement foo(). You might want to do something like this to force other programmers to think about how they want foo() to behave, but I think it's a bad idea; in reality they're likely to implement it by just calling A::foo().
If you just want to make B abstract, give it a pure virtual destructor; you'll also need to provide an implementation of the destructor to avoid a link error, though.
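A minimal sketch of that suggestion, reusing A and C from the question; this B does not redeclare foo() at all:
class B : public A {
public:
    virtual ~B() = 0; // the pure virtual destructor alone makes B abstract
};
B::~B() {} // definition still required, since C's destructor calls it

class C : public B {
public:
    virtual int foo() { cout << "moo!"; return 0; } // optional now: C could fall back to A::foo()
};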