When I write interface classes in C++, I choose one of the following two options:
class Interface
{
public:
virtual R1 f1(p11, p12 , ...) = 0;
...
virtual Rn fn(pn1, pn2 , ...) = 0;
virtual ~Interface() {}
};
or
class Interface
{
public:
virtual R1 f1(p11, p12 , ...) = 0;
...
virtual Rn fn(pn1, pn2 , ...) = 0;
virtual ~Interface() = 0;
};
Interface::~Interface() {}
The first version is shorter to write.
The second is attractive in that all functions of the interface are pure virtual.
Is there any reason I should prefer one or the other method (or perhaps a third one)?
Thanks
As I understand it, the purpose of making a virtual function pure virtual is to force the derived classes either to provide an implementation for it or to opt into the default implementation by explicitly calling Base::f() inside Derived::f().
So if that is true, then what is the purpose of making a virtual destructor pure virtual? Does it force the derived classes to provide an implementation for Base::~Base()? Can the derived classes implement Base::~Base()? No.
So it seems the first version, with a plain virtual destructor, is enough for almost all purposes. After all, the most common purpose of a virtual destructor is to let clients correctly delete objects of derived classes through pointers of type Base*.
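For illustration, a minimal sketch of that common case (Concrete is a hypothetical implementing class):

class Interface
{
public:
    virtual void f() = 0;
    virtual ~Interface() {}   // virtual, non-pure
};

class Concrete : public Interface
{
public:
    void f() {}
};

int main()
{
    Interface* p = new Concrete();
    delete p;   // calls ~Concrete(), then ~Interface(), because the destructor is virtual
}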
However, if you make all functions in Base merely virtual, not pure virtual, and provide implementations for them (which you then have to provide), and at the same time want to make Base an abstract type, then having a pure virtual destructor in Base is the only solution:
class Base
{
public:
virtual void f() {} // not pure virtual
virtual ~Base() = 0; //pure - makes Base abstract type!
};
Base::~Base() {} //yes, you have to do this as well.
Base *pBase = new Base(); // error - cannot create instance!
Hope that helps.
To me, the dtor is not part of the interface. The fi() would have analogues in other languages; the dtor would not. Likewise, you could write pre- and post-conditions for the fi(), but not for the dtor. This makes it just a C++ wart, and the first technique is the most comfortable way of dealing with it.
Okay, found a link and so thought I would mention it as an answer:
Are inline virtual functions really a non-sense?
I've seen compilers that don't emit any v-table if no non-inline function at all exists (and defined in one implementation file instead of a header then). They would throw errors like missing vtable-for-class-A or something similar, and you would be confused as hell, as I was.
Indeed, that's not conformant with the Standard, but it happens, so consider putting at least one virtual function not in the header (if only the virtual destructor), so that the compiler could emit a vtable for the class at that place. I know it happens with some versions of gcc.
(Johannes Schaub)
It's slightly different from your second case (it suggests taking the function out of the header file altogether so as not to fall victim to the gcc problem), but I thought I would mention it. Quirks of gcc can occasionally bite.
In the first case, the derived class may choose whether or not to declare its own destructor; the implicitly-generated one works either way.
In the second case the difference is smaller than it may look: the pure virtual destructor must still be defined in the base class, and a derived class is not really forced to write a destructor by hand, because its implicitly-declared destructor already overrides the pure virtual one. What the second case does add is that the base class becomes abstract.
Unless you have some reason to want that, I would go with the first case.
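A minimal sketch of why a derived class need not write one by hand (Impl is a hypothetical name):

class Interface {
public:
    virtual ~Interface() = 0;
};
Interface::~Interface() {}

class Impl : public Interface {
    // no user-declared destructor: the implicitly-generated
    // ~Impl() overrides the pure virtual ~Interface()
};

int main() {
    Impl i;   // compiles: Impl is not abstract
}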
UPD: This has been marked as a duplicate of this question. But there the OP asks HOW to use default to define a pure virtual destructor. This question is about what the difference is.
In C++ (the latest standard if possible), what is the real difference between defining a pure virtual destructor with an empty-body implementation and just an empty body (or default)?
Variant 1:
class I1 {
public:
virtual ~I1() {}
};
Variant 2.1:
class I21 {
public:
virtual ~I21() = 0;
};
I21::~I21() {}
Variant 2.2:
class I22 {
public:
virtual ~I22() = 0;
};
I22::~I22() = default;
Update: I found at least one difference between Variant 1 and Variants 2.1/2.2:
std::is_abstract<I1>::value is false for Variant 1, and it is true for Variants 2.1 and 2.2.
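A minimal check of that difference (a sketch, assuming a C++11 compiler and <type_traits>):

#include <type_traits>

class I1 { public: virtual ~I1() {} };

class I21 { public: virtual ~I21() = 0; };
I21::~I21() {}

static_assert(!std::is_abstract<I1>::value, "I1 is not abstract");
static_assert(std::is_abstract<I21>::value, "I21 is abstract");

int main() {}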
Maybe someone can find a difference between 2.1 and 2.2?
The difference between I1 and I2*, as you pointed out, is that adding = 0 makes the class abstract. In fact, making the destructor pure virtual is a trick to make a class abstract when you don't have any other function that needs to be pure virtual. And I say it's a trick because the destructor cannot be left undefined if you ever want to destroy any derived class of it (and here you will), so you still need to define the destructor, either empty or defaulted.
Now, the difference between an empty and a defaulted destructor/constructor (I21 and I22) is far more obscure; there isn't much written out there. The recommended one is to use default, both as a newer idiom that makes your intentions clearer and, apparently, to give the compiler a chance for optimization. Quoting MSDN:
Because of the performance benefits of trivial special member functions, we recommend that you prefer automatically generated special member functions over empty function bodies when you want the default behavior.
There are no visible differences between the two, apart from this possible performance improvement. = default is the way to go from C++11 on.
All I could find was:
§12.4 (5.9)
A destructor can be declared virtual (10.3) or pure virtual (10.4); if any objects of that class or any
derived class are created in the program, the destructor shall be defined. If a class has a base class with a
virtual destructor, its destructor (whether user- or implicitly-declared) is virtual.
leading to:
§10.4 (the class is now abstract)
10.4 (2) says:
A pure virtual function need be defined only if called with, or as if with (12.4), the qualified-id syntax (5.1).
But the narrative on destructors in §12.4 talks about destructors always being called as if by their fully qualified name (in order to prevent ambiguity).
Which means that:
the destructor must be defined, even if pure virtual, and
the class is now abstract.
Variant 1 will allow you to have an instance of the class. Variants 2.1 and 2.2 won't allow instances, but do allow instances of descendants. This, for example, works (and is able to confuse many people), while removing the marked line will make the compilation fail:
class I21 {
public:
virtual ~I21() = 0;
};
I21::~I21() {} // remove this and it'll not compile
class I22 : public I21
{
public:
virtual ~I22() {}
};
int main() {
I22 i;
return 0;
}
The reason is that the destructor chain calls I21::~I21() directly, not through virtual dispatch. That said, it's not clear what your goal is with pure virtual destructors. If you'd like to avoid instantiation (i.e., a static class), you might consider deleting the constructor instead; if you'd like descendants that can be instantiated but not this class, perhaps you need a pure virtual member function that's implemented in descendants.
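A sketch of the deleted-constructor alternative mentioned above (Utility and doSomething are hypothetical names, assuming C++11):

class Utility {
public:
    Utility() = delete;          // no instances at all, not even from descendants
    static void doSomething();   // hypothetical static member, for illustration
};

Note the difference in intent: = delete forbids construction entirely, while a pure virtual destructor still allows descendants to be constructed.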
I have a class declared as follows:
class TestFoo {
public:
TestFoo();
virtual void virtualFunction();
void nonVirtualFunction();
};
that I try to implement this way
TestFoo::TestFoo(){}
void TestFoo::nonVirtualFunction(){}
which on compilation returns the error:
undefined reference to vtable for TestFoo
I tried :
TestFoo::TestFoo(){}
void TestFoo::nonVirtualFunction(){}
void TestFoo::virtualFunction(){}
which compiles ok which is consistent to the answers to these posts:
Undefined reference to vtable
undefined reference to vtable
What confuses me is that I thought the whole point of declaring a virtual function is that I would not need to define it. In this example, I am not planning to create any instances of TestFoo, only instances of (concrete) classes inheriting from TestFoo. But I still want to define the function nonVirtualFunction for every subclass of TestFoo.
Is there something I did not get right?
Thanks!
the whole point of declaring a virtual function is that I would not
need to define it
Not quite, it says "I may want to replace the implementation of this function by something else in a derived class."
I may have misunderstood your question, but you seem to imply that you don't think you can define a pure virtual member function in C++, which you can. You can declare one as follows.
virtual void virtualFunction() = 0;
Normally, a pure virtual function won't be defined, but of course you can define one. That says, "There is no default implementation of this function because it won't always make sense, but I'll provide you with an implementation that you can opt into."
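A sketch of that opt-in pattern (Shape and Square are hypothetical names):

class Shape {
public:
    virtual void draw() = 0;   // pure virtual, but defined below
    virtual ~Shape() {}
};

void Shape::draw() { /* optional fallback drawing code */ }

class Square : public Shape {
public:
    void draw() { Shape::draw(); }  // opt into the default by calling it explicitly
};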
By the way, if a class has any virtual functions, you should also define a virtual destructor, as it is perfectly legal (and often recommended) to have a base class (smart) pointer to a derived class - without a virtual destructor, the object may not be deleted correctly.
... I thought the whole point of declaring a
virtual function is that I would not need to define it ...
For that facility you have a feature called pure virtual methods:
virtual void virtualFunction() = 0; // no linking error now
Note that a (non-pure) virtual method cannot remain undefined. The reason is that for every virtual method declared inside a class body there has to be a vtable entry; failing to find its body results in a linking error.
Purpose of this restriction:
Unless a class is abstract - that is, it has at least one pure virtual function - there is no way you can guarantee to the compiler that you are not going to create an object of TestFoo. What happens when you do the following?
DerivedOfTestFoo obj1;
TestFoo obj2 = obj1, *p = &obj2; // object slicing
p->virtualFunction(); // where is the body?
Another situation: in a constructor, the virtual dispatch mechanism is not used:
TestFoo::TestFoo () {
this->virtualFunction(); // where is the body?
}
We can conclude that, compilers follow the rule, "Better to be safe than sorry". :)
Your description matches perfectly with the case of an abstract class. Declare your virtual function as:
virtual void VirtualFunction () = 0;
This means that you are not implementing the function in this class. As a result, the class becomes abstract. That is, no bare objects of this class can be instantiated.
Also, you should provide a virtual destructor.
Update: Some clarifications...
The language allows you to redefine a non-virtual function; however, the wrong version might be called in some cases:
derived D;
base& rB = D;            // rB is a reference to the base class, but it
                         // refers to an object of the derived class
rB.NonVirtualFunction(); // the base-class version is called
For this reason, redefining a non-virtual function is strongly discouraged nowadays. See Scott Meyers' "Effective C++, Third Edition: 55 Specific Ways to Improve Your Programs and Designs", item 36: "Never redefine an inherited non-virtual function."
See also item 7: "Declare destructors virtual in polymorphic base classes". An example:
base * pB = new derived;
delete pB; // If base's destructor is not virtual,
// ~derived() will not be called.
In case you wonder why everything isn't virtual by default, the reason is that calling a virtual function is slightly slower than calling a non-virtual one. Oh, and objects of classes with virtual functions occupy a few more bytes each.
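A quick way to observe those extra bytes (the exact sizes are implementation-dependent; the values in the comments are typical, not guaranteed):

#include <iostream>

struct Plain { int i; };
struct Polymorphic { int i; virtual ~Polymorphic() {} };

int main() {
    std::cout << sizeof(Plain) << '\n';        // typically 4
    std::cout << sizeof(Polymorphic) << '\n';  // typically 8 or 16: a vptr is added (plus padding)
}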
If you want to make this virtual function a pure virtual function and do not want to define it, declare it as: virtual void virtualFunction() = 0;
I just read about this in the C++ FAQ Lite
[25.10] What does it mean to "delegate to a sister class" via virtual inheritance?
class Base {
public:
virtual void foo() = 0;
virtual void bar() = 0;
};
class Der1 : public virtual Base {
public:
virtual void foo();
};
void Der1::foo()
{ bar(); }
class Der2 : public virtual Base {
public:
virtual void bar();
};
class Join : public Der1, public Der2 {
public:
...
};
int main()
{
Join* p1 = new Join();
Der1* p2 = p1;
Base* p3 = p1;
p1->foo();
p2->foo();
p3->foo();
}
"Believe it or not, when Der1::foo() calls this->bar(), it ends up calling Der2::bar(). Yes, that's right: a class that Der1 knows nothing about will supply the override of a virtual function invoked by Der1::foo(). This "cross delegation" can be a powerful technique for customizing the behavior of polymorphic classes. "
My questions are:
What is happening behind the scenes?
If I add a Der3 (virtually inherited from Base), what will happen? (I don't have a compiler here, so I couldn't test it right now.)
What is happening behind the scenes?
The simple explanation is that, because inheritance from Base is virtual in both Der1 and Der2, there is a single instance of the Base subobject in the most derived object, Join. At compile time, and assuming (as is the common case) virtual tables as the dispatch mechanism, when compiling Der1::foo the compiler will redirect the call to bar() through the vtable.
Now the question is how the compiler generates the vtables for each of the objects: the vtable for Base will contain two null pointers, the vtable for Der1 will contain Der1::foo and a null pointer, and the vtable for Der2 will contain a null pointer and Der2::bar. [*]
Now, because of the virtual inheritance at the previous level, when the compiler processes Join it will create a single Base subobject, and thus a single vtable for the Base subobject of Join. It effectively merges the vtables of Der1 and Der2 and produces a vtable that contains pointers to Der1::foo and Der2::bar.
So the code in Der1::foo will dispatch through Join's vtable to the final overrider, which in this case is in a different branch of the virtual inheritance hierarchy.
If you add a Der3 class, and that class defines either of the virtual functions, the compiler will not be able to cleanly merge the three vtables and will complain, with an error about the ambiguity of the multiply-defined method (none of the overriders can be considered the final overrider). If you add the same method to Join, the ambiguity goes away, as the final overrider will be the member function defined in Join, and the compiler can generate the virtual table.
[*] Most compilers will not write null pointers here, but rather a pointer to a generic function that will print an error message and terminate the application, allowing for better diagnostics than a plain segmentation fault.
If you add a Der3, what will happen depends on which class it inherits from.
As you know, instantiating a class is only possible when every pure virtual function has been overridden somewhere along the inheritance chain; otherwise you can only have pointers or references to such classes. This prevents constructing partially defined objects.
In your example you cannot instantiate Der1 or Der2 directly, because in Der1, bar() is still pure virtual, and in Der2, foo() is pure virtual.
Your Join class can be instantiated because it inherits from both and therefore has no pure virtual functions left.
Once you have an instance of a concrete class, you can still obtain pointers to its non-instantiable base classes, for example via dynamic_cast.
The virtual function mechanism, which works through a table of pointers to functions, will always call the final overriders of the object that was actually created.
So the key here is that when you create your object, you create an instance of Join. Its virtual functions are all defined, which is exactly why you are able to create the object. From that moment on, you can call the virtual functions through any pointer to a base class.
I see why this is interesting to explore. In real code, however, this would hardly ever be useful. As others have pointed out, virtual inheritance is more of a fix-this-bad-design-to-work-somehow tool than a valid design tool.
Your code produces warnings in VS2010 - the compiler is letting you know that dominance is being used. Of course that's not a show stopper, but it is another discouragement from using this.
If you introduce Der3 like this
class Der3 : public virtual Base {
public:
void bar() {}
};
class Join : public Der1, public Der2, public Der3 {};
the code fails to compile because of ambiguous inheritance of 'void Base::bar(void)'.
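One way to make it compile again, if the design allows it, is to give Join its own final overrider (a sketch):

class Join : public Der1, public Der2, public Der3 {
public:
    void bar() { Der3::bar(); }  // Join's override is the final overrider, resolving the ambiguity
};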
One point is missing from the discussion (which is nonetheless quite informative - thanks to all).
When you virtually inherit from a class, most compilers implement this by storing, in the derived class, a pointer to the virtual base class subobject (the exact mechanism may differ between compilers). So if you take the size of Der1 or Der2, it will be at least 4 bytes on a 32-bit platform and 8 bytes on a 64-bit platform, because each holds a pointer to the virtual base, and therefore there is no ambiguity. Because of this, when you create an object of Join, the constructor of the virtual base class runs first, and the single shared Base subobject (and the pointers to it coming in through Der1 and Der2) is set up before the rest of the construction; the compiler ensures that only one Base subobject reaches Join from Der1 and Der2. You can verify the layout cost with the sizeof operator. As we know, the compiler inserts these constructor calls silently, calling the virtual base class's constructor first (this can be checked by making all the base classes virtually derived). The rest is already explained.
This is a pretty stupid example IMO, and a perfect example of academics making themselves look clever. If this situation ever came up, it would almost CERTAINLY be because of a bug - specifically, forgetting to make Der1::foo() virtual.
Edit:
I misread the class definitions - which is exactly the problem with this type of design. It takes a lot of thought to determine exactly what would happen in each of these cases, which is bad. Making your code readable is far better than being "clever" like this.
I understand the need for a virtual destructor. But why do we need a pure virtual destructor? In one C++ article, the author mentioned that we use a pure virtual destructor when we want to make a class abstract.
But we can make a class abstract by making any of its member functions pure virtual.
So my questions are:
When do we really make a destructor pure virtual? Can anybody give a good real-world example?
When we are creating abstract classes, is it good practice to make the destructor pure virtual as well? If so, why?
Probably the real reason that pure virtual destructors are allowed is that to prohibit them would mean adding another rule to the language and there's no need for this rule since no ill-effects can come from allowing a pure virtual destructor.
Nope, plain old virtual is enough.
If you create an object with default implementations for its virtual methods and want to make it abstract without forcing anyone to override any specific method, you can make the destructor pure virtual. I don't see much point in it but it's possible.
Note that since the compiler will generate an implicit destructor for a derived class if the class's author does not declare one, any derived class will not be abstract. Therefore having the pure virtual destructor in the base class makes no difference for the derived classes; it only makes the base class abstract (thanks to @kappa's comment).
One may also assume that every deriving class would probably need specific clean-up code, and use the pure virtual destructor as a reminder to write one, but this seems contrived (and unenforced).
Note: the destructor is the only method that, even if it is pure virtual, has to have an implementation in order to instantiate derived classes (yes, pure virtual functions can have implementations; being pure virtual means derived classes must override this method, which is orthogonal to having an implementation).
struct foo {
virtual void bar() = 0;
};
void foo::bar() { /* default implementation */ }
class foof : public foo {
void bar() { foo::bar(); } // have to explicitly call default implementation.
};
All you need for an abstract class is at least one pure virtual function. Any function will do; but as it happens, the destructor is something that any class will have, so it's always there as a candidate. Furthermore, making the destructor pure virtual (as opposed to just virtual) has no behavioral side effects other than to make the class abstract. As such, a lot of style guides recommend that the pure virtual destructor be used consistently to indicate that a class is abstract, if for no other reason than that it provides a consistent place someone reading the code can look to see if the class is abstract.
If you want to create an abstract base class:
that can't be instantiated (yep, this is redundant with the term "abstract"!)
but needs virtual destructor behavior (you intend to carry around pointers to the ABC rather than pointers to the derived types, and delete through them)
but does not need any other virtual dispatch behavior for other methods (maybe there are no other methods? consider a simple protected "resource" container that needs constructors/destructor/assignment but not much else)
...it's easiest to make the class abstract by making the destructor pure virtual and providing a definition (method body) for it.
For our hypothetical ABC:
You guarantee that it cannot be instantiated (not even internally to the class itself, which is why private constructors may not be enough), you get the virtual behavior you want for the destructor, and you do not have to find and tag another method that doesn't need virtual dispatch as "virtual".
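A sketch of such a minimal resource-holding ABC (all names here are hypothetical, assuming C++11):

class ResourceHolder {                // abstract: pure virtual destructor, no other virtuals
public:
    virtual ~ResourceHolder() = 0;
protected:
    int handle = -1;                  // hypothetical shared resource state for derived types
};

ResourceHolder::~ResourceHolder() {}  // the definition is still required

class FileHolder : public ResourceHolder {
public:
    ~FileHolder() { /* close the handle here */ }
};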
Here I want to show when we need a virtual destructor and when we need a pure virtual destructor:
class Base
{
public:
Base();
virtual ~Base() = 0; // Pure virtual, now no one can create the Base Object directly
};
Base::Base() { cout << "Base Constructor" << endl; }
Base::~Base() { cout << "Base Destructor" << endl; }
class Derived : public Base
{
public:
Derived();
~Derived();
};
Derived::Derived() { cout << "Derived Constructor" << endl; }
Derived::~Derived() { cout << "Derived Destructor" << endl; }
int _tmain(int argc, _TCHAR* argv[])
{
Base* pBase = new Derived();
delete pBase;
Base* pBase2 = new Base(); // Error 1 error C2259: 'Base' : cannot instantiate abstract class
}
When you want no one to be able to create objects of the Base class directly, use a pure virtual destructor: virtual ~Base() = 0. At least one pure virtual function is required to make the class abstract, and the destructor can be that function.
When you do not need the above, and only need the safe destruction of a Derived class object through a Base pointer,
Base* pBase = new Derived();
delete pBase;
a pure virtual destructor is not required; a plain virtual destructor will do the job.
From the answers I have read to your question, I couldn't deduce a good reason to actually use a pure virtual destructor. For example, the following reason doesn't convince me at all:
Probably the real reason that pure virtual destructors are allowed is that to prohibit them would mean adding another rule to the language and there's no need for this rule since no ill-effects can come from allowing a pure virtual destructor.
In my opinion, pure virtual destructors can be useful. For example, assume you have two classes myClassA and myClassB in your code, and that myClassB inherits from myClassA. For the reasons mentioned by Scott Meyers in his book "More Effective C++", Item 33 "Making non-leaf classes abstract", it is better practice to actually create an abstract class myAbstractClass from which myClassA and myClassB inherit. This provides better abstraction and prevents some problems arising with, for example, object copies.
In the abstraction process (of creating the class myAbstractClass), it can happen that no method of myClassA or myClassB is a good candidate for being a pure virtual method (which is a prerequisite for myAbstractClass to be abstract). In this case, you define the abstract class's destructor as pure virtual.
Here is a concrete example from some code I have written myself. I have two classes, NumericsParams and PhysicsParams, which share common properties. I therefore let them inherit from the abstract class IParams. In this case, I had absolutely no method at hand that could be made pure virtual. The setParameter method, for example, must have the same body for every subclass. The only choice I had was to make IParams' destructor pure virtual.
struct IParams
{
IParams(const ModelConfiguration& aModelConf);
virtual ~IParams() = 0;
void setParameter(const N_Configuration::Parameter& aParam);
std::map<std::string, std::string> m_Parameters;
};

IParams::~IParams() {} // the pure virtual destructor still needs a definition (in the .cpp file)
struct NumericsParams : IParams
{
NumericsParams(const ModelConfiguration& aNumericsConf);
virtual ~NumericsParams();
double dt() const;
double ti() const;
double tf() const;
};
struct PhysicsParams : IParams
{
PhysicsParams(const N_Configuration::ModelConfiguration& aPhysicsConf);
virtual ~PhysicsParams();
double g() const;
double rho_i() const;
double rho_w() const;
};
If you want to stop a base class from being instantiated, without making any change to your already implemented and tested derived classes, you can implement a pure virtual destructor in the base class.
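A sketch of that retrofit (Derived stands in for the existing, untouched classes):

class Base {
public:
    virtual ~Base() = 0;   // the only change to Base: it is now abstract
};

Base::~Base() {}           // the definition is still required

class Derived : public Base {
public:
    ~Derived() {}          // unchanged: existing derived classes keep working
};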
You are getting into hypotheticals with these answers, so I will try to give a simpler, more down-to-earth explanation for clarity's sake.
The basic relationships of object oriented design are two:
IS-A and HAS-A. I did not make those up. That is what they are called.
IS-A indicates that a particular object identifies as being of the class that is above it in a class hierarchy. A banana object is a fruit object if it is a subclass of the fruit class. This means that anywhere a fruit can be used, a banana can be used. The relationship is not reversible, though: you cannot substitute a base class object where a specific derived class is called for.
HAS-A indicates that an object is part of a composite class and that there is an ownership relationship. In C++ this means it is a member object, and as such the onus is on the owning class to dispose of it or hand ownership off before destructing itself.
These two concepts are easier to realize in single-inheritance languages than in a multiple-inheritance model like C++'s, but the rules are essentially the same. The complication comes when the class identity is ambiguous, such as passing a Banana pointer into a function that takes a Fruit pointer.
Virtual functions are, first of all, a run-time mechanism. They are part of polymorphism, in that the decision of which function to run is made at the time of the call in the running program.
The virtual keyword is a directive to the compiler to resolve the call against the actual object rather than the static pointer type. Virtual functions are declared in parent classes and indicate that lookup should consider the subclass's function first and the parent class's function after.
A Fruit class could have a virtual function color() that returns "NONE" by default.
The Banana class color() function returns "YELLOW" or "BROWN".
But if a function taking a Fruit pointer calls color() on the Banana object passed to it - which color() function gets invoked?
Without virtual, the call would resolve to Fruit::color(), because that is the pointer's type.
That would, 99% of the time, not be what was intended.
But if Fruit::color() was declared virtual, then Banana::color() would be called for the object, because the correct color() function would be bound through the Fruit pointer at the time of the call.
The runtime checks which object the pointer actually points to, because color() was marked virtual in the Fruit class definition.
This is different from redefining (hiding) a non-virtual function in a subclass. In that case,
the Fruit pointer will call Fruit::color() if all it knows is that it IS-A pointer to Fruit.
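A sketch of that difference, using the Fruit/Banana classes from the narrative (the exact return values and member names are illustrative assumptions):

#include <iostream>
#include <string>

struct Fruit {
    virtual std::string color() const { return "NONE"; }  // virtual: dispatched at run time
    std::string name() const { return "fruit"; }          // non-virtual: bound to the static type
    virtual ~Fruit() {}
};

struct Banana : Fruit {
    std::string color() const { return "YELLOW"; }        // overrides the virtual
    std::string name() const { return "banana"; }         // merely hides the non-virtual
};

int main() {
    Banana b;
    Fruit* p = &b;
    std::cout << p->color() << '\n';  // YELLOW - virtual dispatch finds Banana::color()
    std::cout << p->name() << '\n';   // fruit  - non-virtual, the static type wins
}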
So now the idea of a "pure virtual function" comes up.
It is a rather unfortunate phrase, as purity has nothing to do with it. It means that the base class method is intended never to be called; a derived class must supply the implementation instead.
Indeed, a pure virtual function cannot be called through ordinary (unqualified) dispatch, and the class cannot be instantiated. The declaration must exist, but a definition is required only in special cases - the pure virtual destructor, discussed below, is the important one. When color() is then called, even through a pointer to Fruit, Banana::color() is what runs, as it is the only implementation of color() there is.
Now the final piece of the puzzle: constructors and destructors.
Pure virtual constructors are completely illegal; that is just out.
But pure virtual destructors do work, in the case that you want to forbid the creation of base class instances. Only subclasses can be instantiated if the destructor of the base class is pure virtual.
The syntax is to assign it = 0:
virtual ~Fruit() = 0; // pure virtual
Fruit::~Fruit(){} // destructor implementation
You do have to provide an implementation in this case. The compiler knows this is what you are doing and makes sure you do it right, or it complains mightily that it cannot link to all the functions it needs. The errors can be confusing if you are not on the right track as to how you are modeling your class hierarchy.
So you are forbidden in this case to create instances of Fruit, but allowed to create instances of Banana.
A call to delete on a Fruit pointer that points to an instance of Banana will call Banana::~Banana() first and then Fruit::~Fruit(), always.
Because no matter what, when a subclass destructor runs, the base class destructor must follow.
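A separate, compilable sketch of that destruction order (output shown in comments):

#include <iostream>

struct Fruit {
    virtual ~Fruit() = 0;   // pure virtual: no Fruit instances allowed
};
Fruit::~Fruit() { std::cout << "~Fruit\n"; }

struct Banana : Fruit {
    ~Banana() { std::cout << "~Banana\n"; }
};

int main() {
    Fruit* p = new Banana();
    delete p;               // prints "~Banana", then "~Fruit"
}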
Is it a bad model? It is more complicated in the design phase, yes, but it can ensure that correct linking is performed at run-time and that a subclass function is performed where there is ambiguity as to exactly which subclass is being accessed.
If you write C++ so that you only pass around exact class pointers with no generic nor ambiguous pointers, then virtual functions are not really needed.
But if you require run-time flexibility of types (as in Apple, Banana, Orange ==> Fruit), functions become easier and more versatile, with less redundant code.
You no longer have to write a function for each type of fruit, and you know that every fruit will respond to color() with its own correct function.
I hope this long-winded explanation solidifies the concept rather than confuses things. There are a lot of good examples out there to look at; look at enough of them, actually run them, mess with them, and you will get it.
You asked for an example, and I believe the following provides a reason for a pure virtual destructor. I look forward to replies as to whether this is a good reason...
I do not want anyone to be able to throw the error_base type, but the exception types error_oh_shucks and error_oh_blast have identical functionality and I don't want to write it twice. The pImpl complexity is necessary to avoid exposing std::string to my clients, and the use of std::auto_ptr necessitates the copy constructor.
The public header contains the exception specifications that will be available to the client to distinguish different types of exception being thrown by my library:
// error.h
#include <exception>
#include <memory>
class exception_string;
class error_base : public std::exception {
public:
error_base(const char* error_message);
error_base(const error_base& other);
virtual ~error_base() = 0; // Not directly usable
virtual const char* what() const;
private:
std::auto_ptr<exception_string> error_message_;
};
template<class error_type>
class error : public error_base {
public:
error(const char* error_message) : error_base(error_message) {}
error(const error& other) : error_base(other) {}
~error() {}
};
// Neither should these classes be usable
class error_oh_shucks { virtual ~error_oh_shucks() = 0; };
class error_oh_blast { virtual ~error_oh_blast() = 0; };
And here is the shared implementation:
// error.cpp
#include "error.h"
#include "exception_string.h"
error_base::error_base(const char* error_message)
: error_message_(new exception_string(error_message)) {}
error_base::error_base(const error_base& other)
: error_message_(new exception_string(other.error_message_->get())) {}
error_base::~error_base() {}
const char* error_base::what() const {
return error_message_->get();
}
The exception_string class, kept private, hides std::string from my public interface:
// exception_string.h
#include <string>
class exception_string {
public:
exception_string(const char* message) : message_(message) {}
const char* get() const { return message_.c_str(); }
private:
std::string message_;
};
My code then throws an error as:
#include "error.h"
throw error<error_oh_shucks>("That didn't work");
The use of a template for error is a little gratuitous. It saves a bit of code at the expense of requiring clients to catch errors as:
// client.cpp
#include <error.h>
try {
} catch (const error<error_oh_shucks>&) {
} catch (const error<error_oh_blast>&) {
}
Maybe there is another REAL USE-CASE of pure virtual destructors which I actually can't see in the other answers :)
At first, I completely agree with the marked answer: pure virtual destructors are allowed because forbidding them would require an extra rule in the language specification. But it's still not the use case that Mark is calling for :)
First imagine this:
class Printable {
virtual void print() const = 0;
// a virtual destructor should be here, but let's not confuse this example with that separate problem
};
and something like:
class Printer {
void queDocument(unique_ptr<Printable> doc);
void printAll();
};
Simply put - we have the interface Printable and some "container" holding anything with this interface. I think it is quite clear here why the print() method is pure virtual: it could have some body, but in case there is no sensible default implementation, pure virtual is the ideal "implementation" (= "must be provided by a descendant class").
And now imagine exactly the same, except it is not for printing but for destruction:
class Destroyable {
public:
    virtual ~Destroyable() = 0;
};

Destroyable::~Destroyable() {} // a definition is still required; see the note below
And also there could be a similar container:
class PostponedDestructor {
// Queues an object to be destroyed later.
void queObjectForDestruction(unique_ptr<Destroyable> obj);
// Destroys all already queued objects.
void destroyAll();
};
It's a simplified use-case from my real application. The only difference here is that a "special" method (the destructor) is used instead of a "normal" one like print(). But the reason why it is pure virtual is still the same - there is no default code for the method.
A bit confusing is the fact that there effectively MUST be some destructor, so you still have to provide an (empty or defaulted) definition for it, as shown above. But from the perspective of a programmer, pure virtuality still means: "I don't have any default code; it must be provided by derived classes."
I think there is no big idea here, just further confirmation that pure virtuality works uniformly - for destructors as well.
This is a decade-old topic :)
Read the last 5 paragraphs of Item #7 in the book "Effective C++" for details; it starts from "Occasionally it can be convenient to give a class a pure virtual destructor...."
We need to make the destructor virtual because, if we don't, then when deleting a derived object through a base pointer only the base-class part of the object is destroyed; the derived parts remain untouched, because the compiler will not call the destructor of any class except the base class.
When I declare a base class, should I declare all the functions in it as virtual, or should I have a set of virtual functions and a set of non-virtual functions which I am sure are not going to be overridden?
A function only needs to be virtual if a derived class will implement that function in a different way.
For example:
class Base {
public:
void setI (int i) // No need for it to be virtual
{
m_i = i;
}
virtual ~Base () {} // Almost always a good idea
virtual bool isDerived1 () // Is overridden - so make it virtual
{
return false;
}
private:
int m_i;
};
class Derived1 : public Base {
public:
virtual ~Derived1 () {}
virtual bool isDerived1 () // Is overridden - so make it virtual
{
return true;
}
};
As a result, I would err on the side of not making anything virtual unless you know in advance that you intend to override it, or until you discover that you require that behaviour. The only exception is the destructor, which you almost always want to be virtual in a base class.
You should only make virtual those functions you intend and design to be overridden. Making a method virtual is not free in terms of both maintenance and performance (maintenance being the much bigger issue, IMHO).
Once a method is virtual, it becomes harder to reason about any code which uses it: instead of considering what one method call would do, you must consider what N method calls would do in that scenario, where N is the number of subclasses which override that method.
The one exception to this rule is destructors. They should be virtual in any class which is intended to be derived from. It's the only way to guarantee that the proper destructor is called during deallocation.
The non-virtual interface idiom (C++ Coding Standards item 39) says that a base class should have non-virtual interface methods, allowing the base class to guarantee invariants, and non-public virtual methods for customization of the base class behaviour by derived classes. The non-virtual interface methods call the virtual methods to provide the overridable behaviour.
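A minimal sketch of that idiom (Widget, draw, and doDraw are hypothetical names):

class Widget {
public:
    void draw() const {                   // non-virtual public interface
        // the base class can enforce invariants here (checks, locking, logging...)
        doDraw();                         // ...then delegate to the customizable part
    }
    virtual ~Widget() {}
private:
    virtual void doDraw() const = 0;      // non-public virtual customization point
};

class Button : public Widget {
private:
    void doDraw() const { /* button-specific drawing */ }  // derived classes can override private virtuals
};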
I tend to make only the things I want to be overridable virtual. If my initial assumptions about what I will want to override turn out to be wrong, I go back and change the base class.
Oh, and obviously always make your destructor virtual if you're working on something that will be inherited from.
If you are creating a base class (you are sure that somebody will derive from it), then you should do the following:
Make the destructor virtual (a must for a base class).
Define the methods which should be overridden, and make them virtual.
Define the methods which need not be (or should not be) overridden as non-virtual.
If some functions are only for derived classes and not for the base class, mark them as protected.
The compiler can't know which actual piece of code will run when a pointer of base type calls a virtual function, so the call must be resolved at run time according to which object the base class pointer points to. So avoid making a function virtual if it is not going to be overridden in an inherited class.
TL;DR version:
"You should have a set of virtual functions and a set of non-virtual functions which you are sure are not going to be overridden," because virtual functions incur a performance cost at run time.
The interface functions should be, in general, virtual. Functions that provide fixed functionality should not.
Why declare something virtual until you are really overriding it? I believe it's not a question of being sure or not. Follow the facts: is it overridden somewhere? No? Then it must not be virtual.