I've been having a discussion with my coworkers as to whether to prefix overridden methods with the virtual keyword, or to use it only at the originating base class.
I tend to prefix all virtual methods (that is, methods involving a vtable lookup) with the virtual keyword. My rationale is threefold:
1. Given that C++ lacks an override keyword, the presence of the virtual keyword at least notifies you that the method involves a lookup and could theoretically be overridden by further specializations, or could be called through a pointer to a higher base class.

2. Consistently using this style means that, when you see a method (at least within our code) without the virtual keyword, you can initially assume that it is neither derived from a base nor specialized in a subclass.

3. If, through some error, the virtual were removed from IFoo, all children would still function (CFooSpecialization::DoBar would still override CFooBase::DoBar, rather than simply hiding it).
The argument against the practice, as I understood it, was: "But that method isn't virtual" (which I believe is invalid, and born of a misunderstanding of virtuality), and "When I see the virtual keyword, I expect that it means someone is deriving from it, and I go searching for them."
The hypothetical classes may be spread across several files, and there are several specializations.
class IFoo {
public:
    virtual void DoBar() = 0;
    void DoBaz();
};

class CFooBase : public IFoo {
public:
    virtual void DoBar(); // Default implementation
    void DoZap();
};

class CFooSpecialization : public CFooBase {
public:
    virtual void DoBar(); // Specialized implementation
};
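To make the third point concrete, here is a minimal sketch (UseFoo is just an illustrative driver; assume the DoBar implementations are defined elsewhere):

void UseFoo() {
    CFooSpecialization spec;
    CFooBase* p = &spec;
    p->DoBar(); // dispatches to CFooSpecialization::DoBar: an override, not a hidden name
}

Because CFooBase and CFooSpecialization spell out virtual themselves, this dispatch through a CFooBase* would keep working even if the virtual on IFoo::DoBar were accidentally lost.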
Stylistically, would you remove the virtual keyword from the two derived classes? If so, why? What are Stack Overflow's thoughts here?
I completely agree with your rationale. It's a good reminder that the method will have dynamic dispatch semantics when called. The "that method isn't virtual" argument that your co-worker is using is completely bogus. He's mixed up the concepts of virtual and pure virtual.

Once a function is virtual, it is always virtual.

So in any event, omitting the virtual keyword in the subsequent classes does not prevent the function/method from being virtual, i.e. from being overridden. One of the projects I worked on had the following guideline, which I rather liked:
1. If the function/method is supposed to be overridden, always use the 'virtual' keyword. This is especially true in interface/base classes.

2. If the derived class is supposed to be subclassed further, explicitly state the 'virtual' keyword for every function/method that can be overridden. In C++11, use the 'override' keyword as well.

3. If the function/method in the derived class is not supposed to be overridden again, the 'virtual' keyword is commented out, indicating that the function/method overrides something but that no further classes are expected to override it. This of course does not prevent someone from overriding it in a derived class unless the class is made final (non-derivable), but it indicates that the method is not supposed to be overridden.
Ex: /*virtual*/ void guiFocusEvent();
In C++11, use the 'final' keyword along with 'override':
Ex: void guiFocusEvent() override final;
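Putting the three guidelines together, a minimal C++11 sketch (the class names are mine, purely illustrative):

class Widget {
public:
    virtual ~Widget();
    virtual void guiFocusEvent();            // designed to be overridden
};

class Button : public Widget {
public:
    virtual void guiFocusEvent() override;   // overrides, and may be overridden further
};

class CloseButton : public Button {
public:
    void guiFocusEvent() override final;     // overrides; no further overrides allowed
};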
Adding virtual does not have a significant impact either way. I tend to prefer it but it's really a subjective issue. However, if you make sure to use the override and sealed keywords in Visual C++, you'll gain a significant improvement in ability to catch errors at compile time.
I include the following lines in my PCH:
#if _MSC_VER >= 1400
#define OVERRIDE override
#define SEALED sealed
#else
#define OVERRIDE
#define SEALED
#endif
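For example, a sketch of how the macros would be used (Base and Render are placeholders; I believe VC8 and later accept both specifiers together, but adjust if your compiler complains):

class Base {
public:
    virtual void Render();
};

class Derived : public Base {
public:
    // Expands to 'override sealed' on MSVC: the compiler then rejects
    // signature drift here and any further overrides in subclasses.
    virtual void Render() OVERRIDE SEALED;
};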
I would tend not to use any syntax that the compiler will allow me to omit. Having said that, part of the design of C# (in an attempt to improve over C++) was to require overrides of virtual methods to be labeled as "override", and that seems to be a reasonable idea. My concern is that, since it's completely optional, it's only a matter of time before someone omits it, and by then you'll have gotten into the habit of expecting overrides to have "virtual" specified. Maybe it's best to just live within the limitations of the language, then.
I can think of one disadvantage:
When a class member function is not overridden and you declare it virtual, you add an unnecessary entry in the virtual table for that class definition.
Note: My answer regards C++03 which some of us are still stuck with. C++11 has the override and final keywords as @JustinTime suggests in the comments which should probably be used instead of the following suggestion.
There are plenty of answers already and two contrary opinions that stand out the most. I want to combine what @280Z28 mentioned in his answer with @StevenSudit's opinion and @Abhay's style guidelines.
I disagree with @280Z28 and wouldn't use Microsoft's language extensions unless you are certain that you will only ever use that code on Windows.
But I do like the keywords. So why not just use a #define-d keyword addition for clarity?
#define OVERRIDE
#define SEALED
or
#define OVERRIDE virtual
#define SEALED virtual
The difference comes down to what you want to happen in the case you outline in your third point.
3 - If, through some error, the virtual were removed from IFoo, all children will still function (CFooSpecialization::DoBar would still override CFooBase::DoBar, rather than simply hiding it).
Though I would argue that this is a programming error, so there is no "fix"; you probably shouldn't even bother mitigating it, but should ensure it crashes or notifies the programmer in some other way (though I can't think of one right now).
Should you choose the first option and don't like adding #defines, then you can just use comments like:
/* override */
/* sealed */
And that should do the job for all cases where you want clarity, because I don't consider the word virtual to be clear enough for what you want it to do.
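For example, a sketch of the comment-only style (CFooFinal is a hypothetical further specialization):

class CFooSpecialization : public CFooBase {
public:
    /* override */ void DoBar();         // still overrides CFooBase::DoBar
};

class CFooFinal : public CFooSpecialization {
public:
    /* override sealed */ void DoBar();  // overrides; no further overrides intended
};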
Related
Is there any reason to declare a method virtual if a class has no subclasses, and is always used directly?
For example:
class Foo {
public:
    virtual void DoBar() {
        // Do something here.
    }
};
I came across this in some code I was reading, and couldn't find any justification.
Thanks!
Well, the essence of the virtual keyword is directly related to inheritance. This is an extract from a C++ reference:

Virtual members: A virtual member is a member function that can be redefined in a derived class, while preserving its calling properties through references. The syntax for a function to become virtual is to precede its declaration with the virtual keyword.

So IMHO, the answer to your question is no, it makes no sense, unless the code has changed from its initial implementation, and trust me, that happens a lot!
It is useful when writing library code to keep the future programmer in mind who may want to extend the class and provide their own behaviour. For example it is common to have a virtual Paint() function or virtual mouse handling functions in GUI libraries. They provide default implementations, but they allow the possibility of extension.
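For instance, a sketch of that kind of extension point (the names are illustrative, not from any particular library):

class View {
public:
    virtual ~View() {}
    virtual void Paint() { /* draw a plain background by default */ }
    virtual void OnMouseDown(int /*x*/, int /*y*/) { /* default: ignore clicks */ }
};

class ChartView : public View {
public:
    virtual void Paint() { /* custom chart drawing */ }
};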
If that class is meant to be derived from, then yes, it makes sense. These decisions should be made when deciding the architecture of a program and defining what can be done with the interfaces. If they do not want it to be derived from, then it should not be virtual. If they do want it to be derived from, then it should be virtual (and the destructor should be made virtual as well).
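The destructor remark matters because deleting a derived object through a base pointer is undefined behavior unless the base destructor is virtual. A minimal sketch (Base and Impl are placeholder names):

class Base {
public:
    virtual ~Base() {}            // virtual: safe to delete through Base*
    virtual void DoWork() = 0;
};

class Impl : public Base {
public:
    virtual void DoWork() { /* ... */ }
};

void Use() {
    Base* b = new Impl;
    b->DoWork();
    delete b;                     // runs Impl's destructor, then Base's
}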
Here is what I am talking about
// some guy wrote this, used as a Policy with templates
struct MyWriter {
    void write(std::vector<char> const& data) {
        // ...
    }
};
In some existing code, the people did not use templates, but interfaces+type-erasure
class IWriter {
public:
    virtual ~IWriter() {}
    virtual void write(std::vector<char> const& data) = 0;
};
Someone else wanted it to be usable with both approaches and writes:
class MyOwnClass: private MyWriter, public IWriter {
// other stuff
};
MyOwnClass is implemented-in-terms-of MyWriter. Why don't MyOwnClass's inherited member functions implement the interface of IWriter automatically? Instead, the user has to write forwarding functions that do nothing but call the base class versions, as in:
class MyOwnClass: private MyWriter, public IWriter {
public:
    void write(std::vector<char> const& data) {
        MyWriter::write(data);
    }
};
I know that in Java when you have a class that implements an interface and derives from a class that happens to have suitable methods, that base class automatically implements the interface for the derived class.
Why doesn't C++ do that? It seems like a natural thing to have.
This is multiple inheritance, and there are two inherited functions with the same signature, both of which have an implementation. That's where C++ is different from Java.
Calling write on an expression whose static type is MyOwnClass would therefore be ambiguous as to which of the inherited functions was desired.
If write is only called through base class pointers, then defining write in the derived class is NOT necessary, contrary to the claim in the question. Now that the question has changed to include a pure specifier, implementing that function in the derived class is necessary to make the class concrete and instantiable.
MyWriter::write cannot be used for the virtual call mechanism of MyOwnClass, because the virtual call mechanism requires a function that accepts an implicit IWriter* const this, and MyWriter::write accepts an implicit MyWriter* const this. A new function is required, which must take into account the address difference between the IWriter subobject and the MyWriter subobject.
It would be theoretically possible for the compiler to create this new function automatically, but it would be fragile, since a change in a base class could suddenly cause a new function to be chosen for forwarding. It's less fragile in Java, where only single inheritance is possible (there's only one choice for what function to forward to), but in C++, which supports full multiple inheritance, the choice is ambiguous, and we haven't even started on diamond inheritance or virtual inheritance yet.
Actually, this problem (difference between subobject addresses) is solved for virtual inheritance. But it requires additional overhead that's not necessary most of the time, and a C++ guiding principle is "you don't pay for what you don't use".
Why doesn't C++ do that? It seems like a natural thing to have.
Actually, no, it is an extremely unnatural thing to have.
Please note that my reasoning is based on my own understanding of "common sense" and can be fundamentally flawed as a result.
You see, you have two different methods: the first one in MyWriter, which is non-virtual, and the second one in IWriter, which is virtual. They are completely different despite "looking" similar.
I suggest checking this question. The good thing about non-virtual methods is that no matter what you do, as long as they don't call virtual methods, their behavior will never change; i.e. somebody deriving from your class with non-virtual methods will not break existing methods by masking them. Virtual methods are designed to be overridden. The price of that is that it is possible to break underlying logic by improperly overriding a virtual method. And this is the root of your problem.
Let's say what you propose is allowed (automatic conversion to virtual with multiple inheritance). There are two possible solutions:
Solution #1
MyWriter::write becomes virtual. Consequences: all existing C++ code in the world becomes easy to break via a typo or name clash. The MyWriter method was not supposed to be overridden initially, so suddenly turning it into a virtual will (by Murphy's law) break the underlying logic of the MyWriter class when somebody derives from MyOwnClass. Which means that suddenly making MyWriter::write virtual is a bad idea.
Solution #2
MyWriter::write remains non-virtual, BUT it is temporarily used as the implementation of the virtual IWriter::write, until overridden. At first glance there's nothing to worry about, but let's think about it. IWriter implements some kind of concept you had in mind, and it is supposed to do something. MyWriter implements another concept. To use MyWriter::write as the IWriter::write method, you need two guarantees:
The compiler must ensure that MyWriter::write does what IWriter::write() is supposed to do.
The compiler must ensure that calling MyWriter::write from IWriter will not break existing functionality in MyWriter that the programmer expects to use elsewhere.
The thing is that the compiler cannot guarantee that. The functions have a similar name and argument list, but by Murphy's law that means they're probably doing completely different things (sinf and cosf have the same argument list, for example), and it is unlikely that the compiler will be able to predict the future and make sure that at no point in development will MyWriter be changed in such a way that it becomes incompatible with IWriter. So, since the machine can't make a reasonable decision by itself (no AI for that), it has to ask YOU, the programmer: "What is it you wish to do?". And you say: "redirect the virtual method into MyWriter::write(). It totally won't break anything. I think."
And that's why you must specify which method you want to use manually....
Doing it automatically would be unintuitive and surprising. C++ does not assume that multiple base classes are related to each other, and protects the user against name collisions between their members by defining nested name specifiers for nonstatic members. Adding implicit declarations to MyOwnClass where signatures from IWriter and MyWriter collide would be antithetical to protecting names.
However, C++11's additions bring us closer to such a mechanism. Consider this hypothetical syntax (not actual C++11):

class MyOwnClass: private MyWriter, public IWriter {
public:
    void write(std::vector<char> const& data) final = MyWriter::write;
};
This mechanism would be safe because it expresses that MyWriter doesn't expect any further overrides, and convenient because it names the function signature that will be "joined" but nothing more. Also, final would be ill-formed if the function weren't implicitly virtual, so it checks that the signature matches the virtual interface.
On one hand, most interfaces don't just happen to match up this way. Defining this feature to work only with identical signatures would be safe but rarely useful. Defining it as a shortcut to a delegating function body would be useful but fragile. So it might not really be a good feature.
On the other hand, this is a good design pattern to provide functionality which isn't virtual when you don't need it to be. So given this idiom, we might use it to write good code, even if it doesn't match up well with current practices.
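Lacking such an extension, the closest conforming C++11 spelling is the explicit forwarding function, locked down so nothing can override past it; a sketch:

class MyOwnClass: private MyWriter, public IWriter {
public:
    void write(std::vector<char> const& data) override final {
        MyWriter::write(data); // forward to the non-virtual policy implementation
    }
};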
Why doesn't C++ do that?
I'm not sure what you're asking here. Could C++ be rewritten to allow this? Yes, but to what end?
Because MyWriter and IWriter are completely different classes, it is illegal in C++ to call a member of MyWriter through an instance of IWriter. The member pointers have completely different types. And just as a MyWriter* is not convertible to an IWriter*, neither is a void (MyWriter::*)(const std::vector<char>&) convertible to a void (IWriter::*)(const std::vector<char>&).
The rules of C++ don't change just because there could be a third class that combines the two. Neither class is a direct parent/child relative of one another. Therefore, they are treated as entirely distinct classes.
Remember: member functions always take an additional parameter: a this pointer to the object they are called on. You cannot call a void (MyWriter::*)(const std::vector<char>&) through an IWriter*. The third class can have a method that casts itself to the proper base class, but it must actually have this method. So either you or the C++ compiler must create it. The rules of C++ require this.
Consider what would have to happen to make this work without a derived-class method.
A function gets an IWriter*. The user calls the write member of it, using nothing more than the IWriter* pointer. So... exactly how can the compiler generate the code to call MyWriter::write? Remember: MyWriter::write needs a MyWriter instance. And there is no relationship between IWriter and MyWriter.
So how exactly could the compiler do the type coercion locally? The compiler would have to check the virtual function to see if the actual function to be called takes IWriter or some other type. If it takes another type, it would have to convert the pointer to its true type, then do another conversion to the type needed by the virtual function. After doing all of that, it would then be able to make the call.
All of this overhead would affect every virtual call. All of them would have to at least check the type of the actual function to be called. Every call would also have to generate the code to do the type conversions, just in case.
Every virtual function call would have a "get type" and conditional branch in it. Even if it is never possible to trigger that branch. So you would be paying for something regardless of whether you use it or not. That's not the C++ way.
Even worse, a straight v-table implementation of virtual calls is no longer possible. The fastest method of doing virtual dispatch would not be a conforming implementation. The C++ committee is not going to make any change that would make such implementations impossible.
Again, to what end? Just so that you don't have to write a simple forwarding function?
Just make MyWriter derive from IWriter, eliminate the IWriter derivation in MyOwnClass, and move on with life. This should resolve the problem and should not interfere with the template code.
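In other words, something like this sketch:

// MyWriter itself implements the interface...
class MyWriter : public IWriter {
public:
    virtual void write(std::vector<char> const& data) { /* ... */ }
};

// ...so MyOwnClass inherits a ready-made override, and template code
// that expects a public write() member still finds one.
class MyOwnClass : public MyWriter {
    // other stuff
};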
I read different opinions about this question. Let's say I have an interface class with a bunch of pure virtual methods. I implement those methods in a class that implements the interface and I do not expect to derive from the implementation.
Is there a need for declaring the methods in the implementation as virtual as well? If yes, why?
No - every method declared virtual in the base class will be virtual in all derived classes.
But good coding practice is to declare those methods virtual anyway.
virtual is optional in the derived class override declaration, but for clarity I personally include it.
A real need - no. Once a method is declared virtual in the base class, it stays virtual for all derived classes. But it's good to know which method is virtual and which is not, without having to check the base class. Also, in most cases, you cannot be sure whether your code will be derived from or not (for example, if you're developing some software for some firm). As I said, it's not a problem, as once declared virtual, it stays virtual, but just in case .. (:
There is no requirement to mark them virtual.
I'd start by arguing that virtual advertises to readers that you expect derived classes to override the method to do something useful. If you are overriding a method merely to implement the class's own machinery, the override may have nothing to do with the kind of thing your class is, in which case marking it virtual is silly. Consider:

class CommsObject {
public:
    virtual void OnConnect();
    virtual void OnRawBytesIn();
};

class XMLStream : public CommsObject {
public:
    virtual void OnConnect();
    void OnRawBytesIn();        // deliberately not re-advertised as virtual
    virtual void OnXMLData();
};
In that example, OnConnect is documented as virtual in both classes because it makes sense that a descendant would always want to know about connections. OnRawBytesIn doesn't make sense to "export" from XMLStream, as XMLStream uses it to handle the raw bytes and generate parsed data, which it passes on via OnXMLData().
Having done all that, I'd then argue that the maintainer of a third class, looking at XMLStream, might think it "safe" to create their own OnRawBytesIn function and expect it to behave like a normal non-virtual function - i.e. the base class would keep calling its own version, and the new one would merely hide it. In fact it silently overrides it.
So omitting the virtual has hidden an important detail from consumers of the class and made the code behave in unexpected ways.
So I've gone full circle: don't try to use virtual as a hint about the intended purpose of a function - DO use it as a hint about the behaviour of the function: mark functions virtual consistently, so downstream programmers have to read fewer files to know how a function is going to behave when overridden.
No, it is not needed and it doesn't prevent any coding errors although many coders prefer to put it.
Once C++0x becomes mainstream you'll be able to use the override specifier instead.
Once 'virtual', it's virtual all the way down to the last child. AFAIK, that's a feature of C++.
If you never derive from a class then there is no point making its methods virtual.
I recently came to know that in C++ pure virtual functions can optionally have a body.
What are the real-world use cases for such functions?
The classic is a pure virtual destructor:
class abstract {
public:
    virtual ~abstract() = 0;
};

abstract::~abstract() {}
You make it pure because there's nothing else to make pure, and you want the class to be abstract; but you have to provide an implementation nevertheless, because the derived classes' destructors call yours implicitly. Yeah, I know, a pretty silly textbook example, but as such it's a classic. It must have been in the first edition of The C++ Programming Language.
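A derived class makes the destructor chain visible (a quick sketch):

class concrete : public abstract {
    // ~concrete(), even the implicitly generated one, finishes by invoking
    // abstract::~abstract(), which is why the empty body above must exist.
};

void Use() {
    concrete c;    // fine: concrete is instantiable
    // abstract a; // error: abstract is an abstract class
}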
Anyway, I can't remember ever really needing the ability to implement a pure virtual function. To me it seems the only reason this feature is there is because it would have had to be explicitly disallowed and Stroustrup didn't see a reason for that.
If you ever feel you need this feature, you're probably on the wrong track with your design.
Pure virtual functions with or without a body simply mean that the derived types must provide their own implementation.
Pure virtual function bodies in the base class are useful if your derived classes want to call your base class implementation.
One reason that an abstract base class (with a pure virtual function) might provide an implementation for a pure virtual function it declares is to let derived classes have an easy 'default' they can choose to use. There isn't a whole lot of advantage to this over a normal virtual function that can be optionally overridden - in fact, the only real difference is that you're forcing the derived class to be explicit about using the 'default' base class implementation:
#include <cstdio>

class foo {
public:
    virtual int interface();
};

int foo::interface()
{
    printf("default foo::interface() called\n");
    return 0;
}

class pure_foo {
public:
    virtual int interface() = 0;
};

int pure_foo::interface()
{
    printf("default pure_foo::interface() called\n");
    return 42;
}

//------------------------------------
class foobar : public foo {
    // no need to override to get default behavior
};

class foobar2 : public pure_foo {
public:
    // need to be explicit about the override, even to get default behavior
    virtual int interface();
};

int foobar2::interface()
{
    // foobar2 is lazy; it'll just use pure_foo's default
    return pure_foo::interface();
}
I'm not sure there's a whole lot of benefit - maybe in cases where a design started out with an abstract class, then over time found that a lot of the derived concrete classes were implementing the same behavior, so they decided to move that behavior into a base class implementation for the pure virtual function.
I suppose it might also be reasonable to put common behavior into the pure virtual function's base class implementation that the derived classes might be expected to modify/enhance/augment.
One use case is calling the pure virtual function from the constructor or the destructor of the class.
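Note that such a call must be qualified: an unqualified (virtual) call to a pure virtual function from a constructor or destructor is undefined behavior. A sketch (Base, Derived, and log are illustrative names):

class Base {
public:
    virtual void log() = 0;
    virtual ~Base() { Base::log(); } // qualified: a non-virtual call, well-defined
};

void Base::log() { /* default logging, used during destruction */ }

class Derived : public Base {
public:
    virtual void log() { /* normal logging */ }
};
// Destroying a Derived eventually runs ~Base, which uses the pure virtual's
// body above, not Derived::log (the Derived part is already gone by then).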
The almighty Herb Sutter, former chair of the C++ standards committee, gave three scenarios where you might consider providing implementations for pure virtual methods.
Gotta say that personally, I find none of them convincing, and generally consider this to be one of C++'s semantic warts. It seems C++ goes out of its way to build and tear apart abstract-parent vtables, then briefly exposes them only during child construction/destruction, and then the community experts unanimously recommend never to use them.
The only difference between a virtual function with a body and a pure virtual function with a body is that the existence of the second prevents instantiation of the class. There is no other way to mark a class abstract in C++.
This question can really be confusing when learning OOD and C++. Personally, one question kept coming back to me:
If I needed a pure virtual function to also have an implementation, why make it pure in the first place? Why not just leave it virtual and let derivatives both benefit from and override the base implementation?
The confusion comes from the fact that many developers consider the absence of a body/implementation to be the primary goal/benefit of defining a pure virtual function. This is not true!
The absence of a body is in most cases a logical consequence of having a pure virtual function. The main benefit of a pure virtual function is defining a contract: by declaring a pure virtual function, you force every derivative to provide its own implementation of the function. This "contract aspect" is very important, especially if you are developing something like a public API. Making the function merely virtual is not sufficient, because derivatives are no longer forced to provide their own implementation, so you may lose the contract aspect (which can be limiting in the case of a public API).
As is commonly said:
"Virtual functions can be overridden; pure virtual functions must be overridden."
And in most cases, contracts are abstract concepts so it doesn't make sense for the corresponding pure virtual functions to have any implementation.
But sometimes, because life is weird, you may want to establish a strong contract among derivatives and also want them to somehow benefit from a default implementation while specifying their own behavior for the contract. Even though most book authors recommend avoiding such situations, the language needed to provide a safety net to prevent the worst! A merely virtual function wouldn't be enough, since there would be a risk of escaping the contract. So the solution C++ provides is to allow pure virtual functions to also have a default implementation.
The Sutter article cited above gives interesting use cases for pure virtual functions with bodies.