When should a non-virtual method be redefined? - c++

Virtual methods are part of the C++ implementation of polymorphism. Other than avoiding the overhead associated with RTTI** and method lookups, is there a compelling reason to omit virtual?
Assuming that virtual could be added to the base class at any time, what purpose would redefining a non-virtual method serve?
**Whether it's measurable on modern CPUs or not is irrelevant to this question.

Well, there is little reason to redefine a function that is not virtual. In fact, I would recommend against it: what looks like the same function call on the exact same object could behave differently based on the static type of the pointer or reference used.
Overriding a virtual member function allows you to specialize the behavior of the derived type. Redefining (hiding) a non-virtual member function instead provides an alternative behavior, and it may not be obvious to a casual reader which of the two functions will be executed.
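As a quick illustration of that pitfall (type names invented for this sketch), note how the very same object behaves differently depending on the static type of the handle:
#include <iostream>

struct Printer {
    void print() const { std::cout << "Printer::print\n"; } // non-virtual
};

struct FancyPrinter : Printer {
    void print() const { std::cout << "FancyPrinter::print\n"; } // hides, does not override
};

int main() {
    FancyPrinter p;
    p.print();                        // prints "FancyPrinter::print"
    static_cast<Printer&>(p).print(); // prints "Printer::print": same object, different behavior
}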

One possible use could be for implementing a CRTP framework where default versions of functions are defined:
#include <iostream>

// This could be any higher-order function.
template<typename T>
class CallFiveTimes {
protected:
    void docalls() const {
        for(int i(0); i != 5; ++i) static_cast<T const*>(this)->callme();
    }
    // Default implementation. If a lot of different functionality
    // were required of `T`, then defaults could make `T` easier to write.
    void callme() const {
        std::cout << "Default implementation.\n";
    }
};

class Client : CallFiveTimes<Client> {
public:
    void useFramework() {
        docalls();
    }
private:
    friend class CallFiveTimes<Client>;
    // This redefinition will be used.
    void callme() const {
        std::cout << "Client implementation.\n";
    }
};

class LazyClient : CallFiveTimes<LazyClient> {
public:
    void useFramework() {
        docalls();
    }
    friend class CallFiveTimes<LazyClient>;
};

int main() {
    Client c;
    c.useFramework();  // prints "Client implementation." five times
    LazyClient lc;
    lc.useFramework(); // prints "Default implementation." five times
}
I've never seen this done in practice, but it may be worth considering in some cases.

Related

Static polymorphism with virtual functions

This is a followup question of Static polymorphism with final or templates?. So it seems that the best solution for static-only polymorphism is to use CRTP. If you want runtime polymorphism, virtual functions are a good solution.
I find that not very elegant, because the problems are actually very similar (and maybe you want to change the behavior at some point) but the code is very different. In my opinion, the code would be more expressive if the two solutions were very similar and differed only at a single spot.
So I would like to know if there is a way to get static-only polymorphism with virtual functions. That might be something like an attribute, or some construct to disallow pointers to the abstract base class. Is there such a feature, and if not, am I missing a reason why such a feature should not exist? Are static and runtime polymorphism actually more different than such a feature would suggest?
EDIT: To make the question and the usecase a bit clearer, here is some example, that I would like to write:
[[abstract]] class Base {
public:
    void bar() { /* do something using foo() */ }
private:
    virtual void foo() = 0;
};

class Derived1 : public Base {
public:
    Derived1();
private:
    void foo() override { /* do something */ }
};

class Derived2 : public Base {
public:
    Derived2();
private:
    void foo() override { /* do something with data */ }
    int data;
};
where the non-existent attribute [[abstract]] means that no instance of the class Base can exist, not even behind a pointer. This would clearly express static polymorphism, and the compiler could optimize away virtual calls because they can never be dynamic. A virtual destructor would also not be necessary.
EDIT 2: The goal is to provide an abstract interface that can be slightly modified in further derived classes and has the same options for extending as an abstract class. So the main implementation is still in Base and the specific implementation of the virtual functions is in Derived.
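For reference, here is a rough sketch of the closest approximation available today, since no such [[abstract]] attribute exists: protected constructors and a protected non-virtual destructor prevent standalone Base objects and delete-through-base, while final on the leaf classes lets the compiler devirtualize. The class names are taken from the example above; the rest is an assumption about intent:
class Base {
public:
    void bar() { foo(); } // do something using foo()
protected:
    Base() = default;
    ~Base() = default;    // protected, non-virtual: no delete through Base*
private:
    virtual void foo() = 0;
};

class Derived1 final : public Base { // final lets calls on Derived1 be devirtualized
public:
    Derived1() = default;
private:
    void foo() override { /* do something */ }
};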
You are more likely to get something the other way around, runtime polymorphism that looks like static polymorphism, or that looks like something completely different.
The metaclass proposals floated for post-reflection C++ (maybe C++26) look powerful enough to do stuff like:
Interface IBob {
    void foo();
};
and
Implementation<Dispatch::Static> BobImpl : IBob {
    void foo() {}
};
Implementation<Dispatch::Dynamic> BobImpl : IBob {
    void foo() {}
};
to do roughly what you are asking. (Syntax is ridiculously far from final in the metaclass proposal(s); the expressive power is clearly there to do the above, however).
The dynamic case would set up vtables and the like (possibly not the standard C++ vtables, however), and in the static case BobImpl would be unrelated to the type IBob.
Of course, at that point, I expect there to be so many new ways to express polymorphism in C++ that wanting "my CRTP to be written like a virtual function table C++ object" would be a bit like seeing atomic power technology coming over the horizon and being excited that it could replace the coal burner on your steam train.
So it seems that the best solution for static-only polymorphism is to use CRTP. If you want runtime polymorphism, virtual functions are a good solution.
In your linked question, you used CRTP to enforce an interface:
template <typename TData>
struct Base {
    void foo() {
        static_cast<TData*>(this)->doFoo();
    }
};
This uses static polymorphism, but it's a very specific use. It does nothing at all but refuse to compile if your derived class doesn't have a suitable doFoo method. So I don't know how you reached your conclusion.
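For example (Data and doFoo are the names from the snippet; BrokenData is invented), the check fires only when foo is actually instantiated:
struct Data : Base<Data> {
    void doFoo() { /* ... */ } // satisfies Base<Data>::foo
};

struct BrokenData : Base<BrokenData> {}; // still compiles; any call to foo() will not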
I find that not very elegant, because the problems are actually very similar (and maybe you want to change the behavior at some point) but the code is very different. In my opinion, the code would be more expressive if the two solutions were very similar and differed only at a single spot.
You lost me. Runtime polymorphism affects two things in C++:
declaration, since you must have a base class, the virtual keyword, and optionally override and final
use, when the call site uses virtual dispatch to find the correct method implementation
(although, as discussed, this may be optimized out when the static type is known).
Note also that although polymorphism is often discussed as being about relationships between objects, in C++ we're really only talking about method dispatch.
Static polymorphism only affects the call site. The fact that your other question used CRTP doesn't mean that is the only way of using static polymorphism.
If I write a template function
template <typename T>
void foo_it(T&& t) { t.foo(); }
then that uses static polymorphism. It will work for any T with a suitable foo method, whether it derives from Base<T> or not. It will even work for a T which overrides a virtual foo() from some other base class. This is duck typing.
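For instance (Duck is a made-up type, unrelated to any base class):
#include <iostream>

template <typename T>
void foo_it(T&& t) { t.foo(); } // as defined above

struct Duck {
    void foo() { std::cout << "quack\n"; } // no base class, no virtual
};

int main() {
    foo_it(Duck{}); // works: Duck has a suitable foo()
}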
Since it's unclear what you hope your [[abstract]] Base will achieve, I can only advise you to just write
class Derived {
public:
    void foo();
};
and pass it around to function templates that expect some type implementing foo().
As a follow-up to your edit
So the main implementation is still in Base and the specific implementation of the virtual functions is in Derived
It's perfectly fine to do this using CRTP. That is an implementation detail which happens to use static polymorphism, not a virtual-like hierarchy.
For example
template <typename Derived>
struct Template {
    Derived* virt() { return static_cast<Derived*>(this); }
    int foo(int i) {
        return i + virt()->detail(i) + virt()->extra();
    }
};
struct A : public Template<A> {
    int detail(int i) { return i * i; }
    int extra() { return 17; }
};

struct B : public Template<B> {
    int detail(int i) { return i % 23; }
    int extra() { return -42; }
};
creates two independent types A and B, which provide the same interface int foo(int), and happen to share some code as an implementation detail.
It doesn't create a hierarchy. If you write a function template that takes some object of type T and calls the method int T::foo(int) on it, this will work. That is static polymorphism. It doesn't require a shared base class.
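For instance, a hedged sketch of such a function template, reusing the A and B just defined (call_foo is an invented name):
template <typename T>
int call_foo(T& t, int i) { return t.foo(i); } // any T with int foo(int) will do

int main() {
    A a;
    B b;
    int x = call_foo(a, 2); // 2 + 2*2 + 17 = 23
    int y = call_foo(b, 2); // 2 + 2%23 - 42 = -38
    return x + y;
}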
Assuming the question is how to enforce at compile time that derived classes
implement a base "abstract interface"
without polymorphism
keeping the implementation non-public, yet accessible to base class members
the following could be one way to do it.
#include <type_traits>
#include <iostream>

using std::cout;
using std::endl;

template<class Impl> class Base {
protected:
    void foo() {
        Bridge::virtual_foo(static_cast<Impl&>(*this));
    }
    struct Bridge : public Impl {
        static void virtual_foo(Impl &that) {
            static constexpr void (Impl::*fn)() = &Bridge::foo;
            (that.*fn)();
        }
        static_assert(std::is_same<void (Impl::*)(), decltype(&Bridge::foo)>::value,
                      "foo not implemented");
    };
public:
    void bar() {
        cout << "begin Base::bar" << endl;
        foo();
        cout << "end Base::bar" << endl << endl;
    }
};

class Good : public Base<Good> {
protected:
    void foo() {
        cout << "in Good::foo" << endl;
    }
};

class Bad : public Base<Bad> {
};

int main() {
    Good().bar();
    // Bad().bar(); // static_assert: 'foo' not implemented
}
Output:
begin Base::bar
in Good::foo
end Base::bar

C++ : Automatically run function when derived class is constructed

So I recently accidentally called some virtual functions from the constructor of a base class, i.e. Calling virtual functions inside constructors.
I realise that I should not do this because overrides of the virtual function will not be called, but how can I achieve some similar functionality? My use case is that I want a particular function to be run whenever an object is constructed, and I don't want people who write derived classes to have to worry about it (the alternative being, of course, that they call this thing themselves in their derived class constructor). But the function that needs to be called in turn happens to call a virtual function, which I want to allow the derived class the ability to override if they want.
But because a virtual function gets called, I can't just stick this function in the constructor of the base class and have it get run automatically that way. So I seem to be stuck.
Is there some other way to achieve what I want?
edit: I happen to be using the CRTP to access other methods in the derived class from the base class; can I perhaps use that instead of virtual functions in the constructor? Or is much the same issue present then? I guess it could work if the function being called is static?
edit2: Also just found this similar question: Call virtual method immediately after construction
If it's really needed, and you have access to the factory, you may do something like:
#include <memory>
#include <utility>

template <typename Derived, typename... Args>
std::unique_ptr<Derived> Make(Args&&... args)
{
    auto derived = std::make_unique<Derived>(std::forward<Args>(args)...);
    derived->init(); // virtual call
    return derived;
}
There is no simple way to do this. One option would be to use the so-called virtual constructor idiom: hide all constructors of the base class and instead expose a static 'create', which will dynamically create an object, call your virtual override on it, and return a (smart) pointer.
This is ugly and, more importantly, constrains you to dynamically created objects, which is not the best thing.
However, the best solution is to use as little of OOP as possible. C++'s strength (contrary to popular belief) is in its non-OOP-specific traits. Think about it: the only family of polymorphic classes inside the standard library is the streams, which everybody hates (because they are polymorphic!).
I want a particular function to be run whenever an object is constructed, [... it] in turn happens to call a virtual function, which I want to allow the derived class the ability to override if they want.
This can be easily done if you're willing to live with two restrictions:
the constructors in the entire class hierarchy must be non-public, and thus
a factory template class must be used to construct the derived class.
Here, the "particular function" is Base::check, and the virtual function is Base::method.
First, we establish the base class. It has to fulfill only two requirements:
It must befriend MakeBase, its checker class. I assume that you want the Base::check method to be private and only usable by the factory. If it's public, you won't need MakeBase, of course.
The constructor must be protected.
https://github.com/KubaO/stackoverflown/tree/master/questions/imbue-constructor-35658459
#include <iostream>
#include <utility>
#include <type_traits>

using namespace std;

class Base {
    friend class MakeBase;
    void check() {
        cout << "check()" << endl;
        method();
    }
protected:
    Base() { cout << "Base()" << endl; }
public:
    virtual ~Base() {}
    virtual void method() {}
};
The templated CRTP factory derives from a base class that's friends with Base and thus has access to the private checker method; it also has access to the protected constructors in order to construct any of the derived classes.
class MakeBase {
protected:
    static void check(Base* b) { b->check(); }
};
The factory class can issue a readable compile-time error message if you inadvertently use it on a class not derived from Base:
template <class C> class Make : public C, MakeBase {
public:
    template <typename... Args> Make(Args&&... args) : C(std::forward<Args>(args)...) {
        static_assert(std::is_base_of<Base, C>::value,
                      "Make requires a class derived from Base");
        check(this);
    }
};
The derived classes must have a protected constructor:
class Derived : public Base {
    int a;
protected:
    Derived(int a) : a(a) { cout << "Derived()" << endl; }
    void method() override { cout << ">" << a << "<" << endl; }
};

int main()
{
    Make<Derived> d(3);
}
Output:
Base()
Derived()
check()
>3<
If you take a look at how others solved this problem, you will notice that they simply transferred the responsibility of calling the initialization function to the client. Take MFC’s CWnd, for instance: you have the constructor and you have Create, a virtual function that you must call to get a proper CWnd instantiation: “these are my rules: construct, then initialize; obey, or you’ll get in trouble”.
Yes, it is error prone, but it is better than the alternative: “It has been suggested that this rule is an implementation artifact. It is not so. In fact, it would be noticeably easier to implement the unsafe rule of calling virtual functions from constructors exactly as from other functions. However, that would imply that no virtual function could be written to rely on invariants established by base classes. That would be a terrible mess.” - Stroustrup. What he meant, I reckon, is that it would be easier to set the virtual table pointer to point to the VT of the derived class, instead of changing it to the VT of the current class as the constructor calls proceed from the base downwards.
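Here is a minimal sketch of the mechanism he describes (class names invented): while Base's constructor runs, the dynamic type of the object is still Base, so a virtual call made there resolves to Base::f.
#include <iostream>

struct Base {
    Base() { f(); }                        // dynamic type here is Base
    virtual void f() { std::cout << "Base::f\n"; }
};

struct Derived : Base {
    void f() override { std::cout << "Derived::f\n"; }
};

int main() {
    Derived d; // prints "Base::f", not "Derived::f"
}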
I realise that I should not do this because overrides of the virtual function will not be called,...
Assuming that the call to a virtual function would work the way you want, you shouldn't do this because of the invariants.
class B // written by you
{
public:
    B() { f(); }
    virtual void f() {}
};

class D : public B // written by client
{
    int* p;
public:
    D() : p(new int) {}
    void f() override { *p = 10; } // relies on correct initialization of p
};

int main()
{
    D d;
    return 0;
}
What if it were possible to call D::f from B via the VT of D? You would use an uninitialized pointer, which would most likely result in a crash.
...but how can I achieve some similar functionality?
If you are willing to break the rules, I guess that it might be possible to get the address of desired virtual table and call the virtual function from constructor.
It seems you want something like this (the template method pattern), or else more details are needed.
#include <iostream>
using namespace std;

class B
{
protected:
    void templateMethod() // must be accessible to derived constructors
    {
        foo();
        bar();
    }
private:
    virtual void foo() = 0;
    virtual void bar() = 0;
};

class D : public B
{
public:
    D()
    {
        templateMethod(); // here *this is already a D, so D::foo and D::bar are called
    }
    virtual void foo()
    {
        cout << "D::foo()";
    }
    virtual void bar()
    {
        cout << "D::bar()";
    }
};

Why is the "virtuality" of methods implicitly propagated in C++?

What is the reason for removing the ability to stop the propagation of a method's virtuality?
Let me be clearer: in C++, whether you write "virtual void foo()" or "void foo()" in the derived class, it will be virtual as long as foo is declared virtual in the base class.
This means that a call to foo() through a Derived* pointer will result in a virtual table lookup (in case a further-derived class overrides foo), even if this behavior is not wanted by the programmer.
Let me give you an example (that looks pretty blatant to me) of how it would be useful to stop virtuality propagation:
template <class T>
class Iterator // Here is an iterator interface useful for defining iterators
{              // when implementation details need to be hidden
public:
    virtual T& next() { ... }
    ...
};

template <class T>
class Vector
{
public:
    class VectIterator : public Iterator<T>
    {
    public:
        T& next() { ... }
        ...
    };
    ...
};
In the example above, the Iterator base class can be used to achieve a form of "type erasure" in a much clearer and more object-oriented way. (See http://www.artima.com/cppsource/type_erasure.html for an example of type erasure.)
But still, in my example one can use a Vector::VectIterator object directly (which will be done in most cases) in order to access the real object without using the interface.
If virtuality were not propagated, calls to Vector::VectIterator::next(), even through a pointer or reference, would not be virtual and could be inlined and run efficiently, just as if the Iterator interface didn't exist.
C++11 added the contextual keyword final for this purpose.
class VectIterator : public Iterator<T>
{
public:
    T& next() final { ... }
    ...
};

struct Nope : VectIterator {
    T& next() { ... } // ill-formed: next() is final in VectIterator
};
The simple answer is: don't mix concrete and abstract interfaces! The obvious approach in your example would be to use a non-virtual function next() which delegates to a virtual function, e.g. do_next(). A derived class would override do_next(), possibly delegating to its own non-virtual next(). Since the next() functions are likely inline, there isn't any cost involved in the delegation.
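A hedged sketch of that arrangement (the non-virtual interface idiom), filling in plausible details for the iterator example; the vector member and index are assumptions for the sake of a complete snippet:
#include <cstddef>
#include <vector>

template <class T>
class Iterator {
public:
    T& next() { return do_next(); } // non-virtual; likely inlined
    virtual ~Iterator() {}
private:
    virtual T& do_next() = 0;       // the customization point
};

template <class T>
class VectIterator : public Iterator<T> {
public:
    explicit VectIterator(std::vector<T>& v) : v_(v) {}
private:
    T& do_next() override { return v_[i_++]; } // sketch only: no bounds checking
    std::vector<T>& v_;
    std::size_t i_ = 0;
};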
In my opinion, one of the good reasons for this propagation is virtual destructors. In C++, when you have a base class with some virtual methods, you should define the destructor virtual. This is because some code may hold a base-class pointer which actually points to a derived class and then try to delete through this pointer (see this question for more detail).
By defining the destructor in the base class as virtual, you make sure that deleting through a base-class pointer that points to a derived class (at any level of inheritance) runs the proper destructors.
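For example (a minimal sketch): without the virtual destructor, the delete below would be undefined behavior, and Derived's destructor would not run.
struct Base {
    virtual ~Base() {} // without 'virtual' here, the delete below is undefined behavior
};

struct Derived : Base {
    ~Derived() { /* release resources */ }
};

int main() {
    Base* p = new Derived;
    delete p; // calls ~Derived, then ~Base, thanks to the virtual destructor
}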
I think the reason is that it would be really confusing to remove virtuality partway through an inheritance hierarchy (I have an example of the complexity below).
However, if your concern is the micro-optimization of removing a few virtual calls, I wouldn't worry. As long as the virtual child method's code is inlinable, AND your iterator is passed around by value and not by reference, a good optimizing compiler will already be able to see the dynamic type at compile time and inline the whole thing for you, in spite of it being a virtual method!
But for completeness, consider the following in a language where you can de-virtualize:
class A
{
public:
    virtual void Foo() { }
};

class B : public A
{
public:
    void Foo() { } // De-virtualize
};

class C : public B
{
public:
    void Foo() { } // not virtual
};

void F1(B* obj)
{
    obj->Foo();
    static_cast<A*>(obj)->Foo();
}

C test_obj;
F1(&test_obj); // Which two methods are called here?
You could make rules for exactly which methods would get called, but the obvious choice will vary from person to person. It's far simpler to just propagate the virtualness of a method.

Multiple Inheritance from same grandparent - merge implementations?

For a certain project, I have declared an interface (a class with only pure virtual functions) and want to offer users some implementations of this interface.
I want users to have great flexibility, so I offer partial implementations of this interface. Each partial implementation includes some of the functionality; the other functions are not overridden, since they deal with different parts.
However, I also want to present users with a fully usable implementation of the interface. So my first approach was to simply derive a class from both partial implementations. This did not work; it failed with the error that some functions are still pure virtual in the derived class.
So my question is whether there is any way to simply merge two partial implementations of the same interface. I found a workaround by explicitly stating which function I want to be called for each method, but I consider this pretty ugly and would be grateful for a mechanism taking care of this for me.
#include <iostream>

class A {
public:
    virtual void foo() = 0;
    virtual void bar() = 0;
};

class B : public A {
public:
    void foo() { std::cout << "Foo from B" << std::endl; }
};

class C : public A {
public:
    void bar() { std::cout << "Bar from C" << std::endl; }
};

// Does not work
class D : public B, public C {};

// Does work, but is ugly
class D : public B, public C {
public:
    void foo() { B::foo(); }
    void bar() { C::bar(); }
};

int main(int argc, char** argv) {
    D d;
    d.foo();
    d.bar();
}
Regards,
Alexander
The actual problem is about managing several visitors for a tree: each visitor traverses the tree and makes a decision for each of the nodes, and then each visitor's decision is aggregated and accumulated into a definite decision.
A separation of the two parts is sadly not possible without (I think) massive overhead, since I want to provide one implementation taking care of managing the visitors and one taking care of how to store the final decision.
Have you considered avoiding the diamond inheritance completely, providing several abstract classes each with optional implementations, allowing the user to mix and match default implementation and interface as needed?
In your case, what's happening is that once you inherit to D, B::bar hasn't been implemented and C::foo hasn't been implemented. The intermediate classes B and C aren't able to see each other's implementations.
If you need the full interface in the grandparent, have you considered providing the implementation in a different way, possibly as a policy with templates, with default classes that are dispatched into to provide the default behavior?
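A hedged sketch of that policy idea, with all names invented: the host template forwards to whatever policy classes it is instantiated with, and the defaults fill the gaps.
#include <iostream>

struct DefaultFoo { void foo() { std::cout << "default foo\n"; } };
struct DefaultBar { void bar() { std::cout << "default bar\n"; } };
struct MyFoo     { void foo() { std::cout << "custom foo\n"; } };

// The host inherits whatever policies it is given; defaults fill the gaps.
template <class FooPolicy = DefaultFoo, class BarPolicy = DefaultBar>
class Host : public FooPolicy, public BarPolicy {
public:
    using FooPolicy::foo;
    using BarPolicy::bar;
};

int main() {
    Host<> all_defaults;
    all_defaults.foo();
    all_defaults.bar();
    Host<MyFoo> mixed; // custom foo, default bar
    mixed.foo();
    mixed.bar();
}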
If your top-level interface has a logical division in functionality, you should split it into two separate interfaces. For example, if you have both serialization and drawing functions in interface A, you should separate these into two interfaces, ISerialization and IDrawing.
You're then free to provide a default implementation of each of these interfaces. The user of your classes can inherit either your interface or your default implementation as needed.
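A minimal sketch of that split, using the ISerialization and IDrawing names from above (Widget and DefaultDrawing are invented for illustration):
struct ISerialization {
    virtual void save() = 0;
    virtual ~ISerialization() {}
};

struct IDrawing {
    virtual void draw() = 0;
    virtual ~IDrawing() {}
};

// An optional default implementation the user may inherit instead of the bare interface.
struct DefaultDrawing : IDrawing {
    void draw() { /* some reasonable default */ }
};

// The user picks per interface: custom serialization, default drawing.
class Widget : public ISerialization, public DefaultDrawing {
    void save() { /* custom serialization */ }
};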
There is also the possibility that you could use a "factory" class for the main interface type. In other words, the primary interface class also contains some type of static function that generates an appropriate child class on request from the user. For instance:
#include <cstdio>

class A
{
public:
    enum class_t { CLASS_B, CLASS_C };
    static A* make_a_class(class_t type);
    virtual void foo() = 0;
    virtual void bar() = 0;
};

class B : public A
{
private:
    virtual void foo() { /* does nothing */ }
public:
    virtual void bar() { printf("Called B::bar()\n"); }
};

class C : public A
{
private:
    virtual void bar() { /* does nothing */ }
public:
    virtual void foo() { printf("Called C::foo()\n"); }
};

A* A::make_a_class(class_t type)
{
    switch (type)
    {
        case CLASS_B: return new B();
        case CLASS_C: return new C();
        default: return NULL;
    }
}

int main()
{
    B* Class_B_Obj = static_cast<B*>(A::make_a_class(A::CLASS_B));
    C* Class_C_Obj = static_cast<C*>(A::make_a_class(A::CLASS_C));
    //Class_B_Obj->foo(); // can't access since it's private
    Class_B_Obj->bar();
    Class_C_Obj->foo();
    //Class_C_Obj->bar(); // can't access since it's private
    return 0;
}
If class A for some reason needs to access some private members of class B or class C, just make class A a friend of the child classes. For instance, you could make the constructors of class B and class C private so that only the static function in class A can generate them, and the user can't create one on their own without calling the static factory function in class A.
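Sketched minimally (reusing the A and B names from above; the constructor is made private as described):
class A
{
public:
    static A* make_b(); // defined below, once B is complete
    virtual ~A() {}
};

class B : public A
{
    friend class A; // lets A's factory call the private constructor
    B() {}
};

A* A::make_b() { return new B(); }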
Hope this helps,
Jason
Since you mentioned that you mainly needed access to the functions rather than data members, here is another method you could use rather than multiple inheritance, using templates and explicit template specialization:
#include <iostream>

using namespace std;

enum class_t { CLASS_A, CLASS_B, CLASS_C };

template<class_t class_type>
class base_type
{
public:
    static void foo() {}
    static void bar() {}
};

template<>
void base_type<CLASS_A>::foo() { cout << "Calling CLASS_A type foo()" << endl; }

template<>
void base_type<CLASS_B>::bar() { cout << "Calling CLASS_B type bar()" << endl; }

template<>
void base_type<CLASS_C>::foo() { base_type<CLASS_A>::foo(); }

template<>
void base_type<CLASS_C>::bar() { base_type<CLASS_B>::bar(); }

int main()
{
    base_type<CLASS_A> Class_A;
    Class_A.foo();
    base_type<CLASS_B> Class_B;
    Class_B.bar();
    base_type<CLASS_C> Class_C;
    Class_C.foo();
    Class_C.bar();
    return 0;
}
Now, if you need non-static functions that have access to private data members, this can get a bit trickier, but it should still be doable. It would most likely require a separate traits class you can use to access the proper types without running into "incomplete type" compiler errors.
Thanks,
Jason
I think the problem is that when using simple inheritance between B and A, and between C and A, you end up with two subobjects of type A in D, each of which still has a pure virtual function. This causes a compile error, because D is thus abstract and you try to create an instance of it.
Using virtual inheritance solves the problem, since it ensures there is only one copy of A in D.
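Concretely, for the classes in the question, a sketch of the usual fix (A is unchanged; only the inheritance specifications change):
class B : public virtual A {
public:
    void foo() { std::cout << "Foo from B" << std::endl; }
};

class C : public virtual A {
public:
    void bar() { std::cout << "Bar from C" << std::endl; }
};

// D now contains a single shared A subobject; B::foo and C::bar together
// override both pure virtual functions, so D is concrete.
class D : public B, public C {};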

How to be sure a method is overriding an existing virtual one in C++?

Let's suppose we have a base class which has a virtual method:
class BaseClass
{
    virtual void MethodToOverride() const
    {
        DoSomething();
    }
};
And a derived class which overrides the method (depending on the situation we can make it again virtual or not):
class DerivedClass : public BaseClass
{
    void MethodToOverride() const
    {
        DoSomethingElse();
    }
};
If we make a mistake, for example defining MethodToOverride as non-const or with a typo in its name, we simply define a new method, for example:
void MethodToOverride() {} // I forgot the const
void MthodToOverride() const {} // I made a typo
So this compiles fine, but causes unwanted behavior at runtime.
Is there any way to define a function as an explicit override of an existing one, so the compiler warns me if I define it wrongly? Something like (I know it does not exist):
void MethodToOverride() const overrides BaseClass::MethodToOverride() const {}
The best way is to declare the method to be pure virtual in BaseClass.
class BaseClass
{
    virtual void MethodToOverride() const = 0;
};
If implementing classes are inherited from again (which I would question as a semi-good practice), there is no way to control the correct implementation.
C++0x offers the [[override]] attribute for this (in the final C++11 standard it became the contextual keyword override).
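With the final C++11 spelling, both mistakes from the question become compile-time errors:
class DerivedClass : public BaseClass
{
    void MethodToOverride() const override {}    // OK: overrides the base class method
    // void MethodToOverride() override {}       // error: missing const, overrides nothing
    // void MthodToOverride() const override {}  // error: typo, overrides nothing
};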
If you are using gcc, consider the -Woverloaded-virtual command-line option.
C++0x offers an attribute for this (see vitaut's answer), and e.g. Visual C++ offers a language extension.
But in portable C++98 the best you can do is a sanity check that the base class offers an accessible member function that accepts the same arguments, like ...
// The following macro is mainly comment-like, but performs such checking as it can.
#define IS_OVERRIDE_OF( memberSpec, args ) \
    suppressUnusedWarning( sizeof( (memberSpec args, 0) ) )
where
template< typename T >
inline void suppressUnusedWarning( T const& ) {}
You call the macro in your override implementation, with the function's actual arguments.
EDIT Added call example (disclaimer: untouched by compiler's hands):
class BaseClass
{
protected:
    virtual void MethodToOverride() const
    {
        DoSomething();
    }
};

class DerivedClass : public BaseClass
{
protected:
    void MethodToOverride() const
    {
        IS_OVERRIDE_OF( BaseClass::MethodToOverride, () );
        DoSomethingElse();
    }
};
Using such a sanity check can improve the clarity of the code in certain cases, and can save your ass in certain cases. It has three costs. (1) Someone Else might mistake it for a guarantee, rather than just an informative comment and partial check. (2) the member function can't be private in the base class, as it is in your example (although that's perhaps positive). (3) Some people react instinctively negatively to any use of macros (they've just memorized a rule about badness without understanding it).
Cheers & hth.,
If your base class may be an abstract one, then the solution is to make the methods you want to be overridden pure virtual. In this case the compiler will yell if you try to instantiate the derived class. Note that pure virtual functions can also have definitions.
E.g.
class BaseClass
{
    virtual void MethodToOverride() const = 0;
    // let's not forget the destructor should be virtual as well!
};

inline void BaseClass::MethodToOverride() const
{
    DoSomething();
}

// note that according to the current standard, for some inexplicable reason the definition
// of a pure virtual function cannot appear inline in the class, only outside
If you cannot afford your base class to be abstract, then C++03 gives you little to work with, and @vitaut's answer gives what you need for C++0x.
There was a sentence in your question which alarmed me. You say you can choose to make the method further virtual or not. Well, you can't, in C++03. If the method has been declared virtual, it will be virtual throughout the hierarchy, whether or not you explicitly specify it. E.g.
class A
{
    virtual void f() {}
};

class B : public A
{
    void f(); // same as virtual void f();
};
You can try this: :)
#include <iostream>

using namespace std;

class Base
{
public:
    virtual void YourMethod(int) const = 0;
};

class Intermediate : private Base
{
public:
    virtual void YourMethod(int i) const
    {
        cout << "Calling from Intermediate : " << i << "\n";
    }
};

class Derived : public Intermediate, private Base
{
public:
    void YourMethod(int i) const
    {
        // Default implementation
        Intermediate::YourMethod(i);
        cout << "Calling from Derived : " << i << "\n";
    }
};

int main()
{
    Intermediate* pInterface = new Derived;
    pInterface->YourMethod(10);
}
I think the code speaks for itself. Base makes sure you implement the function with the correct signature (as a side effect, it makes you always implement it, even when the default behavior would do), while Intermediate, which is the interface, makes sure that there is a default implementation. However, you are left with a warning :).