Virtual functions versus callbacks - C++

Consider a scenario where there are two classes, Base and Derived. If the Base class wants to call a function of the derived class, it can do so either by declaring a virtual function and overriding it in the derived class, or by using callbacks. Which of the two should be preferred, and on which situations/conditions does the choice depend?
EDIT:
Question Clarification:
The situation I was referring to is that there is a base class which receives messages. These different messages are to be handled differently by the derived class. One way is to create a virtual function and let the derived class implement it, handling every message through various switch cases.
Another way is to implement callbacks through function pointers (pointing to the functions of the derived class) inside templates (templates are needed for handling the object of the derived class and the function names). The templates and the function pointers would reside in the base class.

A virtual function call is actually a callback.
The caller looks up the corresponding entry in the object's virtual function table and calls it. That's exactly how a callback behaves, except that member function pointers have awkward syntax. Virtual functions offload the work to the compiler, which makes them a very elegant solution.
Virtual functions are the way to communicate within the inheritance hierarchy.
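To make the parallel concrete, here is a minimal sketch (all names are hypothetical) showing a virtual call next to the equivalent member-function-pointer callback:

#include <iostream>

struct Base {
    virtual ~Base() = default;
    virtual void onMessage(int msg) { std::cout << "Base: " << msg << '\n'; }
};

struct Derived : Base {
    void onMessage(int msg) override { std::cout << "Derived: " << msg << '\n'; }
};

int main() {
    Derived d;
    Base* b = &d;
    b->onMessage(1);   // virtual dispatch: calls Derived::onMessage

    // The hand-rolled equivalent: a member function pointer as a callback.
    // The syntax is awkward, and the call still resolves through the vtable.
    void (Base::*cb)(int) = &Base::onMessage;
    (b->*cb)(2);       // also calls Derived::onMessage
}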

I think this comes down to a decision about whether or not the behaviour you're talking about is something that belongs in the hierarchy that Base knows about and a child implements.
If you go with a callback solution, then the callback method (depending on signature) doesn't have to be implemented in a child of Base. This may be appropriate if for example you wanted to say 'this event has happened' to an 'event listener' that could be in a derived class, or could be in a totally unrelated class that happens to be interested in the event.
If you go with the virtual function solution, then you're more tightly coupling the implementations of the Derived and Base classes.
An interesting read, which may go some way to answering your question, is Callbacks in C++, which discusses the usage of functors. There's also an example on Wikipedia that uses a template callback for sorting. You'll notice that the implementation of the callback (which is a comparison function) does not have to be in the object that is performing the sort. If it were implemented using virtual methods, this wouldn't be the case.
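As a minimal sketch of that point (using std::sort rather than the article's exact example), the comparison callback below is a free function with no inheritance relationship to anything performing the sort:

#include <algorithm>
#include <vector>

// The comparison "callback" lives outside any class hierarchy.
bool descending(int a, int b) { return a > b; }

int main() {
    std::vector<int> v{3, 1, 4, 1, 5};
    // std::sort accepts the callback as a template parameter; the code
    // performing the sort needs no virtual interface to call it.
    std::sort(v.begin(), v.end(), descending);
}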

I don't think that the two cases you are describing are comparable. Virtual functions are a polymorphism tool that aids you in extending a base class in order to provide additional functionality. Their key characteristic is that the decision about which function will be called is made at runtime.
Callbacks are a more general concept that doesn't apply only to a parent-child class relationship.
So, if what you want to do involves extending a base class, I would certainly go with virtual functions. Be sure, however, that you understand how virtual functions work.

Related

Object Oriented approach of Polymorphism

I have been taught in my C++ OOP class about polymorphism and how we can provide virtual function interfaces to derived classes. But the question is: how does all this help? Every time, we make a base class pointer and store a derived class object in it. But why? Can't we do it just by function overriding?
Please tell me a programming problem which cannot be solved except with polymorphism in C++.
Virtual functions and overriding vs. non-virtual functions and name hiding
Virtual functions make a class polymorphic. A virtual function can be overridden in derived classes. When you invoke that function through a base class pointer, it's always the function corresponding to the real dynamic type of the pointed-to object that will be called. It's dynamic determination at run time.
Non-virtual functions can't be overridden. When a derived class has a non-virtual function with the same signature as the base class, they are two different functions, and the name in the derived class hides the one in the base class. When you invoke the function through a base class pointer, it's always the function corresponding to the base class that will be invoked. It's static determination at compile time.
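A short illustration of the difference (names are made up):

#include <iostream>

struct Base {
    virtual ~Base() = default;
    virtual void v() { std::cout << "Base::v\n"; }   // can be overridden
    void n()         { std::cout << "Base::n\n"; }   // can only be hidden
};

struct Derived : Base {
    void v() override { std::cout << "Derived::v\n"; }  // overrides Base::v
    void n()          { std::cout << "Derived::n\n"; }  // hides Base::n
};

int main() {
    Derived d;
    Base* p = &d;
    p->v();   // prints "Derived::v" - dynamic determination at run time
    p->n();   // prints "Base::n"    - static determination at compile time
}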
What's the benefit? Do we need virtual functions?
Virtual functions are just an easy way to define abstraction. The typical example is shapes. You define an abstract shape with a virtual function such as calculateSurface(). You can then call that function via any pointer, and you'll be sure that for any concrete shape (e.g. circle, square, hexagon...) it will always apply the right formula for the object.
Abstraction is convenient, but you could live without it. For example, you could as well implement the same functionality by using a shape code and having a calculateSurface() that executes the right formula depending on the shape code. It's perfectly possible. It's just more difficult to maintain, because every time you create a new shape, you'd need to add another if (shapeCode==xx) clause in all the places where the behavior depends on the shape.
In fact, you don't even need OOP. In former times, before C++ existed, it was a common programming technique to use function pointers in C to emulate such type-dependent behavior (using a struct that contained a function pointer for every type-dependent operation). Again, it's perfectly feasible, but even more tedious, more error prone, and less encapsulated.
So, there is no problem that would require polymorphism to be solved. There are just plenty of problems where OOP and polymorphism make the problem easier to solve, with more maintainable code.
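For illustration, here is a rough sketch of both approaches side by side (shapes, names, and formulas are just examples):

// Virtual function version: each shape carries its own formula.
struct Shape {
    virtual ~Shape() = default;
    virtual double calculateSurface() const = 0;
};

struct Circle : Shape {
    double r;
    explicit Circle(double r) : r(r) {}
    double calculateSurface() const override { return 3.141592653589793 * r * r; }
};

struct Square : Shape {
    double side;
    explicit Square(double s) : side(s) {}
    double calculateSurface() const override { return side * side; }
};

// Shape-code version: adding a new shape means editing this function
// (and every other place that switches on the code).
enum ShapeCode { CIRCLE, SQUARE };

double calculateSurface(ShapeCode code, double dim) {
    if (code == CIRCLE) return 3.141592653589793 * dim * dim;
    if (code == SQUARE) return dim * dim;
    return 0.0;
}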

Is it better to cast a base class to a derived class or create a virtual function on the base class?

According to this answer, dynamic_cast'ing a base class to a derived class is fine, but the answerer says this shows that there is a fundamental problem with the code logic.
I've looked at other answers, and using dynamic_cast is fine since you can check the pointer's validity afterwards.
Now, in my real problem, the derived class has a GetStrBasedOnCP function which is not virtual (only the derived class has it), and I have to access it.
What is better: to create a virtual GetStrBasedOnCP on the base class and override it in the derived class, or to just cast the base class pointer to the derived class?
Oh, also notice that this is an unsigned int GetStrBasedOnCP, so the base class version must also return a value...
There are more than two answers to the "what is better" question, and it all depends on what you are modeling:
If the GetStrBasedOnCP function is logically applicable to the base class, using virtual dispatch is the best approach.
If having the GetStrBasedOnCP function in the base class does not make logical sense, you need to use an approach based on the actual type; you could use dynamic_cast, or
You could implement multiple dispatch, e.g. through a visitor or through a map of dynamic types.
The test for logical applicability is the most important one. If the GetStrBasedOnCP function is specific to your subclass, adding it to the base class will create maintenance headaches for developers using and maintaining your code.
Multiple dispatch, on the other hand, gives you a flexible approach that lets you access statically typed objects. For example, implementing the visitor pattern in your base class lets you make visitors that process the subclass with the GetStrBasedOnCP function differently from other subclasses.
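A bare-bones sketch of such a visitor (the class names here are hypothetical, standing in for your real hierarchy):

struct WithCP;   // the subclass that has GetStrBasedOnCP
struct Other;    // some other subclass

struct Visitor {
    virtual ~Visitor() = default;
    virtual void visit(WithCP&) = 0;   // one overload per subclass
    virtual void visit(Other&) = 0;
};

struct Node {                          // the base class
    virtual ~Node() = default;
    virtual void accept(Visitor& v) = 0;
};

struct WithCP : Node {
    unsigned int GetStrBasedOnCP() const { return 42; }   // placeholder body
    void accept(Visitor& v) override { v.visit(*this); }  // *this is statically typed
};

struct Other : Node {
    void accept(Visitor& v) override { v.visit(*this); }
};

Each accept() call dispatches on the dynamic type exactly once, and the matching visit() overload is then chosen statically, so no cast is needed to reach GetStrBasedOnCP.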
Does it make sense for the base class you have to include the virtual function?
If it does not, then you should not include the function in the base class. Remember that best practices cover the general case; there are times you need to do things you wouldn't normally do to get the code working. The key thing is that you need clear, concise, understandable code.
There's a lot of "it depends".
If you can guarantee that the base pointer is the correct child pointer, then you can use dynamic_cast.
If you can't guarantee which child type the base pointer is pointing to, you may want to place the function in the base class.
However, be aware that all children of the base class will get the functionality of whatever you place into the base class. Does it make sense for all the children to have the functionality?
You may want to review your design.
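If you do end up casting, a checked dynamic_cast keeps it safe. A minimal sketch, assuming a fallback value when the cast fails:

struct Base {
    virtual ~Base() = default;   // Base must be polymorphic for dynamic_cast
};

struct Derived : Base {
    unsigned int GetStrBasedOnCP() const { return 42; }  // placeholder body
};

unsigned int tryGetStr(Base* b) {
    // dynamic_cast yields nullptr when b doesn't actually point to a Derived,
    // so the pointer's validity can be checked before use.
    if (Derived* d = dynamic_cast<Derived*>(b))
        return d->GetStrBasedOnCP();
    return 0;  // fallback when the cast fails
}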

Is there any way to avoid declaring virtual methods when storing (child) pointers?

I have run into an annoying problem lately, and I am not satisfied with my own workaround: I have a program that maintains a vector of pointers to a base class, and I store there all kinds of child object pointers. Now, each child class has methods of its own, and the main program may or may not call these methods, depending on the type of object (note, though, that they all heavily use common methods of the base class, so this justifies inheritance).
I have found it useful to have an "object identifier" to check the class type (and then either call the method or not), which is already not very beautiful, but this is not the main inconvenience. The main inconvenience is that if I want to actually be able to call a derived class method using the base class pointer (or even just store the pointer in the pointer array), then one needs to declare the derived methods as virtual in the base class.
This makes sense from the C++ coding point of view, but it is not practical in my case (from the development point of view), because I am planning to create many different child classes in different files, perhaps made by different people, and I don't want to tweak/maintain the base class each time to add virtual methods!
How can I do this? Essentially, what I am asking (I guess) is how to implement something like Objective-C's NSArray: if you send a message to an object that does not implement the method, well, nothing happens.
regards
Instead of this:
// variant A: declare everything in the base class
void DoStuff_A(Base* b) {
    if (b->TypeId() == DERIVED_1)
        b->DoDerived1Stuff();
    else if (b->TypeId() == DERIVED_2)
        b->DoDerived2Stuff();
}
or this:
// variant B: declare nothing in the base class
void DoStuff_B(Base* b) {
    if (b->TypeId() == DERIVED_1)
        dynamic_cast<Derived1*>(b)->DoDerived1Stuff();
    else if (b->TypeId() == DERIVED_2)
        dynamic_cast<Derived2*>(b)->DoDerived2Stuff();
}
do this:
// variant C: declare the right thing in the base class
b->DoStuff();
Note that there's a single virtual function in the base class per piece of stuff that has to be done.
If you find yourself in a situation where you are more comfortable with variants A or B than with variant C, stop and rethink your design. You are coupling components too tightly, and in the end it will backfire.
I am planning to create many different children classes in different files, perhaps made by different people, and I don't want to tweak/maintain the base class each time, to add virtual methods!
You are OK with tweaking DoStuff each time a derived class is added, but tweaking Base is a no-no. May I ask why?
If your design does not fit in either A, B or C pattern, show what you have, for clairvoyance is a rare feat these days.
You can do what you describe in C++, but not using ordinary member functions. It is, by the way, kind of horrible, but I suppose there might be cases in which it's a legitimate approach.
First way of doing this:
Define a function with a signature something like boost::variant parseMessage(std::string, std::vector<boost::variant>); and perhaps a set of convenience functions with common signatures on the base class, and include a message lookup table on the base class which takes functors. In each class constructor, add its messages to the message table, and the parseMessage function then parcels off each message to the right function on the class.
It's ugly and slow, but it should work.
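A simplified sketch of such a message table, using std::function and std::map in place of boost::variant (names are illustrative):

#include <functional>
#include <map>
#include <string>
#include <vector>

struct Base {
    virtual ~Base() = default;

    // Dispatch a named message to whatever handler the object registered.
    void parseMessage(const std::string& name, const std::vector<std::string>& args) {
        auto it = handlers.find(name);
        if (it != handlers.end())
            it->second(args);   // found: invoke the registered functor
        // else: message not understood, nothing happens (NSArray-style)
    }

protected:
    std::map<std::string, std::function<void(const std::vector<std::string>&)>> handlers;
};

struct Derived : Base {
    Derived() {
        // Each subclass registers its own messages in its constructor.
        handlers["foo"] = [this](const std::vector<std::string>& args) { foo(args); };
    }
    void foo(const std::vector<std::string>&) { /* subclass-specific work */ }
};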
Second way of doing this:
Define the virtual functions further down the hierarchy: if you want to add int foo(bar*);, you first add a class that declares it as virtual, and then ensure every class that wants to define int foo(bar*); inherits from it. You can then use dynamic_cast to check that the pointer you are looking at inherits from this class before trying to call int foo(bar*);. Possibly these interface-adding classes could be pure virtual, so they can be mixed in at various points using multiple inheritance, but that may have its own problems.
This is less flexible than the first way and requires the classes that implement a function to be linked to each other. Oh, and it's still ugly.
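A rough sketch of this second way (the interface class and all names are hypothetical):

struct bar;   // whatever parameter type foo takes

struct Base {
    virtual ~Base() = default;
};

// Interface-adding class: pure virtual, so it can be mixed in
// via multiple inheritance wherever foo is wanted.
struct HasFoo {
    virtual ~HasFoo() = default;
    virtual int foo(bar*) = 0;
};

struct Widget : Base, HasFoo {
    int foo(bar*) override { return 1; }   // placeholder body
};

int callFooIfSupported(Base* b, bar* arg) {
    // Check that the object implements the interface before calling.
    if (HasFoo* f = dynamic_cast<HasFoo*>(b))
        return f->foo(arg);
    return 0;   // object doesn't support foo; nothing happens
}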
But mostly I suggest you try to write C++ code like C++ code, not Objective-C code.
This can be solved by adding some sort of introspection capability and meta-object system. The talk "Metadata and reflection in C++" by Jeff Tucker demonstrates how to do this using C++'s template metaprogramming.
If you don't want to go to the trouble of implementing one yourself, then it would be easier to use an existing one such as Qt's meta object system. Note that this solution does not work with multiple inheritance due to limitations in the meta object compiler: QObject Multiple Inheritance.
With that installed, you can query for the presence of methods and call them. This is quite tedious to do by hand, so the easiest way to call such methods is using the signal and slot mechanism.
There is also GObject, which is quite similar, and there are others.
If you are planning to create many different child classes in different files, perhaps made by different people, then I would guess you also don't want to change your main code for every child class. In that case, I think what you need to do in your base class is to define several (not too many) virtual functions (with empty implementations), BUT those functions should mark a point in the logic where they are called, like "AfterInsert" or "BeforeSorting", etc.
Usually there are not too many places in the logic where you wish derived classes to perform their own logic.
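A minimal sketch of such hook points (the container and hook names are made up for illustration):

#include <algorithm>
#include <iostream>
#include <vector>

struct Container {
    virtual ~Container() = default;

    void insert(int value) {
        data_.push_back(value);
        afterInsert(value);        // hook, named for the point where it runs
    }

    void sort() {
        beforeSorting();           // another hook
        std::sort(data_.begin(), data_.end());
    }

protected:
    // Empty default implementations: derived classes override only the hooks
    // they care about, and the base class never needs editing for new children.
    virtual void afterInsert(int) {}
    virtual void beforeSorting() {}

private:
    std::vector<int> data_;
};

struct LoggingContainer : Container {
    void afterInsert(int v) override { std::cout << "inserted " << v << '\n'; }
};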

When to mark a function in C++ as virtual?

Because of C++'s default static binding for methods, polymorphic calls are affected.
From Wikipedia:
Although the overhead involved in this dispatch mechanism is low, it may still be significant for some application areas that the language was designed to target. For this reason, Bjarne Stroustrup, the designer of C++, elected to make dynamic dispatch optional and non-default. Only functions declared with the virtual keyword will be dispatched based on the runtime type of the object; other functions will be dispatched based on the object's static type.
So the code:
Polygon* p = new Triangle;
p->area();
Provided that area() is a non-virtual function in the Parent class that is redefined (hidden, not overridden) in the Child class, the code above will call the Parent class's method, which might not be what the developer expects (thanks to the static binding I've introduced).
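A fleshed-out sketch of that situation (class bodies and return values are invented for illustration):

#include <iostream>

struct Polygon {
    virtual ~Polygon() = default;
    double area() const { return 0.0; }               // non-virtual: statically bound
    virtual double perimeter() const { return 0.0; }  // virtual, for contrast
};

struct Triangle : Polygon {
    double area() const { return 6.0; }               // hides Polygon::area
    double perimeter() const override { return 12.0; }
};

int main() {
    Polygon* p = new Triangle;
    std::cout << p->area() << '\n';       // 0  - Polygon::area, static binding
    std::cout << p->perimeter() << '\n';  // 12 - Triangle::perimeter, dynamic dispatch
    delete p;
}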
So, if I want to write a class to be used by others (e.g. a library), should I make all my functions virtual so that code like the above runs as expected?
The simple answer is: if you intend functions of your class to be overridden for runtime polymorphism, you should mark them as virtual; if you don't intend so, don't.
Don't mark your functions virtual just because you feel it imparts additional flexibility; rather, think of your design and the purpose of the interface you expose. For example, if your class is not designed to be inherited from, then making its member functions virtual would be misleading. A good example of this is the Standard Library containers, which are not meant to be inherited from and hence do not have virtual destructors.
There are any number of reasons not to mark all your member functions virtual, performance penalties and becoming a non-POD class type among them. But if you really intend for your class to be overridden at run time, then that is its purpose, and it outweighs the so-called deficiencies.
Mark it virtual if derived classes should be able to override that method. It's as simple as that.
In terms of memory and performance, you get a virtual function table (and a vptr in each object) if anything is virtual, so one way to look at it is "please one, please all". Otherwise, as the others say, mark functions as virtual if you want them to be overridable, so that calling the method on a base class pointer runs the specialized version.
As a general rule, you should only mark a function virtual if the class is explicitly designed to be used as a base class, and that function is designed to be overridden. In practice, most virtual functions will be pure virtual in the base class. And except in cases of call inversion, where you explicitly don't provide a contract for the overriding function, virtual functions should be private (or at the most protected), and wrapped with non-virtual functions enforcing the contract.
That's basically the idea; actually, if you are using a parent class, I don't think you'll need to override every method, so just make functions virtual if you think you'll use them this way.

If a class might be inherited, should every function be virtual?

In C++, a coder doesn't know whether other coders will inherit from his class. Should he make every function in that class virtual? Are there any drawbacks? Or is it just not acceptable at all?
In C++, you should only make a class inheritable if you intend for it to be used polymorphically. The way you treat polymorphic objects in C++ is very different from how you treat other objects. You don't tend to put polymorphic classes on the stack, or pass them to or return them from functions by value, since this can lead to slicing. Polymorphic objects tend to be heap-allocated and to be passed around and returned by pointer or by reference, etc.
If you design a class not to be inherited from and then inherit from it, you cause all sorts of problems. If the destructor isn't marked virtual, you can't delete the object through a base class pointer without causing undefined behavior. Without the member functions marked virtual, they can't be overridden in a derived class.
As a general rule in C++, when you design a class, determine whether you want it to be inherited from. If you do, mark the appropriate functions virtual and give it a virtual destructor. You might also disable the copy assignment operator to avoid slicing. Similarly, if you want the class not to be inheritable, don't give it any of these functions. In most cases it's a logic error to inherit from a class that wasn't designed to be inherited from, and in most of the cases where you'd want to do this, you can use composition instead of inheritance to achieve the same effect.
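A quick illustration of the slicing problem mentioned above:

#include <iostream>

struct Base {
    virtual ~Base() = default;
    virtual const char* name() const { return "Base"; }
};

struct Derived : Base {
    const char* name() const override { return "Derived"; }
};

int main() {
    Derived d;

    Base byValue = d;                      // slicing: only the Base part is copied
    std::cout << byValue.name() << '\n';   // prints "Base"

    Base& byRef = d;                       // no slicing through a reference
    std::cout << byRef.name() << '\n';     // prints "Derived"
}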
No, not usually.
A non-virtual function enforces class-invariant behavior. A virtual function doesn't. As such, the person writing the base class should think about whether the behavior of a particular function is/should be class invariant or not.
While it's possible for a design to allow all behaviors to vary in derived classes, it's fairly unusual. It's usually a pretty good clue that the person who wrote the class either didn't think much about its design, or lacked the resolve to make a decision.
In C++ you design your class to be used either as a value type or a polymorphic type. See, for example, C++ FAQ.
If you are making a class to be used by other people, you should put a lot of thought into your interface and try to work out how your class will be used. Then make the decisions like which functions should be virtual.
Or better yet write a test case for your class, using it how you expect it to be used, and then make the interface work for that. You might be surprised what you find out doing it. Things you thought were absolutely necessary might turn out to be rarely needed and things that you thought were not going to be used might turn out to be the most useful methods. Doing it this way around will save you time not doing unnecessary work in the long run and end up with solid designs.
Jerry Coffin and Dominic McDonnell have already covered the most important points.
I'll just add an observation: in the days of MFC (the mid-1990s), I was very annoyed by the lack of ways to hook into things. For example, the documentation suggested copying MFC's source code for printing and modifying it, instead of overriding behavior, because nothing was virtual there.
There are of course a zillion+1 ways to provide "hooks", but virtual methods are one easy way. They're needed in badly designed classes, so that the client code can fix things, but in those badly designed classes the methods are not virtual. For classes with better design there is not so much need to override behavior, and so for those classes, making methods virtual by default (and non-virtual only as an active choice) can be counter-productive; as Jerry remarked, virtuals provide opportunities for derived classes to screw up.
There are design patterns that can be employed to minimize the possibilities of screw-ups.
For example, wrapping internal virtuals in exposed non-virtual methods with sanity checks, or using decoupled event handling (where appropriate) instead of virtuals.
Cheers & hth.,
When you create a class, and you want that class to be used polymorphically you have to consider that the class has two different interfaces. The user interface is defined by the set of public functions that are available in your base class, and that should pretty much cover all operations that users want to perform on objects of your class. This interface is defined by the access qualifiers, and in particular the public qualifier.
There is a second interface, that defines how your class is to be extended. At that level you have to think on what behavior you want to be overridden by extending classes, and what elements of your object you want to provide to extending classes. You offer access to derived classes by means of the protected qualifier, and you offer extension points by means of virtual functions.
You should try to follow the Non-Virtual Interface idiom whenever possible. That idiom (google for it) basically tries to fully separate the two interfaces by not having public virtual functions. Users call non-virtual functions, and those in turn call on configurable functionalities by means of protected/private virtual functions. This clearly separates extension points from the class interface.
There is a single case where virtual has to be part of the user interface: the destructor. If you want to offer your users the ability to destroy derived objects through pointers to the base, then you have to provide a virtual destructor. Otherwise you just provide a protected non-virtual one.
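A small sketch of the Non-Virtual Interface idiom described here (class, function names, and contracts are made up):

#include <cassert>

class Widget {
public:
    virtual ~Widget() = default;

    // User interface: public and non-virtual, enforcing the contract
    // around the customizable behavior.
    int process(int input) {
        assert(input >= 0);            // precondition checked in one place
        int result = doProcess(input);
        assert(result >= input);       // postcondition likewise
        return result;
    }

private:
    // Extension interface: derived classes override this, but they can
    // never bypass the checks in process().
    virtual int doProcess(int input) { return input; }
};

class FancyWidget : public Widget {
private:
    int doProcess(int input) override { return input + 1; }
};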
He should code the functions as they are; he shouldn't make them virtual at all, in the circumstances you specified.
The reasons being:
1> The class coder obviously has a certain use in mind for the functions he writes.
2> The inheriting class may or may not make use of these functions, as required.
3> Any function may be redefined in the derived class without any errors.