What is the purpose of non-virtual methods in C++? [duplicate] - c++

This question already has answers here:
Is there any reason not to make a member function virtual?
(7 answers)
Closed 4 years ago.
What is the purpose of a non-virtual method in C++, in terms of functionality? Is there any merit in a method being selected by the object handle's type rather than by the object's actual type? I found many articles about how virtual methods are used, but I could not find how non-virtual methods are used.
Other languages I know of, such as Java, Ruby and Python, only have virtual methods. So is a non-virtual method functionally unnecessary, and used only for performance reasons?
OK, I hadn't read the article this was marked a duplicate of. But your answers are still valuable to me in that they explain the origins of C++ and compare C++ to other object-oriented languages. Thanks to everyone who answered.

The answer is very simple: because C++ is not Java.
Programming languages have different philosophies and different ways to accomplish the same result.
Java (and other "OOP languages where every object is garbage-collected and a reference type", like C#) encourage you to think about objects in a very specific way: inheritance and polymorphism are the main ways to achieve flexibility and generalization of code. Objects are almost always reference types, meaning that a Car car can actually point to a Toyota, a Ford, or whatever. Objects are garbage-collected and dynamically allocated. All objects inherit from an Object class anyway, so inheritance and dynamic polymorphism are imbued into the language's objects by the very design of the language.
C++ is different. The concept of an object might be central to the language, but an object is basically a unit of data and functionality. It's a leaner form of a "real" OOP-language object: it is usually allocated on the stack, uses RAII to handle its own resources, and is a value type.
Inheritance and polymorphism exist, but they are secondary to composition and compile-time polymorphism (templates).
C++ doesn't encourage you to think about objects as reference types. Objects might be reference types, and they might have virtual functions, but this is only one way to achieve flexibility and generalization in C++, as opposed to Java. You might use templates, function pointers and opaque types (à la C-style polymorphism), or inheritance plus overridden virtual functions (à la Java style). Hence, C++ doesn't force you to take the Java route to flexibility - it gives you the opportunity to choose the best way to accomplish things.

When you mark a method as virtual, every call to that method has to go through the virtual table inside the object you're calling it on; this is called dynamic dispatch. It creates quite a bit of overhead compared to normal methods, which are resolved using static dispatch.
Since a big part of C++ is giving the programmer the choice of what to do, you can choose between static and dynamic dispatch.

C++'s method lookup mechanism won't allow for polymorphism if the method is non-virtual. Making methods non-virtual avoids that overhead and confusion.
Have a look at this question and its answers.

If a method is not virtual, the compiler knows at compile time the address in memory where the method's code will be located, and can use it right away. If a method is virtual, the implementation to call has to be determined at runtime, based on the object's type. That adds overhead on every call. So, by making a method non-virtual, you make it more efficient.
It should be mentioned that in some languages it's the other way around: the methods are "virtual" by default, but you can explicitly mark them as "non-virtual" (usually called final).
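A minimal sketch of the two kinds of dispatch in C++ (the Widget names are illustrative; note that since C++11 an already-virtual method can likewise be sealed with final):
struct Widget {
    virtual ~Widget() = default;
    void fast() {}              // non-virtual: call target fixed at compile time
    virtual void flexible() {}  // virtual: target looked up at runtime
};
void use(Widget& w) {
    w.fast();      // static dispatch; can be inlined
    w.flexible();  // dynamic dispatch through the vtable
}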

Non-virtual methods can add functionality specific only to the derived class. For example:
#include <string>

class animal {
public:
    virtual ~animal() = default;
    virtual std::string name() = 0;
};

class rhino : public animal {
public:
    std::string name() override { return "Rhino"; }
    int getHornSize() { return 10; } // non-virtual method: functionality specific to the rhino class
};
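Usage might look like the following sketch: name() dispatches dynamically through the base reference, while getHornSize() is only reachable through the derived type.
#include <iostream>
int main() {
    rhino r;
    animal& a = r;
    std::cout << a.name() << '\n';        // "Rhino", via dynamic dispatch
    std::cout << r.getHornSize() << '\n'; // 10; non-virtual, rhino-specific
}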

Related

Why aren't all functions virtual in C++? [duplicate]

Is there any real reason not to make a member function virtual in C++? Of course, there's always the performance argument, but that doesn't seem to stick in most situations since the overhead of virtual functions is fairly low.
On the other hand, I've been bitten a couple of times with forgetting to make a function virtual that should be virtual. And that seems to be a bigger argument than the performance one. So is there any reason not to make member functions virtual by default?
One way to read your question is "Why doesn't C++ make every function virtual by default, unless the programmer overrides that default?" Without consulting my copy of The Design and Evolution of C++: this would add extra storage to every class unless every member function were made non-virtual. It seems to me this would have required more effort in the compiler implementation, and it would have slowed the adoption of C++ by providing fodder to the performance-obsessed (I count myself in that group.)
Another way to read your question is "Why don't C++ programmers make every function virtual unless they have very good reasons not to?" The performance excuse is probably the reason. Depending on your application and domain, this might or might not be a good one. For example, part of my team works on market data ticker plants. At 100,000+ messages/second on a single stream, the virtual function overhead would be unacceptable. Other parts of my team work on complex trading infrastructure. Making most functions virtual is probably a good idea in that context, as the extra flexibility beats the micro-optimization.
Stroustrup, the designer of the language, says:
Because many classes are not designed to be used as base classes. For example, see class complex.
Also, objects of a class with a virtual function require space needed by the virtual function call mechanism - typically one word per object. This overhead can be significant, and can get in the way of layout compatibility with data from other languages (e.g. C and Fortran).
See The Design and Evolution of C++ for more design rationale.
There are several reasons.
First, performance: Yes, the overhead of a virtual function is relatively low seen in isolation. But it also prevents the compiler from inlining, and that is a huge source of optimization in C++. The C++ standard library performs as well as it does because it can inline the dozens and dozens of small one-liners it consists of. Additionally, a class with virtual methods is not a POD datatype, and so a lot of restrictions apply to it. It can't be copied just by memcpy'ing, it becomes more expensive to construct, and takes up more space. There are a lot of things that suddenly become illegal or less efficient once a non-POD type is involved.
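A quick way to check the trivially-copyable point on your own types is a sketch like this (struct names are illustrative; uses C++11 type traits):
#include <type_traits>

struct Plain   { int x; };
struct Virtual { int x; virtual void f() {} };

static_assert(std::is_trivially_copyable<Plain>::value,
              "safe to memcpy");
static_assert(!std::is_trivially_copyable<Virtual>::value,
              "a vtable pointer makes it non-trivially copyable");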
And second, good OOP practice. The point in a class is that it makes some kind of abstraction, hides its internal details, and provides a guarantee that "this class will behave so and so, and will always maintain these invariants. It will never end up in an invalid state".
That is pretty hard to live up to if you allow others to override any member function. The member functions you defined in the class are there to ensure that the invariant is maintained. If we didn't care about that, we could just make the internal data members public, and let people manipulate them at will. But we want our class to be consistent. And that means we have to specify the behavior of its public interface. That may involve specific customizability points, by making individual functions virtual, but it almost always also involves making most methods non-virtual, so that they can do the job of ensuring that the invariant is maintained. The non-virtual interface idiom is a good example of this:
http://www.gotw.ca/publications/mill18.htm
Third, inheritance isn't often needed, especially not in C++. Templates and generic programming (static polymorphism) can in many cases do a better job than inheritance (runtime polymorphism). Yes, you sometimes still need virtual methods and inheritance, but it is certainly not the default. If it is, you're Doing It Wrong. Work with the language, rather than try to pretend it was something else. It's not Java, and unlike Java, in C++ inheritance is the exception, not the rule.
I'll ignore performance and memory cost, because I have no way to measure them for the "in general" case...
Classes with virtual member functions are non-POD. So if you want to use your class in low-level code which relies on it being POD, then (among other restrictions) any member functions must be non-virtual.
Examples of things you can portably do with an instance of a POD class:
copy it with memcpy (provided the target address has sufficient alignment)
access fields with offsetof()
in general, treat it as a sequence of char
...and that's about it. I'm sure I've forgotten something.
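A small sketch of those operations (the Point type is illustrative):
#include <cstddef>
#include <cstring>

struct Point { int x, y; }; // POD: no virtual functions

int main() {
    Point a{1, 2}, b;
    std::memcpy(&b, &a, sizeof a);        // legal for trivially copyable types
    std::size_t off = offsetof(Point, y); // well-defined for standard-layout types
    const char* bytes = reinterpret_cast<const char*>(&a); // view as raw bytes
    (void)off; (void)bytes;
    return b.y; // 2: the bytes were copied
}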
Other things people have mentioned that I agree with:
Many classes are not designed for inheritance. Making their methods virtual would be misleading, since it implies child classes might want to override the method, and there shouldn't be any child classes.
Many methods are not designed to be overridden: same thing.
Also, even when things are intended to be subclassed / overridden, they aren't necessarily intended for run-time polymorphism. Very occasionally, despite what OO best practice says, what you want inheritance for is code reuse. For example if you're using CRTP for simulated dynamic binding. So again you don't want to imply your class will play nicely with runtime polymorphism by making its methods virtual, when they should never be called that way.
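A minimal CRTP sketch of that "simulated dynamic binding" (the Shape/Circle names are illustrative): the base calls into the derived class with no virtual dispatch involved.
template <typename Derived>
struct Shape {
    double area() const {
        // Resolved at compile time, not through a vtable
        return static_cast<const Derived&>(*this).area_impl();
    }
};

struct Circle : Shape<Circle> {
    double radius = 1.0;
    double area_impl() const { return 3.141592653589793 * radius * radius; }
};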
In summary, things which are intended to be overridden for runtime polymorphism should be marked virtual, and things which aren't, shouldn't be. If you find that almost all your member functions are intended to be virtual, then mark them virtual unless there's a reason not to. If you find that most of your member functions are not intended to be virtual, then don't mark them virtual unless there's a reason to do so.
It's a tricky issue when designing a public API, because flipping a method from one to the other is a breaking change, so you have to get it right the first time. But you don't necessarily know, before you have any users, whether your users are going to want to "polymorph" your classes. Ho hum. The STL container approach, of defining concrete classes and not supporting inheritance at all, is safe but sometimes requires users to do more typing.
The following post is mostly opinion, but here goes:
Object oriented design is three things, and encapsulation (information hiding) is the first of these things. If a class design is not solid on this, then the rest doesn't really matter very much.
It has been said before that "inheritance breaks encapsulation" (Alan Snyder, '86). A good discussion of this is present in the Gang of Four design patterns book. A class should be designed to support inheritance in a very specific manner. Otherwise, you open up the possibility of misuse by inheritors.
I would make the analogy that making all of your methods virtual is akin to making all of your members public. A bit of a stretch, I know, but that's why I used the word 'analogy'.
As you are designing your class hierarchy, it may make sense to write a function that should not be overridden. One example is if you are doing the "template method" pattern, where you have a public method that calls several private virtual methods. You would not want derived classes to override that; everyone should use the base definition.
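A minimal sketch of that "template method" shape (the Exporter names are illustrative): the public non-virtual method fixes the algorithm, and the private virtuals are the hook points.
class Exporter {
public:
    virtual ~Exporter() = default;
    void run() {       // non-virtual: everyone uses this base definition
        open();
        write();
        close();
    }
private:               // private virtuals can still be overridden by derived classes
    virtual void open()  {}
    virtual void write() = 0; // the step derived classes must supply
    virtual void close() {}
};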
There is no "final" keyword, so the best way to communicate to other developers that a method should not be overridden is to make it non-virtual. (other than easily ignored comments)
At the class level, making the destructor non-virtual communicates that the class should not be used as a base class, such as the STL containers.
Making a method non-virtual communicates how it should be used.
The Non-Virtual Interface idiom makes use of non-virtual methods. For more information please refer to Herb Sutter "Virtuality" article.
http://www.gotw.ca/publications/mill18.htm
And comments on the NVI idiom:
http://www.parashift.com/c++-faq-lite/strange-inheritance.html#faq-23.3
http://accu.org/index.php/journals/269 [See sub-section]

C++ late binding (dynamic binding) [duplicate]

I've recently read about the Dynamic Dispatch on Wikipedia and couldn't understand the difference between dynamic dispatch and late binding in C++.
When is each of these mechanisms used?
The exact quote from Wikipedia:
Dynamic dispatch is different from late binding (also known as dynamic binding). In the context of selecting an operation, binding refers to the process of associating a name with an operation. Dispatching refers to choosing an implementation for the operation after you have decided which operation a name refers to. With dynamic dispatch, the name may be bound to a polymorphic operation at compile time, but the implementation not be chosen until runtime (this is how dynamic dispatch works in C++). However, late binding does imply dynamic dispatching since you cannot choose which implementation of a polymorphic operation to select until you have selected the operation that the name refers to.
A fairly decent answer to this is actually incorporated into a question on late vs. early binding on programmers.stackexchange.com.
In short, late binding refers to the object side of an evaluation, while dynamic dispatch refers to the function side. In late binding, it is the type of the variable that varies at runtime. In dynamic dispatch, it is the function or subroutine being executed that varies.
In C++, we don't really have late binding, because the type is known (not necessarily the end of the inheritance hierarchy, but at least a formal base class or interface). But we do have dynamic dispatch via virtual methods and polymorphism.
The best example I can offer for late-binding is the untyped "object" in Visual Basic. The runtime environment does all the late-binding heavy lifting for you.
Dim obj
' ...initialize obj, then...
obj.DoSomething()
The compiler will actually code the appropriate execution context for the runtime engine to perform a named lookup of the method called DoSomething and, if it is discovered with properly matching parameters, actually execute the underlying call. In reality, something about the type of the object is known (it inherits from IDispatch and supports GetIDsOfNames(), etc.), but as far as the language is concerned, the type of the variable is utterly unknown at compile time, and it has no idea whether DoSomething is even a method on whatever obj actually is until runtime reaches the point of execution.
I won't bother dumping a C++ virtual interface et al., as I'm confident you already know what they look like. I hope it is obvious that the C++ language simply can't do this. It is strongly typed. It can (and obviously does) do dynamic dispatch via the polymorphic virtual method feature.
In C++, both are the same.
In C++, there are two kinds of binding:
static binding — which is done at compile-time.
dynamic binding — which is done at runtime.
Dynamic binding, since it is done at runtime, is also referred to as late binding, and static binding is sometimes referred to as early binding.
Using dynamic binding, C++ supports runtime polymorphism through virtual functions (and function pointers); using static binding, all other function calls are resolved.
Late binding is calling a method by name at runtime.
You don't really have this in C++, except when importing functions from a DLL.
An example of that would be GetProcAddress().
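A sketch of that kind of runtime, by-name lookup using the Windows API (the DLL and function names here are made up for illustration):
#include <windows.h>

typedef void (*DoSomethingFn)();

int main() {
    HMODULE lib = LoadLibraryA("mylib.dll"); // hypothetical library
    if (!lib) return 1;
    DoSomethingFn f = reinterpret_cast<DoSomethingFn>(
        GetProcAddress(lib, "DoSomething")); // name resolved at runtime
    if (f) f();
    FreeLibrary(lib);
    return 0;
}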
With dynamic dispatch, the compiler emits code that selects the right implementation of the method at runtime. This is usually done by creating a virtual table.
The link itself explained the difference:
Dynamic dispatch is different from late binding (also known as dynamic binding). In the context of selecting an operation, binding refers to the process of associating a name with an operation. Dispatching refers to choosing an implementation for the operation after you have decided which operation a name refers to.
and
With dynamic dispatch, the name may be bound to a polymorphic operation at compile time, but the implementation not be chosen until runtime (this is how dynamic dispatch works in C++). However, late binding does imply dynamic dispatching since you cannot choose which implementation of a polymorphic operation to select until you have selected the operation that the name refers to.
But they're mostly equivalent in C++: you can do dynamic dispatch via virtual functions and vtables.
C++ uses early binding and offers both dynamic and static dispatch. The default form of dispatch is static. To get dynamic dispatch you must declare a method as virtual.
Binding refers to the process of associating a name with an operation.
The main thing here is the function's parameters: they decide which operation the name refers to.
Dispatching refers to choosing an implementation for the operation after you have decided which operation a name refers to.
Control is then dispatched to the implementation matching those parameters.
http://en.wikipedia.org/wiki/Dynamic_dispatch
Hope this helps.
Let me give you an example of the differences, because they are NOT the same. Yes, dynamic dispatch lets you choose the correct method when you are referring to an object through a superclass pointer, but that magic is very specific to that class hierarchy, and you have to make some declarations in the base class to make it work (abstract methods fill out the vtables, since the index of a method in the table can't change between specific types). So you can call methods on Tabby and Lion and Tiger all through a generic Cat pointer, and even have arrays of Cats filled with Lions and Tigers and Tabbies. The program knows what indexes those methods occupy in the object's vtable at compile time (static/early binding), even though the method implementation is selected at runtime (dynamic dispatch).
Now, let's implement an array that contains Lions and Tigers and Bears! (Oh my!) Assuming we don't have a base class called Animal, in C++ you are going to have significant work to do, because the compiler isn't going to let you do any dynamic dispatch without a common base class. The indexes in the vtables need to match up, and that can't be done between unrelated classes. You'd need a vtable big enough to hold the virtual methods of all classes in the system. C++ programmers rarely see this as a limitation, because you have been trained to think a certain way about class design. I'm not saying it's better or worse.
With late binding, the runtime takes care of this without a common base class. There is normally a hash-table system used to find methods in classes, with a cache system used in the dispatcher. Whereas in C++ the compiler knows all the types, in a late-bound language the objects themselves know their type (it's not typeless; the objects themselves know exactly what they are in most cases). This means I can have arrays of multiple types of objects if I want (Lions and Tigers and Bears). And you can implement message forwarding and prototyping (which allows behaviors to be changed per object without changing the class) and all sorts of other things in ways that are much more flexible and lead to less code overhead than in languages that don't support late binding.
Ever programmed in Android and used findViewById()? You almost always end up casting the result to get the right type, and casting is basically lying to the compiler and giving up all the static type-checking goodness that is supposed to make static languages superior. Of course, you could instead have findTextViewById(), findEditTextById(), and a million others so that your return types match, but that throws polymorphism out the window; arguably the whole basis of OOP. A late-bound language would probably let you simply index by an ID, treat it like a hash table, and not care what type was being indexed or returned.
Here's another example. Let's say you have your Lion class and its default behavior is to eat you when it sees you. In C++, if you wanted a single "trained" lion, you would need to make a new subclass. Prototyping would let you simply change the one or two methods of that particular Lion that need to be changed. Its class and type don't change. C++ can't do that. This is important because when you have a new AfricanSpottedLion that inherits from Lion, you can train it too. Prototyping doesn't change the class structure, so it can be extended. This is normally how these languages handle issues that would otherwise require multiple inheritance, or perhaps multiple inheritance is how you handle a lack of prototyping.
FYI, Objective-C is C with Smalltalk's message passing added, and Smalltalk is the original OOP language; both are late-bound with all the features above and more. Late-bound languages may be slightly slower from a micro-level standpoint, but they can often allow code to be structured in a way that is more efficient at a macro level; it all boils down to preference.
Given that wordy Wikipedia definition I'd be tempted to classify dynamic dispatch as the late binding of C++
struct Base {
    virtual void foo(); // Dynamic dispatch according to Wikipedia definition
    void bar();         // Static dispatch according to Wikipedia definition
};
Late binding instead, for Wikipedia, seems to mean pointer-to-member dispatch of C++
(this->*mptr)();
where the selection of which operation is being invoked (and not just which implementation) is done at runtime.
In C++ literature, however, late binding is normally used for what Wikipedia calls dynamic dispatch.
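A small runnable sketch of that pointer-to-member case (the Calculator names are illustrative): which operation mptr names, not just which implementation, is decided at runtime.
#include <cstdlib>
#include <iostream>

struct Calculator {
    int add(int x) { return x + 1; }
    int dbl(int x) { return x * 2; }
};

int main() {
    int (Calculator::*mptr)(int) =
        (std::rand() % 2) ? &Calculator::add : &Calculator::dbl;
    Calculator c;
    std::cout << (c.*mptr)(10) << '\n'; // the operation itself was chosen at runtime
}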
Dynamic dispatch is what happens when you use the virtual keyword in C++. So for example:
#include <iostream>

struct Base
{
    virtual int method1() { return 1; }
    virtual int method2() { return 2; } // not overridden
};

struct Derived : public Base
{
    int method1() override { return 3; }
};

int main()
{
    Base* b = new Derived;
    std::cout << b->method1() << std::endl;
}
will print 3, because the method has been dynamically dispatched. The C++ standard is very careful not to specify how exactly this happens behind the scenes, but every compiler under the sun does it the same way. They create a table of function pointers for each polymorphic type (called the virtual table, or vtable), and when you call a virtual method, the "real" method is looked up in the vtable, and that version is called. So you can imagine something like this pseudocode:
struct BaseVTable
{
    int (*method1)() = &Base::method1; // real function address
    int (*method2)() = &Base::method2;
};

struct DerivedVTable
{
    int (*method1)() = &Derived::method1; // overridden
    int (*method2)() = &Base::method2;    // not overridden
};
In this way, the compiler can be sure that a method with a particular signature exists at compile time. However, at run-time, the call might actually be dispatched via the vtable to a different function. Calls to virtual functions are a tiny bit slower than non-virtual calls, because of the extra indirection step.
On the other hand, my understanding of the term late binding is that the function pointer is looked up by name at runtime, from a hash table or something similar. This is the way things are done in Python, JavaScript and (if memory serves) Objective-C. This makes it possible to add new methods to a class at run-time, which cannot directly be done in C++. This is particularly useful for implementing things like mixins. However, the downside is that the run-time lookup is generally considerably slower than even a virtual call in C++, and the compiler is not able to perform any compile-time type checking for the newly-added methods.
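A sketch of what that by-name lookup could look like if emulated in C++ with a hash table (all names here are illustrative; real late-bound runtimes are far more elaborate):
#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>

struct DynamicObject {
    std::unordered_map<std::string, std::function<void()>> methods;
    void call(const std::string& name) {
        auto it = methods.find(name); // resolved by name, at runtime
        if (it != methods.end()) it->second();
        else std::cout << "no such method: " << name << '\n';
    }
};

int main() {
    DynamicObject obj;
    obj.methods["greet"] = [] { std::cout << "hello\n"; }; // added at runtime
    obj.call("greet");
}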
This question might help you.
Dynamic dispatch generally refers to multiple dispatch.
Consider the below example. I hope it might help you.
class Base2;
class Derived2; // Derived2 is a child of Base2

class Base1 {
public:
    virtual void function1(Base2 *);
    virtual void function1(Derived2 *);
};

class Derived1 : public Base1 {
public:
    // overrides
    virtual void function1(Base2 *);
    virtual void function1(Derived2 *);
};
Consider the case below.
Derived1 *d = new Derived1;
Base2 *b = new Derived2;
// Now, which function1 will be called?
d->function1(b);
It will call the function1 taking Base2*, not Derived2*, because the overload is selected from the static type of the argument. This is due to the lack of dynamic multiple dispatch.
Late binding is one of the mechanism to implement dynamic single dispatch.
I suppose the meaning is this: when you have two classes B and C that inherit from the same parent class A, a pointer of the parent type (A) can hold either of the child types. The compiler cannot know which type the pointer holds at any given time, because it can change as the program runs.
There are special functions to determine the type of a given object at a given time, like instanceof in Java, or if (typeid(b) == typeid(A)) ... in C++.
In C++, dynamic dispatch and late binding are the same. Basically, the value of a single object determines the piece of code invoked at runtime. In languages like C++ and Java, dynamic dispatch is more specifically dynamic single dispatch, which works as mentioned above. In this case, since the binding occurs at runtime, it is also called late binding. Languages like Smalltalk allow dynamic multiple dispatch, in which the runtime method is chosen based on the identities or values of more than one object.
In C++ we don't really have late binding, because the type information is known. Thus, in the C++ or Java context, dynamic dispatch and late binding are the same. Actual, fully late binding exists, I think, in languages like Python, where the lookup is based on the method name rather than the type.

Why should I mark all methods virtual in C++? Is there a trade-off?

I know why and how virtual methods work, and most of the time people tell me I should always mark a method virtual, but I don't understand why if I'm not going to override it. And I also know there's a small memory cost.
Please explain why I should mark all methods virtual and what the trade-off is.
Code example:
class Point
{
    int x, y;
public:
    virtual void setX(int i);
    virtual void setY(int i);
};
(This question is not the same as Should I mark all methods virtual? because I want to know the trade-off, and because the language in this case is C++, not C#.)
OBS: I'm sorry if there's any grammar error, English is not my native language.
No, you should not "mark all methods as virtual".
If you want the method to be virtual, then mark it as such. If not, leave the keyword out.
There is an overhead for virtual methods compared to regular ones. If you want to read more about this, check out the Wikipedia page about vtables.
The real reason to make member functions non-virtual is to enforce class invariants.
Advice to make all member functions virtual generally means that either:
The people giving the advice don't understand the class, or
the people giving the advice don't understand OO design.
Yes, there are a few cases (e.g., some abstract base classes, where the only class invariant is the existence of specific functions) in which all the functions should be virtual. Those are the exception though. In most classes, virtual functions should be restricted to those that you really intend to allow derived classes to provide new/different behavior.
As for the discussion of things like vtables and the overhead of virtual function calls, I'd say they're correct as far as they go, but they miss the really big point. Whether a particular function should or shouldn't be virtual is primarily a question of class design and only secondarily a question of function call overhead. The two don't do the same thing, so trying to compare overhead rarely makes sense.
That is not the case; i.e., if you don't need a virtual function, then don't use it. As per Bjarne Stroustrup: you only pay for what you use.
In C++:
Virtual functions have a slight performance penalty. Normally it is too small to make any difference, but in a tight loop it might be significant.
A virtual function increases the size of each object by one pointer. Again, this is typically insignificant, but if you create millions of small objects it could be a factor.
Classes with virtual functions are generally meant to be inherited from. The derived classes may replace some, all, or none of the virtual functions. This can create additional complexity, and complexity is the programmer's mortal enemy. For example, a derived class may poorly implement a virtual function. This may break the part of the base class that relies on the virtual function.
One of C++'s basic principles is that you don't pay for what you don't use. Virtual functions cost more than normal member functions in both time and space. Therefore you shouldn't use them indiscriminately, without regard to whether you'll actually ever need them.
Making methods virtual has slight costs (more code, more complexity, larger binaries, slower method calls), and if the class is never inherited from, it brings no benefit. Classes need to be designed for inheritance; otherwise inheriting from them is just begging to shoot yourself in the foot. So no, you should not always make every method virtual. The people who tell you this are probably just too inheritance-happy.
It is not true that all functions should be marked as virtual.
Indeed, there's a pattern for enforcing pre/postconditions which explicitly requires that public members are not virtual. It works as follows:
class Whatever
{
public:
    int frobnicate(int);
protected:
    virtual int do_frobnicate(int);
};

int Whatever::frobnicate(int x)
{
    check_preconditions(x);
    int result = do_frobnicate(x);
    check_postconditions(x, result);
    return result;
}
Since derived classes cannot override the public function, they cannot remove the pre/postcondition checks. They can, however, override the protected do_frobnicate which does the actual work.
(Edit - I got to this question by way of a duplicate C# question, but the answer is still useful, I think! Edited to reflect that:)
Actually, C# has a good "official" reason: https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/versioning-with-the-override-and-new-keywords
The first sentence there is:
The C# language is designed so that versioning between base and derived classes in different libraries can evolve and maintain backward compatibility.
So if you're writing a class and it's possible end-users will make derived classes, and it's possible different versions will get out of sync...all of a sudden it starts to be important. If you have carefully protected core aspects of your code, then if you update things, people can still use the old derived class (hopefully).
On the other hand, if you are okay with no one being able to use derived classes until their authors have updated to your newest version, everything can be virtual. I have seen this scenario in several "early access" games that allow modding: when the game version increases, all of a sudden a lot of mods are broken because they relied on things working one way... and they changed. (Okay, not all changes are related to this, but some are.)
It really depends on your usage scenario. If people can keep using your old version, maybe they don't care if you've updated it and are happy to keep using it with their derived classes. In a large business scenario, making everything virtual may very well be a recipe for breaking many things at once when someone updates something.
Does this apply to C++ as well? I don't see why not - C++ is also used for massive projects and would also face the dangers of multiple simultaneous versions.

If a class might be inherited, should every function be virtual?

In C++, a coder doesn't know whether other coders will inherit his class. Should he make every function in that class virtual? Are there any drawbacks? Or is it just not acceptable at all?
In C++, you should only make a class inheritable if you intend for it to be used polymorphically. The way you treat polymorphic objects in C++ is very different from how you treat other objects. You don't tend to put polymorphic classes on the stack, or pass them to or return them from functions by value, since this can lead to slicing. Polymorphic objects tend to be heap-allocated, passed around and returned by pointer or by reference, etc.
If you design a class to not be inherited from and then inherit from it, you cause all sorts of problems. If the destructor isn't marked virtual, you can't delete the object through a base class pointer without causing undefined behavior. Without the member functions marked virtual, they can't be overridden in a derived class.
As a general rule in C++, when you design a class, determine whether you want it to be inherited from. If you do, mark the appropriate functions virtual and give it a virtual destructor. You might also disable the copy assignment operator to avoid slicing. Similarly, if you want the class not to be inheritable, don't give it any of these functions. In most cases it's a logic error to inherit from a class that wasn't designed to be inherited from, and in most of the cases where you'd want to, you can use composition instead of inheritance to achieve the same effect.
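A sketch of a class designed to be inherited from, along the lines described above (the Shape name is illustrative): a virtual destructor for safe deletion through a base pointer, and copying disabled to prevent slicing.
class Shape {
public:
    virtual ~Shape() = default;        // safe: delete through Shape*
    Shape(const Shape&) = delete;      // prevent slicing copies
    Shape& operator=(const Shape&) = delete;
    virtual double area() const = 0;   // the polymorphic interface
protected:
    Shape() = default;                 // constructible only by derived classes
};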
No, not usually.
A non-virtual function enforces class-invariant behavior. A virtual function doesn't. As such, the person writing the base class should think about whether the behavior of a particular function is/should be class invariant or not.
While it's possible for a design to allow all behaviors to vary in derived classes, it's fairly unusual. It's usually a pretty good clue that the person who wrote the class either didn't think much about its design, or lacked the resolve to make a decision.
In C++ you design your class to be used either as a value type or a polymorphic type. See, for example, C++ FAQ.
If you are making a class to be used by other people, you should put a lot of thought into your interface and try to work out how your class will be used. Then make the decisions like which functions should be virtual.
Or, better yet, write a test case for your class, using it how you expect it to be used, and then make the interface work for that. You might be surprised by what you find out doing it. Things you thought were absolutely necessary might turn out to be rarely needed, and things you thought were not going to be used might turn out to be the most useful methods. Doing it this way around will save you time not doing unnecessary work in the long run and will end up producing solid designs.
Jerry Coffin and Dominic McDonnell have already covered the most important points.
I'll just add an observation: in the time of MFC (the mid-1990s) I was very annoyed with the lack of ways to hook into things. For example, the documentation suggested copying MFC's source code for printing and modifying it, instead of overriding behavior, because nothing was virtual there.
There are of course a zillion-and-one ways to provide "hooks", but virtual methods are one easy way. They're needed in badly designed classes, so that the client code can fix things, but in those badly designed classes the methods are not virtual. For classes with better design there is not so much need to override behavior, and so for those classes making methods virtual by default (and non-virtual only as an active choice) can be counterproductive; as Jerry remarked, virtuals provide opportunities for derived classes to screw up.
There are design patterns that can be employed to minimize the possibilities of screw-ups.
For example, wrapping internal virtuals in exposed non-virtual methods with sanity checks, or using decoupled event handling (where appropriate) instead of virtuals.
Cheers & hth.,
When you create a class, and you want that class to be used polymorphically you have to consider that the class has two different interfaces. The user interface is defined by the set of public functions that are available in your base class, and that should pretty much cover all operations that users want to perform on objects of your class. This interface is defined by the access qualifiers, and in particular the public qualifier.
There is a second interface, that defines how your class is to be extended. At that level you have to think on what behavior you want to be overridden by extending classes, and what elements of your object you want to provide to extending classes. You offer access to derived classes by means of the protected qualifier, and you offer extension points by means of virtual functions.
You should try to follow the Non-Virtual Interface idiom whenever possible. That idiom (google for it) basically tries to fully separate the two interfaces by not having public virtual functions. Users call non-virtual functions, and those in turn call on configurable functionalities by means of protected/private virtual functions. This clearly separates extension points from the class interface.
There is a single case where virtual has to be part of the user interface: the destructor. If you want to offer your users the ability to destroy derived objects through pointers to the base, then you have to provide a virtual destructor. Otherwise you just provide a protected non-virtual one.
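A sketch of the two destructor options just described (the class names are illustrative):
class PolymorphicBase {
public:
    virtual ~PolymorphicBase() = default; // users may delete through a base pointer
};

class MixinBase {
protected:
    ~MixinBase() = default; // non-virtual and protected: no deletion through the base
};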
He should code the functions as they are; he shouldn't make them virtual at all in the circumstances you specify.
The reasons being:
1. The class coder obviously has a certain use in mind for the functions he is writing.
2. The inheriting class may or may not make use of these functions, as its requirements dictate.
3. Any function may be redefined (hidden) in a derived class without any errors.

C++ : implications of making a method virtual

Should be a newbie question...
I have existing code in an existing class, A, that I want to extend in order to override an existing method, A::f().
So now I want to create class B to override f(), since I don't want to just change A::f() because other code depends on it.
To do this, I need to change A::f() to a virtual method, I believe.
My question is: besides allowing a method to be dynamically invoked (to use B's implementation and not A's), are there any other implications to making a method virtual? Am I breaking some kind of good programming practice? Will this affect any other code trying to use A::f()?
Please let me know.
Thanks,
jbu
edit: my question was more along the lines of: is there anything wrong with making someone else's method virtual? Even though you're not changing their implementation, you're still having to go into someone's existing code and change the declaration.
If you make the function virtual in the base class, anything that derives from it will also have it virtual.
Once it is virtual, if you create an instance of A, it will still call A::f.
If you create an instance of B, store it in a pointer of type A*, and then call f through that pointer, it will call B::f.
As for side effects, there probably won't be any, other than a slight (usually unnoticeable) performance loss.
There is one very small side effect as well: there could be a class C that also derives from A and implements C::f, expecting that when f is called through an A*, A::f is the one that runs. But this is not very common.
More than likely, if C exists, it does not implement C::f at all, in which case everything is fine.
Be careful, though: if you are using an already compiled library and you are modifying its header files, what you are expecting to work probably will not. You will need to recompile the library's source files against the modified headers.
You could consider doing the following to avoid side effects:
Create a type A2 that derives from A and make its f virtual.
Use pointers of type A2 instead of A.
Derive B from type A2.
In this way, anything that used A is guaranteed to keep working the same way. A sketch of the idea follows.
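A sketch of that workaround, under the assumption that A::f was originally non-virtual (A, A2, B and f are the names from the answer; function bodies are omitted):
class A {
public:
    void f(); // unchanged: existing users keep static dispatch
};

class A2 : public A {
public:
    virtual ~A2() = default;
    virtual void f(); // hides A::f and introduces a virtual override point
};

class B : public A2 {
public:
    void f() override;
};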
Depending on what you need, you may also be able to use a has-a relationship instead of an is-a.
There is a small implied performance penalty of a vtable lookup every time a virtual function is called. If it were not virtual, function calls would be direct, since the code location is known at compile time, whereas at runtime a virtual function's address must be fetched from the vtable of the object you're calling it on.
To do this, I need to change A::f() to a virtual method, I believe.
Nope, you do not need to make it virtual in order to redefine it (without virtual, B::f merely hides A::f). However, if you are using polymorphism you do need to, i.e. if you have a lot of different classes deriving from A but stored as pointers to A.
There's also a memory overhead for virtual functions because of the vtable (apart from what spoulson mentioned)
There are other ways of accomplishing your goal. Does it make sense for B to be an A? For example, it makes sense for a Cat to be an Animal, but not for a Cat to be a Dog. Perhaps both A and B should derive from a base class, if they are related.
Is there just common functionality you can factor out? It sounds to me like you'll never be using these classes polymorphically, and just want the functionality. I would suggest you take that common functionality out and then make your two separate classes.
As for cost, if you're using A and B directly, the compiler will bypass any virtual dispatching and just make direct function calls, as if they were never virtual. If you pass a B into a place expecting an A (as a reference or pointer), then it will have to dispatch.
There are two performance hits when speaking about virtual methods:
vtable dispatching, which is nothing to really worry about;
virtual function calls generally cannot be inlined, which can be much worse than the previous one. Function inlining is something that can really speed things up in some situations, and it usually cannot happen with a virtual call (unless the compiler can prove which implementation will be invoked and devirtualize the call).
How kosher it is to change somebody else's code depends entirely on the local mores and customs. It isn't something we can answer for you.
The next question is whether the class was designed to be inherited from. In many cases, classes are not, and changing them to be useful base classes, without changing other aspects, can be tricky. A non-base class is likely to have everything private except the public functions, so if you need to access more of the internals in B you'll have to make more modifications to A.
If you're going to use class B instead of class A, then you can just redefine (hide) the function without making it virtual. If you're going to create objects of class B and refer to them through pointers to A, then you do need to make f() virtual. You should also make the destructor virtual. The sketch below shows the difference.
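A minimal sketch of that distinction, using the A, B and f from the question (empty bodies added for illustration):
struct A { void f() {} };
struct B : A { void f() {} }; // hides A::f; no virtual involved

int main() {
    B b;
    b.f();      // calls B::f
    A* p = &b;
    p->f();     // calls A::f: without virtual there is no dynamic dispatch
}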
It is good programming practice to use virtual methods where they are deserved. Virtual methods have many implications for how sensible your C++ class is.
Without virtual functions you cannot create interfaces in C++. An interface is a class consisting entirely of pure virtual functions.
However, sometimes using virtual methods is not good. It doesn't always make sense to use a virtual method to change the functionality of an object, since it implies subclassing. Often you can just change the functionality using function objects or function pointers.
As mentioned, a virtual function causes a table to be created that the running program consults to decide which function to call.
C++ has many gotchas, which is why one needs to be very aware of what they want to do and what the best way of doing it is. There aren't as many ways of doing something as it seems, compared to runtime-dynamic OO programming languages such as Java or C#. Some ways will be outright wrong, or will eventually lead to undefined behavior as your code evolves.
Since you have asked a very good question :D, I suggest you buy Scott Meyers's book Effective C++ and Bjarne Stroustrup's book The C++ Programming Language. These will teach you the subtleties of OO in C++, particularly when to use which feature.
If that's the first virtual method the class is going to have, you're making it no longer a POD. This can break things, although the chances of that are slim.
POD: http://en.wikipedia.org/wiki/Plain_old_data_structures