Here is what I am talking about:
// some guy wrote this, used as a Policy with templates
struct MyWriter {
    void write(std::vector<char> const& data) {
        // ...
    }
};
In some existing code, people did not use templates, but interfaces + type erasure:
class IWriter {
public:
    virtual ~IWriter() {}
public:
    virtual void write(std::vector<char> const& data) = 0;
};
Someone else wants a class to be usable with both approaches and writes:
class MyOwnClass : private MyWriter, public IWriter {
    // other stuff
};
MyOwnClass is implemented-in-terms-of MyWriter. Why don't MyOwnClass's inherited member functions implement the interface of IWriter automatically? Instead, the user has to write forwarding functions that do nothing but call the base class versions, as in:
class MyOwnClass : private MyWriter, public IWriter {
public:
    void write(std::vector<char> const& data) {
        MyWriter::write(data);
    }
};
I know that in Java, when you have a class that implements an interface and derives from a class that happens to have suitable methods, the base class automatically implements the interface for the derived class.
Why doesn't C++ do that? It seems like a natural thing to have.
This is multiple inheritance, and there are two inherited functions with the same signature, each of which has an implementation. That's where C++ is different from Java.
Calling write on an expression whose static type is MyOwnClass would therefore be ambiguous as to which of the inherited functions was desired.
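A minimal sketch of that ambiguity, using the classes from the question without the forwarding function (call_write is a made-up helper):

void call_write(MyOwnClass& c, std::vector<char> const& data) {
    c.write(data); // error: request for member 'write' is ambiguous;
                   // name lookup finds both MyWriter::write and IWriter::write
}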
If write is only called through base class pointers, then defining write in the derived class is NOT necessary, contrary to the claim in the question. Now that the question changed to include a pure specifier, implementing that function in the derived class is necessary to make the class concrete and instantiable.
MyWriter::write cannot be used for the virtual call mechanism of MyOwnClass, because the virtual call mechanism requires a function that accepts an implicit IWriter* const this, while MyWriter::write accepts an implicit MyWriter* const this. A new function is required, which must take into account the address difference between the IWriter subobject and the MyWriter subobject.
It would be theoretically possible for the compiler to create this new function automatically, but it would be fragile, since a change in a base class could suddenly cause a new function to be chosen for forwarding. It's less fragile in Java, where only single inheritance is possible (there's only one choice for what function to forward to), but in C++, which supports full multiple inheritance, the choice is ambiguous, and we haven't even started on diamond inheritance or virtual inheritance yet.
Actually, this problem (difference between subobject addresses) is solved for virtual inheritance. But it requires additional overhead that's not necessary most of the time, and a C++ guiding principle is "you don't pay for what you don't use".
Why doesn't C++ do that? It seems like a natural thing to have.
Actually, no, it is an extremely unnatural thing to have.
Please note that my reasoning is based on my own understanding of "common sense" and can be fundamentally flawed as a result.
You see, you have two different methods: the first one in MyWriter, which is non-virtual, and the second one in IWriter, which is virtual. They are completely different despite "looking" similar.
I suggest checking this question. The good thing about non-virtual methods is that, no matter what you do, as long as they don't call virtual methods, their behavior will never change. That is, somebody deriving from your class with non-virtual methods will not break existing methods by masking them. Virtual methods are designed to be overridden. The price of that is that it is possible to break the underlying logic by improperly overriding a virtual method. And this is the root of your problem.
Let's say what you propose is allowed (automatic promotion to virtual with multiple inheritance). There are two possible solutions:
Solution #1
MyWriter::write becomes virtual. Consequences: all existing C++ code in the world becomes easy to break via a typo or name clash. The MyWriter method was not supposed to be overridden initially, so suddenly turning it into a virtual will (by Murphy's law) break the underlying logic of the MyWriter class when somebody derives from MyOwnClass. Which means that silently making MyWriter::write virtual is a bad idea.
Solution #2
MyWriter::write remains non-virtual, BUT it is temporarily used as the implementation of the virtual IWriter::write, until overridden. At first glance there's nothing to worry about, but let's think about it. IWriter implements some kind of concept you had in mind, and it is supposed to do something. MyWriter implements another concept. To plug in MyWriter::write as the IWriter::write method, you need two guarantees:
The compiler must ensure that MyWriter::write does what IWriter::write() is supposed to do.
The compiler must ensure that calling MyWriter::write from IWriter will not break existing functionality in MyWriter code that the programmer expects to use elsewhere.
The thing is that the compiler cannot guarantee that. The functions have a similar name and argument list, but by Murphy's law that means they're probably doing completely different things (sinf and cosf have the same argument list, for example), and it is unlikely that the compiler will be able to predict the future and make sure that at no point in development will MyWriter be changed in such a way that it becomes incompatible with IWriter. So, since the machine can't make a reasonable decision by itself (there's no AI for that), it has to ask YOU, the programmer: "What is it you wish to do?". And you say: "redirect the virtual method into MyWriter::write(). It totally won't break anything. I think.".
And that's why you must specify manually which method you want to use.
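As a side note, which is my observation rather than this answer's: even a using-declaration, the usual tool for pulling in a base-class name, does not count as an overrider, so the explicit forwarder really is required:

class MyOwnClass : private MyWriter, public IWriter {
public:
    using MyWriter::write; // un-hides the name for callers, but does NOT
                           // override IWriter::write; MyOwnClass stays abstract
};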
Doing it automatically would be unintuitive and surprising. C++ does not assume that multiple base classes are related to each other, and protects the user against name collisions between their members by defining nested name specifiers for nonstatic members. Adding implicit declarations to MyOwnClass where signatures from IWriter and MyWriter collide would be antithetical to protecting names.
However, a hypothetical extension in the spirit of C++11 would bring us closer. Consider this:
class MyOwnClass : private MyWriter, public IWriter {
public:
    void write(std::vector<char> const& data) final = MyWriter::write;
};
This mechanism would be safe because it expresses that MyWriter doesn't expect any further overrides, and convenient because it names the function signature that will be "joined" but nothing more. Also, final would be ill-formed if the function weren't implicitly virtual, so it checks that the signature matches the virtual interface.
On the one hand, most interfaces don't just happen to match up this way. Defining this feature to work only with identical signatures would be safe but rarely useful. Defining it as a shortcut for a delegating function body would be useful but fragile. So it might not really be a good feature.
On the other hand, this is a good design pattern to provide functionality which isn't virtual when you don't need it to be. So given this idiom, we might use it to write good code, even if it doesn't match up well with current practices.
Why doesn't C++ do that?
I'm not sure what you're asking here. Could C++ be rewritten to allow this? Yes, but to what end?
Because MyWriter and IWriter are completely different classes, it is illegal in C++ to call a member of MyWriter through an instance of IWriter. The member pointers have completely different types. And just as a MyWriter* is not convertible to an IWriter*, neither is a void (MyWriter::*)(const std::vector<char>&) convertible to a void (IWriter::*)(const std::vector<char>&).
The rules of C++ don't change just because there could be a third class that combines the two. Neither class is a direct parent/child relative of one another. Therefore, they are treated as entirely distinct classes.
Remember: member functions always take an additional parameter: a this pointer to the object they are called on. You cannot call a void (MyWriter::*)(const std::vector<char>&) on an IWriter*. The third class can have a method that casts itself into the proper base class, but it must actually have this method. So either you or the C++ compiler must create it. The rules of C++ require this.
Consider what would have to happen to make this work without a derived-class method.
A function gets an IWriter*. The user calls the write member of it, using nothing more than the IWriter* pointer. So... exactly how can the compiler generate the code to call MyWriter::write? Remember: MyWriter::write needs a MyWriter instance. And there is no relationship between IWriter and MyWriter.
So how exactly could the compiler do the type coercion locally? The compiler would have to check the virtual function to see if the actual function to be called takes IWriter or some other type. If it takes another type, it would have to convert the pointer to its true type, then do another conversion to the type needed by the virtual function. After doing all of that, it would then be able to make the call.
All of this overhead would affect every virtual call. Each one would have to at least check what type the actual function to be called takes. Every call would also have to generate the code to do the type conversions, just in case.
Every virtual function call would have a "get type" and conditional branch in it. Even if it is never possible to trigger that branch. So you would be paying for something regardless of whether you use it or not. That's not the C++ way.
Even worse, a straight v-table implementation of virtual calls is no longer possible. The fastest method of doing virtual dispatch would not be a conforming implementation. The C++ committee is not going to make any change that would make such implementations impossible.
Again, to what end? Just so that you don't have to write a simple forwarding function?
Just make MyWriter derive from IWriter, eliminate the IWriter derivation in MyOwnClass, and move on with life. This should resolve the problem and should not interfere with the template code.
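A sketch of that suggested restructuring, as I read the suggestion (the choice of public inheritance and the empty bodies are mine):

struct MyWriter : public IWriter {
    void write(std::vector<char> const& data) { // now overrides IWriter::write
        // ...
    }
};

class MyOwnClass : public MyWriter {
    // other stuff; MyOwnClass is an IWriter through MyWriter,
    // and MyWriter still works as a policy for the template code
};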
Related
Example:
class IGui {
protected:
    virtual bool OnClicked() { return false; }
    virtual bool OnHover() { return false; }
    virtual bool OnScrollBarChange() { return false; }
    virtual bool OnTextChange() { return false; }
    // ...
};

class IGuiButton : public IGui {
protected:
    virtual bool OnClicked() = 0;
    virtual bool OnHover() {
        // do stuff
        return true;
    }
    // ...
};
The point is having a common interface for all GUI types (where not all virtuals need to be overridden), and then providing a light specialization for a button; but for the button, there must be an override for OnClicked.
Also, I think I should make the ones a button shouldn't override private (so use private inheritance, and use that fancy "using Base::Method;" to make the specific ones protected)?
There are multiple sides to the question. The first one is actually a quite interesting question:
Can a derived class have a pure virtual method that is not pure in the base?
The answer is yes, it can, with the expected semantics (if you expected this to work): a type derived from the intermediate type must implement the virtual function in order not to be an abstract type. This leads to a curious circumstance, where the base is not abstract but the derived type is... which will be surprising. Just for this, I would avoid it in a design.
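A small sketch of that circumstance (the names are hypothetical):

struct Base {
    virtual void f() {}   // has an implementation: Base is concrete
};
struct Mid : Base {
    virtual void f() = 0; // re-declared pure: Mid is abstract
};
struct Leaf : Mid {
    virtual void f() {}   // must be implemented for Leaf to be concrete again
};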
Should you mark as private the members that deriving types should not override?
No, there is no reason or advantage to do that. Whether the member function is public, protected, or private, derived classes can override it. Any code that can call the functions through the base type will still be able to call them by casting to the base. This leads to another strange thing in your design: the base class is filled with protected virtual functions, which means that they are accessible only to derived types. This does not define an interface and cannot be used as such. If a function/class takes a reference to an IGui or an IGuiButton, it will not be able to do much, as there is no public interface. That basically means that no one will be able to call any of the events, unless you are also abusing friendship to provide access to the event handlers, and you should avoid that.
So what is a proper design?
There are different alternatives. I'd recommend that before creating your own square wheel you look at those wheels that were invented in the past: look at different graphical frameworks and libraries and try to understand why they were designed as they are. Look at the differences and try to determine what advantages/disadvantages they bring and which option matches your problem. UI is a domain where there is a lot of prior art, and chances are you will not design from scratch anything better than people in the field have done in the past; you might, but it is much easier to fall into the same pitfalls everyone else fell into before.
I'd have to say I think what you are trying to do is poor design.
Your top level (IGui) "has everything" and then you are effectively taking stuff out as you move down the class hierarchy. The top level would normally have the common stuff and you add the differences as you move down.
You are losing the protections that a good design can give you.
Because of C++'s default of static binding for methods, this affects polymorphic calls.
From Wikipedia:
Although the overhead involved in this dispatch mechanism is low, it may still be significant for some application areas that the language was designed to target. For this reason, Bjarne Stroustrup, the designer of C++, elected to make dynamic dispatch optional and non-default. Only functions declared with the virtual keyword will be dispatched based on the runtime type of the object; other functions will be dispatched based on the object's static type.
So the code:
Polygon* p = new Triangle;
p->area();
provided that area() is a non-virtual function in the Polygon class that is redefined in the Triangle class, the code above will call Polygon's method, which might not be what the developer expects (thanks to the static binding I've introduced).
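A runnable sketch of the described situation (the class bodies and return values are placeholders):

#include <iostream>

struct Polygon {
    double area() { return 0.0; }   // non-virtual: statically bound
};

struct Triangle : Polygon {
    double area() { return 0.5; }   // hides Polygon::area, does not override it
};

int main() {
    Polygon* p = new Triangle;
    std::cout << p->area() << '\n'; // prints 0: Polygon::area is called
    delete p;
}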
So, if I want to write a class to be used by others (e.g. a library), should I make all my functions virtual so that code like the above runs as expected?
The simple answer is: if you intend functions of your class to be overridden for runtime polymorphism, you should mark them as virtual, and if you don't intend so, you shouldn't.
Don't mark your functions virtual just because you feel it imparts additional flexibility; rather, think of your design and the purpose of exposing an interface. For example, if your class is not designed to be inherited from, then making your member functions virtual will be misleading. A good example of this is the Standard Library containers, which are not meant to be inherited from and hence do not have virtual destructors.
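To illustrate the danger being alluded to, a small sketch (IntStack is a made-up name; this illustration is mine, not the answer's):

#include <vector>

struct IntStack : std::vector<int> {}; // tempting, but risky

int main() {
    std::vector<int>* p = new IntStack;
    delete p; // undefined behavior: std::vector's destructor is not virtual
}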
There are any number of reasons not to mark all your member functions virtual, to name some: performance penalties, becoming a non-POD class type, and so on. But if you really intend your class for runtime overriding, then that is its purpose, and it stands above the so-called deficiencies.
Mark it virtual if derived classes should be able to override that method. It's as simple as that.
In terms of memory, the class gets a virtual function table if anything is virtual, so one way to look at it is "please one, please all". Otherwise, as the others say, mark them as virtual if you want them to be overridable, such that calling that method on a base class means that the specialized versions are run.
As a general rule, you should only mark a function virtual if the class is explicitly designed to be used as a base class, and that function is designed to be overridden. In practice, most virtual functions will be pure virtual in the base class. And except in cases of call inversion, where you explicitly don't provide a contract for the overriding function, virtual functions should be private (or at the most protected), and wrapped with non-virtual functions enforcing the contract.
That's basically the idea; actually, if you are writing a parent class, I don't think you'll need every method to be overridden, so just make a method virtual if you think it will be used that way.
In C++, a coder doesn't know whether other coders will inherit his class. Should he make every function in that class virtual? Are there any drawbacks? Or is it just not acceptable at all?
In C++, you should only make a class inheritable from if you intend for it to be used polymorphically. The way that you treat polymorphic objects in C++ is very different from how you treat other objects. You don't tend to put polymorphic classes on the stack, or pass them by or return them from functions by value, since this can lead to slicing. Polymorphic objects tend to be heap-allocated and be passed around and returned by pointer or by reference, etc.
If you design a class to not be inherited from and then inherit from it, you cause all sorts of problems. If the destructor isn't marked virtual, you can't delete the object through a base class pointer without causing undefined behavior. Without the member functions marked virtual, they can't be overridden in a derived class.
As a general rule in C++, when you design the class, determine whether you want it to be inherited from. If you do, mark the appropriate functions virtual and give it a virtual destructor. You might also disable the copy assignment operator to avoid slicing. Similarly, if you want the class not to be inheritable, don't give it any of these functions. In most cases it's a logic error to inherit from a class that wasn't designed to be inherited from, and in most of the cases where you'd want to, you can use composition instead of inheritance to achieve the same effect.
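A sketch of what a class designed to be inherited from might look like, following the advice above (Shape is a hypothetical example; the C++11 = default / = delete syntax is my choice):

class Shape {
public:
    Shape() = default;
    virtual ~Shape() = default;              // safe deletion through Shape*
    Shape(const Shape&) = delete;            // copying disabled to prevent slicing
    Shape& operator=(const Shape&) = delete;
    virtual double area() const = 0;         // the overridable operation
};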
No, not usually.
A non-virtual function enforces class-invariant behavior. A virtual function doesn't. As such, the person writing the base class should think about whether the behavior of a particular function is/should be class invariant or not.
While it's possible for a design to allow all behaviors to vary in derived classes, it's fairly unusual. It's usually a pretty good clue that the person who wrote the class either didn't think much about its design, or lacked the resolve to make a decision.
In C++ you design your class to be used either as a value type or a polymorphic type. See, for example, C++ FAQ.
If you are making a class to be used by other people, you should put a lot of thought into your interface and try to work out how your class will be used. Then make the decisions like which functions should be virtual.
Or better yet, write a test case for your class, using it how you expect it to be used, and then make the interface work for that. You might be surprised what you find out doing it. Things you thought were absolutely necessary might turn out to be rarely needed, and things that you thought were not going to be used might turn out to be the most useful methods. Doing it this way around will save you time not doing unnecessary work in the long run, and will end up producing solid designs.
Jerry Coffin and Dominic McDonnell have already covered the most important points.
I'll just add an observation: in the time of MFC (the mid-1990s) I was very annoyed with the lack of ways to hook into things. For example, the documentation suggested copying MFC's printing source code and modifying it, instead of overriding behavior, because nothing was virtual there.
There are of course a zillion+1 ways to provide "hooks", but virtual methods are one easy way. They're needed in badly designed classes, so that the client code can fix things, but in those badly designed classes the methods are not virtual. For classes with better design there is not so much need to override behavior, and so for those classes making methods virtual by default (and non-virtual only as an active choice) can be counter-productive; as Jerry remarked, virtuals provide opportunities for derived classes to screw up.
There are design patterns that can be employed to minimize the possibilities of screw-ups.
For example, wrapping internal virtuals in exposed non-virtual methods with sanity checks, and, for example, using decoupled event handling (where appropriate) instead of virtuals.
Cheers & hth.,
When you create a class, and you want that class to be used polymorphically you have to consider that the class has two different interfaces. The user interface is defined by the set of public functions that are available in your base class, and that should pretty much cover all operations that users want to perform on objects of your class. This interface is defined by the access qualifiers, and in particular the public qualifier.
There is a second interface, that defines how your class is to be extended. At that level you have to think on what behavior you want to be overridden by extending classes, and what elements of your object you want to provide to extending classes. You offer access to derived classes by means of the protected qualifier, and you offer extension points by means of virtual functions.
You should try to follow the Non-Virtual Interface idiom whenever possible. That idiom (google for it) basically tries to fully separate the two interfaces by not having public virtual functions. Users call non-virtual functions, and those in turn call on configurable functionalities by means of protected/private virtual functions. This clearly separates extension points from the class interface.
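A minimal sketch of the idiom (Stream and do_write are made-up names):

#include <vector>

class Stream {
public:
    void write(std::vector<char> const& data) { // user interface: non-virtual
        // invariant checks, logging, etc. could go here
        do_write(data);                         // delegate to the extension point
    }
    virtual ~Stream() {}
private:
    virtual void do_write(std::vector<char> const& data) = 0; // extension interface
};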
There is a single case, where virtual has to be part of the user interface: the destructor. If you want to offer your users the ability to destroy derived objects through pointers to the base, then you have to provide a virtual destructor. Else you just provide a protected non-virtual one.
He should code the functions as they are; he shouldn't make them virtual at all, in the circumstances you specify.
The reasons being:
1. The class coder obviously has certain uses in mind for the functions he writes.
2. The inheriting class may or may not make use of these functions, as per its requirements.
3. Any function may be redefined (hidden) in the derived class without any errors.
Should be a newbie question...
I have existing code in an existing class, A, that I want to extend in order to override an existing method, A::f().
So now I want to create class B to override f(), since I don't want to just change A::f() because other code depends on it.
To do this, I need to change A::f() to a virtual method, I believe.
My question is: besides allowing a method to be dynamically invoked (to use B's implementation and not A's), are there any other implications to making a method virtual? Am I breaking some kind of good programming practice? Will this affect any other code trying to use A::f()?
Please let me know.
Thanks,
jbu
Edit: my question was more along the lines of: is there anything wrong with making someone else's method virtual? Even though you're not changing someone else's implementation, you're still having to go into someone's existing code and change the declaration.
If you make the function virtual inside of the base class, anything that derives from it will also have it virtual.
Once virtual, if you create an instance of A, then it will still call A::f.
If you create an instance of B and store it in a pointer of type A*, and then call f through that pointer, it will call B's B::f.
As for side effects, there probably won't be any side effects, other than a slight (unnoticeable) performance loss.
There is one more small side effect: there could be a class C that also derives from A and redefines C::f, expecting that when f is called through an A*, A::f will be called. Making A::f virtual changes that. But this is not very common.
More than likely, if C exists, it does not implement C::f at all, in which case everything is fine.
Be careful though: if you are using an already compiled library and you are modifying its header files, what you are expecting to work probably will not. You will need to recompile the library's source files as well.
You could consider doing the following to avoid side effects (sketched below):
Create a type A2 that derives from A and make its f virtual.
Use pointers of type A2 instead of A.
Derive B from type A2.
In this way, anything that used A is guaranteed to work in the same way.
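A sketch of that approach (the function bodies are placeholders):

class A {
public:
    void f();                    // existing non-virtual method, left untouched
};

class A2 : public A {
public:
    virtual ~A2() {}
    virtual void f() { A::f(); } // introduces a virtual hook that hides A::f
};

class B : public A2 {
public:
    virtual void f();            // overrides A2::f
};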
Depending on what you need, you may also be able to use a has-a relationship instead of an is-a.
There is a small implied performance penalty of a vtable lookup every time a virtual function is called. If it were not virtual, function calls would be direct, since the code location would be known at compile time; whereas at runtime, a virtual function's address must be looked up in the vtable of the object you're calling it on.
To do this, I need to change A::f() to a virtual method, I believe.
Nope, you do not need to change it to a virtual method in order to redefine it in a derived class. However, if you are using polymorphism you need to, i.e. if you have a lot of different classes deriving from A but stored as pointers to A.
There's also a memory overhead for virtual functions because of the vtable (apart from what spoulson mentioned).
There are other ways of accomplishing your goal. Does it make sense for B to be an A? For example, it makes sense for a Cat to be an Animal, but not for a Cat to be a Dog. Perhaps both A and B should derive from a base class, if they are related.
Is there just common functionality you can factor out? It sounds to me like you'll never be using these classes polymorphically, and just want the functionality. I would suggest you take that common functionality out and then make your two separate classes.
As for cost, if you're using A and B directly, the compiler will bypass any virtual dispatching and just make straight function calls, as if they were never virtual. If you pass a B into a place expecting an A (as a reference or pointer), then it will have to dispatch.
There are two performance hits when speaking about virtual methods:
1. vtable dispatching; it's nothing to really worry about.
2. Virtual functions are generally not inlined, which can be much worse than the previous one: function inlining is something that can really speed things up in some situations, and it usually cannot happen with a virtual call.
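One nuance, which is my addition rather than the answer's: a compiler can inline a virtual call when it can prove the dynamic type, for example with C++11's final:

struct Base {
    virtual int f() const { return 1; }
};

struct Derived final : Base {            // final: no further overrides possible
    int f() const override { return 2; }
};

int call(const Derived& d) {
    return d.f(); // the dynamic type is provably Derived, so the compiler
                  // may devirtualize and inline this call
}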
How kosher it is to change somebody else's code depends entirely on the local mores and customs. It isn't something we can answer for you.
The next question is whether the class was designed to be inherited from. In many cases, classes are not, and changing them to be useful base classes, without changing other aspects, can be tricky. A non-base class is likely to have everything private except the public functions, so if you need to access more of the internals in B you'll have to make more modifications to A.
If you're going to use class B instead of class A, then you can just redefine the function without making it virtual. If you're going to create objects of class B and refer to them through pointers to A, then you do need to make f() virtual. You should also make the destructor virtual.
It is good programming practice to use virtual methods where they are warranted. Virtual methods have many implications for how sensible your C++ class is.
Without virtual functions you cannot create interfaces in C++. An interface is a class consisting entirely of pure virtual functions.
However, sometimes using virtual methods is not good. It doesn't always make sense to use a virtual method to change the functionality of an object, since it implies subclassing. Often you can just change the functionality using function objects or function pointers.
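A sketch of the function-object alternative (Writer and sink are made-up names):

#include <functional>
#include <vector>

class Writer {
public:
    explicit Writer(std::function<void(std::vector<char> const&)> sink)
        : sink_(sink) {}
    void write(std::vector<char> const& data) { sink_(data); } // no subclassing needed
private:
    std::function<void(std::vector<char> const&)> sink_; // configurable behavior
};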
As mentioned, a virtual function causes the compiler to create a dispatch table, which the running program references to determine which function to call.
C++ has many gotchas, which is why one needs to be very aware of what they want to do and what the best way of doing it is. There aren't as many ways of doing something as it seems, compared to runtime-dynamic OO programming languages such as Java or C#. Some ways will be either outright wrong or will eventually lead to undefined behavior as your code evolves.
Since you have asked a very good question :D, I suggest you buy Scott Meyers's book Effective C++, and Bjarne Stroustrup's book The C++ Programming Language. These will teach you the subtleties of OO in C++, particularly when to use what feature.
If that's the first virtual method the class is going to have, you're making it no longer a POD. This can break things, although the chances of that are slim.
POD: http://en.wikipedia.org/wiki/Plain_old_data_structures