Performance: Inheritance and inlining? [duplicate] - c++

I got this question when I received a code review comment saying virtual functions need not be inline.
I thought inline virtual functions could come in handy in scenarios where functions are called on objects directly. But the counter-argument that came to my mind is: why would one want to make a function virtual and then call it on objects rather than through pointers or references?
Is it best not to use inline virtual functions, since they're almost never expanded anyway?
Code snippet I used for analysis:
#include <iostream>
using namespace std;

class Temp
{
public:
    virtual ~Temp()
    {
    }
    virtual void myVirtualFunction() const
    {
        cout << "Temp::myVirtualFunction" << endl;
    }
};

class TempDerived : public Temp
{
public:
    void myVirtualFunction() const
    {
        cout << "TempDerived::myVirtualFunction" << endl;
    }
};

int main(void)
{
    TempDerived aDerivedObj;
    //Compiler thinks it's safe to expand the virtual functions
    aDerivedObj.myVirtualFunction();
    //type of object pTemp points to is always known;
    //does compiler still expand virtual functions?
    //I doubt compiler would be this much intelligent!
    Temp* pTemp = &aDerivedObj;
    pTemp->myVirtualFunction();
    return 0;
}

Virtual functions can be inlined sometimes. An excerpt from the excellent C++ faq:
"The only time an inline virtual call
can be inlined is when the compiler
knows the "exact class" of the object
which is the target of the virtual
function call. This can happen only
when the compiler has an actual object
rather than a pointer or reference to
an object. I.e., either with a local
object, a global/static object, or a
fully contained object inside a
composite."

C++11 has added final. This changes the accepted answer: it's no longer necessary to know the exact class of the object, it's sufficient to know the object has at least the class type in which the function was declared final:
class A {
public:
    virtual void foo() const;
};
class B : public A {
public:
    inline virtual void foo() const final { }
};
class C : public B
{
};
void bar(B const& b) {
    A const& a = b; // Allowed, every B is an A.
    a.foo();        // Call to B::foo() can be inlined, even if b is actually a class C.
}

There is one category of virtual functions where it still makes sense to have them inline. Consider the following case:
class Base {
public:
    inline virtual ~Base() { }
};
class Derived1 : public Base {
public:
    inline virtual ~Derived1() { } // Implicitly calls Base::~Base();
};
class Derived2 : public Derived1 {
public:
    inline virtual ~Derived2() { } // Implicitly calls Derived1::~Derived1();
};
void foo(Base* base) {
    delete base; // Virtual call
}
The call to delete base will perform a virtual call to reach the correct derived-class destructor; this call is not inlined. However, because each destructor calls its parent's destructor (which in these cases is empty), the compiler can inline those calls, since they do not call the base-class functions virtually.
The same principle applies to base class constructors, or to any set of functions where the derived implementation also calls the base class's implementation.

I've seen compilers that don't emit any vtable if no non-inline virtual function exists at all (i.e., everything is defined in the header rather than in one implementation file). They would throw errors like missing vtable-for-class-A or something similar, and you would be confused as hell, as I was.
Indeed, that's not conformant with the Standard, but it happens, so consider putting at least one virtual function outside the header (if only the virtual destructor), so that the compiler can emit a vtable for the class in that translation unit. I know it happens with some versions of gcc.
As someone mentioned, inline virtual functions can be a benefit sometimes, but of course most often you will use virtual when you do not know the dynamic type of the object, because that was the whole reason for virtual in the first place.
The compiler, however, can't completely ignore inline. It has other semantics apart from speeding up a function call. The implicit inline for in-class definitions is the mechanism that allows you to put the definition into the header: only inline functions can be defined multiple times throughout the whole program without violating any rules. In the end, it behaves as if you had defined it only once in the whole program, even though you included the header multiple times into different files linked together.
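A minimal sketch of that mechanism, assuming a hypothetical widget.h header with a Widget class: the in-class definition is implicitly inline, so the same header can be included from several translation units without violating the one-definition rule.
// widget.h (hypothetical) -- included from several .cpp files
#ifndef WIDGET_H
#define WIDGET_H
class Widget {
public:
    virtual ~Widget() = default;
    virtual int id() const { return 42; } // in-class definition: implicitly inline
};
#endif
// a.cpp:  #include "widget.h"
// b.cpp:  #include "widget.h"
// Linking a.cpp and b.cpp together is fine. A non-inline, out-of-class definition
// of Widget::id placed in the header would instead cause a multiple-definition
// error at link time.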

Well, actually virtual functions can always be inlined, as long as they're statically linked together: suppose we have an abstract class Base with a virtual function F and derived classes Derived1 and Derived2:
class Base {
public:
    virtual void F() = 0;
};
class Derived1 : public Base {
public:
    virtual void F();
};
class Derived2 : public Base {
public:
    virtual void F();
};
A hypothetical call b->F(); (with b of type Base*) is obviously virtual. But you (or the compiler...) could rewrite it like so (suppose typeof is a typeid-like function that returns a value that can be used in a switch):
switch (typeof(b)) {
    case Derived1: b->Derived1::F(); break; // static, inlineable call
    case Derived2: b->Derived2::F(); break; // static, inlineable call
    case Base:     assert(!"pure virtual function call!");
    default:       b->F(); break;           // virtual call (dyn-loaded code)
}
While we still need RTTI for the typeof, the call can effectively be inlined by, basically, embedding the vtable inside the instruction stream and specializing the call for all the involved classes. This could also be generalized by specializing only a few classes (say, just Derived1):
switch (typeof(b)) {
    case Derived1: b->Derived1::F(); break; // hot path
    default:       b->F(); break;           // default virtual call, cold path
}

inline really doesn't do anything - it's a hint. The compiler might ignore it, or it might inline a call even without inline if it sees the implementation and likes the idea. If code clarity is at stake, the inline should be removed.

Marking a virtual method inline helps in further optimizing virtual functions in the following two cases (a minimal CRTP sketch follows the list):
Curiously recurring template pattern (http://www.codeproject.com/Tips/537606/Cplusplus-Prefer-Curiously-Recurring-Template-Patt)
Replacing virtual methods with templates (http://www.di.unipi.it/~nids/docs/templates_vs_inheritance.html)
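A minimal CRTP sketch, assuming illustrative Shape/Circle names (not taken from the links above): the "polymorphic" call is resolved at compile time, so there is no vtable lookup and the compiler is free to inline it.
#include <iostream>

// Static polymorphism: Shape<Derived>::draw() dispatches at compile time,
// so drawImpl() can be inlined without any virtual dispatch.
template <typename Derived>
class Shape {
public:
    void draw() const { static_cast<const Derived&>(*this).drawImpl(); }
};

class Circle : public Shape<Circle> {
public:
    void drawImpl() const { std::cout << "Circle::drawImpl\n"; }
};

template <typename T>
void render(const Shape<T>& s) {
    s.draw(); // statically bound; a straightforward candidate for inlining
}

int main() {
    Circle c;
    render(c);
}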

Virtual functions declared inline can be inlined when called directly on objects, but the hint is generally ignored when they are called through pointers or references.

With modern compilers, it won't do any harm to inline them. Some ancient compiler/linker combos might have created multiple vtables, but I don't believe that is an issue anymore.

A compiler can only inline a function when the call can be resolved unambiguously at compile time.
Virtual functions, however, are resolved at runtime, so the compiler cannot inline the call, since at compile time the dynamic type (and therefore the function implementation to be called) cannot be determined.

In the cases where the function call is unambiguous and the function is a suitable candidate for inlining, the compiler is smart enough to inline the code anyway.
The rest of the time "inline virtual" is nonsense, and indeed some compilers won't compile that code.

It does make sense to make functions virtual and then call them on objects rather than through references or pointers. Scott Meyers recommends, in his book Effective C++, never to redefine an inherited non-virtual function. That makes sense: when you write a class with a non-virtual function and redefine the function in a derived class, you may be sure to use it correctly yourself, but you can't be sure others will use it correctly, and you may at a later date use it incorrectly yourself. So if you write a function in a base class and you want it to be redefinable, you should make it virtual. If it makes sense to make functions virtual and call them on objects, it also makes sense to inline them.
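A small sketch of why Meyers warns against it (the Widget/Button names are just illustrative): a non-virtual call is bound to the static type, so the same object behaves differently depending on how it is reached.
#include <iostream>

class Widget {
public:
    void draw() const { std::cout << "Widget::draw\n"; } // non-virtual
};

class Button : public Widget {
public:
    void draw() const { std::cout << "Button::draw\n"; } // hides, does not override
};

int main() {
    Button b;
    Widget* p = &b;
    b.draw();  // prints "Button::draw"
    p->draw(); // prints "Widget::draw" -- same object, different behaviour
}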

Actually, in some cases adding "inline" to a virtual final override can make your code not compile, so there is sometimes a difference (at least under the VS2017 compiler)!
I was writing a virtual inline final override function in VS2017, compiling with the C++17 standard, and for some reason it failed to link when using two projects.
I had a test project and an implementation DLL that I was unit testing. In the test project I have a "linker_includes.cpp" file that #includes the *.cpp files from the other project that are needed. I know... I know I could set up msbuild to use the object files from the DLL, but bear in mind that that is a Microsoft-specific solution, while including the cpp files is unrelated to the build system, and it is much easier to version a cpp file than xml files, project settings, and such...
What was interesting is that I was constantly getting a linker error from the test project, even when I added the definition of the missing functions by copy-paste and not through the include! So weird. The other project built fine, and there is no connection between the two other than a project reference, so there is a build order to ensure both are always built...
I think it is some kind of bug in the compiler. I have no idea if it exists in the compiler shipped with newer Visual Studio versions, because I am using an older version since some SDK only works properly with that :-(
I just wanted to add that marking them as inline not only can mean something, but might even make your code not build in some rare circumstances! This is weird, yet good to know.
PS: The code I am working on is computer-graphics related, so I prefer inlining, and that is why I used both final and inline. I kept the final specifier in the hope that the release build is smart enough to build the DLL by inlining the function even without me directly hinting at it...
PS (Linux): I expect the same does not happen with gcc or clang, as I routinely used to do this kind of thing. I am not sure where this issue comes from... I prefer doing C++ on Linux or at least with gcc, but sometimes a project has different needs.

Related

How can a destructor be both virtual and inline in C++? [duplicate]


C++ virtual function inlining when derived class is final?

I'm using C++ in an embedded environment where the runtime of virtual functions does matter. I have read about the rare cases when virtual functions can be inlined, for example: Are inline virtual functions really a non-sense?
The accepted answer states that inlining is only possible when the compiler knows the exact class at compile time, for example when dealing with a local, global, or static object (not a pointer or reference to the base type). I understand the logic behind this, but I wonder if inlining would also be possible in the following case:
#include <iostream>
using namespace std;

class Base {
public:
    inline virtual void x() = 0;
    virtual ~Base() {}
};

class Derived final : public Base {
public:
    inline virtual void x() override {
        cout << "inlined?";
    }
};

int main() {
    Base* a;
    Derived* b;
    b = new Derived();
    a = b;
    a->x(); // This can definitely not be inlined.
    b->x(); // Can this be inlined?
    delete b;
}
From my point of view the compiler should know the definitive type of *b at compile time, as Derived is a final class. Is it possible to inline the virtual function in this case? If not, why not? If yes, does the gcc compiler (or avr-gcc, respectively) actually do so?
Thanks!
The first step is called devirtualization: the function call no longer goes through virtual dispatch.
Compilers can and do devirtualize final methods and methods of final classes. That is almost the entire point of final.
Once devirtualized, methods can be inlined.
Some compilers can sometimes prove the dynamic type of *a and even devirtualize that call. This is less reliable. Godbolt's Compiler Explorer can be useful to understand which specific optimizations happen and how they can fail.
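A sketch of the kind of devirtualization described above (the Shape/Square names are illustrative, and you can verify the generated code on Compiler Explorer): because the class is final, a call through a reference to it cannot dispatch to anything more derived, so it can be devirtualized and then inlined.
struct Shape {
    virtual int area() const = 0;
    virtual ~Shape() = default;
};

struct Square final : Shape {
    int side;
    explicit Square(int s) : side(s) {}
    int area() const override { return side * side; }
};

// The parameter is only a reference, but Square is final, so the compiler may
// devirtualize the call and inline Square::area() -- typically reducing this
// function to a plain multiplication.
int square_area(const Square& sq) {
    return sq.area();
}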

C++ Virtual function called normally or virtually?

Let's say a virtual function is called on a derived class object (on the object normally or through pointer/reference) that overrides that function but none of its derived classes override it. Is it called "virtually" through v-pointer or as a normal function so all optimizations / inlining apply to it? Does the standard say anything about this?
class CBase {
public:
    virtual void doSomething() = 0;
};
class CDerived1 : public CBase {
public:
    void doSomething() override { /* do stuff */ }
};
class CDerived2 : public CDerived1 {
};
//...
CDerived1 derived1;
CDerived1* p_derived1 = &derived1;
p_derived1->doSomething();
Does the standard say anything about this?
No. Whether the call uses the dynamic dispatch mechanism or not is not observable behavior. And the standard only concerns itself with observable behavior.
How much compilers "devirtualize" virtual calls is ultimately implementation defined. If you just have T t; and you do t.whatever(), then you shouldn't use a compiler that cannot devirtualize that.
Inlining affects devirtualization as well. Given the declaration T t;, if you pass a reference to this object to a function taking a T& parameter, calls through that reference can be devirtualized if that function gets inlined.
But if it's a non-inlined instance of that function (say, reached through a pointer to the function, perhaps through std::function or whatever), then devirtualization is much harder. See, the compiler doesn't see the whole program, so it cannot see whether you have some class somewhere that inherits from T and overrides that method.
Only the linker, through whole-program optimization, could possibly see all class definitions that inherit from it. And even then... maybe not, because it is still technically possible for DLLs/SOs to inherit from classes, and the non-inlined function ought to be able to take those T&s and call their overridden methods.
So once you leave a chain of inlining where the object's dynamic type and virtual calls into it are both visible to the compiler, devirtualization becomes far more difficult if not impossible.
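A minimal sketch of that situation (the inlined_use/opaque_use helpers are hypothetical): whether the call stays virtual depends on whether the compiler can still see the object's construction at the point where it compiles the call.
struct T {
    virtual void whatever() {}
    virtual ~T() = default;
};

// Small function, likely to be inlined into its caller.
inline void inlined_use(T& t) { t.whatever(); }

// Defined in some other translation unit; the compiler only sees this declaration.
void opaque_use(T& t);

void caller() {
    T t;              // dynamic type is known here: exactly T
    t.whatever();     // trivially devirtualized
    inlined_use(t);   // once inlined into caller(), the compiler still sees that
                      // t is exactly a T, so this call can be devirtualized too
    opaque_use(t);    // inside opaque_use() the dynamic type is unknown,
                      // so the call there stays a virtual dispatch
}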

Why does this compile, but not link?

#include <iostream>
using namespace std;

class C
{
public:
    virtual void a();
};

class D : public C
{
public:
    void a() { cout << "D::a\n"; }
    void b() { cout << "D::b\n"; }
};

int main()
{
    D a;
    a.b();
    return 0;
}
I am getting a link error about undefined reference to 'vtable for C'. What does this mean and why is it?
I know the problem is obviously that the base class has a non-pure virtual function that is never defined, but why does this bother the linker if I never call it? Why is it different from any other function that I declare and don't define, which is fine as long as I never call it?
I am interested in the nitty-gritty details.
Most implementations of C++ compilers generate a vtable for each class, which is a table of function pointers for the virtual functions. Like any other data item, there can only be one definition of the vtable. Some C++ compilers generate this vtable when compiling the implementation of the first declared virtual function within a type (this guarantees that there is only one definition of the vtable). If you fail to provide an implementation for the first virtual function, your compiler does not generate the vtable and the linker complains about the missing vtable with a link error.
As you can see, the precise details of this depends on the implementation of your chosen compiler and linker. Not all toolchains are the same.
Q: I know the problem is obviously that the base class has a non-pure virtual function that is never defined
A: That's the answer to your question :)
Q: Why is it different then any other function that I declare and don't define, that if I never call it I am fine?
A: Because it's not just a "function". It's a virtual class method.
SUGGESTION:
Declare three different classes, each with:
1) a simple method
2) a virtual method (as in your "C" above)
3) an abstract (pure) virtual method (= 0)
Generate assembly output (e.g. "-S" for GCC).
Compare the three cases, and carefully note whether a constructor is created :) (a sketch of the three classes follows below)
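A sketch of the three classes that suggestion refers to (the class names and file name are just an example); compiling with g++ -S and diffing the output shows when a vtable and constructor code are emitted.
// three_cases.cpp (hypothetical) -- compile with: g++ -S three_cases.cpp
// and inspect the assembly for vtables and constructors.

class Plain {
public:
    void a() {}            // 1) plain method: no vtable, trivial constructor
};

class Virtual {
public:
    virtual void a() {}    // 2) virtual method: objects get a vpointer, and a
                           //    vtable referencing Virtual::a() must exist
};

class Abstract {
public:
    virtual void a() = 0;  // 3) pure virtual: no definition of a() is required
};

// Instantiate the two constructible classes so the compiler actually emits
// their (implicit) constructors and the vtable references.
Plain p;
Virtual v;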
If you want C to be a pure virtual base class, you need to tell the compiler "there will be no implementation of a()"; you do this by writing ... a() = 0; in the declaration of the class. Otherwise the linker is trying to find the base-class vtable, but there isn't one!
The need for an actual function definition can arise in several circumstances. You already named one such circumstance: the function has to be defined if you call it. However, this is not all. Consider this piece of code:
void foo();

int main() {
    void (*p)() = &foo;
}
This code never calls foo. However, if you try to compile it, a typical compiler will complain about missing definition of foo at linking stage. So, taking the address of a function (even if you never call it) happens to be one of the situations when the function has to be defined.
And this is where the matter of virtual functions come in. Virtual functions are typically implemented through a virtual table: a table that contains pointers to the definitions of the virtual functions. This table is formed and initialized by the compiler in advance, unconditionally, regardless of whether you actually call the functions or not. The compiler will essentially implicitly take the address of each non-pure virtual function in your program and place it into the corresponding table. For this reason, each non-pure virtual function will require a definition even if you never call it.
The exact virtual mechanism is an implementation detail, so you might end up with different specific errors in your situation (like a missing function definition or a missing virtual table itself), but the fundamental reason for these errors is what I described above.

In C++, does overriding an existing virtual function break ABI?

My library has two classes, a base class and a derived class. In the current version of the library the base class has a virtual function foo(), and the derived class does not override it. In the next version I'd like the derived class to override it. Does this break ABI? I know that introducing a new virtual function usually does, but this seems like a special case. My intuition is that it should be changing an offset in the vtbl, without actually changing the table's size.
Obviously since the C++ standard doesn't mandate a particular ABI this question is somewhat platform specific, but in practice what breaks and maintains ABI is similar across most compilers. I'm interested in GCC's behavior, but the more compilers people can answer for the more useful this question will be ;)
It might.
You're wrong regarding the offset. The offset in the vtable is determined already. What will happen is that the Derived class constructor will replace the function pointer at that offset with the Derived override (by switching the in-class v-pointer to a new v-table). So it is, normally, ABI compatible.
There might be an issue though, because of optimization, and especially the devirtualization of function calls.
Normally, when you call a virtual function, the compiler introduces a lookup in the vtable via the vpointer. However, if it can deduce (statically) what the exact type of the object is, it can also deduce the exact function to call and shave off the virtual lookup.
Example:
struct Base {
virtual void foo();
virtual void bar();
};
struct Derived: Base {
virtual void foo();
};
int main(int argc, char* argv[]) {
Derived d;
d.foo(); // It is necessarily Derived::foo
d.bar(); // It is necessarily Base::bar
}
And in this case... simply linking with your new library will not pick up Derived::bar.
This doesn't seem like something that could be particularly relied on in general - as you said C++ ABI is pretty tricky (even down to compiler options).
That said I think you could use g++ -fdump-class-hierarchy before and after you made the change to see if either the parent or child vtables change in structure. If they don't it's probably "fairly" safe to assume you didn't break ABI.
Yes, in some situations, adding a reimplementation of a virtual function will change the layout of the virtual function table. That is the case if you're reimplementing a virtual function from a base that isn't the first base class (multiple-inheritance):
// V1
struct A { virtual void f(); };
struct B { virtual void g(); };
struct C : A, B { virtual void h(); }; // does not reimplement f or g

// V2
struct C : A, B {
    virtual void h();
    virtual void g(); // added reimplementation of g()
};
This changes the layout of C's vtable by adding an entry for g() (thanks to "Gof" for bringing this to my attention in the first place, as a comment in http://marcmutz.wordpress.com/2010/07/25/bcsc-gotcha-reimplementing-a-virtual-function/).
Also, as mentioned elsewhere, you get a problem if the class you're overriding the function in is used by users of your library in a way where the static type is equal to the dynamic type. This can be the case after you new'ed it:
MyClass * c = new MyClass;
c->myVirtualFunction(); // not actually virtual at runtime
or created it on the stack:
MyClass c;
c.myVirtualFunction(); // not actually virtual at runtime
The reason for this is an optimisation called "de-virtualisation". If the compiler can prove, at compile time, what the dynamic type of the object is, it will not emit the indirection through the virtual function table, but instead call the correct function directly.
Now, if users compiled against an old version of your library, the compiler will have inserted a call to the most-derived reimplementation of the virtual method it knew about. If, in a newer version of your library, you override this virtual function in a more-derived class, code compiled against the old library will still call the old function, whereas new code, or code where the compiler could not prove the dynamic type of the object at compile time, will go through the virtual function table. So a given instance of the class may be confronted, at runtime, with calls to the base class's function that it cannot intercept, potentially creating violations of class invariants.
My intuition is that it should be changing an offset in the vtbl, without actually changing the table's size.
Well, your intuition is clearly wrong:
either there is a new entry in the vtable for the overrider, all following entries are moved, and the table grows,
or there is no new entry, and the vtable representation does not change.
Which one is true can depend on many factors.
Anyway: do not count on it.
Caution: see In C++, does overriding an existing virtual function break ABI? for a case where this logic doesn't hold true;
In my mind Mark's suggestion to use g++ -fdump-class-hierarchy would be the winner here, right after having proper regression tests
Overriding things should not change the vtable layout[1]. The vtable entries themselves would be in the data segment of the library, IMHO, so a change to them should not pose a problem.
Of course, the applications need to be relinked; otherwise there is potential for breakage if the consumer had been using a direct reference to &Derived::overriddenMethod.
I'm not sure whether a compiler would have been allowed to resolve that to &Base::overriddenMethod at all, but better safe than sorry.
[1] spelling it out: this presumes that the method was virtual to begin with!