Enable warnings for one definition rule violations - c++

There's a particular ODR violation that results from innocent enough code.
// TU1.cpp
struct Foo
{
    int Bar() { return 1; }
};

// TU2.cpp
struct Foo
{
    int Bar() { return 2; }
};
These two classes have no relation to each other; they are purely implementation details of unrelated cpp files that happen to share a name and a member function with the same signature. Member functions have external linkage by default, and these are implicitly inline because they are defined inside the class. So the linker sees two definitions of Foo::Bar, assumes they are identical, and silently discards one of them.
I'd like to be able to catch this specific bug, but so far I can't find any options for any of the big three compilers/linkers that will actually catch this. Does such an option exist? Is there another way to catch these kinds of bugs that isn't "wrap every class in every cpp in an anonymous namespace"?
Here's a working example of this specific issue.
https://godbolt.org/z/as9frG5o3
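For reference, the anonymous-namespace workaround dismissed above does fix this case; here is a minimal sketch (the function name CallTU1 is invented for illustration). The anonymous namespace gives Foo internal linkage, so an identically named Foo in another cpp is a distinct type and no ODR violation occurs:

```cpp
// TU1.cpp (sketch): wrap the file-local class in an anonymous namespace.
namespace {
struct Foo {
    int Bar() { return 1; } // now private to this translation unit
};
} // anonymous namespace

// Hypothetical entry point standing in for this TU's real code.
int CallTU1() {
    Foo f;
    return f.Bar();
}
```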

Related

Avoid compiling definition of inline function multiple times

I have a non-template struct in a header file:
struct X {
    constexpr X() : /* ... */ { /* ... */ }
    constexpr void f() {
        // ...
    }
};
With functions of varying size. This struct is used in a lot of different translation units, and each function gets emitted into multiple object files, only for the duplicates to be discarded in the final executable.
What I want is for the definitions to be in a single object file, and the other translation units can either inline the function or use an external definition (something like the extern inline semantics from C). How can I do that?
It seems to work with templates and extern template:
namespace detail {
    template<std::nullptr_t>
    struct X_base {
        constexpr X_base() // ...
        constexpr void f() // ...
    };
    extern template struct X_base<nullptr>;
}

struct X : detail::X_base<nullptr> {
    using X_base::X_base;
};

// X.cpp
#include <X.hpp>
template struct detail::X_base<nullptr>;
But are there any major downsides to this (longer symbol names, confusing to read, needs documentation, etc.), or are there any easier ways to do this?
C++ doesn’t have the notion of an inline function that must be emitted in one translation unit and therefore certainly need not be emitted anywhere else. (It doesn’t have the notion of emitting object code at all; the point is that there’s no syntax that says “I promise this definition is ODR-identical to the others, except that it and only it bears this marker”, which would let compilers do exactly that.)
However, the behavior you want is the obvious way of implementing C++20 modules: because the definition of an inline function in a module is known to be the only definition, it can and should be emitted once in case several importing translation units need an out-of-line copy of it. (Inlining is still possible because the definition is made available in a compiler-internal form as part of building the module.) Bear in mind that member functions defined in a class in a module are not automatically inline, although constexpr still implies it.
Another ugly workaround is to make non-inline wrappers to be used outside of constant evaluation, although this could get unwieldy if there were multiple levels of constexpr functions that might also be used at runtime.
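A minimal sketch of that wrapper idea, with invented names (X, f, f_runtime): the constexpr definition stays in the header for constant evaluation, while runtime callers go through a non-inline wrapper whose definition lives in exactly one cpp file, so only that object file carries emitted code:

```cpp
// X.hpp (contents)
struct X {
    int v;
    constexpr X(int v_) : v(v_) {}
    constexpr int f() const { return v * 2; } // still usable at compile time
};
int f_runtime(const X& x); // non-inline wrapper for runtime callers

// X.cpp (contents): the only translation unit that emits a body for f
int f_runtime(const X& x) { return x.f(); }
```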

c++ classes: different ways to write a function

what's the difference between the following 3 cases:
1) in point.h:
class point
{
    int x, y;
public:
    int getX();
};
int point::getX() {
    return this->x;
}
2) in point.h:
class point
{
    int x, y;
public:
    int getX()
    {
        return this->x;
    }
};
3) in point.h:
class point
{
    int x, y;
public:
    int getX();
};
in point.cpp:
int point::getX() {
    return this->x;
}
Note: I read that this is somehow connected to inline, but I'm not sure which of them makes the compiler treat int getX() as if it were inline int getX().
Avoid this first one:
struct point
{
    int x, y;
    int getX();
};
int point::getX() {
    return this->x;
}
If multiple source files include point.h, you will get multiple definitions of point::getX, leading to a violation of the One Definition Rule (and modern linkers will give an error message).
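For completeness, a sketch of one fix that keeps the definition in the header: marking the out-of-class definition inline makes it legal for every includer to carry a copy, which the linker then folds into one.

```cpp
// point.h (contents)
struct point {
    int x, y;
    int getX();
};

// inline permits a definition in every translation unit that includes
// this header, provided all the definitions are identical.
inline int point::getX() {
    return this->x;
}
```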
For the second one:
struct point
{
    int x, y;
    int getX()
    {
        return this->x;
    }
};
This implicitly inlines the function. This means that the function definition may be copy-pasted everywhere it is used, instead of resolving a function call. There are a few trade offs here. On one hand, by providing definitions in headers, you can more easily distribute your library. Additionally, in some cases you may see performance improvements due to the locality of the code. On the other hand, you may actually hurt performance due to instruction cache misses (more instructions around == it won't all fit in cache). And the size of your binaries may grow as the inlined function gets copied around.
Another tradeoff is that, should you ever need to change your implementation, all clients must rebuild.
Finally, depending on the sensitivity of the function, you may be revealing trade secrets through your headers (that is, there is absolutely no hiding of your secret sauce) (note: one can always decompile your binary and reverse engineer an implementation, so putting the def in the .cpp file won't stop a determined programmer, but it keeps honest people honest).
The third one, which separates a definition into a .cpp file:
// point.h
struct point
{
int x,y;
int getX();
};
// point.cpp
int point::getX() {
return this->x;
}
This will cause a function to get exported to your library (at least for gcc. In Windows, you need to be explicit by using __declspec directives to import/export). Again, there are tradeoffs here.
Changing the implementation does not require clients to recompile; you can distribute a new library for them to link to instead (the new library is ABI-compatible if you only change the impl details in the .cpp file). However, it is more difficult to distribute your library, as your binaries now need to be built for each platform.
You may see a performance decrease due to the requirement to resolve function pointers into a library for running code. You may also see a performance increase over inlining due to the fact that your code may be friendlier to the instruction cache.
In the end, there is a lot to consider. My recommendation is to go with #3 by default unless you are writing templates. When you want to look at improving performance, you can start to measure what inlining does for you (binary size as well as runtime perf). Of course you may have other information up front that makes approach #2 better suited for the task (e.g., you have a Point class, and you know that accessing X will happen everywhere and it's a really small function, so you decide to inline it).
what's the difference between the following 3 cases
The function definition is outside of the class definition. Note that in this example you've defined a non-inline function in a header. Including this header into more than one translation unit violates the One Definition Rule. This is most likely a bug.
The function definition is inside of the class definition. In this case, the function is implicitly inline. As such, it is fine to include it into multiple translation units.
The function definition is outside of the class definition again. The function is not declared inline. This time the function is defined in a separate translation unit, thereby conforming to the ODR even if the header is included into multiple translation units.
what's the problem if both b.cpp & a.cpp includes my header file
The problem is that then both b.cpp and a.cpp will define a non-inline function. The One Definition Rule says that there must be at most one definition of any non-inline function in the entire program. Two is more than one, so doing this violates the ODR, and such a program is ill-formed.
I'm too much confused why it's an error to write the same function in two different cpp files?
It is an "error" because the rules of the language (explained above) say that it is an "error".
what if both want to use that function?
Then declare the function in both translation units. Only define the function in one translation unit unless you declare the function inline, in which case define the function in all translation units (where the function is used) instead. Look at the examples 2. and 3. of your question to see how that can be done.
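That layout can be sketched as follows, with logical file boundaries shown as comments (file and function names are invented). Concatenated, it is still one valid translation unit, but the point is how the pieces are split across files:

```cpp
// shared.h: declaration only, safe to include everywhere
int shared(int i);

// one.cpp: the single definition in the whole program
int shared(int i) { return i + 1; }

// two.cpp: uses the function through the declaration from shared.h
int useShared() { return shared(41); }
```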
so the code in method 1 is not automatically inlined?
No. Functions are not automatically inline. A function is inline only if A. the inline keyword is used, or B. it is a member function that is defined within the class definition (or in a case involving constexpr that I shall omit here). Neither case applies to example 1, therefore it is not an inline function.

Is there a difference between defining member functions inside vs outside the class definition?

Consider the following four member function declarations and definitions:
// ==== file: x.h
#ifndef X_H
#define X_H
class X {
public:
    int a(int i) { return 2 * i; }
    inline int b(int i) { return 2 * i; }
    int c(int i);
    int d(int i);
};
inline int X::c(int i) { return 2 * i; }
int X::d(int i) { return 2 * i; }
#endif
For completeness, here's the .cpp file that instantiates an X and calls the methods...
// ==== file: x.cpp
#include "x.h"
#include <stdio.h>
int main() {
    X x;
    printf("a(3) = %d\n", x.a(3));
    printf("b(3) = %d\n", x.b(3));
    printf("c(3) = %d\n", x.c(3));
    printf("d(3) = %d\n", x.d(3));
    return 0;
}
My question: are there any salient differences among the four methods? I understand from a comment in this post that the compiler may automatically inline methods that are defined in the class definition.
update
Many answers assume that I'm asking about the difference between inlining and not. I'm not. As I mentioned in the original post, I understand that defining a method in the header file gives the compiler license to inline the method.
I also (now) understand that method d is risky as written: since it is not inline, it will be multiply defined if the header is included in multiple translation units.
My question remains: are there any salient differences among the four methods? (As noted, I know that method d is different). But -- just as important -- are there stylistic or idiomatic considerations that would make a developer choose one over the others?
Since this answer keeps getting upvotes, I feel obligated to improve it. But much of what I'm adding has already been stated in other answers and comments, and those authors deserve the credit.
On the subject of whether there's a difference between placing a function body inside the class definition or just below it (but still in the header file), there are 3 different cases to think about:
1) The function is not a template and is not declared inline. In this case it must be defined inside the class definition or in a separate .cpp file, or you will get a linker error as soon as you include the header in more than one translation unit.
2) The function is a template, but is not declared inline. In this case, putting the body within the class definition provides a hint to the compiler that the function can be inlined (but the final decision is still at its own discretion).
3) The function is declared to be inline. In this case there is no semantic difference, but it may sometimes be necessary to place the function body at the bottom in order to accommodate dependency cycles.
Original answer, which provides good info but does not address the actual question:
You've already noted the inline difference. In addition, defining member functions in the header means your implementation is visible to everyone. More importantly, it means everyone who includes your header also needs to include everything needed to make your implementations work.
If you are going to inline it regardless, then you'd move it out of the class if you want to be able to see all your members in one screen, or you have a cyclic dependency as mentioned below. If you don't want to inline it, then you have to move it out of the class and into an implementation file.
In the cases of classes that cyclically refer to each other, it may be impossible to define the functions in the classes so as to inline them. In that case, to achieve the same effect, you need to move the functions out of the classes.
Doesn't compile:
struct B;
struct A {
    int i;
    void foo(const B &b) {
        i = b.i;
    }
};
struct B {
    int i;
    void foo(const A &a) {
        i = a.i;
    }
};
Does compile, and achieves the same effect:
struct B;
struct A {
    int i;
    inline void foo(const B &b);
};
struct B {
    int i;
    inline void foo(const A &a);
};
inline void A::foo(const B &b) {
    i = b.i;
}
inline void B::foo(const A &a) {
    i = a.i;
}
Oops, just realised you had the definitions in the header file. That creates problems if the include file is included in more than one place.
If the functions are defined in a CPP file then there is no difference.
The only time it makes sense to implement a function inline is when the function is very clearly trivial and/or it has performance implications.
For all other times, it's best to put them in a .cc file and keep its implementation not exposed to the user of the class.
As pointed out by user3521733, it is impossible to implement some functions in the header file when there are cyclic dependencies. Here you are forced to put the implementations in a .cc file.
Update
As far as the compiler, and the runtime, is concerned, there is no difference that I can think of between defining the function inside the body of the class or outside if you use inline when defining it outside the body of the class.
X::a, X::b and X::c are all inline. X::d is not. That's the only real difference between these functions, aside from the fact that they are all different functions. The fact that X::c is defined in the header is irrelevant; what is relevant is that its definition is marked inline.
In order to understand what the differences are, it's important to understand what inline is and is not. inline is not a performance tweak. It's not about making your code faster, and it's not about blowing the code out inline.
What it is about is the ODR. A function marked inline will have the exact same definition in each translation unit where it is used.
This comes into play when you #include the file above in two or more CPP files and call X::d in those translation units. The linker will complain that X::d is defined more than once -- you've violated the ODR. The fix is either to mark the function inline or to move the definition to its own translation unit (e.g., a CPP file).

Class defined in different translation units

As I understand it, a class can be defined in multiple translation units as long as the definitions are identical. With that in mind, consider the following examples:
//1.cpp
class Foo{
public:
    int i;
};

void FooBar();

void BarFoo(){
    Foo f;
}

int main(){
    FooBar();
    BarFoo();
}

//2.cpp
#include <string>

class Foo{
public:
    std::string s;
};

void FooBar(){
    Foo f;
}
This compiles and I don't get a crash.
If I do the following changes:
//1.cpp
Foo FooBar();

//2.cpp
Foo FooBar(){
    Foo f;
    return f;
}
I get a crash. Why does one result in a crash and the other doesn't. Also, am I not violating ODR in the first example? If I am, why does it compile ok?
The program is ill-formed for the reason you stated. The compiler is not required to issue a diagnostic, but I don't see much point in discussing the reasons for a crash in an ill-formed program.
Still, let's do it:
The first example probably doesn't crash because FooBar's behavior doesn't affect the run of main. The method is called, it does something, and that's it.
In the second example, you attempt to return a Foo. FooBar returns the version of Foo defined in 2.cpp. main appears in 1.cpp, so it expects the version of Foo defined in 1.cpp, which is a completely different type, with different members and a different size. You most likely get a corruption in the destructor (just a guess).
EDIT: this does break the one definition rule:
3.2 One definition rule [basic.def.odr]
6) There can be more than one definition of a class type [...] in a program provided that each definition
appears in a different translation unit, and provided the definitions satisfy the following requirements. [...]
each definition of D shall consist of the same sequence of tokens;
[...]
Here is how the compiler and linker work:
The compiler translates each cpp file using the headers it is given and generates an .obj file. In your case the .obj file will contain references to the class Foo, with no further details.
The linker links the .obj files together, comparing only names as strings. Your object files both refer to the same name, Foo; the names match, so to the linker they are the same thing.
Then you run your program. Most likely it will crash; more precisely, it exhibits undefined behavior. It can enter an infinite loop, print strange messages, etc.
It is your responsibility to provide identical definitions, whether via headers or directly in the cpp files, to the translation of every cpp file. Existing software tools cannot reliably check this for you. That is how it works.

Shared vtables between classes of same name: call to virtual method crashes when casting to base type

Check below for UPDATE, I could reproduce and need help.
I have a strange crash where some method works fine everywhere except in 1 place. Here's the code:
struct base
{
    virtual wchar_t* get() = 0; // can be { return NULL; } doesn't matter
};
struct derived : public base
{
    virtual wchar_t* get() { return SomeData(); }
};
struct container
{
    derived data;
};

// this is approx. how it is used in real program
void output(const base& data)
{
    data.get();
}

smart_ptr<container> item = GetItSomehow();
derived& v1 = item->data;
v1.get(); // works OK
//base &v2 = (base&)derived; // the old line, to understand old comments in the question
base& v2 = v1; // or base* v2, doesn't matter
v2.get(); // segmentation fault without going into method at all
Now, as I said, I call item->data.get() in many places on different objects and it works... always, except for one place. And there it fails only when cast to the base class (output is an example of why the cast is needed).
Now, the question is: HOW and WHY can this happen? I'd suspect a pure virtual call, but I don't call any virtual method in a constructor. I don't see how the two calls are different. I would suspect the base method being abstract, but it behaves the same if I give it a body.
I cannot provide a small example to test because, as I said, it always works except in one place. If I knew why it doesn't work there, I wouldn't need the test sample, because that would already be the answer...
P.S. The environment is Ubuntu 11.10 x64 but the program is compiled for 32 bit using gcc 4.5.2 custom build.
P.P.S. Another clue, not sure if related...
warning: can't find linker symbol for virtual table for `derived::get' value
warning: found `SomeOtherDerivedFromBaseClass::SomeOtherCrazyFunction' instead
in the real program
UPDATE: Any chance this can happen because of gcc linking the vtable to a wrong class with the same name but in a different shared library? The derived class in the real app is actually defined in several shared libraries, and worse, there's another similar class with the same name but a different interface. What's strange is that it works without the cast to the base class.
I am especially interested in gcc/linking/vtables details here.
Here's how I seem to reproduce:
// --------- mod1.h
class base
{
public:
    virtual void test(int i); // add method to make vtables different with mod2
    virtual const char* data();
};
class test: public base
{
public:
    virtual const char* data();
};

// --------- mod2.h
class base
{
public:
    virtual const char* data();
};
class test: public base
{
public:
    virtual const char* data();
};

// --------- mod2.cpp
#include "mod2.h"
const char* base::data() { return "base2"; }
const char* test::data() { return "test2"; }

// --------- modtest.cpp
#include <stdio.h>
// !!!!!!!!! notice that we include mod1
#include "mod1.h"
int main()
{
    test t;
    base& b = t;
    printf("%s\n", t.data());
    printf("%s\n", b.data());
    return 0;
}
// --------- how to compile and run
g++ -c mod2.cpp && g++ mod2.o modtest.cpp && ./a.out
// --------- output from the program
queen3#pro-home:~$ ./a.out
test2
Segmentation fault
In the modtest above, if we include "mod2.h" instead of "mod1.h", we get normal "test2\ntest2" output without segfault.
The question is - what is the exact mechanism for this? How to detect and prevent? I knew that static data in gcc will be linked to single memory entry, but vtables...
Edit in response to update:
In your updated code where you use mod1 and mod2 header you're violating the One Definition Rule for classes (even by appearing in shared libraries). It basically states that in your entire program you must have only one definition of a class (base in this case) although the same definition can appear in multiple source files. If you have more than one definition then all bets are off and you get undefined behavior. In this case, the undefined behavior happens to be a crash. The fix is of course to not have multiple versions of the same class in the same program. This is usually accomplished by defining each class in a single header (or implementation for non-API/impl classes) and including that header where the class definition is needed.
Original answer:
If it works everywhere except one place it sounds like the object isn't valid in that one place (working as derived pointer but not as base sounds a lot like you entered the realm of undefined behavior). Either it's corrupted memory, a deleted object pointer, or something else. Your best bet is if you can run valgrind on it.
Your answer is in your question: "Shared vtables between classes of same name ...".
You have compiled a single binary from two cpp files, but each cpp file included a different header and, in particular, a different definition of struct base. In C++ you can't have two classes with the same name: if the same name is used, they are the same class, and you must be consistent. (The obvious exception is to put them in two different namespaces.)
(Everything here is compiler-specific. But this is probably a typical approach across most compilers.)
First, let's understand non-virtual methods. When you execute such a method on an object:
b.foo(3);
the code is basically rewritten as follows, as if it were a conventional free function:
foo_(&b, 3);
with the method implemented as follows:
void foo_(base * this, int i) {
    ...
}
i.e. the this pointer is 'secretly' passed as the first parameter to the function.
But things aren't so simple with virtual methods. There will be two different free functions that implement get. We'll call one of these get_base and the other get_derived. (Never mind that you actually have a pure virtual method (=0), it doesn't really change the story.)
The question is, how is the correct get selected at runtime for execution? Well, for each class that has at least one virtual method, the compiler builds a vtable. The vtable for a given class lists all the virtual methods in that class. For example
struct vtable_for_base_t {
    wchar_t* (*get_function_pointer)(base *); // initialized to get_base
};

vtable_for_base_t vtable_for_base;
vtable_for_base.get_function_pointer = &get_base;

vtable_for_????_t vtable_for_derived;
vtable_for_derived.get_function_pointer = &get_derived;
The type of the function pointer is a function which takes one parameter (a base*, which will become this) and which returns wchar_t*.
The two classes, base and derived actually include pointers to these vtables under the hood.
struct base {
    vtable_for_base_t * vtable;
    // ... other members of base
};
struct derived {
    vtable_for_????_t * vtable;
    // ... other members of derived
};
Whenever a base object is constructed, the vtable pointer is initialized to point to the vtable for base. Whenever a derived object is constructed, it points to the vtable for derived instead. Now, whenever the compiler sees b.get() it will change this to the following
(b.vtable->get_function_pointer)(&b);
It looks up the vtable pointed to by the b object to get a function pointer to the correct version of get to use. And then it passes b to that function, in order to ensure that it has a correct this pointer.
In summary, each object has a (hidden) member that knows the correct version of the virtual functions. In this case, the compiler assumes that the first entry in the vtable for base, and also the vtable for any type derived from base, will be the get method.
When constructing vtables for derived classes, the first entries will correspond to the methods which were in the base class. And they will be in the same order as they were in the base. Any new virtual methods in the derived class will be listed later.
If you had two virtual methods, foo and bar, in base, then these will be the first two entries in the vtable for base, and the corresponding versions for derived will also take up the first two slots in the vtable for derived.
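The slot layout described above can be sketched by hand with plain function pointers (every name here is invented). The crash in the question corresponds to a caller indexing a slot that the table actually built never filled:

```cpp
struct Obj; // forward declaration for the function-pointer type

// A hand-rolled "vtable": a fixed array of function-pointer slots.
using Slot = int (*)(const Obj*);
struct VTable { Slot slots[2]; };

// Each object carries a hidden pointer to its class's table.
struct Obj {
    const VTable* vtable;
    int payload;
};

int get_impl(const Obj* self) { return self->payload * 2; }

// A table built from mod2's view of the class: only slot 0 is real.
const VTable one_slot_table = { { &get_impl, nullptr } };

// A caller compiled against mod1's view would pass slot 1 here and
// jump through a garbage or null pointer, just like the segfault above.
int call_slot(const Obj* o, int slot) {
    return o->vtable->slots[slot](o);
}
```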
Now, to understand why you're getting a segfault. In mod2.h, a vtable for base is created where data is the first (and only) entry. Therefore, any code that includes mod2.h and which attempts to execute b.data() will execute the first entry in the vtable. But that's not relevant when modtest.cpp is being compiled, because it includes mod1.h instead.
modtest.cpp includes mod1.h. As a result, it sees a base class that has two virtual methods, where data is the second method listed in the vtable. Therefore, any attempt to execute b.data() will actually become:
(b.vtable->SECOND_ENTRY)(&b);
because it assumes that the second entry will be the data() entry.
It will attempt to get the second entry from the vtable, but the real vtable (created in mod2.h) has only one entry! Therefore it's trying to access invalid memory and everything fails.
In short, consider defining these two structs in two different header files in C:
// in one file
struct A {
    int i;
    char c[500];
};

// in another file
struct A {
    char c[500];
    int i;
};
Nobody would expect this to work; the program would routinely access the wrong memory. Conflicting class definitions corrupt the vtable layout in exactly the same way.
There is no need to cast explicitly when treating derived class as a parent class:
#include <iostream>

struct A {
    virtual void get() { std::cout << "A" << std::endl; }
};

struct B : public A {
    virtual void get() { std::cout << "B" << std::endl; }
};

int main(int argc, char **argv)
{
    B b;
    A & a = b;
    a.get();
    return 0;
}
What's more, an explicit cast in this case might hide bugs. By casting you tell the compiler that you are aware of what you are doing, so it will not stop you, and in many cases will not even warn you, when you are doing something that will fail.
If it doesn't compile without the cast it means there is an error in the code (and in most cases compiler gives you the cause in the error message).
In your second example you are violating the one definition rule.
To Quote from Wikipedia:
In any translation unit, a template, type, function, or object can have no more than one definition. Some of these can have any number of declarations. A definition provides an instance.
In the entire program, an object or non-inline function cannot have more than one definition; if an object or function is used, it must have exactly one definition. You can declare an object or function that is never used, in which case you don't have to provide a definition. In no event can there be more than one definition.
Some things, like types, templates, and extern inline functions, can be defined in more than one translation unit. For a given entity, each definition must be the same. Non-extern objects and functions in different translation units are different entities, even if their names and types are the same.
You are violating part 2 of the rule. Both base and test are defined multiple times, with conflicting definitions, in mod1.h and mod2.h; hence your program is invalid and invokes undefined behavior. That is why you sometimes experience crashes and sometimes you don't. Your program is invalid nevertheless. The compiler does not have to warn you, because the conflicting definitions appear in different translation units, and the standard does not require it to check for consistency across translation units in this case.
Preventing this kind of problem is quite easy. This is what namespaces were invented for. Try to separate your classes in a specific namespace and ODR will not be a problem anymore.
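A sketch of that namespace fix, with invented library names: each library wraps its internal classes in its own namespace, so two helpers that happen to share a name are distinct types and the ODR never comes into play:

```cpp
namespace lib1 {
struct Helper {                  // lib1's private helper class
    int run() { return 1; }
};
}

namespace lib2 {
struct Helper {                  // same name, different type: no collision
    int run() { return 2; }
};
}
```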
Detecting this kind of thing is a bit harder. One thing you can try is a unity build. This looks really scary at first sight, but actually helps solve a lot of problems of this kind. As a side effect, a unity build will also speed up compilation while you are developing. Guides exist for setting up a unity build in Visual Studio, but it is actually quite simple to add to makefiles as well (including the automatic generation of the necessary header).
base &v2 = (base&)derived; // or base* v2, doesn't matter
should read
base &v2 = v1;
The question is - what is the exact mechanism for this?
See the other answers about ODR.
How to detect and prevent?
Create a massive translation unit for your libraries, and include every dependency in it.
Make sure you're using proper scoping and visibility. If it's image-private, that's one case (anon or reserved image namespaces). Otherwise, it should be public and visible to clients via inclusion. Including everything in one TU and using well defined conventions for scoping and visibility will catch many of the errors.
The linker can also kick in in some cases. In fact, exporting your virtual defs is a great idea for many reasons -- the linker would have spotted this issue.
I knew that static data in gcc will be linked to single memory entry, but vtables...
may be duplicated if virtual function definitions are visible. That is, your entire RTTI info and vtable may be emitted per TU, which can cause serious bloat and add quite a bit to compile and link times.
Your problem here isn't only the violation of the one definition rule. The ODR is ONE problem, but the symptom can at least be caught at run time by using this method.
A dynamic cast will fix'er'up.
test t;
// Using a pointer to make the cast a little more obvious
base *b = dynamic_cast<base *>(&t);
That's straight out of the tutorial at http://www.cplusplus.com/doc/tutorial/typecasting/. On failure it'll return a null pointer (for pointer casts) or throw an exception (for reference casts). Either way, you'll catch the error at run time.
Although a dynamic_cast is technically better practice, it is possible to use a static_cast also. UPDATE: you wanted to know how to catch the problem, and a static_cast won't catch it at run time, nor at compile time, sorry.
Nextly, to avoid problems like this in the future, use explicit namespaces. There's really no reason ever to not use them. Even your main program can use one, even if it's long, by aliasing it.
I'll rip an example from IBM because they're schmucks:
namespace INTERNATIONAL_BUSINESS_MACHINES {
    void f();
}
namespace IBM = INTERNATIONAL_BUSINESS_MACHINES;
If your libraries aren't using namespaces then they are poor libraries and should be deleted, then the media they were on should be dipped in a vat of acid, and whatever tubes you downloaded them through should get a triple dose of Draino. Though of course we often are stuck using code that leaves things to be desired...