I know adding a static member function is fine, but how about an enum definition? No new data members, just its definition.
A little background:
I need to add a static member function to a class that will recognize the version of an IP address from its string representation. The first thing that comes to mind is to declare an enum for IPv4, IPv6 and Unknown, and make this enum the return type of my function.
But I don't want to break the binary backward compatibility.
And a really bad question (for SO): is there any source, or a question here, where I can read more about that? I mean, what breaks binary compatibility and what does not? Or does it depend on many things (like architecture, OS, compiler...)?
EDIT: Regarding @PeteKirkham's comment: okay then, at least, is there a way to test/check for a changed ABI, or is it better to post a new question about that?
EDIT2: I just found an SO question: Static analysis tool to detect ABI breaks in C++. I think it's related and answers the part about a tool for checking binary compatibility, which is why I link it here.
The real question here is obviously: WHY make it a class (static) member?
It seems obvious from the description that this could perfectly well be a free function in its own namespace (and probably its own header file), or, if its use is isolated, be defined in an anonymous namespace within the source file.
Although this could still potentially break the ABI, it would take a really funny compiler to do so.
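For illustration only, here is a rough sketch of what such a free function might look like (the namespace, names and string heuristics are all invented, not taken from the question):

#include <string>

namespace net {

enum class IpVersion { V4, V6, Unknown };

// Hypothetical free function: classify an address string without touching
// the existing class at all, so its layout and exported symbols stay as-is.
inline IpVersion ip_version(const std::string& address)
{
    if (address.find(':') != std::string::npos)
        return IpVersion::V6;   // very rough heuristic, illustration only
    if (address.find('.') != std::string::npos)
        return IpVersion::V4;
    return IpVersion::Unknown;
}

} // namespace net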
As for ABI breakage:
modifying the size of a class: adding data members, unless you manage to stash them into previously unused padding (compiler-specific, of course)
modifying the alignment of a class: changing data members; there are tricks to artificially inflate the alignment (union), but deflating it requires compiler-specific pragmas or attributes and compliant hardware
modifying the layout of a vtable: adding a virtual method may change the offsets of previous virtual methods in the vtable. For gcc, the vtable is laid out in the order of declaration, so adding the virtual method at the end works... however this does not work in base classes, as the vtable layout may be shared with derived classes. Best considered frozen
modifying the signature of a function: the name of the symbol usually depends both on the name of the function itself and on the types of its arguments (plus, for methods, the name of the class and the qualifiers of the method). You can add a top-level const on an argument, since it's ignored anyway, and you can normally change the return type (though this might entail other problems). Note that adding a parameter with a default value does break the ABI, since defaults are ignored as far as signatures are concerned. Best considered frozen
removing any function or class that previously exported symbols (i.e. classes with direct or inherited virtual methods)
I may have forgotten one or two points, but that should get you going for a while already.
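To make the list concrete, here's a hypothetical before/after sketch (class and member names invented) contrasting additions that leave the ABI alone with changes that break it:

// Hypothetical class as originally shipped.
class Address {
public:
    int version() const;
private:
    char bytes[16];
};

// ABI-compatible evolution: a nested enum and a static member function
// add no data, no vtable, and change no existing mangled name.
class AddressSafe {
public:
    enum Version { V4, V6, Unknown };          // new nested enum: fine
    static Version classify(const char* str);  // new static member function: fine
    int version() const;
private:
    char bytes[16];
};

// ABI-breaking evolution: the object's size and layout change.
class AddressBroken {
public:
    virtual int version() const;   // first virtual function adds a vptr
private:
    char bytes[16];
    int cached_version;            // new data member changes the size
};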
Example of what an ABI is: the Itanium ABI.
Formally... if you link files which were compiled against two different versions of your class, you've violated the one definition rule, which is undefined behavior. Practically... about the only things which break binary compatibility are adding data members or virtual functions (non-virtual functions are fine), changing the name or signature of a function, or anything involving base classes. And this seems to be universal; I don't know of a compiler where the rules are different.
Related
In a project I'm reading, there are two header files and two declarations of the same class. One is used by programs that use this library, serving as an interface. The other is used by the library itself. The interface header file is simpler: it doesn't contain private members and has fewer methods. Even methods that appear in both files may not appear in the same order. I wonder, is it legal to have two header files for the same class? If it is not, what are the possible consequences?
In short
This is not legal at all. Depending on what's in the private part that is omitted, it might work on some implementations, but it might very well fail at the first change or new release of the compiler. Just don't. There are better ways to achieve the intended objectives.
Some more explanations
Why is it not legal?
It's the One Definition Rule (ODR). It's defined in the standard, in a long section [basic.def.odr]. In summary, it is possible to have multiple definitions of the same class in different compilation units (e.g. your code and the library's code), but only if it's exactly the same definition. This requires, among other things, that exactly the same sequence of tokens is used for the two definitions, which is clearly not the case if you leave out the private members. (There are additional requirements as well, but the first one is already broken, so I'll stop there.)
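To illustrate (hypothetical header names and members, not from the project in question): the following two definitions of the same class, compiled into different translation units, are not the same token sequence, so a program using both has undefined behaviour.

// interface/widget.h -- the header handed to users of the library
class Widget {
public:
    void draw();
};

// internal/widget.h -- the header the library itself is built against
class Widget {
public:
    void draw();
private:
    int cache_;       // extra member: different token sequence, ODR violation
    void refresh();
};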
Why does it work in practice in some cases even if not legal?
It's purely implementation dependent luck. I will not develop this topic, in order not to encourage dangerous behavior.
What alternatives
Just use the same definition of the class everywhere. Private members are private, so what's the risk of leaving them where they were?
Ok, sometimes the definition of private members would require disclosing private types as well, and, as a chain reaction, much too much. In this case, you may think of:
The simple opaque pointer technique, which uses a pointer to a private implementation class whose type is declared, but not defined, in the compilation units that do not need to know about it (a minimal sketch follows this list).
The more elaborate bridge pattern, which allows building a class hierarchy for the abstraction and another for the implementation. It can be used in a similar way to the opaque pointer, but allows for different kinds of private implementation classes (it's a complex pattern, and it's overkill if it's just for hiding private details).
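Here's a minimal sketch of the opaque pointer (pimpl) technique mentioned in the first item; all names are invented for illustration:

// widget.h -- the only header clients see; no private details leak out.
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();                // defined in the .cpp, where Impl is complete
    void draw();
private:
    struct Impl;              // declared here, never defined here
    std::unique_ptr<Impl> impl_;
};

// widget.cpp -- private details live only here.
struct Widget::Impl {
    int cache = 0;
    // ... whatever private types and members are needed ...
};

Widget::Widget() : impl_(std::make_unique<Impl>()) {}
Widget::~Widget() = default;
void Widget::draw() { ++impl_->cache; /* ... */ }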
We all know that members declared protected in a base class can only be accessed from a derived class's own instance. This is a feature of the Standard, and it has been discussed on Stack Overflow multiple times:
Cannot access protected member of another instance from derived type's scope
Why can't my object access protected members of another object defined in common base class?
And others.
But it seems possible to work around this restriction with member pointers, as user chtz has shown me:
struct Base { protected: int value; };

struct Derived : Base
{
    void f(Base const& other)
    {
        //int n = other.value; // error: 'int Base::value' is protected within this context
        int n = other.*(&Derived::value); // ok??? why?
        (void) n;
    }
};
Live demo on coliru
Why is this possible, is it a wanted feature or a glitch somewhere in the implementation or the wording of the Standard?
From comments emerged another question: if Derived::f is called with an actual Base, is it undefined behaviour?
The fact that a member is not accessible using a class member access expression [expr.ref] (aclass.amember) due to access control [class.access] does not make this member inaccessible using other expressions.
The expression &Derived::value (whose type is int Base::*) is perfectly standard compliant, and it designates the member value of Base. Then the expression a_base.*p, where p is a pointer to a member of Base and a_base is an instance of Base, is also standard compliant.
So any standard-compliant compiler shall make the expression other.*(&Derived::value) defined behavior: it accesses the member value of other.
is it a hack?
In a similar vein to using reinterpret_cast, this can be dangerous and may potentially be a source of hard-to-find bugs. But it's well formed, and there's no doubt that it should work.
To clarify the analogy: The behaviour of reinterpret_cast is also specified exactly in the standard and can be used without any UB. But reinterpret_cast circumvents the type system, and the type system is there for a reason. Similarly, this pointer to member trick is well formed according to the standard, but it circumvents the encapsulation of members, and that encapsulation (typically) exists for a reason (I say typically, since I suppose a programmer can use encapsulation frivolously).
[Is it] a glitch somewhere in the implementation or the wording of the Standard?
No, the implementation is correct. This is how the language has been specified to work.
A member function of Derived can obviously access &Derived::value, since value is a protected member of one of its bases.
The result of that operation is a pointer to a member of Base. This can be applied to a reference to Base. Member access privileges do not apply to pointers to members: they apply only to the names of the members.
From comments emerged another question: if Derived::f is called with an actual Base, is it undefined behaviour?
Not UB. Base has the member.
Just to add to the answers and zoom in a bit on the horror I can read between your lines. If you see access specifiers as 'the law', policing you to keep you from doing 'bad things', I think you are missing the point. public, protected, private, const ... are all part of a system that is a huge plus for C++. Languages without it may have many merits but when you build large systems such things are a real asset.
Having said that: I think it's a good thing that it is possible to get around almost all the safety nets provided to you. As long as you remember that 'possible' does not mean 'good'. This is why it should never be 'easy'. But for the rest - it's up to you. You are the architect.
Years ago I could simply do this (and it may still work in certain environments):
#define private public
Very helpful for 'hostile' external header files. Good practice? What do you think? But sometimes your options are limited.
So yes, what you show is kind of a breach in the system. But hey, what keeps you from deriving and handing out public references to the member? If horrible maintenance problems turn you on - by all means, why not?
Basically what you're doing is tricking the compiler, and this is supposed to work. I see this kind of question all the time, and sometimes people get bad results and sometimes it works, depending on how the code translates to assembly.
I remember seeing a case with a const keyword on an integer, where with some trickery the guy was able to change the value and successfully circumvent the compiler's awareness. The result was a wrong value for a simple mathematical operation. The reason is simple: x86 assembly does distinguish between constants and variables, because some instructions contain constants in their opcode. So, since the compiler believes it's a constant, it treats it as a constant and deals with it in an optimized way using the wrong CPU instruction, and bam, you have an error in the resulting number.
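A hedged reconstruction of that const example (I don't have the original code, so this is just the same idea in miniature; writing through the cast pointer is undefined behaviour):

#include <iostream>

int main()
{
    const int c = 10;

    // The "trickery": casting away const and writing through the pointer.
    // This is undefined behaviour for an object that was defined const.
    *const_cast<int*>(&c) = 42;

    // Many compilers have already folded c into the instruction stream as
    // the literal 10, so this may print 11 rather than 43 -- exactly the
    // kind of wrong arithmetic described above.
    std::cout << c + 1 << '\n';
}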
In other words: The compiler will try to enforce all the rules it can enforce, but you can probably eventually trick it, and you may or may not get wrong results based on what you're trying to do, so you better do such things only if you know what you're doing.
In your case, the pointer &Derived::value boils down to an offset in bytes from the beginning of the object. This is basically how the compiler accesses it, so the compiler:
Doesn't see any problem with permissions, because you're accessing value through Derived at compile time.
Can do it, because you're taking the offset in bytes within an object that has the same layout as Derived (well, obviously: the base).
So, you're not violating any rules. You successfully circumvented the compilation rules. You shouldn't do it, exactly because of the reasons described in the links you attached, as it breaks OOP encapsulation, but, well, if you know what you're doing...
It is well established (and the subject of a canonical reference question) that in C++ structs and classes are pretty much interchangeable when writing code by hand.
However, if I want to link to existing code, can I expect it to make any difference (i.e. break, nasal demons etc.) if I redeclare a struct as a class, or vice versa, in a header after the original code has been generated?
So the situation is the type was compiled as a struct (or a class), and I'm then changing the header file to the other declaration before including it in my project.
The real-world use case is that I'm auto-generating code with SWIG, which generates different output depending on whether it's given structs or classes; I need to change one to the other to get it to output the right interface.
The example is here (Irrlicht, SVertexManipulator.h) - given:
struct IVertexManipulator
{
};
I am redeclaring it mechanically as:
/*struct*/class IVertexManipulator
{public:
};
The original library compiles with the original headers, untouched. The wrapper code is generated using the modified forms, and compiled using them. The two are then linked into the same program to work together. Assume I'm using the exact same compiler for both libraries.
Is this sort of thing undefined? "Undefined", but expected to work on real-world compilers? Perfectly allowable?
Other similar changes I'm making include removing some default values from parameters (to prevent ambiguity), and removing field declarations from a couple of classes where the field's type is not visible to SWIG (which changes the structure of the class, but my reasoning is that the generated code shouldn't need that information, only to link to member functions). Again, how much havoc could this cause?
e.g. IGPUProgrammingServices.h:
s32 addHighLevelShaderMaterial(
const c8* vertexShaderProgram,
const c8* vertexShaderEntryPointName/*="main"*/,
E_VERTEX_SHADER_TYPE vsCompileTarget/*=EVST_VS_1_1*/,
const c8* pixelShaderProgram=0,
...
CIndexBuffer.h:
public:
//IIndexList *Indices;
...and so on like that. Other changes include replacing some template parameter types with their typedefs and removing the packed attribute from some structs. Again, it seems like there should be no problem if the altered struct declarations are never actually used in machine code (just to generate names to link to accessor functions in the main library), but is this reliably the case? Ever the case?
This is technically undefined behavior.
3.2/5:
There can be more than one definition of a class type, [... or other things that should be defined in header files ...] in a program provided that each definition appears in a different translation unit, and provided the definitions satisfy the following requirements. Given such an entity named D defined in more than one translation unit, then
each definition of D shall consist of the same sequence of tokens; and
...
... If the definitions of D satisfy all these requirements, then the program shall behave as if there were a single definition of D. If the definitions of D do not satisfy these requirements, then the behavior is undefined.
Essentially, you are changing the first token from struct to class, and inserting tokens public and : as appropriate. The Standard doesn't allow that.
But in all compilers I'm familiar with, this will be fine in practice.
Other similar changes I'm making include removing some default values from parameters (to prevent ambiguity)
This actually is formally allowed, if the declaration doesn't happen to be within a class definition. Different translation units and even different scopes within a TU can define different default function arguments. So you're probably fine there too.
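For example, here's a small sketch (hypothetical function names) of the rule this relies on: declarations in different scopes carry completely distinct sets of default arguments, so the following single translation unit is fine, and each call uses the default visible at the call site:

int scale(int x, int factor = 2);        // file-scope declaration

int twice(int x) { return scale(x); }    // calls scale(x, 2)

int tenfold(int x)
{
    int scale(int x, int factor = 10);   // block-scope redeclaration, its own defaults
    return scale(x);                     // calls scale(x, 10)
}

int scale(int x, int factor) { return x * factor; }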
Other changes include replacing some template parameter types with their typedefs
Also formally allowed outside of a class definition: two declarations of a function that use different ways of naming the same type refer to the same function.
... removing field declarations ... and removing the packed attribute from some structs
Now you're in severe danger territory, though. I'm not familiar with SWIG, but if you do this sort of thing, you'd better be darn sure the code using these "wrong" definitions never:
create or destroy an object of the class type
define a type that inherits or contains a member of the class type
use a non-static data member of the class
call an inline or template function that uses a non-static data member of the class
call a virtual member function of the class type or a derived type
try to find sizeof or alignof the class type
A very specific corner case that MSVC disallows via Compiler Error 2688 is admitted by Microsoft to be non-standard behavior. Does anyone know why MSVC++ has this specific limitation?
The fact that it involves simultaneous usage of three language features ("virtual base classes", "covariant return types", and "variable number of arguments", according to the description in the second linked page) that are semantically orthogonal and fully supported separately seems to imply that this is not a parsing or semantic issue, but a corner case in the Microsoft C++ ABI. In particular, the fact that a "variable number of arguments" is involved seems to (?) suggest that the C++ ABI is using an implicit trailing parameter to implement the combination of the other two features, but can't because there's no fixed place to put that parameter when the function is var arg.
Does anyone have enough knowledge of the Microsoft C++ ABI to confirm whether this is the case, and explain what this implicit trailing argument is used for (or what else is going on, if my guess is incorrect)? The C++ ABI is not documented by Microsoft but I know that some people outside of Microsoft have done work to match the ABI for various reasons so I'm hoping someone can explain what is going on.
Also, Microsoft's documentation is a bit inconsistent; the second page linked says:
Virtual base classes are not supported as covariant return types when the virtual function has a variable number of arguments.
but the first page more broadly states:
covariant returns with multiple or virtual inheritance not supported for varargs functions
Does anyone know what the real story is? I can do some experimentation to find out, but I'm guessing that the actual corner case is neither of these, exactly, but has to do with the specifics of the class hierarchy in a way that the documenters decided to gloss over. My guess is that it has to do with the need for a pointer adjustment in the virtual thunk, but I'm hoping someone with deeper knowledge of the situation than me can explain what's going on under the hood.
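For reference, here's roughly the shape of code that hits this corner case (hypothetical names; I haven't verified the exact MSVC diagnostic, and GCC/Clang accept it):

struct Payload { };
struct RichPayload : virtual Payload { };   // the return type involves a virtual base

struct Producer {
    virtual Payload* make(const char* fmt, ...);   // varargs virtual function
};

struct RichProducer : Producer {
    // Covariant return type whose adjustment back to Payload* must cross a
    // virtual base; this combination is reportedly what MSVC rejects with C2688.
    RichPayload* make(const char* fmt, ...) override;
};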
I can tell you with authority that MSVC's C++ ABI uses implicit extra parameters to do things that other ABIs (namely Itanium) handle by emitting multiple separate functions, so it's not hard to imagine that one is being used here (or would be, if the case were supported).
I don't know for sure what's happening in this case, but it seems plausible that an implicit extra parameter is being passed to tell the thunk implementing the virtual function whether a downcast to the covariant return type class is required (or, more likely, whether an upcast back to the base class is required, since the actual implementing function probably returns the derived class), and that this extra parameter goes last so that it can be ignored by the base classes (which wouldn't know anything about the covariant return).
This implies that the unsupported corner case always occurs when a virtual base class is the original return type (since a thunk will always be required for the derived class), which is what is described in the first quote; it would also happen in some, but not all, cases involving multiple inheritance (which may be why it's included in the second quote, but not the first).
I have a query regarding the explanation provided here: http://www.parashift.com/c++-faq/virtual-functions.html#faq-20.4
In the sample code, the function mycode(Base *p) calls the virt3 method as p->virt3(). How exactly does the compiler know that virt3 is found in the third slot of the vtable? What does it compare, and against what?
When the compiler sees the definition of Base it decides the layout of its vtable according to some algorithm1, which is common to all its derived classes as far as methods inherited from Base are concerned (derived classes may add other virtual methods, but they are put into the vtable after the stuff inherited from Base).
Thus, when the compiler sees p->virt3(), it already knows that for any object that inherits from Base the pointer to the correct virt3 is e.g. in the third slot of the vtable (because that's how it laid out the vtable of Base at the moment of its definition), so it can correctly generate the code for the virtual call.
Long story short (drawing inspiration from @David Rodríguez's comment): it knows where it lives because it decided that beforehand.
1. The standard does not mandate any particular algorithm (actually, it doesn't say anything about how the C++ ABI should be implemented), but there are several widespread C++ ABI specifications, notably the COM ABI on Windows and the Itanium ABI on Linux (and in general for gcc). Obviously, given the same class definition, the algorithm must give the same vtable layout every time, otherwise it would be impossible to link together different object modules.
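To make the slot-index idea concrete, here is a hand-rolled simulation of the machinery (only an illustration of a typical single-inheritance layout, not what any particular compiler actually emits):

#include <cstdio>

// Each "object" carries a pointer to a table of function pointers, and a
// virtual call is just an index into that table chosen at compile time.
struct FakeBase;
using VirtFn = void (*)(FakeBase*);

struct FakeBase {
    const VirtFn* vtable;   // the hidden vptr
};

void base_virt1(FakeBase*) { std::puts("Base::virt1"); }
void base_virt2(FakeBase*) { std::puts("Base::virt2"); }
void base_virt3(FakeBase*) { std::puts("Base::virt3"); }

// Slots appear in declaration order, mirroring how the vtable of Base is laid out.
const VirtFn base_vtable[] = { base_virt1, base_virt2, base_virt3 };

int main() {
    FakeBase obj{ base_vtable };
    FakeBase* p = &obj;
    p->vtable[2](p);   // "p->virt3()": slot index 2 was fixed when the layout was decided
}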
The layout of the vtable is specified by the Itanium C++ ABI, followed by many compilers including GCC. The compiler itself doesn't decide where the function pointers go (though I suppose it does decide to abide by the ABI!).
The order of the virtual function pointers in a virtual table is the order of declaration of the corresponding member functions in the class.
(Example.)
COM — used by Visual Studio — also emits vtable pointers in source order (though I can't find normative documentation to prove that).
Also, because the function name doesn't even exist at runtime (only a function pointer does), the layout of the vtable at compile time doesn't really matter in the way you might think. The function call translation works just the same way a normal function call translation works: the compiler is already mapping the function name to an address in its internal machinery. The only difference is that the mapping here is to a location in the vtable, rather than to the start of the actual function code.
This also addresses your concern about interoperability, to some extent.
Do remember, though, that this is all implementation machinery and C++ itself has no knowledge that virtual tables even exist.
The compiler has a well defined algorithm for allocating the entries in the vtable so that the order of the entries will always be the same regardless of which translation unit is being processed. Internal to the compiler is a mapping between the function names and their location in the vtable so the compiler can do the correct transformation between function call and vtable index.
It is important, therefore, that changes to the definition of a class with virtual functions causes all source files that are dependent on the class to be recompiled, otherwise bad things could happen.