Does adding enumerators to an enum break ABI? - c++

In particular, I have the following code in a library interface:
typedef enum
{
    state1,
    state2,
    state3,
    state4,
    state5,
    state_error = -1,
} State;
I am strictly forbidden from breaking the ABI. However, I want to add state6 and state7. Will that break the ABI?
I found a tip here, but I have some doubt as to whether it covers my case:
You can...
append new enumerators to an existing enum.
Exception: if that leads to the compiler choosing a larger underlying type for the enum, that makes the change binary-incompatible. Unfortunately, compilers have some leeway to choose the underlying type, so from an API-design perspective it's recommended to add a Max.... enumerator with an explicit large value (=255, =1<<15, etc.) to create an interval of numeric enumerator values that is guaranteed to fit into the chosen underlying type, whatever that may be.
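To illustrate the tip on the enum above (a sketch; state_max is a hypothetical name, and note that such a guard only helps if it ships in the very first released version, since adding it later could itself change the underlying type):
typedef enum
{
    state1,
    state2,
    state3,
    state4,
    state5,
    state_error = -1,
    /* Guard enumerator: forces an underlying type that can hold the
       whole interval [-1, 255], so adding future enumerators in that
       range can never change the type. */
    state_max = 255
} State;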

Your question is a nice example of why long-term maintenance of ABI compatibility is a difficult task. The core of the problem here is that compatibility depends not just on the given type, but also on how it is used in function/method prototypes or complex types (e.g. structures, unions etc.).
(1) If the enumeration is used strictly as an input into the library (e.g. as a parameter of a function which just changes the behavior of the function/library), then it keeps the compatibility: you changed the contract in a way which can never hurt the customer, i.e. the calling application. Old applications will never use the new values and will get the old behavior; new applications just get more options.
(2) If the enumeration is used anywhere as an output from the library (e.g. a return value, or a function filling some address provided by the caller, aka an output parameter), the change would break the ABI. Consider the enumeration to be a contract saying "the application never sees values other than those listed". Adding a new enum member would break this contract, because old applications could now see values they never counted on.
That is, at least, if there are no measures to protect old applications from falling into these troubles. Generally speaking, the library can still output the new value, but never for any valid input potentially provided by the old applications.
There are some design patterns allowing such enum expansions:
E.g. the library can provide an initialization function which allows the application to specify the version of the ABI it is ready for. Old applications ask for version 1.0 and never see the new value; newer applications specify 1.1 or 2.0, and if the new enum value was added in version 1.1, they may then receive it.
Or, if a function DoSomething() is getting some flags on input, you may add a new flag through which the application can specify that it is ready to see the new output value.
Or, if that's not possible, a new version of the library may add a new function DoSomethingEx() which provides more complex behavior than the original DoSomething(). DoSomethingEx() can then return the new enum value; DoSomething() cannot.
As a side note: if you ever need to add such a DoSomethingEx(), do it in a way that allows similar expansions in the future. For consistency, it's usually a good idea to design it so that DoSomethingEx() with default flags (usually zero) behaves the same way as DoSomething(), and only with some new flag(s) does it offer different and more complex behavior.
The drawback, of course, is that the library implementation has to check what the application is ready for and provide behavior compatible with the expectations of old applications. It may not seem like much, but over time and many versions of the library, dozens of such checks may accumulate in the library implementation, making it more complex and harder to maintain.
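A minimal sketch of the version-negotiation pattern described above (all names are illustrative; compute_state() stands for whatever internal logic produces the state):
State compute_state(void);        /* hypothetical internal logic */
static int g_abi_version = 100;   /* 1.00: the old contract by default */

void lib_init(int abi_version)    /* called once by the application */
{
    g_abi_version = abi_version;
}

State lib_get_state(void)
{
    State s = compute_state();    /* may internally yield state6/state7 */
    if (g_abi_version < 110 && s > state5)
        return state_error;       /* old applications never see the new
                                     values; a real library would map them
                                     to whatever the old contract allows */
    return s;
}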

The quote actually covers your case. Simply add the new enum values at the end (but before state_error, as it has a different explicit value) and it should be binary compatible, unless, as mentioned in the quote you provided, the compiler chooses a different-sized underlying type, which doesn't seem likely in the case of such a small enum.
The best way is to try and check: a simple sizeof(State) executed before and after the changes should be enough (though you also might want to check if the values are still the same).
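For instance, with C++11 you can even make the check a compile-time assertion (the 4 below is an assumption; substitute whatever sizeof(State) reported before the change on your platform):
static_assert(sizeof(State) == 4, "enum underlying type size changed");
static_assert(state5 == 4 && state_error == -1,
              "existing enumerator values changed");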

Take a look at the highest-valued enumerator: state5 is 4.
That means that even if the compiler chose char as the underlying type, you could comfortably fit 100+ additional enumerators there without risking damage to binary compatibility.
That presupposes that users only ever supply values of the enum to the library, and never read one back, though.

Related

Flex/Bison: cannot use semantic_type

I am trying to create a C++ flex/bison parser. I used this tutorial as a starting point and did not change any bison/flex configuration. I am now stuck at the point of trying to unit test the lexer.
I have a function in my unit tests that directly calls yylex, and checks the result of it:
private: static void checkIntToken(MyScanner &scanner, Compiler *comp, unsigned long expected, unsigned char size, char isUnsigned, unsigned int line, const std::string &label) {
    yy::MyParser::location_type loc;
    yy::MyParser::semantic_type semantic; // <---- it seems like the destructor of this variable causes the crash
    int type = scanner.yylex(&semantic, &loc, comp);
    Assert::equals(yy::MyParser::token::INT, type, label + "__1");
    MyIntToken* token = semantic.as<MyIntToken*>();
    Assert::equals(expected, token->value, label + "__2");
    Assert::equals(size, token->size, label + "__3");
    Assert::equals(isUnsigned, token->isUnsigned, label + "__4");
    Assert::equals(line, loc.begin.line, label + "__5");
    // execution comes to this point, and then the program crashes
}
The error message is:
program: ../src/__autoGenerated__/MyParser.tab.hh:190: yy::variant<32>::~variant() [S = 32]: Assertion `!yytypeid_' failed.
I have tried to follow the logic in the auto-generated bison files and make some sense of it, but I did not succeed and ultimately gave up. I then searched the web for advice about this error message, but did not find any.
The location indicated by the error has the following code:
~variant ()
{
    YYASSERT (!yytypeid_);
}
EDIT: The problem disappears only if I remove the
%define parse.assert
option from the bison file. But I am not sure if this is a good idea...
What is the proper way to obtain the value of the token generated by flex, for unit testing purposes?
Note: I've tried to explain bison variant types to the best of my knowledge. I hope it is accurate but I haven't used them aside from some toy experiments. It would be an error to assume that this explanation in any way implies an endorsement of the interface.
The so-called "variant" type provided by bison's C++ interface is not a general-purpose variant type. That was a deliberate decision based on the fact that the parser is always able to figure out the semantic type associated with a semantic value on the parser stack. (This fact also allows a C union to be used safely within the parser.) Recording type information within the "variant" would therefore be redundant. So they don't. In that sense, it is not really a discriminated union, despite what one might expect of a type named "variant".
(The bison variant type is a template with an integer (non-type) template argument. That argument is the size in bytes of the largest type which is allowed in the variant; it does not in any other way specify the possible types. The semantic_type alias serves to ensure that the same template argument is used for every bison variant object in the parser code.)
Because it is not a discriminated union, its destructor cannot destruct the current value; it has no way to know how to do that.
This design decision is actually mentioned in the (lamentably insufficient) documentation for the Bison "variant" type. (When reading this, remember that it was originally written before std::variant existed. These days, it would be std::variant which was being rejected as "redundant", although it is also possible that the existence of std::variant might have had the happy result of revisiting this design decision). In the chapter on C++ Variant Types, we read:
Warning: We do not use Boost.Variant, for two reasons. First, it appeared unacceptable to require Boost on the user’s machine (i.e., the machine on which the generated parser will be compiled, not the machine on which bison was run). Second, for each possible semantic value, Boost.Variant not only stores the value, but also a tag specifying its type. But the parser already “knows” the type of the semantic value, so that would be duplicating the information.
Therefore we developed light-weight variants whose type tag is external (so they are really like unions for C++ actually).
And indeed they are. So any use of a bison "variant" must have a definite type:
You can build a variant with an argument of the type to build. (This is the only case where you don't need a template parameter, because the type is deduced from the argument. You would have to use an explicit template parameter only if the argument were not of the precise type; for example, an integer of lesser rank.)
You can get a reference to the value of known type T with as<T>. (This is undefined behaviour if the value has a different type.)
You can destruct the value of known type T with destroy<T>.
You can copy or move the value from another variant of known type T with copy<T> or move<T>. (move<T> involves constructing and then destructing a T(), so you might not want to do it if T had an expensive default constructor. On the whole, I'm not convinced by the semantics of the move method. And its name conflicts semantically with std::move, but again it came first.)
You can swap the values of two variants which both have the same known type T with swap<T>.
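Putting these rules together (a sketch against the generated semantic_type, reusing the names from the question and assuming MyIntToken is default-constructible):
yy::MyParser::semantic_type v;
v.build<MyIntToken*>(new MyIntToken()); // construct the variant with a known type
MyIntToken* t = v.as<MyIntToken*>();    // read it back with the same type
// ... use t ...
delete t;                               // destroy<T> ends the stored value's
v.destroy<MyIntToken*>();               // lifetime but does not delete the pointee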
Now, the generated parser understands all these restrictions, and it always knows the real types of the "variants" it has at its disposal. But you might come along and try to do something with one of these objects in a way that violates a constraint. Since the object really doesn't have any way to check the constraint, you'll end up with undefined behaviour which will probably have some disastrous eventual consequence.
So they also implemented an option which allows the "variant" to check the constraints. Unsurprisingly, this consists of adding a discriminator. But since the discriminator is only used to validate and not to modify behaviour, it is not a small integer which chooses between a small number of known alternatives, but rather a pointer to a std::type_info (or NULL if the variant does not yet contain a value). (To be fair, in most cases alignment constraints mean that using a pointer for this purpose is no more expensive than using a small enum. All the same...)
So that's what you're running into. You enabled assertions with %define parse.assert; that option was provided specifically to prevent you from doing what you are trying to do, which is let the variant object's destructor run before the variant's value is explicitly destructed.
So the "correct" way to avoid the problem is to insert an explicit call at the end of the scope:
    // execution comes to this point, and then, without the following
    // call, the program will fail on an assertion
    semantic.destroy<MyIntToken*>();
}
With the parse assertion enabled, the variant object will be able to verify that the types specified as template parameters to semantic.as<T> and semantic.destroy<T> are the same types as the value stored in the object. (Without parse.assert, that too is your responsibility.)
Warning: opinion follows.
In case anyone reading this cares, my preference for using real std::variant types comes from the fact that it is actually quite common for the semantic value of an AST node to require a discriminated union. The usual solution (in C++) is to construct a type hierarchy which is, in some ways, entirely artificial, and it is quite possible that std::variant can better express the semantics.
In practice, I use the C interface and my own discriminated union implementation.

What is the actual purpose of std::type_info::name()?

Today a colleague of mine came and asked me the question as mentioned in the title.
He's currently trying to reduce the binary footprint of a codebase that is also used on small targets (like the Cortex-M3 and the like). Apparently they have decided to compile with RTTI switched on (GCC, actually) to support proper exception handling.
Well, his major complaint was why std::type_info::name() is actually needed at all to support RTTI, and he asked if I know a way to just suppress generation of the string literals needed to support it, or at least to shorten them.
std::type_info::name
const char* name() const;
Returns an implementation defined null-terminated character string containing the name of the type. No guarantees are given, in particular, the returned string can be identical for several types and change between invocations of the same program.
A, however compiler-specific, implementation of e.g. the dynamic_cast<> operator would not use this information, but rather something like a hash tag for type determination (similarly for catch() blocks in exception handling).
I think the latter is clearly expressed by the current standard definitions for
std::type_info::hash_code
std::type_index
I had to agree that I also don't really see a point in using std::type_info::name(), other than for debugging (logging) purposes. I wasn't 100% sure that exception handling would work without RTTI in current versions of GCC (I think they're using 4.9.1), so I hesitated to recommend simply switching off RTTI.
It's also the case that dynamic_cast<> is used in their code base, but for these uses I just recommended not to use it, in favor of static_cast (they don't really have something like plugins, or a need for runtime type detection other than assertions).
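For illustration, the kind of replacement I recommended (a sketch; Base and Derived are placeholder names):
struct Base { virtual ~Base() {} };
struct Derived : Base { void extra() {} };

void use(Base* b)
{
    // No runtime check happens here: this is only valid because the
    // caller guarantees that b really points to a Derived.
    static_cast<Derived*>(b)->extra();
}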
Question:
Are there real life, production code level use cases for std::type_info::name() other than logging?
Sub-Questions (more concrete):
Does anyone have an idea, how to overcome (work around) the generation of these useless string literals (under assumption they'll never be used)?
Is RTTI really (still) needed to support exception handling with GCC?
(This part is well solved now by #Sehe's answer, and I have accepted it. The other sub-question still remains for the left over generated std::type_info instances for any exceptions used in the code. We're pretty sure, that these literals are never used anywhere)
Somewhat related: Strip unused runtime functions which bloat executable (GCC)
Isolating this bit:
The point is, if they switch off RTTI, will exception handling still work properly in GCC? — πάντα ῥεῖ 1 hour ago
The answer is yes:
-fno-rtti
Disable generation of information about every class with virtual functions for use by the C++ runtime type identification features (dynamic_cast and typeid). If you don't use those parts of the language, you can save some space by using this flag. Note that exception handling uses the same information, but it will generate it as needed. The dynamic_cast operator can still be used for casts that do not require runtime type information, i.e. casts to void * or to unambiguous base classes.
Are there real life, production code level use cases for std::type_info::name() other than logging?
The Itanium ABI describes how operator== for std::type_info objects can be easily implemented in terms of testing strings returned from std::type_info::name() for pointer equality.
In a non-flat address space, where it might be possible to have multiple type_info objects for the same type (e.g. because a dynamic library has been loaded with RTLD_LOCAL) the implementation of operator== might need to use strcmp to determine if two types are the same.
So the name() function is used to determine if two type_info objects refer to the same type. For examples of real use cases, that's typically used in at least two places in the standard library, in std::function<F>::target<T>() and std::get_deleter<D>(const std::shared_ptr<T>&).
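Conceptually, the two strategies look something like this (a simplified sketch, not the actual libstdc++ source):
#include <cstring>
#include <typeinfo>

// Flat address space: a single name string per type, so pointer
// comparison of name() is enough.
bool equal_flat(const std::type_info& a, const std::type_info& b)
{
    return a.name() == b.name();
}

// Non-flat case (e.g. RTLD_LOCAL): duplicate type_info objects can
// exist, so fall back to comparing the strings themselves.
bool equal_general(const std::type_info& a, const std::type_info& b)
{
    return &a == &b || std::strcmp(a.name(), b.name()) == 0;
}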
If you're not using RTTI then all that's irrelevant, as you won't have any type_info objects anyway (and consequently in libstdc++ the function::target and get_deleter functions can't be used).
I think GCC's exception-handling code uses the addresses of type_info objects themselves, not the addresses of the strings returned by name(), so if you use exceptions but no RTTI the name() strings aren't needed.

What is std::mbstate_t?

I'm creating a custom locale by deriving from std::codecvt. Most of the methods I'm supposed to implement are pretty straightforward, except for this std::mbstate_t. On my compiler, VS2010, it's declared as an int. But Google tells me it's a POD type; it's sometimes a union (of what, I don't know) or a struct (again, I can't find it).
As I understand it, std::mbstate_t is a placeholder for partial conversions. And, I think, it comes into play when std::codecvt::do_out() requires more space to write the output, which in turn will call std::codecvt::do_unshift(). Please correct me if my assumptions are wrong.
I've read another post about storing pointers, though the post doesn't have an adequate answer. I've also read this example which presumes it to be a 32-bit type, although the standard states an int to be no less than 16 bits.
My question: what can I safely store in std::mbstate_t? Can I safely replace it with another type? The answer to the above post suggests replacing it, but the following comment says otherwise.
I think that /the/ book concerning these things is C++ IOStreams and Locales by Langer and Kreft, if you seriously want to mess with these things, try to get hold of a copy. Now, coming back to your question, the mbstate_t is used to hold the state of the conversion. Normally, you would store this inside the conversion facet, but since the facets are immutable, you need to store it externally. In practice, that is used when you need more than a sequence of bytes to determine the according character, the Linux manpage of mbsinit() gives ISO-2022 and UTF-7 as examples for such encodings. Note that this does not affect UTF-8, where a single Unicode codepoint is always encoded by a sequence of bytes and without anything before or after that affecting the results. Partial UTF-8 sequences are also not handled by that, do_in() returns partial instead.
Now, what can you store in the mbstate_t? Since the actual type is undefined and the number of functions to manipulate it is very limited, there is little you can do with it at first sight. However, nothing else does anything with that state either, so you can do some ugly hacking on it. This might require a few #ifdefs depending on the standard library, but then you can simply (ab)use the fact that it's a POD (ints and unions are also PODs) to store pretty much any POD that is not larger. This won't win you a beauty prize and the code won't work on any system automatically, but I think in this case it's unavoidable, and the work for porting is also limited.
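A sketch of that hack (MyState is a placeholder; the static_assert needs C++11, on C++03 you'd use a compile-time trick like a negative-size array instead):
#include <cstring>
#include <cwchar>   // std::mbstate_t

struct MyState { unsigned pending_bytes; };

static_assert(sizeof(MyState) <= sizeof(std::mbstate_t),
              "state must fit into mbstate_t");

void store(std::mbstate_t& mb, const MyState& s)
{
    std::memcpy(&mb, &s, sizeof s); // (ab)use mbstate_t as raw POD storage
}

MyState load(const std::mbstate_t& mb)
{
    MyState s;
    std::memcpy(&s, &mb, sizeof s);
    return s;
}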
Finally, can you replace it? This type is part of std::char_traits, which in turn affects really all strings and streams, so you would need to replace them throughout your program or convert between them. Further, if you now create a new char_traits class, you still can't easily instantiate e.g. basic_string with it, because there is no guarantee that a general basic_string template even exists; it is only required that the two specializations for char and wchar_t (and some more for C++11) exist. Ditto for streams. In short: no, you can't replace mbstate_t.

Will C++ compiler generate code for each template type?

I have two questions about templates in C++. Let's imagine I have written a simple List and now I want to use it in my program to store pointers to different object types (A*, B* ... ALot*). My colleague says that a dedicated piece of code will be generated for each type, even though all the pointers in fact have the same size.
If this is true, can somebody explain to me why? For example, in Java generics have the same purpose as templates for pointers in C++. Generics are only used for compile-time type checking and are stripped away during compilation. And of course the same byte code is used for everything.
The second question is: will dedicated code also be generated for char and short (considering that they both have the same size and there is no specialization)?
If this makes any difference, we are talking about embedded applications.
I have found a similar question, but it did not completely answer my question: Do C++ template classes duplicate code for each pointer type used?
Thanks a lot!
I have two questions about templates in C++. Let's imagine I have written a simple List and now I want to use it in my program to store pointers to different object types (A*, B* ... ALot*). My colleague says that a dedicated piece of code will be generated for each type, even though all the pointers in fact have the same size.
Yes, this is equivalent to having both functions written.
Some linkers will detect the identical functions and eliminate them. Some libraries are aware that their linker doesn't have this feature, and factor out common code into a single implementation, leaving only a casting wrapper around the common code. I.e., a std::vector<T*> specialization may forward all work to a std::vector<void*>, then do casting on the way out.
Now, comdat folding is delicate: it is relatively easy to make functions you think are identical, but end up not being the same, so two functions are generated. As a toy example, you could go off and print the typename via typeid(x).name(). Now each version of the function is distinct, and they cannot be eliminated.
In some cases, you might do something like this thinking that it is a run time property that differs, and hence identical code will be created, and the identical functions eliminated -- but a smart C++ compiler might figure out what you did, use the as-if rule and turn it into a compile-time check, and block not-really-identical functions from being treated as identical.
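A toy sketch of such a not-really-identical instantiation (report is a hypothetical function):
#include <iostream>
#include <typeinfo>

template <typename T>
void report(T* p)
{
    // The embedded typeid(T).name() string differs per instantiation,
    // which is enough to defeat identical-code (COMDAT) folding.
    std::cout << typeid(T).name() << " at " << static_cast<void*>(p) << "\n";
}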
If this is true, can somebody explain to me why? For example, in Java generics have the same purpose as templates for pointers in C++. Generics are only used for compile-time type checking and are stripped away during compilation. And of course the same byte code is used for everything.
No, they aren't. Generics are roughly equivalent to the C++ technique of type erasure, such as what std::function<void()> does to store any callable object. In C++, type erasure is often done via templates, but not all uses of templates are type erasure!
The things that C++ does with templates that are not in essence type erasure are generally impossible to do with Java generics.
In C++, you can create a type erased container of pointers using templates, but std::vector doesn't do that -- it creates an actual container of pointers. The advantage to this is that all type checking on the std::vector is done at compile time, so there doesn't have to be any run time checks: a safe type-erased std::vector may require run time type checking and the associated overhead involved.
The second question is: will dedicated code also be generated for char and short (considering that they both have the same size and there is no specialization)?
They are distinct types. I can write code that will behave differently with a char or short value. As an example:
std::cout << x << "\n";
with x being a short, this prints an integer whose value is x; with x being a char, it prints the character corresponding to x.
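For instance:
#include <iostream>

int main()
{
    short s = 65;
    char  c = 65;
    std::cout << s << "\n"; // prints "65"
    std::cout << c << "\n"; // prints "A"
}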
Now, almost all template code exists in header files, and is implicitly inline. While inline doesn't mean what most folk think it means, it does mean that the compiler can hoist the code into the calling context easily.
If this makes any difference, we are talking about embedded applications.
What really makes a difference is what your particular compiler and linker is, and what settings and flags they have active.
The answer is maybe. In general, each instantiation of a template is a unique type, with a unique implementation, and will result in a totally independent instance of the code. Merging the instances is possible, but would be considered "optimization" (under the "as if" rule), and this optimization isn't widespread.
With regards to comparisons with Java, there are several points to keep in mind:
C++ uses value semantics by default. A std::vector, for example, will actually insert copies. And whether you're copying a short or a double does make a difference in the generated code. In Java, short and double will be boxed, and the generated code will clone a boxed instance in some way; cloning doesn't require different code, since it calls a virtual function of Object, but physically copying does.
C++ is far more powerful than Java. In particular, it allows comparing things like the addresses of functions, and it requires that the functions in different instantiations of templates have different addresses. Usually, this is not an important point, and I can easily imagine a compiler with an option which tells it to ignore this point, and to merge instances which are identical at the binary level. (I think VC++ has something like this.)
Another issue is that the implementation of a template in C++ must be present in the header file. In Java, of course, everything must be present, always, so this issue affects all classes, not just templates. This is, of course, one of the reasons why Java is not appropriate for large applications. But it means that you don't want any complicated functionality in a template; doing so loses one of the major advantages of C++, compared to Java (and many other languages). In fact, it's not rare, when implementing complicated functionality in templates, to have the template inherit from a non-template class which does most of the implementation in terms of void*. While implementing large blocks of code in terms of void* is never fun, it does have the advantage of offering the best of both worlds to the client: the implementation is hidden in compiled files, invisible in any way, shape or manner to the client.
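A sketch of that last pattern (all names are illustrative): a non-template core that is compiled once, wrapped by a header-only casting shim.
#include <cstddef>
#include <vector>

// Non-template core: compiled once, works in terms of void*.
class ListImpl
{
    std::vector<void*> items_;
public:
    void push_back(void* p)       { items_.push_back(p); }
    void* at(std::size_t i) const { return items_[i]; }
};

// Header-only casting shim: one thin instantiation per T, each of
// which the compiler can inline away entirely.
template <typename T>
class List
{
    ListImpl impl_;
public:
    void push_back(T* p)       { impl_.push_back(p); }
    T* at(std::size_t i) const { return static_cast<T*>(impl_.at(i)); }
};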

Creating serializable unique compile-time identifiers for arbitrary UDTs

I would like a generic way to create unique compile-time identifiers for any C++ user defined types.
for example:
unique_id<my_type>::value == 0 // true
unique_id<other_type>::value == 1 // true
I've managed to implement something like this using preprocessor metaprogramming; the problem is that serialization is not consistent. For instance, if the class template unique_id is instantiated with other_type first, then any serialization from previous revisions of my program will be invalidated.
I've searched for solutions to this problem and found several ways to implement this with non-consistent serialization if the unique values are compile-time constants. If RTTI or similar methods, like boost::sp_typeinfo, are used, then the unique values are obviously not compile-time constants and extra overhead is present. An ad-hoc solution to this problem would be instantiating all of the unique_ids in a separate header in the correct order, but this causes additional maintenance and boilerplate code, which is no different from using an enum unique_id{my_type, other_type};.
A good solution to this problem would be using user-defined literals; unfortunately, as far as I know, no compiler supports them at the moment. The syntax would be 'my_type'_id; 'other_type'_id; with UDLs.
I'm hoping somebody knows a trick that allows implementing serializable unique identifiers in C++ with the current standard (C++03/C++0x). I would be happy if it works with the latest stable MSVC and GNU G++ compilers, although I expect that if there is a solution, it's not portable.
I would like to make clear, that using mpl::set or similar constructs like mpl::vector and filtering, does not solve this problem, because the scope of the meta-set/vector is limited and actually causes more problems than just preprocessor meta programming.
A while back I added a build step to one project of mine, which allowed me to write #script_name(args) in a C++ source file and have it automatically replaced with the output of the associated script, for instance ./script_name.pl args or ./script_name.py args.
You may balk at the idea of polluting the language into nonstandard C++, but all you'd have to do is write #sha1(my_type) to get the unique integer hash of the class name, regardless of build order and without the need for explicit instantiation.
This is just one of many possible nonstandard solutions, and I think a fairly clean one at that. There's currently no great way to impose an arbitrary, consistent ordering on your classes without just specifying it explicitly, so I recommend you simply give in and go the explicit instantiation route; there's nothing really wrong with centralising the information, but as you said it's not all that different from an enumeration, which is what I'd actually use in this situation.
Persistence of data is a very interesting problem.
My first question would be: do you really want serialization ? If you are willing to investigate an alternative, then jump to the next section.
If you're still there, I think you have not given the typeid solution all its due.
// static detection
template <typename T>
size_t unique_id()
{
    static size_t const id = some_hash(typeid(T)); // or boost::sp_typeinfo
    return id;
}

// dynamic detection
template <typename T>
size_t unique_id(T const& t)
{
    return some_hash(typeid(t)); // no memoization possible
}
Note: I am using a local static to avoid the order of initialization issue, in case this value is required before main is entered
It's pretty similar to your unique_id<some_type>::value, and even though it's computed at runtime, it's only computed once, and the result (for the static detection) is then memoized for future calls.
Also note that it's fully generic: no need to explicitly write the function for each type.
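For concreteness, some_hash might look like this (a hypothetical helper using FNV-1a over the implementation-defined name, so values are stable within one toolchain but not across compilers):
#include <cstddef>
#include <typeinfo>

inline std::size_t some_hash(std::type_info const& ti)
{
    // 32-bit FNV-1a over the implementation-defined type name.
    std::size_t h = 2166136261u;
    for (char const* p = ti.name(); *p; ++p)
    {
        h ^= static_cast<unsigned char>(*p);
        h *= 16777619u;
    }
    return h;
}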
It may seem silly, but the issue of serialization is that you have a one-to-one mapping between the type and its representation:
you need to version the representation, so as to be able to decode "older" versions
dealing with forward compatibility is pretty hard
dealing with cyclic references is pretty hard (some frameworks handle it)
and then there is the issue of moving information from one version to another --> deserializing older versions becomes messy and frustrating
For persistent saves, I usually recommend using a dedicated BOM. Think of the saved data as a message to your future self. And I usually go the extra mile and propose the awesome Google Protocol Buffers library:
Backward and Forward compatibility baked-in
Several format outputs -> human readable (for debug) or binary
Several languages can read/write the same messages (C++, Java, Python)
Pretty sure that you will have to implement your own extension to make this happen, I've not seen nor heard of any such construct for compile-time. MSVC offers __COUNTER__ for the preprocessor but I know of no template equivalent.