Does the "type" of a struct change from computer to computer? - c++

Let's assume I have this code;
class Ingredients {
public:
    Ingredients(int size, string name);
    int getsize();
private:
    string name;
    int size;
};

struct Chain {
    Ingredients* ing;
    Chain* next;
};
And in my main;
int main()
{
    Chain* chain = new Chain;
    cout << typeid(chain).name() << endl;
    cout << typeid(chain->ing).name() << endl;
    cout << typeid(chain->next).name() << endl;
}
my headers are;
#include <iostream>
#include <string>
#include <typeinfo>
using namespace std;
and finally outputs;
P8Chain
P12Ingredients
P8Chain
So my question is: are these types reliable enough to use in code? If the types change from computer to computer (because of the P8 and P12 prefixes I am not sure they would stay the same), they wouldn't be reliable. What are your opinions?
Also, they do not change between runs.

They depend on your compiler, so don't use them inside your code.
The C++ standard says the following concerning typeid (section 5.2.8):
The result of a typeid expression is an lvalue of static type const std::type_info and dynamic type const std::type_info or const name where name is an implementation-defined class derived from std::type_info.
What you can do if you want some sort of RTTI is
if (typeid(myobject) == typeid(Chain)) {
    do_something();
}

It depends on what you mean by "type". The more or less standard definition of a type is the set of values it can take and the operations on them, and this will change from one machine to the next, because the size of int will change, or the maximum length of a string. On the other hand, there is a very real sense in which a type is what the compiler and the C++ standard consider it to be, which very roughly would correspond to, or at least be identified by, the scoped name. Finally, the std::type_info::name() function is seriously underspecified. At best, it can be useful for debugging (e.g. logging the actual derived class a function was called with), and not all compilers provide even that. As far as the standard is concerned, a compiler could always return an empty string and still be conforming.

According to the standard, name() is implementation-defined (as Stroustrup's book also notes; see p. 415 of the 3rd edition).

The word "type" has multiple meanings.
In terms of type theory, a C++ struct doesn't really define a type at all.
More usefully, the C++ language standard talks about types in a way that can be taken rigorously, even if it never quite rigorously defines the term. In those terms, the struct declaration does define a unique and consistent type.
Maybe even more usefully, to your C++ compiler (and C linker), a type is represented by things like a memory layout, a mangled name, a list of member names and types, pointers to special and normal member functions, possibly pointers to vtable and/or rtti info, etc. This will not be the same from implementation to implementation. Between builds with the same implementation, some details (like where the pointers point) may change even if no relevant code changes, but you could probably define a useful subset of information that you could usefully call a "type" that doesn't change.
Beyond that, the type_info instance defined by section 18.5.1 of the standard cannot change within the bounds of what's explicitly defined, and the result of typeid as defined by section 5.2.8 has to be that instance or a compatible object that still compares equal to it. So, it sounds to me like, if it were possible to load up the type_info instances from two different runs at the same time, operator== would have to return true. However, it's not actually possible to load up type_info instances from two different runs (there's no requirement that they be serializable in any way, for example), so this may not be relevant.
Finally, the name displayed by typeid().name(), as defined by section 18.5.1, is just any implementation-defined NTBS. It could change between builds, runs, even calls within the same run. It could always be empty. Practically, it'll often be something vaguely useful for debugging, but that isn't guaranteed—and, even if it were, that wouldn't help, because "vaguely useful for debugging" doesn't have to mean "unique within a run and persistent across runs".
If you're asking about a specific compiler, the documentation for the compiler may give stricter guarantees than the standard requires. For example, I believe that on various platforms g++ guarantees that it'll use the C++ ABI defined at CodeSourcery, and will not change ABI versions within minor compiler versions, and will use the mangled names defined in the ABI as the type_info names. This means taking the binary to another computer won't affect the names, and even recompiling the source on another computer with the same platform and g++ version won't affect the names.

Related

How is typeid implemented?

I've read that it is implementation-specific depending on your compiler, but what might be one way it is implemented? I am asking mainly because I want to know how it creates a name for a type. I'm relatively new to programming and to C++, but from what I've seen so far, it doesn't seem possible to turn a type into a string before runtime.
I was thinking the name generation could be done with macros and token-pasting sort of like this:
#define Put_In_Quotes(input) #input

template<class T>
const char* type_name(T data_type) {
    return Put_In_Quotes(T);
}
But I think this would simply return the literal string "T" instead of the type name. Not to mention it doesn't explain how typeid lets you pass just types and not values as its argument, e.g. typeid(int).name().
All answers or guides to more information are greatly appreciated
Welcome to SE!
The compiler knows all the possible types your program might employ just like it knows which template forms your program needs instantiated. (As for the particular name strings it generates, different compilers use all sorts of conventions for how they name things internally.)
In simple terms, the typeid operator just returns a type_info object, right? Broadly speaking, that object is essentially all there is to implementing a type's RTTI: a type_info kept in the vtable, which is returned by typeid.
You mentioned you were new to C++, but if you look up how to do a vtable dump for your compiler, you can examine the actual data yourself. The formatting specifics of every compiler will differ, of course, but for a polymorphic type, you'll usually find this info right alongside the rest of that type's vtable data. Depending on how your compiler formats that output, it may look something like an additional struct member of that type and may (usually?) immediately precede or follow the rest of the type data. Static types will have similar data residing elsewhere.

Flex/Bison: cannot use semantic_type

I try to create a c++ flex/bison parser. I used this tutorial as a starting point and did not change any bison/flex configurations. I am stuck now to the point of trying to unit test the lexer.
I have a function in my unit tests that directly calls yylex, and checks the result of it:
private:
static void checkIntToken(MyScanner &scanner, Compiler *comp,
                          unsigned long expected, unsigned char size,
                          char isUnsigned, unsigned int line,
                          const std::string &label) {
    yy::MyParser::location_type loc;
    yy::MyParser::semantic_type semantic; // <-- it seems like the destructor of this variable causes the crash
    int type = scanner.yylex(&semantic, &loc, comp);
    Assert::equals(yy::MyParser::token::INT, type, label + "__1");
    MyIntToken* token = semantic.as<MyIntToken*>();
    Assert::equals(expected, token->value, label + "__2");
    Assert::equals(size, token->size, label + "__3");
    Assert::equals(isUnsigned, token->isUnsigned, label + "__4");
    Assert::equals(line, loc.begin.line, label + "__5");
    // execution comes to this point, and then the program crashes
}
The error message is:
program: ../src/__autoGenerated__/MyParser.tab.hh:190: yy::variant<32>::~variant() [S = 32]: Assertion `!yytypeid_' failed.
I have tried to follow the logic in the auto-generated bison files, and make some sense out of it. But I did not succeed on that and ultimately gave up. I searched then for any advice on the web about this error message but did not find any.
The location indicated by the error has the following code:
~variant () {
    YYASSERT (!yytypeid_);
}
EDIT: The problem disappears only if I remove the
%define parse.assert
option from the bison file. But I am not sure if this is a good idea...
What is the proper way to obtain the value of the token generated by flex, for unit testing purposes?
Note: I've tried to explain bison variant types to the best of my knowledge. I hope it is accurate but I haven't used them aside from some toy experiments. It would be an error to assume that this explanation in any way implies an endorsement of the interface.
The so-called "variant" type provided by bison's C++ interface is not a general-purpose variant type. That was a deliberate decision based on the fact that the parser is always able to figure out the semantic type associated with a semantic value on the parser stack. (This fact also allows a C union to be used safely within the parser.) Recording type information within the "variant" would therefore be redundant. So they don't. In that sense, it is not really a discriminated union, despite what one might expect of a type named "variant".
(The bison variant type is a template with an integer (non-type) template argument. That argument is the size in bytes of the largest type which is allowed in the variant; it does not in any other way specify the possible types. The semantic_type alias serves to ensure that the same template argument is used for every bison variant object in the parser code.)
Because it is not a discriminated union, its destructor cannot destruct the current value; it has no way to know how to do that.
This design decision is actually mentioned in the (lamentably insufficient) documentation for the Bison "variant" type. (When reading this, remember that it was originally written before std::variant existed. These days, it would be std::variant which was being rejected as "redundant", although it is also possible that the existence of std::variant might have had the happy result of revisiting this design decision). In the chapter on C++ Variant Types, we read:
Warning: We do not use Boost.Variant, for two reasons. First, it appeared unacceptable to require Boost on the user’s machine (i.e., the machine on which the generated parser will be compiled, not the machine on which bison was run). Second, for each possible semantic value, Boost.Variant not only stores the value, but also a tag specifying its type. But the parser already “knows” the type of the semantic value, so that would be duplicating the information.
Therefore we developed light-weight variants whose type tag is external (so they are really like unions for C++ actually).
And indeed they are. So any use of a bison "variant" must have a definite type:
You can build a variant with an argument of the type to build. (This is the only case where you don't need a template parameter, because the type is deduced from the argument. You would have to use an explicit template parameter only if the argument were not of the precise type; for example, an integer of lesser rank.)
You can get a reference to the value of known type T with as<T>. (This is undefined behaviour if the value has a different type.)
You can destruct the value of known type T with destroy<T>.
You can copy or move the value from another variant of known type T with copy<T> or move<T>. (move<T> involves constructing and then destructing a T(), so you might not want to do it if T had an expensive default constructor. On the whole, I'm not convinced by the semantics of the move method. And its name conflicts semantically with std::move, but again it came first.)
You can swap the values of two variants which both have the same known type T with swap<T>.
Now, the generated parser understands all these restrictions, and it always knows the real types of the "variants" it has at its disposal. But you might come along and try to do something with one of these objects in a way that violates a constraint. Since the object really doesn't have any way to check the constraint, you'll end up with undefined behaviour which will probably have some disastrous eventual consequence.
So they also implemented an option which allows the "variant" to check the constraints. Unsurprisingly, this consists of adding a discriminator. But since the discriminator is only used to validate and not to modify behaviour, it is not a small integer which chooses between a small number of known alternatives, but rather a pointer to a std::type_info (or NULL if the variant does not yet contain a value). (To be fair, in most cases alignment constraints mean that using a pointer for this purpose is no more expensive than using a small enum. All the same...)
So that's what you're running into. You enabled assertions with %define parse.assert; that option was provided specifically to prevent you from doing what you are trying to do, which is let the variant object's destructor run before the variant's value is explicitly destructed.
So the "correct" way to avoid the problem is to insert an explicit call at the end of the scope:
// execution comes to this point, and then, without the following
// call, the program will fail on an assertion
semantic.destroy<MyIntToken*>();
}
With the parse assertion enabled, the variant object will be able to verify that the types specified as template parameters to semantic.as<T> and semantic.destroy<T> are the same types as the value stored in the object. (Without parse.assert, that too is your responsibility.)
Warning: opinion follows.
In case anyone reading this cares, my preference for using real std::variant types comes from the fact that it is actually quite common for the semantic value of an AST node to require a discriminated union. The usual solution (in C++) is to construct a type hierarchy which is, in some ways, entirely artificial, and it is quite possible that std::variant can better express the semantics.
In practice, I use the C interface and my own discriminated union implementation.

What is the actual purpose of std::type_info::name()?

Today a colleague of mine came and asked me the question as mentioned in the title.
He's currently trying to reduce the binary footprint of a codebase that is also used on small targets (like Cortex M3 and the like). Apparently they have decided to compile with RTTI switched on (GCC, actually) to support proper exception handling.
Well, his major complaint was why std::type_info::name() is actually needed at all to support RTTI, and he asked if I knew a way to just suppress generation of the string literals needed to support it, or at least to shorten them.
std::type_info::name
const char* name() const;
Returns an implementation defined null-terminated character string containing the name of the type. No guarantees are given, in particular, the returned string can be identical for several types and change between invocations of the same program.
An implementation of e.g. the dynamic_cast<> operator, however compiler-specific, would not use this information, but rather something like a hash tag for type determination (similarly for catch() blocks in exception handling).
I think the latter is clearly expressed by the current standard definitions for
std::type_info::hash_code
std::type_index
I had to agree that I also don't really see a point in using std::type_info::name() other than for debugging (logging) purposes. I wasn't 100% sure that exception handling will work without RTTI with current versions of GCC (I think they're using 4.9.1), so I hesitated to recommend simply switching off RTTI.
Also, dynamic_cast<> is used in their code base, but for these uses I just recommended not using it, in favor of static_cast (they don't really have anything like plugins, or a need for runtime type detection other than assertions).
Question:
Are there real life, production code level use cases for std::type_info::name() other than logging?
Sub-Questions (more concrete):
Does anyone have an idea, how to overcome (work around) the generation of these useless string literals (under assumption they'll never be used)?
Is RTTI really (still) needed to support exception handling with GCC?
(This part is well solved now by #Sehe's answer, and I have accepted it. The other sub-question still remains for the left over generated std::type_info instances for any exceptions used in the code. We're pretty sure, that these literals are never used anywhere)
Bit of related: Strip unused runtime functions which bloat executable (GCC)
Isolating this bit:
The point is, if they switch off RTTI, will exception handling still work properly in GCC? — πάντα ῥεῖ 1 hour ago
The answer is yes:
-fno-rtti
Disable generation of information about every class with virtual functions for use by the C++ runtime type identification features (dynamic_cast and typeid). If you don't use those parts of the language, you can save some space by using this flag. Note that exception handling uses the same information, but it will generate it as needed. The dynamic_cast operator can still be used for casts that do not require runtime type information, i.e. casts to void * or to unambiguous base classes.
Are there real life, production code level use cases for std::type_info::name() other than logging?
The Itanium ABI describes how operator== for std::type_info objects can be easily implemented in terms of testing strings returned from std::type_info::name() for pointer equality.
In a non-flat address space, where it might be possible to have multiple type_info objects for the same type (e.g. because a dynamic library has been loaded with RTLD_LOCAL) the implementation of operator== might need to use strcmp to determine if two types are the same.
So the name() function is used to determine if two type_info objects refer to the same type. For examples of real use cases, that's typically used in at least two places in the standard library, in std::function<F>::target<T>() and std::get_deleter<D>(const std::shared_ptr<T>&).
If you're not using RTTI then all that's irrelevant, as you won't have any type_info objects anyway (and consequently in libstdc++ the function::target and get_deleter functions can't be used).
I think GCC's exception-handling code uses the addresses of type_info objects themselves, not the addresses of the strings returned by name(), so if you use exceptions but no RTTI the name() strings aren't needed.

Will C++ compiler generate code for each template type?

I have two questions about templates in C++. Let's imagine I have written a simple List and now I want to use it in my program to store pointers to different object types (A*, B* ... ALot*). My colleague says that for each type there will be generated a dedicated piece of code, even though all pointers in fact have the same size.
If this is true, can somebody explain to me why? For example, in Java generics have the same purpose as templates for pointers in C++. Generics are only used for pre-compile type checking and are stripped down before compilation. And of course the same byte code is used for everything.
Second question is, will dedicated code also be generated for char and short (considering that they both have the same size and there are no specializations)?
If this makes any difference, we are talking about embedded applications.
I have found a similar question, but it did not completely answer my question: Do C++ template classes duplicate code for each pointer type used?
Thanks a lot!
I have two questions about templates in C++. Let's imagine I have written a simple List and now I want to use it in my program to store pointers to different object types (A*, B* ... ALot*). My colleague says that for each type there will be generated a dedicated piece of code, even though all pointers in fact have the same size.
Yes, this is equivalent to having both functions written.
Some linkers will detect the identical functions, and eliminate them. Some libraries are aware that their linker doesn't have this feature, and factor out common code into a single implementation, leaving only a casting wrapper around the common code. Ie, a std::vector<T*> specialization may forward all work to a std::vector<void*> then do casting on the way out.
Now, comdat folding is delicate: it is relatively easy to make functions you think are identical, but end up not being the same, so two functions are generated. As a toy example, you could go off and print the typename via typeid(x).name(). Now each version of the function is distinct, and they cannot be eliminated.
In some cases, you might do something like this thinking that it is a run time property that differs, and hence identical code will be created, and the identical functions eliminated -- but a smart C++ compiler might figure out what you did, use the as-if rule and turn it into a compile-time check, and block not-really-identical functions from being treated as identical.
If this is true, can somebody explain to me why? For example, in Java generics have the same purpose as templates for pointers in C++. Generics are only used for pre-compile type checking and are stripped down before compilation. And of course the same byte code is used for everything.
No, they aren't. Generics are roughly equivalent to the C++ technique of type erasure, such as what std::function<void()> does to store any callable object. In C++, type erasure is often done via templates, but not all uses of templates are type erasure!
The things that C++ does with templates that are not in essence type erasure are generally impossible to do with Java generics.
In C++, you can create a type erased container of pointers using templates, but std::vector doesn't do that -- it creates an actual container of pointers. The advantage to this is that all type checking on the std::vector is done at compile time, so there doesn't have to be any run time checks: a safe type-erased std::vector may require run time type checking and the associated overhead involved.
Second question is, will dedicated code also be generated for char and short (considering that they both have the same size and there are no specializations)?
They are distinct types. I can write code that will behave differently with a char or short value. As an example:
std::cout << x << "\n";
with x being a short, this prints an integer whose value is x -- with x being a char, this prints the character corresponding to x.
Now, almost all template code exists in header files, and is implicitly inline. While inline doesn't mean what most folk think it means, it does mean that the compiler can hoist the code into the calling context easily.
If this makes any difference, we are talking about embedded applications.
What really makes a difference is what your particular compiler and linker is, and what settings and flags they have active.
The answer is maybe. In general, each instantiation of a template is a unique type, with a unique implementation, and will result in a totally independent instance of the code. Merging the instances is possible, but would be considered "optimization" (under the "as if" rule), and this optimization isn't widespread.
With regards to comparisons with Java, there are several points to keep in mind:
C++ uses value semantics by default. An std::vector, for example, will actually insert copies. And whether you're copying a short or a double does make a difference in the generated code. In Java, short and double will be boxed, and the generated code will clone a boxed instance in some way; cloning doesn't require different code, since it calls a virtual function of Object, but physically copying does.
C++ is far more powerful than Java. In particular, it allows comparing things like the addresses of functions, and it requires that the functions in different instantiations of templates have different addresses. Usually, this is not an important point, and I can easily imagine a compiler with an option which tells it to ignore this point, and to merge instances which are identical at the binary level. (I think VC++ has something like this.)
Another issue is that the implementation of a template in C++ must be present in the header file. In Java, of course, everything must be present, always, so this issue affects all classes, not just templates. This is, of course, one of the reasons why Java is not appropriate for large applications. But it means that you don't want any complicated functionality in a template; doing so loses one of the major advantages of C++ compared to Java (and many other languages). In fact, it's not rare, when implementing complicated functionality in templates, to have the template inherit from a non-template class which does most of the implementation in terms of void*. While implementing large blocks of code in terms of void* is never fun, it does have the advantage of offering the best of both worlds to the client: the implementation is hidden in compiled files, invisible in any way, shape or manner to the client.

Which standard c++ classes cannot be reimplemented in c++?

I was looking through the plans for C++0x and came upon std::initializer_list for implementing initializer lists in user classes. This class could not be implemented in C++
without using itself, or else using some "compiler magic". If it could, it wouldn't be needed since whatever technique you used to implement initializer_list could be used to implement initializer lists in your own class.
What other classes require some form of "compiler magic" to work? Which classes are in the Standard Library that could not be implemented by a third-party library?
Edit: Maybe instead of implemented, I should say instantiated. It's more the fact that this class is so directly linked with a language feature (you can't use initializer lists without initializer_list).
A comparison with C# might clear up what I'm wondering about: IEnumerable and IDisposable are actually hard-coded into language features. I had always assumed C++ was free of this, since Stroustrup tried to make everything implementable in libraries. So, are there any other classes / types that are inextricably bound to a language feature.
std::type_info is a simple class, although populating it requires typeinfo: a compiler construct.
Likewise, exceptions are normal objects, but throwing exceptions requires compiler magic (where are the exceptions allocated?).
The question, to me, is "how close can we get to std::initializer_lists without compiler magic?"
Looking at Wikipedia, std::initializer_list<T> can be initialized by something that looks a lot like an array literal. Let's try giving our initializer_list<T> a conversion constructor that takes an array (i.e., a constructor that takes a single argument of type T[]):
namespace std {
    template<typename T>
    class initializer_list {
        T* internal_array;   // the array parameter decays to a pointer
    public:
        initializer_list(T other_array[]) : internal_array(other_array) { }
        // ... other methods needed to actually access internal_array
    };
}
Likewise, a class that uses a std::initializer_list does so by declaring a constructor that takes a single std::initializer_list argument -- a.k.a. a conversion constructor:
struct my_class {
    ...
    my_class(std::initializer_list<int>) ...
};
So the line:
my_class m = {1, 2, 3};
Causes the compiler to think: "I need to call a constructor for my_class; my_class has a constructor that takes a std::initializer_list<int>; I have an int[] literal; I can convert an int[] to a std::initializer_list<int>; and I can pass that to the my_class constructor" (please read to the end of the answer before telling me that C++ doesn't allow two implicit user-defined conversions to be chained).
So how close is this? First, I'm missing a few features/restrictions of initializer lists. One thing I don't enforce is that initializer lists can only be constructed with array literals, while my initializer_list would also accept an already-created array:
int arry[] = {1, 2, 3};
my_class m = arry;
Additionally, I didn't bother messing with rvalue references.
Finally, this class only works as the new standard says it should if the compiler implicitly chains two user-defined conversions together. This is specifically prohibited under normal cases, so the example still needs compiler magic. But I would argue that (1) the class itself is a normal class, and (2) the magic involved (enforcing the "array literal" initialization syntax and allowing two user-defined conversions to be implicitly chained) is less than it seems at first glance.
The only other one I could think of was the type_info class returned by typeid. As far as I can tell, VC++ implements this by instantiating all the needed type_info classes statically at compile time, and then simply casting a pointer at runtime based on values in the vtable. These are things that could be done using C code, but not in a standard-conforming or portable way.
All classes in the standard library, by definition, must be implemented in C++. Some of them hide some obscure language/compiler constructs, but still are just wrappers around that complexity, not language features.
Anything that the runtime "hooks into" at defined points is likely not to be implementable as a portable library in the hypothetical language "C++, excluding that thing".
So for instance I think atexit() in <cstdlib> can't be implemented purely as a library, since there is no other way in C++ to ensure it is called at the right time in the termination sequence, that is before any global destructor.
Of course, you could argue that C features "don't count" for this question. In which case std::unexpected may be a better example, for exactly the same reason. If it didn't exist, there would be no way to implement it without tinkering with the exception code emitted by the compiler.
[Edit: I just noticed the questioner actually asked what classes can't be implemented, not what parts of the standard library can't be implemented. So actually these examples don't strictly answer the question.]
C++ allows compilers to define otherwise undefined behavior. This makes it possible to implement the Standard Library in non-standard C++. For instance, "onebyone" wonders about atexit(). The library writers can assume things about the compiler that makes their non-portable C++ work OK for their compiler.
MSalter points out printf/cout/stdout in a comment. You could implement any one of them in terms of the one of the others (I think), but you can't implement the whole set of them together without OS calls or compiler magic, because:
These are all the ways of accessing the process's standard output stream. You have to stuff the bytes somewhere, and that's implementation-specific in the absence of these things. Unless I've forgotten another way of accessing it, but the point is you can't implement standard output other than through implementation-specific "magic".
They have "magic" behaviour in the runtime, which I think could not be perfectly imitated by a pure library. For example, you couldn't just use static initialization to construct cout, because the order of static initialization between compilation units is not defined, so there would be no guarantee that it would exist in time to be used by other static initializers. stdout is perhaps easier, since it's just fd 1, so any apparatus supporting it can be created by the calls it's passed into when they see it.
I think you're pretty safe on this score. C++ mostly serves as a thick layer of abstraction around C. Since C++ is also a superset of C itself, the core language primitives are almost always implemented sans-classes (in a C-style). In other words, you're not going to find many situations like Java's Object which is a class which has special meaning hard-coded into the compiler.
Again from C++0x, I think that threads would not be implementable as a portable library in the hypothetical language "C++0x, with all the standard libraries except threads".
[Edit: just to clarify, there seems to be some disagreement as to what it would mean to "implement threads". What I understand it to mean in the context of this question is:
1) Implement the C++0x threading specification (whatever that turns out to be). Note C++0x, which is what I and the questioner are both talking about. Not any other threading specification, such as POSIX.
2) without "compiler magic". This means not adding anything to the compiler to help your implementation work, and not relying on any non-standard implementation details (such as a particular stack layout, or a means of switching stacks, or non-portable system calls to set a timed interrupt) to produce a thread library that works only on a particular C++ implementation. In other words: pure, portable C++. You can use signals and setjmp/longjmp, since they are portable, but my impression is that's not enough.
3) Assume a C++0x compiler, except that it's missing all parts of the C++0x threading specification. If all it's missing is some data structure (that stores an exit value and a synchronisation primitive used by join() or equivalent), but the compiler magic to implement threads is present, then obviously that data structure could be added as a third-party portable component. But that's kind of a dull answer, when the question was about which C++0x standard library classes require compiler magic to support them. IMO.]