In C++, it's well-defined to read from the union member that was most recently written, i.e., the active union member.
My question is whether std::memcpying a whole union object, as opposed to copying a particular union member, to an uninitialized memory area will preserve the active union member.
union A {
int x;
char y[4];
};
A a;
a.y[0] = 'U';
a.y[1] = 'B';
a.y[2] = '?';
a.y[3] = '\0';
std::byte buf[sizeof(A)];
std::memcpy(buf, &a, sizeof(A));
A& a2 = *reinterpret_cast<A*>(buf);
std::cout << a2.y << '\n'; // is `A::y` the active member of `a2`?
The assignments you have are okay because the assignment to non-class member a.y "begins its lifetime". However, your std::memcpy does not do that, so any accesses to members of a2 are invalid. So, you are relying on the consequences of undefined behaviour. Technically. In practice most toolchains are rather lax about aliasing and lifetime of primitive-type union members.
Unfortunately, there's more UB here, in that you're violating aliasing for the union itself: you can pretend that a T is a bunch of bytes, but you can't pretend that a bunch of bytes is a T, no matter how much reinterpret_casting you do. You could instantiate an A a2 normally and std::copy/std::memcpy over it from a, and then you're just back to the union member lifetime problem, if you care about that. But, I guess, if this option were open to you, you'd just be writing A a2 = a in the first place…
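For illustration, here is a minimal sketch of that last alternative (copying into a real A object), which sidesteps the aliasing question and leaves only the union-member-lifetime issue discussed above:
#include <cstring>
#include <iostream>

union A {
    int x;
    char y[4];
};

int main() {
    A a;
    a.y[0] = 'U'; a.y[1] = 'B'; a.y[2] = '?'; a.y[3] = '\0';

    A a2;                             // a real A object instead of raw bytes
    std::memcpy(&a2, &a, sizeof a2);  // copy the object representation
    std::cout << a2.y << '\n';        // only the member-lifetime question remains
}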
My question is whether std::memcpying a whole union object, as opposed to copying a particular union member, to an uninitialized memory area will preserve the active union member.
It'll be copied as expected.
It's how you're reading the result that may, or may not, make your program have undefined behavior.
Using std::memcpy copies chars from a source to a destination. Raw memory copying is ok. Reading from memory as something it wasn't initialized to be is not ok.
So far as I can tell, the C++ Standard makes no distinction between the following two functions on platforms where the sizes of int and foo would happen to be identical [as would typically be the case]
#include <cstring>

struct s1 { int x; };
struct s2 { int x; };
union foo { s1 a; s2 b; } u1, u2;
void test1(void)
{
u1.a.x = 1;
u2.b.x = 2;
std::memcpy(&u1, &u2, sizeof u1);
}
void test2(void)
{
u1.a.x = 1;
u2.b.x = 2;
std::memcpy(&u1.a.x, &u2.b.x, sizeof u1.a.x);
}
If a union of trivially-copyable types is itself a trivially-copyable type, that would suggest that the active member of u1 after the memcpy in test1 should be b. In the equivalent function test2, however, copying all the bytes from an int object into one that is part of u1's active member a should leave u1's active member as a.
IMHO, this issue could be easily resolved by recognizing that a union may have multiple "potentially active" members, and allowing certain actions to be performed on any member that is at least potentially active (rather than limiting them to one particular active member). That would, among other things, allow the Common Initial Sequence rule to be made much clearer and more useful, without unduly inhibiting optimizations, by specifying that the act of taking the address of a union member makes it "at least potentially" active until the next time the union is written via non-character-based access, and by providing that common-initial-sequence inspection or bytewise writing of a potentially active union member is allowed but does not change the active member.
Unfortunately, when the Standard was first written, there was no effort to explore all of the relevant corner cases, much less reach a consensus about how they should be handled. At the time, I don't think there would have been opposition to the idea of officially accommodating multiple potentially-active members, since most compiler designs would naturally accommodate that without difficulty. Unfortunately, some compilers have evolved in a way that would make support for such constructs more difficult than it would have been if accommodated from the start, and their maintainers would block any changes that would contradict their design decisions, even if the Standard was never intended to allow such decisions in the first place.
Before I answer your question, I think your code should add this:
static_assert(std::is_trivial<A>());
Because in order to keep compatibility with C, trivial types get extra guarantees. For example, the requirement of running an object's constructor before using it (See https://eel.is/c++draft/class.cdtor) applies only to an object whose constructor is not trivial.
Because your union is trivial, your code is fine up to and including the memcpy. Where you run into trouble is *reinterpret_cast<A*>(buf);
Specifically, you are using an A object before its lifetime has begun.
As stated in https://eel.is/c++draft/basic.life, lifetime begins when storage with proper alignment and size for the type has been obtained and its initialization is complete. A trivial type has "vacuous" initialization, so no problem there; the storage, however, is a problem.
When your example gets storage for buf,
std::byte buf[sizeof(A)];
It does not obtain proper alignment for the type. You'd need to change that line to:
alignas(A) std::byte buf[sizeof(A)];
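Putting this answer's pieces together, here is a minimal sketch of the corrected example (whether the final read through a2 is fully sanctioned is exactly what the other answers debate; this sketch only shows the alignment fix):
#include <cstddef>
#include <cstring>
#include <iostream>

union A {
    int x;
    char y[4];
};

int main() {
    A a;
    a.y[0] = 'U'; a.y[1] = 'B'; a.y[2] = '?'; a.y[3] = '\0';

    alignas(A) std::byte buf[sizeof(A)];  // storage with proper alignment for A
    std::memcpy(buf, &a, sizeof(A));

    A& a2 = *reinterpret_cast<A*>(buf);   // this answer argues the trivial type makes this acceptable
    std::cout << a2.y << '\n';
}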
My company uses a messaging server which gets a message into a const char* and then casts it to the message type.
I've become concerned about this after asking this question. I'm not aware of any bad behavior in the messaging server. Is it possible that const variables do not incur aliasing problems?
For example say that foo is defined in MessageServer in one of these ways:
As a parameter: void MessageServer(const char* foo)
Or as const variable at the top of MessageServer: const char* foo = PopMessage();
Now, MessageServer is a huge function; it never assigns anything to foo, but at one point in its logic foo will be cast to the selected message type.
auto bar = reinterpret_cast<const MessageJ*>(foo);
bar will only be read from subsequently, but will be used extensively for object setup.
Is an aliasing problem possible here, or does the fact that foo is only initialized, and never modified save me?
EDIT:
Jarod42's answer finds no problem with casting from a const char* to a MessageJ*, but I'm not sure this makes sense.
We know this is illegal:
MessageX* foo = new MessageX;
const auto bar = reinterpret_cast<MessageJ*>(foo);
Are we saying this somehow makes it legal?
MessageX* foo = new MessageX;
const auto temp = reinterpret_cast<char*>(foo);
auto bar = reinterpret_cast<const MessageJ*>(temp);
My understanding of Jarod42's answer is that the cast to temp makes it legal.
EDIT:
I've gotten some comments about serialization, alignment, network passing, and so on. That's not what this question is about.
This is a question about strict aliasing.
Strict aliasing is an assumption, made by the C (or C++) compiler, that dereferencing pointers to objects of different types will never refer to the same memory location (i.e., alias each other).
What I'm asking is: Will the initialization of a const object, by casting from a char*, ever be optimized below where that object is cast to another type of object, such that I am casting from uninitialized data?
First of all, casting pointers does not cause any aliasing violations (although it might cause alignment violations).
Aliasing refers to the process of reading or writing an object through a glvalue of different type than the object.
If an object has type T, and we read/write it via an X& and a Y&, then the questions are:
Can X alias T?
Can Y alias T?
It does not directly matter whether X can alias Y or vice versa, which is what you seem to focus on in your question. But if X and Y are completely incompatible, the compiler can infer that there is no type T that both X and Y can alias, and therefore assume that the two references refer to different objects.
So, to answer your question, it all hinges on what PopMessage does. If the code is something like:
const char *PopMessage()
{
static MessageJ foo = .....;
return reinterpret_cast<const char *>(&foo);
}
then it is fine to write:
const char *ptr = PopMessage();
auto bar = reinterpret_cast<const MessageJ*>(ptr);
auto baz = *bar; // OK, accessing a `MessageJ` via glvalue of type `MessageJ`
auto ch = ptr[4]; // OK, accessing a `MessageJ` via glvalue of type `char`
and so on. The const has nothing to do with it. In fact if you did not use const here (or you cast it away) then you could also write through bar and ptr with no problem.
On the other hand, if PopMessage was something like:
const char *PopMessage()
{
static char buf[200];
return buf;
}
then the line auto baz = *bar; would cause UB because char cannot be aliased by MessageJ. Note that you can use placement-new to change the dynamic type of an object (in that case, char buf[200] is said to have stopped existing, and the new object created by placement-new exists and its type is T).
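For instance, here is a sketch of that placement-new variant (the MessageJ definition and the value 42 are placeholders, not the real message type): the placement-new ends the lifetime of the char array and creates a MessageJ in that storage, so later accesses really do name a MessageJ.
#include <new>

struct MessageJ { int id; };                       // placeholder definition

const char *PopMessage()
{
    alignas(MessageJ) static char buf[sizeof(MessageJ)];
    MessageJ *m = new (buf) MessageJ{42};          // buf's chars stop existing; a MessageJ lives here now
    return reinterpret_cast<const char *>(m);
}

void use()
{
    const char *ptr = PopMessage();
    auto bar = reinterpret_cast<const MessageJ*>(ptr);
    int id = bar->id;                              // OK: the object really is a MessageJ
    (void)id;
}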
My company uses a messaging server which gets a message into a const char* and then casts it to the message type.
So long as you mean that it does a reinterpret_cast (or a C-style cast that devolves to a reinterpret_cast):
MessageJ *j = new MessageJ();
MessageServer(reinterpret_cast<char*>(j));
// or PushMessage(reinterpret_cast<char*>(j));
and later takes that same pointer and reinterpret_cast's it back to the actual underlying type, then that process is completely legitimate:
void MessageServer(char *foo)
{
if (somehow figure out that foo is actually a MessageJ*)
{
MessageJ *bar = reinterpret_cast<MessageJ*>(foo);
// operate on bar
}
}
// or
void MessageServer()
{
char *foo = PopMessage();
if (somehow figure out that foo is actually a MessageJ*)
{
MessageJ *bar = reinterpret_cast<MessageJ*>(foo);
// operate on bar
}
}
Note that I specifically dropped the const's from your examples as their presence or absence doesn't matter. The above is legitimate when the underlying object that foo points at actually is a MessageJ, otherwise it is undefined behavior. The reinterpret_cast'ing to char* and back again yields the original typed pointer. Indeed, you could reinterpret_cast to a pointer of any type and back again and get the original typed pointer. From this reference:
Only the following conversions can be done with reinterpret_cast ...
6) An lvalue expression of type T1 can be converted to reference to another type T2. The result is an lvalue or xvalue referring to the same object as the original lvalue, but with a different type. No temporary is created, no copy is made, no constructors or conversion functions are called. The resulting reference can only be accessed safely if allowed by the type aliasing rules (see below) ...
Type aliasing
When a pointer or reference to object of type T1 is reinterpret_cast (or C-style cast) to a pointer or reference to object of a different type T2, the cast always succeeds, but the resulting pointer or reference may only be accessed if both T1 and T2 are standard-layout types and one of the following is true:
T2 is the (possibly cv-qualified) dynamic type of the object ...
Effectively, reinterpret_cast'ing between pointers of different types simply instructs the compiler to reinterpret the pointer as pointing at a different type. More importantly for your example though, round-tripping back to the original type again and then operating on it is safe. That is because all you've done is instructed the compiler to reinterpret a pointer as pointing at a different type and then told the compiler again to reinterpret that same pointer as pointing back at the original, underlying type.
So, the round trip conversion of your pointers is legitimate, but what about potential aliasing problems?
Is an aliasing problem possible here, or does the fact that foo is only initialized, and never modified save me?
The strict aliasing rule allows compilers to assume that references (and pointers) to unrelated types do not refer to the same underlying memory. This assumption allows lots of optimizations because it decouples operations on unrelated reference types as being completely independent.
#include <iostream>
int foo(int *x, long *y)
{
// foo can assume that x and y do not alias the same memory because they have unrelated types
// so it is free to reorder the operations on *x and *y as it sees fit
// and it need not worry that modifying one could affect the other
*x = -1;
*y = 0;
return *x;
}
int main()
{
long a;
int b = foo(reinterpret_cast<int*>(&a), &a); // violates strict aliasing rule
// the above call has UB because it both writes and reads a through an unrelated pointer type
// on return b might be either 0 or -1; a could similarly be arbitrary
// technically, the program could do anything because it's UB
std::cout << b << ' ' << a << std::endl;
return 0;
}
In this example, thanks to the strict aliasing rule, the compiler can assume in foo that setting *y cannot affect the value of *x. So, it can decide to just return -1 as a constant, for example. Without the strict aliasing rule, the compiler would have to assume that altering *y might actually change the value of *x. Therefore, it would have to enforce the given order of operations and reload *x after setting *y. In this example it might seem reasonable enough to enforce such paranoia, but in less trivial code doing so will greatly constrain reordering and elimination of operations and force the compiler to reload values much more often.
Here are the results on my machine when I compile the above program differently (Apple LLVM v6.0 for x86_64-apple-darwin14.1.0):
$ g++ -Wall test58.cc
$ ./a.out
0 0
$ g++ -Wall -O3 test58.cc
$ ./a.out
-1 0
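To make that concrete, here is an illustrative (hand-written, not compiler-generated) version of what the optimizer is allowed to turn foo into under strict aliasing:
// Illustrative only: int and long are assumed not to alias,
// so *x need not be reloaded after the store through y.
int foo_as_if_optimized(int *x, long *y)
{
    *x = -1;
    *y = 0;      // assumed not to touch *x
    return -1;   // the stored value is simply propagated
}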
In your first example, foo is a const char * and bar is a const MessageJ * reinterpret_cast'ed from foo. You further stipulate that the object's underlying type actually is a MessageJ and that no reads are done through the const char *. Instead, it is only cast to the const MessageJ *, through which only reads are then done. Since you neither read nor write through the const char * alias, there can be no aliasing optimization problem with your accesses through your second alias in the first place. This is because there are no potentially conflicting operations performed on the underlying memory through your aliases of unrelated types. However, even if you did read through foo, there would still be no potential problem, as such accesses are allowed by the type aliasing rules (see below) and any ordering of reads through foo or bar would yield the same results because there are no writes occurring here.
Let us now drop the const qualifiers from your example and presume that MessageServer does do some write operations on bar and furthermore that the function also reads through foo for some reason (e.g. - prints a hex dump of memory). Normally, there might be an aliasing problem here as we have reads and writes happening through two pointers to the same memory through unrelated types. However, in this specific example, we are saved by the fact that foo is a char*, which gets special treatment by the compiler:
Type aliasing
When a pointer or reference to object of type T1 is reinterpret_cast (or C-style cast) to a pointer or reference to object of a different type T2, the cast always succeeds, but the resulting pointer or reference may only be accessed if both T1 and T2 are standard-layout types and one of the following is true: ...
T2 is char or unsigned char
The strict-aliasing optimizations that are allowed for operations through references (or pointers) of unrelated types are specifically disallowed when a char reference (or pointer) is in play. The compiler instead must be paranoid that operations through the char reference (or pointer) can affect and be affected by operations done through other references (or pointers). In the modified example where reads and writes operate on both foo and bar, you can still have defined behavior because foo is a char*. Therefore, the compiler is not allowed to optimize to reorder or eliminate operations on your two aliases in ways that conflict with the serial execution of the code as written. Similarly, it is forced to be paranoid about reloading values that may have been affected by operations through either alias.
The answer to your question is that, so long as your functions are properly round tripping pointers to a type through a char* back to its original type, then your function is safe, even if you were to interleave reads (and potentially writes, see caveat at end of EDIT) through the char* alias with reads+writes through the underlying type alias.
These two technical references (3.10.10) are useful for answering your question. These other references help give a better understanding of the technical information.
====
EDIT: In the comments below, zmb objects that while char* can legitimately alias a different type, the converse is not true, as several sources seem to say in varying forms: the char* exception to the strict aliasing rule is an asymmetric, "one-way" rule.
Let us modify my above strict-aliasing code example and ask would this new version similarly result in undefined behavior?
#include <iostream>
char foo(char *x, long *y)
{
// can foo assume that x and y cannot alias the same memory?
*x = -1;
*y = 0;
return *x;
}
int main()
{
long a;
char b = foo(reinterpret_cast<char*>(&a), &a); // explicitly allowed!
// if this is defined behavior then what must the values of b and a be?
std::cout << (int) b << ' ' << a << std::endl;
return 0;
}
I argue that this is defined behavior and that both a and b must be zero after the call to foo. From the C++ standard (3.10.10):
If a program attempts to access the stored value of an object through a glvalue of other than one of the following types the behavior is undefined:^52
the dynamic type of the object ...
a char or unsigned char type ...
^52: The intent of this list is to specify those circumstances in which an object may or may not be aliased.
In the above program, I am accessing the stored value of an object through both its actual type and a char type, so it is defined behavior and the results have to comport with the serial execution of the code as written.
Now, there is no general way for the compiler to always statically know in foo that the pointer x actually aliases y or not (e.g. - imagine if foo was defined in a library). Maybe the program could detect such aliasing at run time by examining the values of the pointers themselves or consulting RTTI, but the overhead this would incur wouldn't be worth it. Instead, the better way to generally compile foo and allow for defined behavior when x and y do happen to alias one another is to always assume that they could (i.e. - disable strict alias optimizations when a char* is in play).
Here's what happens when I compile and run the above program:
$ g++ -Wall test59.cc
$ ./a.out
0 0
$ g++ -O3 -Wall test59.cc
$ ./a.out
0 0
This output is at odds with the earlier, similar strict-aliasing program's. This is not dispositive proof that I'm right about the standard, but the different results from the same compiler provide decent evidence that I may be right (or, at least, that one important compiler seems to understand the standard the same way).
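To make the contrast with the earlier int/long example concrete, here is an illustrative (again hand-written, not compiler-generated) version of what foo must effectively do when a char* is in play:
// Illustrative only: a char* may alias anything, so the compiler
// must assume the store through y can change the byte x points at.
char foo_as_if_compiled(char *x, long *y)
{
    *x = -1;
    *y = 0;      // may overwrite *x
    return *x;   // must actually reload; if x aliases y, this yields 0
}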
Let's examine some of the seemingly conflicting sources:
The converse is not true. Casting a char* to a pointer of any type other than a char* and dereferencing it is usually in violation of the strict aliasing rule. In other words, casting from a pointer of one type to a pointer of an unrelated type through a char* is undefined.
The bolded bit is why this quote doesn't apply to the problem addressed by my answer nor the example I just gave. In both my answer and the example, the aliased memory is being accessed both through a char* and the actual type of the object itself, which can be defined behavior.
Both C and C++ allow accessing any object type via char * (or specifically, an lvalue of type char). They do not allow accessing a char object via an arbitrary type. So yes, the rule is a "one way" rule.
Again, the bolded bit is why this statement doesn't apply to my answers. In this and similar counter-examples, an array of characters is being accessed through a pointer of an unrelated type. Even in C, this is UB because the character array might not be aligned according to the aliased type's requirements, for example. In C++, this is UB because such access does not meet any of the type aliasing rules as the underlying type of the object actually is char.
In my examples, we first have a valid pointer to a properly constructed type that is then aliased by a char* and then reads and writes through these two aliased pointers are interleaved, which can be defined behavior. So, there seems to be some confusion and conflation out there between the strict aliasing exception for char and not accessing an underlying object through an incompatible reference.
int value;
int *p = &value;
char *q = reinterpret_cast<char*>(&value);
Both p and q refer to the same address; they are aliasing the same memory. What the language does is provide a set of rules defining the behaviors that are guaranteed: write through p then read through q is fine; the other way around is not.
The standard and many examples clearly state that "write through q, then read through p (or value)" can be well defined behavior. What is not as abundantly clear, but what I'm arguing for here, is that "write through p (or value), then read through q" is always well defined. I claim even further, that "reads and writes through p (or value) can be arbitrarily interleaved with reads and writes to q" with well defined behavior.
Now there is one caveat to the previous statement and why I kept sprinkling the word "can" throughout the above text. If you have a type T reference and a char reference that alias the same memory, then arbitrarily interleaving reads+writes on the T reference with reads on the char reference is always well defined. For example, you might do this to repeatedly print out a hex dump of the underlying memory as you modify it multiple times through the T reference. The standard guarantees that strict aliasing optimizations will not be applied to these interleaved accesses, which otherwise might give you undefined behavior.
But what about writes through a char reference alias? Well, such writes may or may not be well defined. If a write through the char reference violates an invariant of the underlying T type, then you can get undefined behavior. If such a write improperly modified the value of a T member pointer, then you can get undefined behavior. If such a write modified a T member value to a trap value, then you can get undefined behavior. And so on. However, in other instances, writes through the char reference can be completely well defined. Rearranging the endianness of a uint32_t or uint64_t by reading+writing to them through an aliased char reference is always well defined, for example. So, whether such writes are completely well defined or not depends on the particulars of the writes themselves. Regardless, the standard guarantees that its strict aliasing optimizations will not reorder or eliminate such writes w.r.t. other operations on the aliased memory in a manner that itself could lead to undefined behavior.
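As a small sketch of the endianness case mentioned above (using std::reverse purely for brevity), reading and writing a uint32_t's bytes through an unsigned char* alias is covered by the char exception:
#include <algorithm>
#include <cstdint>
#include <iostream>

// Reverse the byte order of a uint32_t in place through a char-type alias.
void swap_endianness(std::uint32_t &v)
{
    unsigned char *bytes = reinterpret_cast<unsigned char*>(&v);
    std::reverse(bytes, bytes + sizeof v);   // reads and writes via unsigned char* are allowed
}

int main()
{
    std::uint32_t v = 0x11223344u;
    swap_endianness(v);
    std::cout << std::hex << v << '\n';      // prints 44332211
}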
So my understanding is that you are doing something like this:
#include <iostream>

enum MType { J, K };
struct MessageX { MType type; };
struct MessageJ {
MType type{ J };
int id{ 5 };
//some other members
};
const char* popMessage() {
return reinterpret_cast<char*>(new MessageJ());
}
void MessageServer(const char* foo) {
const MessageX* msgx = reinterpret_cast<const MessageX*>(foo);
switch (msgx->type) {
case J: {
const MessageJ* msgJ = reinterpret_cast<const MessageJ*>(foo);
std::cout << msgJ->id << std::endl;
}
}
}
int main() {
const char* foo = popMessage();
MessageServer(foo);
}
If that is correct, then the expression msgJ->id is ok (as would be any access to foo), as msgJ has the correct dynamic type. msgx->type, on the other hand, does incur UB, because msgx has an unrelated type. The fact that the pointer to MessageJ was cast to const char* in between is completely irrelevant.
As was cited by others, here is the relevant part in the standard (the "glvalue" is the result of dereferencing the pointer):
If a program attempts to access the stored value of an object through a glvalue of other than one of the following types the behavior is undefined:52
the dynamic type of the object,
a cv-qualified version of the dynamic type of the object,
a type similar (as defined in 4.4) to the dynamic type of the object,
a type that is the signed or unsigned type corresponding to the dynamic type of the object,
a type that is the signed or unsigned type corresponding to a cv-qualified version of the dynamic type of the object,
an aggregate or union type that includes one of the aforementioned types among its elements or nonstatic data members (including, recursively, an element or non-static data member of a subaggregate or contained union),
a type that is a (possibly cv-qualified) base class type of the dynamic type of the object,
a char or unsigned char type.
As far as the discussion "cast to char*" vs "cast from char*" is concerned:
You might know that the standard doesn't talk about strict aliasing as such; it only provides the list above. Strict aliasing is one analysis technique based on that list for compilers to determine which pointers can potentially alias each other. As far as optimizations are concerned, it doesn't make a difference whether a pointer to a MessageJ object was cast to char* or vice versa. The compiler cannot (without further analysis) assume that a char* and a MessageX* point to distinct objects and will not perform any optimizations (e.g. reordering) based on that.
Of course that doesn't change the fact that accessing a char array via a pointer to a different type would still be UB in C++ (I assume mostly due to alignment issues) and the compiler might perform other optimizations that could ruin your day.
EDIT:
What I'm asking is: Will the initialization of a const object, by casting from a char*, ever be optimized below where that object is cast to another type of object, such that I am casting from uninitialized data?
No it will not. Aliasing analysis doesn't influence how the pointer itself is handled, but the access through that pointer. The compiler will NOT reorder the write access (store memory address in the pointer variable) with the read access (copy to other variable / load of address in order to access the memory location) to the same variable.
There is no aliasing problem, as you use a (const) char* type; see the last point of:
If a program attempts to access the stored value of an object through a glvalue of other than one of the following types the behavior is undefined:
the dynamic type of the object,
a cv-qualified version of the dynamic type of the object,
a type similar (as defined in 4.4) to the dynamic type of the object,
a type that is the signed or unsigned type corresponding to the dynamic type of the object,
a type that is the signed or unsigned type corresponding to a cv-qualified version of the dynamic type of the object,
an aggregate or union type that includes one of the aforementioned types among its elements or non-static data members (including, recursively, an element or non-static data member of a subaggregate or contained union),
a type that is a (possibly cv-qualified) base class type of the dynamic type of the object,
a char or unsigned char type.
The other answer answered the question well enough (it's a direct quotation from the C++ standard in https://isocpp.org/files/papers/N3690.pdf page 75), so I'll just point out other problems in what you're doing.
Note that your code may run into alignment problems. For example, if the alignment of MessageJ is 4 or 8 bytes (typical on 32-bit and 64-bit machines), strictly speaking, it is undefined behaviour to access an arbitrary character array through a MessageJ pointer.
You won't run into any problems on x86/AMD64 architectures as they allow unaligned access. However, someday you may find that the code you're developing is ported to a mobile ARM architecture and the unaligned access would be a problem then.
It therefore seems you're doing something you shouldn't be doing. I would consider using serialization instead of accessing a character array as a MessageJ type. Potential alignment problems aren't the only issue; the data may also have a different representation on 32-bit and 64-bit architectures.
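As a rough illustration of that suggestion (the MessageJ layout and field names here are placeholders, and representation differences between platforms still need explicit handling), deserializing by copying into a properly typed object avoids both the aliasing and the alignment concerns:
#include <cstring>

struct MessageJ { int type; int id; };    // placeholder wire layout

// Copy the received bytes into a real MessageJ instead of casting the buffer.
// The caller must guarantee that `wire` holds at least sizeof(MessageJ) bytes.
MessageJ DeserializeMessageJ(const char *wire)
{
    MessageJ msg;
    std::memcpy(&msg, wire, sizeof msg);  // no alignment or aliasing problem
    return msg;
}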
If I have the following struct:
struct Foo { int a; };
Is the code below conforming to the C++ Standard? I mean, can't it generate undefined behavior?
Foo foo;
int ifoo;
foo = *reinterpret_cast<Foo*>(&ifoo);
void bar(int value);
bar(*reinterpret_cast<int*>(&foo));
auto fptr = static_cast<void(*)(...)>(&bar);
fptr(foo);
9.2/20 in N3290 says
A pointer to a standard-layout struct object, suitably converted using a reinterpret_cast, points to its initial member (or if that member is a bit-field, then to the unit in which it resides) and vice versa.
And your Foo is a standard-layout class.
So your second cast is correct.
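A small sketch of what that guarantee buys you (assuming the Foo from the question): the struct pointer and a pointer to its first member can be converted back and forth and compare equal.
#include <cassert>

struct Foo { int a; };

int main()
{
    Foo foo{42};

    // Sanctioned by the standard-layout rule quoted above:
    int *p = reinterpret_cast<int*>(&foo);   // points to foo.a
    assert(p == &foo.a && *p == 42);

    // ...and back again:
    Foo *q = reinterpret_cast<Foo*>(p);
    assert(q == &foo);
}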
I see no guarantee that the first one is correct (and I've used an architecture where a char had weaker alignment restrictions than a struct containing just a char; on such an architecture, it would be problematic). What the standard guarantees is that if you have a pointer to int which really points to the first element of a struct, you can reinterpret_cast it back to a pointer to the struct.
Likewise, I see nothing which would make your third one defined if it were a reinterpret_cast (I'm pretty sure that some ABIs use different conventions to pass structs and basic types, so it is highly suspicious and I'd need a clear mention in the standard to accept it), and I'm quite sure that nothing allows a static_cast between pointers to functions.
As long as you access only the first element of a struct, it's considered to be safe, since there's no padding before the first member of a struct. In fact, this trick is used, for example, in the Objective-C runtime, where a generic pointer type is defined as:
typedef struct objc_object {
Class isa;
} *id;
and at runtime, real objects (which are still bare struct pointers) have memory layouts like this:
struct {
Class isa;
int x; // random other data as instance variables
} *CustomObject;
and the runtime accesses the class of an actual object using this method.
Foo is a plain-old-data structure, which means it contains nothing but the data you explicitly store in it. In this case: an int.
Thus the memory layout for an int and Foo are the same.
You can typecast from one to the other without problems. Whether it's a clever idea to use this kind of stuff is a different question.
PS:
This usually works, but it is not guaranteed to, because of possibly different alignment restrictions. See AProgrammer's answer.
We just upgraded our compiler to gcc 4.6 and now we get some of these warnings. At the moment our codebase is not in a state to be compiled with c++0x and anyway, we don't want to run this in prod (at least not yet) - so I needed a fix to remove this warning.
The warnings occur typically because of something like this:
struct SomeDataPage
{
// members
char vData[SOME_SIZE];
};
later, this is used in the following way
SomeDataPage page;
new(page.vData) SomeType(); // non-trivial constructor
To read, update and return for example, the following cast used to happen
reinterpret_cast<SomeType*>(page.vData)->some_member();
This was okay with 4.4; in 4.6 the above generates:
warning: type punned pointer will break strict-aliasing rules
Now, a clean way to remove this warning is to use a union; however, like I said, we can't use C++0x (and hence unrestricted unions), so I've employed the horrible hack below. The warning has gone away, but am I likely to invoke nasal daemons?
static_cast<SomeType*>(reinterpret_cast<void*>(page.vData))->some_member();
This appears to work okay (see the simple example here: http://www.ideone.com/9p3MS) and generates no warnings. Is it okay (not in the stylistic sense) to use this until C++0x?
NOTE: I don't want to use -fno-strict-aliasing generally...
EDIT: It seems I was mistaken; the same warning is there on 4.4. I guess we only picked this up recently with the change (it was always unlikely to be a compiler issue). The question still stands, though.
EDIT: Further investigation yielded some interesting information: it seems that doing the cast and calling the member function in one line is what causes the warning. If the code is split into two lines as follows,
SomeType* ptr = reinterpret_cast<SomeType*>(page.vData);
ptr->some_method();
this actually does not generate a warning. As a result, my simple example on ideone is flawed and, more importantly, my hack above does not fix the warning. The only way to fix it is to split the function call from the cast; then the cast can be left as a reinterpret_cast.
SomeDataPage page;
new(page.vData) SomeType(); // non-trivial constructor
reinterpret_cast<SomeType*>(page.vData)->some_member();
This was okay with 4.4; in 4.6 the above generates:
warning: type punned pointer will break strict-aliasing rules
You can try:
SomeDataPage page;
SomeType *data = new(page.vData) SomeType(); // non-trivial constructor
data->some_member();
Why not use:
SomeType *item = new (page.vData) SomeType();
and then:
item->some_member ();
I don't think a union is the best way; it may also be problematic. From the gcc docs:
`-fstrict-aliasing'
Allows the compiler to assume the strictest aliasing rules
applicable to the language being compiled. For C (and C++), this
activates optimizations based on the type of expressions. In
particular, an object of one type is assumed never to reside at
the same address as an object of a different type, unless the
types are almost the same. For example, an `unsigned int' can
alias an `int', but not a `void*' or a `double'. A character type
may alias any other type.
Pay special attention to code like this:
union a_union {
int i;
double d;
};
int f() {
a_union t;
t.d = 3.0;
return t.i;
}
The practice of reading from a different union member than the one
most recently written to (called "type-punning") is common. Even
with `-fstrict-aliasing', type-punning is allowed, provided the
memory is accessed through the union type. So, the code above
will work as expected. However, this code might not:
int f() {
a_union t;
int* ip;
t.d = 3.0;
ip = &t.i;
return *ip;
}
How this relates to your problem is tricky to determine. I guess the compiler is not seeing the data in SomeType as the same as the data in vData.
I'd be more concerned about SOME_SIZE not being big enough, frankly. However, it is legal to alias any type with a char*. So simply doing reinterpret_cast<T*>(&page.vData[0]) should be just fine.
Also, I'd question this kind of design. Unless you're implementing boost::variant or something similar, there's not much reason to use it.
struct SomeDataPage
{
// members
char vData[SOME_SIZE];
};
This is problematic for aliasing/alignment reasons. For one, the alignment of this struct isn't necessarily the same as the type you're trying to pun inside it. You could try using GCC attributes to enforce a certain alignment:
struct SomeDataPage { char vData[SOME_SIZE] __attribute__((aligned(16))); };
Where alignment of 16 should be adequate for anything I've come across. Then again, the compiler still won't like your code, but it won't break if the alignment is good. Alternatively, you could use the new C++0x alignof/alignas.
template <class T>
struct DataPage {
alignas(T) char vData[sizeof(T)];
};
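A possible usage sketch, assuming the corrected alignas(T) DataPage above and a placeholder SomeType, and keeping the pointer returned by placement new as the other answers suggest:
#include <new>

struct SomeType {                                // placeholder for the real type
    int value;
    void some_member() { /* ... */ }
};

int main()
{
    DataPage<SomeType> page;                     // correctly sized and aligned for SomeType
    SomeType *obj = new (page.vData) SomeType{}; // placement-new; keep the returned pointer
    obj->some_member();
    obj->~SomeType();                            // manual destruction when done
}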
I know that C++ doesn't support covariance for container elements, as in Java or C#. So the following code probably is undefined behavior:
#include <vector>
struct A {};
struct B : A {};
std::vector<B*> test;
std::vector<A*>* foo = reinterpret_cast<std::vector<A*>*>(&test);
Not surprisingly, I received downvotes when suggesting this a solution to another question.
But what part of the C++ standard exactly tells me that this will result in undefined behavior? It's guaranteed that both std::vector<A*> and std::vector<B*> store their pointers in a contiguous block of memory. It's also guaranteed that sizeof(A*) == sizeof(B*). Finally, A* a = new B is perfectly legal.
So what bad spirits in the standard did I conjure (except style)?
The rule violated here is documented in C++03 3.10/15 [basic.lval], which specifies what is referred to informally as the "strict aliasing rule"
If a program attempts to access the stored value of an object through an lvalue of other than one of the following types the behavior is undefined:
the dynamic type of the object,
a cv-qualified version of the dynamic type of the object,
a type that is the signed or unsigned type corresponding to the dynamic type of the object,
a type that is the signed or unsigned type corresponding to a cv-qualified version of the dynamic type of the object,
an aggregate or union type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate or contained union),
a type that is a (possibly cv-qualified) base class type of the dynamic type of the object,
a char or unsigned char type.
In short, given an object, you are only allowed to access that object via an expression that has one of the types in the list. For a class-type object that has no base classes, like std::vector<T>, basically you are limited to the types named in the first, second, and last bullets.
std::vector<Base*> and std::vector<Derived*> are entirely unrelated types and you can't use an object of type std::vector<Base*> as if it were a std::vector<Derived*>. The compiler could do all sorts of things if you violate this rule, including:
perform different optimizations on one than on the other, or
lay out the internal members of one differently, or
perform optimizations assuming that a std::vector<Base*>* can never refer to the same object as a std::vector<Derived*>*
use runtime checks to ensure that you aren't violating the strict aliasing rule
[It might also do none of these things and it might "work," but there's no guarantee that it will "work" and if you change compilers or compiler versions or compilation settings, it might all stop "working." I use the scare-quotes for a reason here. :-)]
Even if you just had a Base*[N] you could not use that array as if it were a Derived*[N] (though in that case, the use would probably be safer, where "safer" means "still undefined but less likely to get you into trouble").
You are invoking the bad spirit of reinterpret_cast<>.
Unless you really know what you do (I mean not proudly and not pedantically) reinterpret_cast is one of the gates of evil.
The only safe use I know of is managing classes and structures between C++ and C functions calls.
There may be some others, however.
The general problem with covariance in containers is the following:
Let's say your cast would work and be legal (it isn't but let's assume it is for the following example):
#include <vector>
struct A {};
struct B : A { public: int Method(int x, int z); };
struct C : A { public: bool Method(char y); };
std::vector<B*> test;
std::vector<A*>* foo = reinterpret_cast<std::vector<A*>*>(&test);
foo->push_back(new C);
test[0]->Method(7, 99); // What should happen here???
So you have, in effect, also converted a C* to a B*...
Actually I don't know how .NET and Java manage this (I think they throw an exception when trying to insert a C).
I think it'll be easier to show than tell:
#include <cassert>

struct A { int a; };
struct Stranger { int a; };
struct B: Stranger, A {};
int main(int argc, char* argv[])
{
B someObject;
B* b = &someObject;
A* correct = b;
A* incorrect = reinterpret_cast<A*>(b);
assert(correct != incorrect); // troubling, isn't it ?
return 0;
}
The (specific) issue shown here is that when doing a "proper" conversion, the compiler adds some pointer adjustment depending on the memory layout of the objects. With a reinterpret_cast, no adjustment is performed.
I suppose you'll understand why the use of reinterpret_cast should normally be banned from the code...