C++ member layout

Let's say we have a simple structure (a POD):
struct xyz
{
    float x, y, z;
};
May I assume that the following code is OK? May I assume there are no gaps? What does the standard say? Is it true for PODs? Is it true for classes?
xyz v;
float* p = &v.x;
p[0] = 1.0f;
p[1] = 2.0f; // Is it ok?
p[2] = 3.0f; // Is it ok?

The answer here is a bit tricky. The C++ standard says that POD data types will have C layout compatibility guarantees (Reference). According to section 9.2 of the C++ spec, the members of a struct will be laid out in sequential order if
there is no access specifier between them, and
there are no alignment issues with the data type.
So yes, this will work as long as float has no alignment requirement stricter than its size on the current platform. In practice float is 4 bytes with 4-byte alignment on mainstream 32-bit and 64-bit ABIs alike; its alignment is a property of the type, not of sizeof(void*), so a 64-bit target does not by itself introduce padding between these members.

This is not guaranteed by the standard, and will not work on many systems. The reasons are:
The compiler may align struct members as appropriate for the target platform, which may mean 32-bit alignment, 64-bit alignment, or anything else.
The size of the float might be 32 bits, or 64 bits. There's no guarantee that it's the same as the struct member alignment.
This means that p[1] might be at the same location as xyz.y, or it might overlap partially, or not at all.

No, it is not OK to do so except for the first field.
From the C++ standard:
9.2 Class members
A pointer to a POD-struct object, suitably converted using a reinterpret_cast, points to its initial member (or if that member is a bit-field, then to the unit in which it resides) and vice versa. [Note: There might therefore be unnamed padding within a POD-struct object, but not at its beginning, as necessary to achieve appropriate alignment. ]
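As a small illustration, the only access the quoted clause actually guarantees is through the first member:
xyz v;
// Guaranteed by 9.2: a pointer to the object, suitably converted,
// points at its initial member.
float* first = reinterpret_cast<float*>(&v);
*first = 1.0f;   // equivalent to v.x = 1.0f
// Whether first[1] and first[2] also alias y and z is exactly what the rest
// of this page argues about; the quoted guarantee does not cover it.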

Depends on the hardware. The standard explicitly allows POD classes to have unspecified and unpredictable padding. I noted this on the C++ Wikipedia page and grabbed the footnote with the spec reference for you.
ISO/IEC (2003). ISO/IEC 14882:2003(E): Programming Languages - C++, §9.2 Class members [class.mem], para. 17.
In practical terms, however, on common hardware and compilers it will be fine.

When in doubt, change the data structure to suit the application:
struct xyz
{
    float p[3];
};
For readability you may want to consider:
struct xyz
{
    enum { x_index = 0, y_index, z_index, MAX_FLOATS };
    float p[MAX_FLOATS];
    float X(void) const         { return p[x_index]; }
    float X(const float& new_x) { return p[x_index] = new_x; }
    float Y(void) const         { return p[y_index]; }
    float Y(const float& new_y) { return p[y_index] = new_y; }
    float Z(void) const         { return p[z_index]; }
    float Z(const float& new_z) { return p[z_index] = new_z; }
};
Perhaps even add some more encapsulation:
struct Functor
{
    virtual void operator()(const float& f) = 0;
};

struct xyz
{
    void for_each(Functor& ftor)
    {
        ftor(p[0]);
        ftor(p[1]);
        ftor(p[2]);
        return;
    }
private:
    float p[3];
};
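A sketch of how such a Functor might be used (the Print functor, and the assumption that xyz gains some way to populate p, are mine, not part of the answer above):
#include <cstdio>

struct Print : Functor
{
    virtual void operator()(const float& f) { std::printf("%f\n", f); }
};

void dump(xyz& v)          // assumes v has been populated somehow
{
    Print printer;
    v.for_each(printer);   // visits p[0], p[1], p[2] in order
}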
In general, if a data structure needs to be treated in two or more different ways, perhaps the data structure needs to be redesigned; or the code.

The standard requires that the order of arrangement in memory match the order of definition, but allows arbitrary padding between them. If you have an access specifier (public:, private: or protected:) between members, even the guarantee about order is lost.
Edit: in the specific case of all three members being of the same primitive type (i.e. not themselves structs or anything like that) you stand a pretty fair chance -- for primitive types, the object's size and alignment requirements are often the same, so it works out.
OTOH, this is only by accident, and tends to be more of a weakness than a strength; the code is wrong, so ideally it would fail immediately instead of appearing to work, right up to the day that you're giving a demo for the owner of the company that's going to be your most important customer, at which time it will (of course) fail in the most heinous possible fashion...

No, you may not assume that there are no gaps. You may check for your architecture, and if there are none and you don't care about portability, it will be OK.
But imagine a 64-bit architecture with 32-bit floats. The compiler may align the struct's floats on 64-bit boundaries, in which case p[1] will give you junk, p[2] will give you what you thought you were getting from p[1], and so on.
However, your compiler may give you some way to pack the structure. That still wouldn't be "standard" (the standard provides no such thing, and different compilers provide very incompatible ways of doing it), but it is likely to be more portable in practice.

Let's take a look at the Doom III source code:
class idVec4 {
public:
    float x;
    float y;
    float z;
    float w;
    ...
    const float * ToFloatPtr( void ) const;
    float * ToFloatPtr( void );
    ...
};

ID_INLINE const float *idVec4::ToFloatPtr( void ) const {
    return &x;
}

ID_INLINE float *idVec4::ToFloatPtr( void ) {
    return &x;
}
It works on many systems.

Your code is OK (so long as it only ever handles data generated in the same environment). The structure will be laid out in memory as declared if it is POD. However, in general, there is a gotcha you need to be aware of: the compiler will insert padding into the structure to ensure each member's alignment requirements are obeyed.
Had your example been
struct xyz
{
    float x;
    bool y;
    float z;
};
then z would have begun 8 bytes into the structure and sizeof(xyz) would have been 12, as floats are (usually) 4-byte aligned.
Similarly, in the case
struct xyz
{
    float x;
    bool y;
};
sizeof(xyz) == 8, to ensure ((xyz*)ptr)+1 returns a pointer that obeys x's alignment requirements.
Since alignment requirements / type sizes may vary between compilers / platforms, such code is not in general portable.
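If code does rely on the contiguous layout anyway, the assumption is at least cheap to verify at build time. A minimal sketch (static_assert is C++11; a negative-sized-array trick does the same job in C++03):
#include <cstddef>   // offsetof

struct xyz { float x, y, z; };   // the struct from the question

static_assert(offsetof(xyz, y) == 1 * sizeof(float), "unexpected padding after x");
static_assert(offsetof(xyz, z) == 2 * sizeof(float), "unexpected padding after y");
static_assert(sizeof(xyz)      == 3 * sizeof(float), "unexpected trailing padding");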

As others have pointed out, the alignment is not guaranteed by the spec. Many say it is hardware dependent, but actually it is also compiler dependent: hardware may support many different formats. I remember that the PPC compiler supported pragmas for how to "pack" the data; you could pack it on 'native' boundaries, force it to 32-bit boundaries, and so on.
It would be nice to understand what you are trying to do. If you are trying to 'parse' input data, you are better off with a real parser. If you are going to serialize, then write a real serializer. If you are trying to twiddle bits, such as for a driver, then the device spec should give you a specific memory map to write to. Then you can write your POD structure, specify the correct alignment pragmas (if supported) and move on.
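For the serialization case, for instance, copying field by field removes any dependence on the in-memory layout (it still assumes both ends agree on the float representation itself). A minimal sketch:
#include <cstring>

struct xyz { float x, y, z; };

// Writes the three floats into a caller-provided buffer of 3 * sizeof(float)
// bytes, field by field, so struct padding never reaches the wire.
void serialize_xyz(const xyz& v, unsigned char* out)
{
    std::memcpy(out,                     &v.x, sizeof(float));
    std::memcpy(out + sizeof(float),     &v.y, sizeof(float));
    std::memcpy(out + 2 * sizeof(float), &v.z, sizeof(float));
}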

Structure packing (e.g. #pragma pack in MSVC): http://msdn.microsoft.com/en-us/library/aa273913%28v=vs.60%29.aspx
and variable alignment (e.g. __declspec(align(...)) in MSVC): http://msdn.microsoft.com/en-us/library/83ythb65.aspx
are two factors that can wreck your assumptions. Floats are usually 4 bytes wide, so it is rare for such comparatively large variables to end up misaligned, but it is still easy to break your code.
The issue is most visible when reading a binary file header struct that contains shorts (like BMP or TGA): forgetting to pack to 1-byte boundaries causes a disaster.
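A sketch of that file-header case (the field names are illustrative, not an exact TGA or BMP layout):
#include <cstdint>

#pragma pack(push, 1)            // accepted by MSVC, GCC and Clang
struct FileHeader                // illustrative only
{
    std::uint8_t  id_length;
    std::uint16_t width;         // without pack(1), a padding byte is normally
    std::uint16_t height;        // inserted before this field, shifting everything after it
    std::uint8_t  bits_per_pixel;
};
#pragma pack(pop)

// With the pragma, sizeof(FileHeader) == 6; without it, typically 8.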

I assume you want a struct that keeps your coordinates accessible as members (.x, .y and .z) while still letting you access them, let's say, the OpenGL way (as if it were an array).
You can try implementing the [] operator of the struct so it can be accessed like an array. Something like:
#include <stdexcept>

struct xyz
{
    float x, y, z;

    float& operator[] (unsigned int i)
    {
        switch (i)
        {
        case 0:
            return x;
        case 1:
            return y;
        case 2:
            return z;
        default:
            throw std::out_of_range("xyz: index out of range");
        }
    }
};
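Usage then reads like array access while the members keep their names:
xyz v;
v.x = 1.0f;
for (unsigned int i = 1; i < 3; ++i)
    v[i] = v[i - 1] + 1.0f;   // v.y == 2.0f, v.z == 3.0f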

Related

Mask from bitfield in C++

Here's a little puzzle I couldn't find a good answer for:
Given a struct with bitfields, such as
struct A {
    unsigned foo:13;
    unsigned bar:19;
};
Is there a (portable) way in C++ to get the correct mask for one of the bitfields, preferably as a compile-time constant function or template?
Something like this:
constinit unsigned mask = getmask<A::bar>(); // mask should be 0xFFFFE000
In theory, at runtime, I could crudely do:
unsigned getmask_bar() {
    union AA {
        unsigned mask;
        A fields;
    } aa{};
    aa.fields.bar -= 1;
    return aa.mask;
}
That could even be wrapped in a macro (yuck!) to make it "generic".
But I guess you can readily see the various deficiencies of this method.
Is there a nicer, generic C++ way of doing it? Or even a not-so-nice way? Is there something useful coming up for the next C++ standard(s)? Reflection?
Edit: Let me add that I am trying to find a way of making bitfield manipulation more flexible, so that it is up to the programmer to modify multiple fields at the same time using masking. I am after terse notation, so that things can be expressed concisely without lots of boilerplate. Think working with hardware registers in I/O drivers as a use case.
Unfortunately, there is no better way - in fact, there is no way to extract individual adjacent bit fields from a struct by inspecting its memory directly in C++.
From Cppreference:
The following properties of bit-fields are implementation-defined:
The value that results from assigning or initializing a signed bit-field with a value out of range, or from incrementing a signed
bit-field past its range.
Everything about the actual allocation details of bit-fields within the class object
For example, on some platforms, bit-fields don't straddle bytes, on others they do
Also, on some platforms, bit-fields are packed left-to-right, on others right-to-left
Your compiler might give you stronger guarantees; however, if you do rely on the behavior of a specific compiler, you can't expect your code to work with a different compiler/architecture pair. GCC doesn't even document their bit field packing, as far as I can tell, and it differs from one architecture to the next. So your code might work on a specific version of GCC on x86-64 but break on literally everything else, including other versions of the same compiler.
If you really want to be able to extract bitfields from a random structure in a generic way, your best bet is to pass a function pointer around (instead of a mask); that way, the function can access the field in a safe way and return the value to its caller (or set a value instead).
Something like this:
template<typename T>
auto extractThatBitField(const void *ptr) {
    return static_cast<const T *>(ptr)->m_thatBitField;
}

auto *extractor1 = &extractThatBitField<Type1>;
auto *extractor2 = &extractThatBitField<Type2>;
/* ... */
Now, if you have a pair of {pointer, extractor}, you can get the value of the bitfield safely. (Of course, the extractor function has to match the type of the object behind that pointer.) It's not much overhead compared to having a {pointer, mask} pair instead; the function pointer is maybe 4 bytes larger than the mask on a 64-bit machine (if at all). The extractor function itself will just be a memory load, some bit twiddling, and a return instruction. It'll still be super fast.
This is portable and supported by the C++ standard, unlike inspecting the bits of a bitfield directly.
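For example, with a hypothetical type that actually has a member named m_thatBitField:
struct Sample                                  // hypothetical stand-in for Type1 above
{
    unsigned m_thatBitField : 5;
    unsigned m_other        : 3;
};

unsigned readIt()
{
    Sample s{};                                // zero-initialize
    s.m_thatBitField = 7;

    auto *extract = &extractThatBitField<Sample>;
    return extract(&s);                        // 7, with no layout knowledge needed here
}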
Alternatively, C++ allows casting between standard-layout structs that have common initial members. (Though keep in mind that this falls apart as soon as inheritance or private/protected members get involved! The first solution, above, works for all those cases as well.)
struct Common {
    int m_a : 13;
    int m_b : 19;
    int : 0; //Needed to ensure the bit fields end on a byte boundary
};

struct Type1 {
    int m_a : 13;
    int m_b : 19;
    int : 0;
    Whatever m_whatever;
};

struct Type2 {
    int m_a : 13;
    int m_b : 19;
    int : 0;
    Something m_something;
};
#include <cstring>   // for std::memcpy

int getFieldA(const void *ptr) {
    //We still can't do type punning directly due
    //to weirdness in various compilers' aliasing resolution.
    //std::memcpy is the official way to do type punning.
    //This won't compile to an actual memcpy call.
    Common tmp;
    std::memcpy(&tmp, ptr, sizeof(Common));
    return tmp.m_a;
}
See also: Can memcpy be used for type punning?

How can I legally access object with wrong alignment using pointer?

I need to read/write objects using pointers to wrongly aligned locations. I don't want to
use pragma pack or other compiler-specific features.
At first, I tried this:
char arr[sizeof (float) * 2];
*(float *)(arr + 1) = 1.0;
But this code causes UB because it violates the strict aliasing rule (and float's alignment requirement).
The next attempt also will not work without pragma pack, because the compiler might insert padding after the : 8; bit-field.
union
{
    struct
    {
        char arr[sizeof (float) * 2];
    } a;
    struct
    {
        char : 8;    // unnamed bit-field, intended to skip the first byte
        float val;
    } b;
} data;
data.b.val = 1.0;
So, can I legally access an object through a pointer to a wrongly aligned location?
I can copy objects byte by byte to the needed location (or use memcpy), but I think that will be slow, so I'm trying to find another solution.
Update: Yes, I know about alignas, but it can only increase alignment, not decrease it.
You can always access an object as an array of unsigned char in C (not the other way around though).
As char has size and alignment 1, that allows you to access things however you want.
Be aware though that in C++ things aren't quite that nice:
Unless the type is trivially copyable, a byte-copy is not allowed.
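In both languages, the safe way to do what the question's first snippet attempts is to go through memcpy (or a byte-wise loop); mainstream compilers typically turn this into a single unaligned load or store where the hardware allows it, so it is rarely the slow path one might fear. A sketch:
#include <cstring>

char arr[sizeof(float) * 2];

void store_misaligned(float value)
{
    std::memcpy(arr + 1, &value, sizeof value);   // write at the misaligned offset
}

float load_misaligned()
{
    float value;
    std::memcpy(&value, arr + 1, sizeof value);   // read it back
    return value;
}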

Can size of pointers to non-union classes differ?

I understand there are HW platforms where you need more information to point to a char than you need to point to an int (the platform having non-addressable bytes, so a pointer to char needs to store a pointer to a word and also an index of a byte in the word). So it is possible that sizeof(int*) < sizeof(char*) on such platforms.
Can something similar happen with pointers to non-union classes? C++ allows covariant returns types on virtual functions. Let's say we have classes like this:
struct Gadget
{
    // some content
};

struct Widget
{
    virtual Gadget* getGadget();
};
Any code which calls getGadget() has to work when receiving a Gadget*, but the same code (actually the same compiled binary code) has to work when it receives a pointer to a type derived from Gadget as well (perhaps one which is defined in a totally different library). The only way I can reasonably see this happening is sizeof(T*) == sizeof(U*) for all non-union class types T and U.
So my question is, given one particular practical compiler (that excludes hypothetical Hell++) on one particular platform, is it reasonable to expect that all pointers to non-union class types will be of the same size? Or is there a practical reason why a compiler might want to use different sizes while remaining compliant with covariant return types?
On platforms where different "levels" of pointer exist (such as __near and __far), assume the same attribute applied to both.
C has a hard requirement that all pointers to all structure types have the same representation and alignment.
6.2.5 Types
27 [...] All pointers to structure types shall have the same representation and alignment requirements as each other. [...]
C++ effectively requires binary compatibility with a C implementation, because of the standard's requirements for extern "C", so indirectly, this requires all pointers to structure types that are valid in C (POD types, pretty much) to have the same representation and alignment in C++ too.
No such requirement seems to have been made for non-POD types, so an implementation would be allowed to use different pointer sizes in that case. You suggest that that cannot work, but to follow your example,
struct G { };
struct H : G { };

struct W
{
    virtual G* f() { ... }
};

struct X : W
{
    virtual H* f() { ... }
};
could be translated to (pseudo-code)
struct W
{
    virtual G* f() { ... }
};

struct X : W
{
    override G* f() { ... }
    inline H* __X_f() { return static_cast<H *>(f()); }
};
which would still match the requirements of the language.
A valid reason why two pointers to structure types might not have identical representations is when a C++ compiler is ported to a platform that has an existing C compiler with a poorly-designed ABI. G is a POD type, so G * needs to be exactly what it is in C. H is not a POD type, so H * does not need to match any C type.
For alignment, that can actually happen: something that really happened is that the x86-32 ABI on a typical GNU/Linux system requires 64-bit integer types to be 32-bit aligned, even though the processor's preferred alignment is actually 64-bit. Now comes another implementer, and they decide that they do want to require 64-bit alignment, but are stuck if they want to remain compatible with the existing implementation.
For sizes, I cannot think of a reasonable scenario in which it would happen, but I am unsure whether that might be a lack of imagination on my part.
As far as I understand C and C++ assume memory to be linearly byte addressable. Certain platforms (early ARM) however insist on word aligned loads and stores. In such a case, it is the compiler's responsibility to round the pointer to the word boundary and then perform the necessary bit shift operations when fetching say a char.
But since this is all done only on loads and stores, all pointers still look the same.

Is it possible to hash pointers in portable C++03 code?

Is it possible to portably hash a pointer in C++03, which does not have std::hash defined?
It seems really weird for hashables containing pointers to be impossible in C++, but I can't think of any way of making them.
The closest way I can think of is doing reinterpret_cast<uintptr_t>(ptr), but uintptr_t is not required to be defined in C++03, and I'm not sure if the value could be legally manipulated even if it was defined... is this even possible?
No, in general. In fact it's not even possible in general in C++11 without std::hash.
The reason why lies in the difference between values and value representations.
You may recall the very common example used to demonstrate the difference between a value and its representation: the null pointer value. Many people mistakenly assume that the representation for this value is all bits zero. This is not guaranteed in any fashion. You are guaranteed behavior by its value only.
For another example, consider:
int i;
int* x = &i;
int* y = &i;
x == y; // this is true; the two pointer values are equal
Underneath that, though, the value representation for x and y could be different!
Let's play compiler. We'll implement the value representation for pointers. Let's say we need (for hypothetical architecture reasons) the pointers to be at least two bytes, but only one is used for the value.
I'll just jump ahead and say it could be something like this:
struct __pointer_impl
{
    std::uint8_t byte1; // contains the address we're holding
    std::uint8_t byte2; // needed for architecture reasons, unused
    // (assume no padding; we are the compiler, after all)
};
Okay, this is our value representation; now let's implement the value semantics. First, equality:
bool operator==(const __pointer_impl& first, const __pointer_impl& second)
{
    return first.byte1 == second.byte1;
}
Because the pointer's value is really only contained in the first byte (even though its representation has two bytes), that's all we have to compare. The second byte is irrelevant, even if they differ.
We need the address-of operator implementation, of course:
__pointer_impl address_of(int& i)
{
    __pointer_impl result;
    result.byte1 = /* hypothetical architecture magic */;
    return result;
}
This particular implementation overload gets us a pointer value representation for a given int. Note that the second byte is left uninitialized! That's okay: it's not important for the value.
This is really all we need to drive the point home. Pretend the rest of the implementation is done. :)
So now consider our first example again, "compiler-ized":
int i;
/* int* x = &i; */
__pointer_impl x = __address_of(i);
/* int* y = &i; */
__pointer_impl y = __address_of(i);
x == y; // this is true; the two pointer values are equal
For our tiny example on the hypothetical architecture, this sufficiently provides the guarantees required by the standard for pointer values. But note you are never guaranteed that x == y implies memcmp(&x, &y, sizeof(__pointer_impl)) == 0. There simply aren't requirements on the value representation to do so.
Now consider your question: how do we hash pointers? That is, we want to implement:
template <typename T>
struct myhash;

template <typename T>
struct myhash<T*> :
    std::unary_function<T*, std::size_t>
{
    std::size_t operator()(T* const ptr) const
    {
        return /* ??? */;
    }
};
The most important requirement is that if x == y, then myhash()(x) == myhash()(y). We also already know how to hash integers. What can we do?
The only thing we can do is try to somehow convert the pointer to an integer. Well, C++11 gives us std::uintptr_t, so we can do this, right?
return myhash<std::uintptr_t>()(reinterpret_cast<std::uintptr_t>(ptr));
Perhaps surprisingly, this is not correct. To understand why, imagine again we're implementing it:
// okay because we assumed no padding:
typedef std::uint16_t __uintptr_t; // will be used for std::uintptr_t implementation

__uintptr_t __to_integer(const __pointer_impl& ptr)
{
    __uintptr_t result;
    std::memcpy(&result, &ptr, sizeof(__uintptr_t));
    return result;
}

__pointer_impl __from_integer(const __uintptr_t& ptrint)
{
    __pointer_impl result;
    std::memcpy(&result, &ptrint, sizeof(__pointer_impl));
    return result;
}
So when we reinterpret_cast a pointer to integer, we'll use __to_integer, and going back we'll use __from_integer. Note that the resulting integer will have a value depending upon the bits in the value representation of pointers. That is, two equal pointer values could end up with different integer representations...and this is allowed!
This is allowed because the result of reinterpret_cast is totally implementation-defined; you're only guaranteed that the opposite reinterpret_cast gives you back the same result.
So there's the first issue: on this implementation, our hash could end up different for equal pointer values.
This idea is out. Maybe we can reach into the representation itself and hash the bytes together. But this obviously ends up with the same issue, which is what the comments on your question are alluding to. Those pesky unused representation bits are always in the way, and there's no way to figure out where they are so we can ignore them.
We're stuck! It's just not possible. In general.
Remember, in practice we compile for certain implementations, and because the results of these operations are implementation-defined they are reliable if you take care to only use them properly. This is what Mats Petersson is saying: find out the guarantees of the implementation and you'll be fine.
In fact, most consumer platforms you use will handle the std::uintptr_t attempt just fine. If it's not available on your system, or if you want an alternative approach, just combine the hashes of the individual bytes in the pointer. All this requires to work is that the unused representation bits always take on the same value. In fact, this is the approach MSVC2012 uses!
Had our hypothetical pointer implementation simply always initialized byte2 to a constant, it would work there as well. But there just isn't any requirement for implementations to do so.
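A C++03-style sketch of that byte-combining fallback (it assumes, as discussed, that the unused representation bits of equal pointer values are identical, which the standard does not promise):
#include <cstddef>
#include <cstring>

template <typename T>
std::size_t hash_pointer_bytes(T* ptr)
{
    unsigned char bytes[sizeof ptr];
    std::memcpy(bytes, &ptr, sizeof ptr);      // copy the pointer's value representation

    std::size_t h = 2166136261u;               // FNV-1a style byte combine
    for (std::size_t i = 0; i < sizeof ptr; ++i)
        h = (h ^ bytes[i]) * 16777619u;
    return h;
}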
Hope this clarifies a few things.
The answer to your question really depends on how portable you want it to be. Many architectures will have a uintptr_t, but if you want something that can compile on DSPs, Linux, Windows, AIX, old Cray machines, IBM 390 series machines, etc., then you may have to have a config option where you define your own "uintptr_t" when the architecture doesn't provide one.
Casting a pointer to an integer type should be fine. If you were to cast it back, you may be in trouble. Of course, if you have MANY pointers and you allocate fairly large sections of memory on a 64-bit machine, then using a 32-bit integer means there is a chance you get lots of collisions. Note that 64-bit Windows still has a 32-bit "long".

Purpose of Unions in C and C++

I have used unions earlier comfortably; today I was alarmed when I read this post and came to know that this code
union ARGB
{
    uint32_t colour;

    struct componentsTag
    {
        uint8_t b;
        uint8_t g;
        uint8_t r;
        uint8_t a;
    } components;

} pixel;

pixel.colour = 0xff040201; // ARGB::colour is the active member from now on

// somewhere down the line, without any edit to pixel

if(pixel.components.a) // accessing the non-active member ARGB::components
is actually undefined behaviour, i.e. reading from a member of the union other than the one most recently written to leads to undefined behaviour. If this isn't the intended usage of unions, what is? Can someone please explain it elaborately?
Update:
I wanted to clarify a few things in hindsight.
The answer to the question isn't the same for C and C++; my ignorant younger self tagged it as both C and C++.
After scouring through C++11's standard I couldn't conclusively say that it calls out accessing/inspecting a non-active union member is undefined/unspecified/implementation-defined. All I could find was §9.5/1:
If a standard-layout union contains several standard-layout structs that share a common initial sequence, and if an object of this standard-layout union type contains one of the standard-layout structs, it is permitted to inspect the common initial sequence of any of standard-layout struct members. §9.2/19: Two standard-layout structs share a common initial sequence if corresponding members have layout-compatible types and either neither member is a bit-field or both are bit-fields with the same width for a sequence of one or more initial members.
While in C, (C99 TC3 - DR 283 onwards) it's legal to do so (thanks to Pascal Cuoq for bringing this up). However, attempting to do it can still lead to undefined behavior, if the value read happens to be invalid (so called "trap representation") for the type it is read through. Otherwise, the value read is implementation defined.
C89/90 called this out under unspecified behavior (Annex J) and K&R's book says it's implementation defined. Quote from K&R:
This is the purpose of a union - a single variable that can legitimately hold any of one of several types. [...] so long as the usage is consistent: the type retrieved must be the type most recently stored. It is the programmer's responsibility to keep track of which type is currently stored in a union; the results are implementation-dependent if something is stored as one type and extracted as another.
Extract from Stroustrup's TC++PL (emphasis mine)
Use of unions can be essential for compactness of data [...] sometimes misused for "type conversion".
Above all, this question (whose title remains unchanged since my ask) was posed with an intention of understanding the purpose of unions AND not on what the standard allows E.g. Using inheritance for code reuse is, of course, allowed by the C++ standard, but it wasn't the purpose or the original intention of introducing inheritance as a C++ language feature. This is the reason Andrey's answer continues to remain as the accepted one.
The purpose of unions is rather obvious, but for some reason people miss it quite often.
The purpose of union is to save memory by using the same memory region for storing different objects at different times. That's it.
It is like a room in a hotel. Different people live in it for non-overlapping periods of time. These people never meet, and generally don't know anything about each other. By properly managing the time-sharing of the rooms (i.e. by making sure different people don't get assigned to one room at the same time), a relatively small hotel can provide accommodations to a relatively large number of people, which is what hotels are for.
That's exactly what union does. If you know that several objects in your program hold values with non-overlapping value-lifetimes, then you can "merge" these objects into a union and thus save memory. Just like a hotel room has at most one "active" tenant at each moment of time, a union has at most one "active" member at each moment of program time. Only the "active" member can be read. By writing into other member you switch the "active" status to that other member.
For some reason, this original purpose of the union got "overridden" with something completely different: writing one member of a union and then inspecting it through another member. This kind of memory reinterpretation (aka "type punning") is not a valid use of unions. It generally leads to undefined behavior (and is described as producing implementation-defined behavior in C89/90).
EDIT: Using unions for the purposes of type punning (i.e. writing one member and then reading another) was given a more detailed definition in one of the Technical Corrigenda to the C99 standard (see DR#257 and DR#283). However, keep in mind that formally this does not protect you from running into undefined behavior by attempting to read a trap representation.
You could use unions to create structs like the following, which contains a field that tells us which component of the union is actually used:
struct VAROBJECT
{
    enum o_t { Int, Double, String } objectType;

    union
    {
        int    intValue;
        double dblValue;
        char   *strValue;
    } value;
} object;
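Code that reads such a variant checks the tag first and only touches the member the tag names; a sketch (C++ scoping shown; in C the enumerators are just Int, Double, String):
#include <cstdio>

void print_varobject(const VAROBJECT* v)
{
    switch (v->objectType)
    {
    case VAROBJECT::Int:    std::printf("%d\n", v->value.intValue); break;
    case VAROBJECT::Double: std::printf("%f\n", v->value.dblValue); break;
    case VAROBJECT::String: std::printf("%s\n", v->value.strValue); break;
    }
}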
The behavior is undefined from the language point of view. Consider that different platforms can have different constraints in memory alignment and endianness. The code in a big endian versus a little endian machine will update the values in the struct differently. Fixing the behavior in the language would require all implementations to use the same endianness (and memory alignment constraints...) limiting use.
If you are using C++ (you are using two tags) and you really care about portability, then you can just use the struct and provide a setter that takes the uint32_t and sets the fields appropriately through bitmask operations. The same can be done in C with a function.
Edit: I was expecting AProgrammer to write down an answer to vote for and close this one. As some comments have pointed out, endianness is dealt with in other parts of the standard by letting each implementation decide what to do, and alignment and padding can also be handled differently. Now, the strict aliasing rules that AProgrammer implicitly refers to are an important point here. The compiler is allowed to make assumptions about the modification (or lack of modification) of variables. In the case of the union, the compiler could reorder instructions and move the read of each color component over the write to the colour variable.
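A sketch of the bitmask-based setter suggested above (the byte order chosen, b in the lowest byte up to a in the highest, is an assumption matching the question's 0xff040201 example, not something the language fixes):
#include <cstdint>

struct ARGB
{
    std::uint8_t b, g, r, a;

    // Interpret the 32-bit value as 0xAARRGGBB, as in the question.
    void set_colour(std::uint32_t colour)
    {
        b = static_cast<std::uint8_t>( colour        & 0xFF);
        g = static_cast<std::uint8_t>((colour >> 8)  & 0xFF);
        r = static_cast<std::uint8_t>((colour >> 16) & 0xFF);
        a = static_cast<std::uint8_t>((colour >> 24) & 0xFF);
    }
};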
The most common use of union I regularly come across is aliasing.
Consider the following:
union Vector3f
{
    struct { float x, y, z; };   // anonymous struct member: a widespread compiler extension, not standard C++
    float elts[3];
};
What does this do? It allows clean, neat access of a Vector3f vec;'s members by either name:
vec.x=vec.y=vec.z=1.f ;
or by integer access into the array
for( int i = 0 ; i < 3 ; i++ )
vec.elts[i]=1.f;
In some cases, accessing by name is the clearest thing you can do. In other cases, especially when the axis is chosen programmatically, the easier thing to do is to access the axis by numerical index - 0 for x, 1 for y, and 2 for z.
As you say, this is strictly undefined behaviour, though it will "work" on many platforms. The real reason for using unions is to create variant records.
union A {
    int i;
    double d;
};

A a[10]; // records in "a" can be either ints or doubles
a[0].i = 42;
a[1].d = 1.23;
Of course, you also need some sort of discriminator to say what the variant actually contains. And note that in C++ unions are not much use because they can only contain POD types - effectively those without constructors and destructors.
In C it was a nice way to implement something like a variant:
enum possibleTypes {
    eInt,
    eDouble,
    eChar
};

struct Value {
    union {              /* no tag here: in C the tag Value is already taken by the struct */
        int    iVal_;
        double dval;
        char   cVal;
    } value_;
    enum possibleTypes discriminator_;
};

struct Value val;
/* ... store a value and record its type in val.discriminator_ ... */

switch (val.discriminator_)
{
case eInt:    /* use val.value_.iVal_ */ break;
case eDouble: /* use val.value_.dval  */ break;
case eChar:   /* use val.value_.cVal  */ break;
}
In times of little memory this structure uses less memory than a struct that has all the members.
By the way C provides
typedef struct {
    unsigned int mantissa_low:32;   //mantissa
    unsigned int mantissa_high:20;
    unsigned int exponent:11;       //exponent
    unsigned int sign:1;
} realVal;
to access bit values.
Although this is strictly undefined behaviour, in practice it will work with pretty much any compiler. It is such a widely used paradigm that any self-respecting compiler will need to do "the right thing" in cases such as this. It's certainly to be preferred over type-punning, which may well generate broken code with some compilers.
In C++, Boost Variant implement a safe version of the union, designed to prevent undefined behavior as much as possible.
Its performances are identical to the enum + union construct (stack allocated too etc) but it uses a template list of types instead of the enum :)
The behaviour may be undefined, but that just means there isn't a "standard". All decent compilers offer #pragmas to control packing and alignment, but may have different defaults. The defaults will also change depending on the optimisation settings used.
Also, unions are not just for saving space. They can help modern compilers with type punning. If you reinterpret_cast<> everything the compiler can't make assumptions about what you are doing. It may have to throw away what it knows about your type and start again (forcing a write back to memory, which is very inefficient these days compared to CPU clock speed).
Technically it's undefined, but in reality most (all?) compilers treat it exactly the same as using a reinterpret_cast from one type to the other, the result of which is implementation defined. I wouldn't lose sleep over your current code.
For one more example of the actual use of unions, the CORBA framework serializes objects using the tagged union approach. All user-defined classes are members of one (huge) union, and an integer identifier tells the demarshaller how to interpret the union.
Others have mentioned the architecture differences (little - big endian).
I read the problem that since the memory for the variables is shared, then by writing to one, the others change and, depending on their type, the value could be meaningless.
eg.
union {
    float f;
    int i;
} x;
Writing to x.i would be meaningless if you then read from x.f - unless that is what you intended in order to look at the sign, exponent or mantissa components of the float.
I think there is also an issue of alignment: If some variables must be word aligned then you might not get the expected result.
eg.
union {
    char c[4];
    int i;
} x;
If, hypothetically, on some machine a char had to be word aligned then c[0] and c[1] would share storage with i but not c[2] and c[3].
In the C language as it was documented in 1974, all structure members shared a common namespace, and the meaning of "ptr->member" was defined as adding the member's displacement to "ptr" and accessing the resulting address using the member's type. This design made it possible to use the same ptr with member names taken from different structure definitions but with the same offset; programmers used that ability for a variety of purposes.
When structure members were assigned their own namespaces, it became impossible to declare two structure members with the same displacement. Adding unions to the language made it possible to achieve the same semantics that had been available in earlier versions of the language (though the inability to have names exported to an enclosing context may still have necessitated a find/replace to turn foo->member into foo->type1.member). What was important was not so much that the people who added unions had any particular target usage in mind, but rather that they provided a means by which programmers who had relied upon the earlier semantics, for whatever purpose, could still achieve the same semantics, even if they had to use a different syntax to do it.
As others mentioned, unions combined with enumerations and wrapped into structs can be used to implement tagged unions. One practical use is to implement Rust's Result<T, E>, which is originally implemented using a pure enum (Rust can hold additional data in enumeration variants). Here is a C++ example:
#include <cstdint>

// Note: this sketch assumes T and E are trivially copyable, because the union
// members are assigned to directly rather than constructed in place.
template <typename T, typename E> struct Result {
public:
    enum class Success : std::uint8_t { Ok, Err };

    Result(T val) {
        m_success = Success::Ok;
        m_value.ok = val;
    }
    Result(E val) {
        m_success = Success::Err;
        m_value.err = val;
    }
    inline bool operator==(const Result& other) {
        return other.m_success == this->m_success;
    }
    inline bool operator!=(const Result& other) {
        return other.m_success != this->m_success;
    }
    inline T expect(const char* errorMsg) {
        if (m_success == Success::Err) throw errorMsg;
        else return m_value.ok;
    }
    inline bool is_ok() {
        return m_success == Success::Ok;
    }
    inline bool is_err() {
        return m_success == Success::Err;
    }
    inline const T* ok() {
        if (is_ok()) return &m_value.ok;     // return a pointer to the member, not its value
        else return nullptr;
    }
    inline const E* err() {
        if (is_err()) return &m_value.err;   // same fix for the error alternative
        else return nullptr;
    }
    // Other methods from https://doc.rust-lang.org/std/result/enum.Result.html
private:
    Success m_success;
    union _val_t { T ok; E err; } m_value;
};
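Usage might look like this (safe_div is invented for the example):
#include <cstdio>

Result<int, const char*> safe_div(int a, int b)
{
    if (b == 0)
        return Result<int, const char*>("division by zero");
    return Result<int, const char*>(a / b);
}

void demo()
{
    Result<int, const char*> r = safe_div(10, 2);
    if (r.is_ok())
        std::printf("%d\n", r.expect("unreachable"));   // prints 5
}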
You can use a union for two main reasons:
A handy way to access the same data in different ways, as in your example
A way to save space when there are different data members of which only one can ever be 'active'
1. is really more of a C-style hack to short-cut writing code on the basis that you know how the target system's memory architecture works. As already said, you can normally get away with it if you don't actually target lots of different platforms. I believe some compilers also let you use packing directives (I know they do for structs).
A good example of 2. can be found in the VARIANT type used extensively in COM.
@bobobobo's code is correct, as @Joshua pointed out (sadly I'm not allowed to add comments, so doing it here; IMO a bad decision to disallow that in the first place):
https://en.cppreference.com/w/cpp/language/data_members#Standard_layout says that it is fine to do so, at least since C++14:
In a standard-layout union with an active member of non-union class type T1, it is permitted to read a non-static data member m of another union member of non-union class type T2 provided m is part of the common initial sequence of T1 and T2 (except that reading a volatile member through non-volatile glvalue is undefined).
since in the current case T1 and T2 denote the same type anyway.