struct A
{
char c;
double d;
} a;
In mingw32-gcc.exe: sizeof a = 16
In gcc 4.6.3 (Ubuntu): sizeof a = 12
Why are they different? I think it should be 16; does gcc 4.6.3 do some optimization?
Compilers might perform data structure alignment for a target architecture if needed. It might be done purely to improve runtime performance of the application, or in some cases it is required by the processor (i.e. the program will not work if the data is not aligned).
For example, most (but not all) SSE2 instructions require data to be aligned on a 16-byte boundary. To put it simply, everything in computer memory has an address. Let's say we have a simple array of doubles, like this:
double data[256];
In order to use SSE2 instructions that require 16-byte alignment, one must make sure that the address &data[0] is a multiple of 16.
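As a quick, minimal sketch (assuming a C11 compiler), you can both request that alignment and verify it at run time; the 16-byte figure is the SSE2 requirement mentioned above:
#include <stdint.h>
#include <stdio.h>

/* C11: explicitly request 16-byte alignment for the array */
_Alignas(16) static double data[256];

int main(void)
{
    /* verify that the address of the first element is a multiple of 16 */
    if ((uintptr_t)&data[0] % 16 == 0)
        printf("data is 16-byte aligned\n");
    else
        printf("data is NOT 16-byte aligned\n");
    return 0;
}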
The alignment requirements differ from one architecture to another. On x86_64, it is recommended that all structures larger than 16 bytes align on 16-byte boundaries. In general, for the best performance, align data as follows:
Align 8-bit data at any address
Align 16-bit data to be contained within an aligned four-byte word
Align 32-bit data so that its base address is a multiple of four
Align 64-bit data so that its base address is a multiple of eight
Align 80-bit data so that its base address is a multiple of sixteen
Align 128-bit data so that its base address is a multiple of sixteen
Interestingly enough, most x86_64 CPUs will work with both aligned and unaligned data. However, if the data is not aligned properly, the CPU executes the code significantly slower.
When the compiler takes this into consideration, it may align members of a structure implicitly, and that affects its size. For example, let's say we have a structure like this:
struct A {
char a;
int b;
};
Assuming x86_64, the size of int is 32 bits, or 4 bytes. Therefore, it is recommended to always make the address of b a multiple of 4. But because a is only 1 byte, that would not happen on its own. Therefore, the compiler adds 3 bytes of padding between a and b implicitly:
struct A {
char a;
char __pad0[3]; /* This would be added by compiler,
without any field names - __pad0 is for
demonstration purposes */
int b;
};
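A quick way to see this padding for yourself is offsetof from <stddef.h>; the commented values are what a typical x86_64 ABI produces, not a guarantee:
#include <stddef.h>
#include <stdio.h>

struct A {
    char a;
    int b;
};

int main(void)
{
    printf("offsetof(struct A, a) = %zu\n", offsetof(struct A, a)); /* 0 */
    printf("offsetof(struct A, b) = %zu\n", offsetof(struct A, b)); /* typically 4 */
    printf("sizeof(struct A)      = %zu\n", sizeof(struct A));      /* typically 8 */
    return 0;
}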
How the compiler does this depends not only on the compiler and the architecture, but also on the compiler settings (flags) you pass to it. This behavior can also be affected using special language constructs. For example, one can ask the compiler not to perform any padding with the packed attribute, like this:
struct A {
char a;
int b;
} __attribute__((packed));
In your case, mingw32-gcc.exe has simply added 7 bytes between c and d to align d on an 8-byte boundary, whereas gcc 4.6.3 on Ubuntu has added only 3 bytes to align d on a 4-byte boundary.
Unless you are performing some optimization, trying to use a special extended instruction set, or have specific requirements for your data structures, I'd recommend that you do not depend on specific compiler behavior and always assume that not only might your structure get padded, it might get padded differently across architectures, compilers and compiler versions. Otherwise you'd need to semi-manually ensure data alignment and structure sizes using compiler attributes and settings, and make sure it all works across all compilers and platforms you are targeting, using unit tests or maybe even static assertions.
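For example, a minimal compile-time check along those lines, using C11 _Static_assert (the expected numbers are assumptions for one particular ABI, which is exactly why you would assert them):
#include <stddef.h>

struct A {
    char c;
    double d;
};

/* Compilation fails if the layout is not what this code was written for. */
_Static_assert(sizeof(struct A) == 16, "unexpected size of struct A");
_Static_assert(offsetof(struct A, d) == 8, "unexpected offset of d");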
For more information, please check out:
Data Alignment article on Wikipedia
Data Alignment when Migrating to 64-Bit Intel® Architecture
GCC Variable Attributes
Hope it helps. Good Luck!
How to minimize padding:
It is always good to have all your struct members properly aligned and at the same time keep your structure size reasonable. Consider these 2 struct variants with members rearranged (from now on, assume sizeof of char, short, int, long, long long to be 1, 2, 4, 4, 8 respectively):
struct A
{
char a;
short b;
char c;
int d;
};
struct B
{
char a;
char c;
short b;
int d;
};
Both structures are supposed to keep the same data, but while sizeof(struct A) will be 12 bytes, sizeof(struct B) will be 8, thanks to a well-thought-out member order which eliminated the implicit padding:
struct A
{
char a;
char __pad0[1]; // implicit compiler padding
short b;
char c;
char __pad1[3]; // implicit compiler padding
int d;
};
struct B // no implicit padding
{
char a;
char c;
short b;
int d;
};
Rearranging struct members can become error prone as the member count grows. To make it less error prone, put the longest members at the beginning and the shortest at the end:
struct B // no implicit padding
{
int d;
short b;
char a;
char c;
};
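A small sketch to confirm the difference on your own compiler (the exact numbers depend on the ABI, as discussed above):
#include <stdio.h>

struct A { char a; short b; char c; int d; };
struct B { char a; char c; short b; int d; };

int main(void)
{
    printf("sizeof(struct A) = %zu\n", sizeof(struct A)); /* typically 12 */
    printf("sizeof(struct B) = %zu\n", sizeof(struct B)); /* typically 8 */
    return 0;
}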
Implicit padding at the end of a struct:
Depending on the compiler, settings, platform etc. used, you may notice that the compiler adds padding not only before struct members but also at the end (i.e. after the last member). The structure below:
struct abcd
{
long long a;
char b;
};
may occupy 12 or 16 bytes (the worst compilers will allow it to be 9 bytes). This padding is easily overlooked but becomes very important if your structure is used as an array element: it ensures that the a member of every subsequent array element is properly aligned too.
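A short sketch of why that trailing padding matters for arrays (the figures assume a platform where long long needs 8-byte alignment):
#include <stdint.h>
#include <stdio.h>

struct abcd {
    long long a;
    char b;
    /* trailing padding is added by the compiler here */
};

int main(void)
{
    struct abcd arr[2];
    printf("sizeof(struct abcd) = %zu\n", sizeof(struct abcd)); /* e.g. 16 */
    /* thanks to the trailing padding, arr[1].a is still 8-byte aligned */
    printf("arr[1].a aligned: %d\n", (uintptr_t)&arr[1].a % 8 == 0);
    return 0;
}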
Final and random thoughts:
It will never hurt (and may actually save) you if, when working with structs, you follow this advice:
Do not rely on the compiler to interleave your struct members with proper padding.
Make sure your struct (if outside an array) is aligned to the boundary required by its longest member.
Make sure you arrange your struct members so that the longest are placed first and the last member is the shortest.
Make sure you explicitly pad your struct (if needed) so that if you create an array of structs, every structure member has proper alignment.
Make sure that arrays of your structs are properly aligned too: although your struct may require 8-byte alignment, your compiler may align your array on a 4-byte boundary.
The values returned by sizeof for structs are not mandated by any C standard. It's up to the compiler and machine architecture.
For example, it can be optimal to align data members on 4-byte boundaries, in which case the effective packed size of char c will be 4 bytes.
The following code is from a PIC microcontroller header file, but I assume it's plain old C.
I understand that the code is to be used for accessing individual bits at an address in memory, but as a C novice, I'd like some help in understanding what is going on here, and how I'd go about using it in my code to set or get bits from ADCON1.
volatile unsigned char ADCON1 @ 0x09F;
volatile bit VCFG0 @ ((unsigned)&ADCON1*8)+4;
volatile bit VCFG1 @ ((unsigned)&ADCON1*8)+5;
volatile bit ADFM @ ((unsigned)&ADCON1*8)+7;
volatile union {
struct {
unsigned : 4;
unsigned VCFG0 : 1;
unsigned VCFG1 : 1;
unsigned : 1;
unsigned ADFM : 1;
};
} ADCON1bits @ 0x09F;
tagged to C and C++. Let me know if it's not C++ compatible code, and I'll remove the tag
volatile unsigned char ADCON1 @ 0x09F;
This simply declares the ADCON1 variable. volatile means accesses should not be optimized out, because the variable contents may change during execution. (ie. the hardware updates the value.)
I'm guessing the @ syntax is non-standard C; I've never seen it before. But I figure it means the value is to be found at offset 0x09F.
volatile bit VCFG0 @ ((unsigned)&ADCON1*8)+4;
volatile bit VCFG1 @ ((unsigned)&ADCON1*8)+5;
volatile bit ADFM @ ((unsigned)&ADCON1*8)+7;
These again declare variables. The bit type is also not standard C, as far as I know, but should be self-explanatory.
The @ syntax is again used here to declare the location, but the interesting thing is that apparently the offset is in type-increments, because the address of ADCON1 is multiplied by 8. (A char is 8 times the size of a bit.)
It's much the same as the way you'd index an array or do pointer arithmetic in regular C; for example, char foo[4] is an array 4 bytes in size, but int bar[4] is an array 16 bytes in size (assuming a 4-byte int). Except in this case, your 'array' is the processor's entire address space.
So basically, these variables represent specific bits of ADCON1, by taking the char-address (&ADCON1), converting it to a bit-address (*8), then addressing the specific bit (+4).
volatile union {
struct {
unsigned : 4;
unsigned VCFG0 : 1;
unsigned VCFG1 : 1;
unsigned : 1;
unsigned ADFM : 1;
};
} ADCON1bits @ 0x09F;
This declaration is independent from the above, but achieves about the same.
A union of one struct is declared, and a variable of that type is declared at offset 0x09F. The :4 syntax you see in the struct indicates a bit-size of the member. Nameless struct members are simply inaccessible.
The union doesn't seem to really add anything here. You'd access bits as ADCON1bits.VCFG0.
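If the goal is simply to set or read bits of ADCON1 from your own code, a plain-C sketch of the same idea looks like this; the address 0x9F and the bit positions are taken from the declarations above, and the pointer cast stands in for the compiler-specific absolute-address placement:
#include <stdint.h>

#define ADCON1_REG (*(volatile uint8_t *)0x9F)  /* the same register, by address */

#define VCFG0_BIT  4
#define VCFG1_BIT  5
#define ADFM_BIT   7

void adc_config_example(void)
{
    ADCON1_REG |=  (uint8_t)(1u << ADFM_BIT);        /* set ADFM */
    ADCON1_REG &= (uint8_t)~(1u << VCFG0_BIT);       /* clear VCFG0 */
    uint8_t vcfg1 = (ADCON1_REG >> VCFG1_BIT) & 1u;  /* read VCFG1 */
    (void)vcfg1;
    /* with the vendor header, the same accesses are simply ADCON1bits.ADFM = 1; etc. */
}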
Presumably, there is a byte register at 0x09F that controls the ADC; 'bit' is a boolean type that can be addressed as if memory were a bit array starting at 0 (hence the *8), so the ADC is configured by, e.g., 'ADFM=0'.
The union is an alternative means of accessing the ADC control register using bit fields (e.g. ADCON1bits.VCFG1=1).
The whole lot is not really standard C or C++ - it's 'microcontroller C'.
I am not totally sure about C, but C++ allows unnamed bit-fields of 0 length. For example:
struct X
{
int : 0;
};
Question one: What practical uses of this can you think of?
Question two: What real-world practical uses (if any) are you aware of?
Edited the example after ice-crime's answer
Edit: OK, thanks to the current answers I now know the theoretical purpose. But the questions are about practical uses so they still hold :)
You use a zero-length bitfield as a hacky way to get your compiler to lay out a structure to match some external requirement, be it another compiler's or architecture's notion of the layout (cross-platform data structures, such as in a binary file format) or a bit-level standard's requirements (network packets or instruction opcodes).
A real-world example is when NeXT ported the xnu kernel from the Motorola 68000 (m68k) architecture to the i386 architecture. NeXT had a working m68k version of their kernel. When they ported it to i386, they found that the i386's alignment requirements differed from the m68k's in such a way that an m68k machine and an i386 machine did not agree on the layout of the NeXT vendor-specific BOOTP structure. In order to make the i386 structure layout agree with the m68k, they added an unnamed bitfield of length zero to force the NV1 structure/nv_U union to be 16-bit aligned.
Here are the relevant parts from the Mac OS X 10.6.5 xnu source code:
/* from xnu/bsd/netinet/bootp.h */
/*
* Bootstrap Protocol (BOOTP). RFC 951.
*/
/*
* HISTORY
*
* 14 May 1992 ? at NeXT
* Added correct padding to struct nextvend. This is
* needed for the i386 due to alignment differences wrt
* the m68k. Also adjusted the size of the array fields
* because the NeXT vendor area was overflowing the bootp
* packet.
*/
/* . . . */
struct nextvend {
u_char nv_magic[4]; /* Magic number for vendor specificity */
u_char nv_version; /* NeXT protocol version */
/*
* Round the beginning
* of the union to a 16
* bit boundary due to
* struct/union alignment
* on the m68k.
*/
unsigned short :0;
union {
u_char NV0[58];
struct {
u_char NV1_opcode; /* opcode - Version 1 */
u_char NV1_xid; /* transcation id */
u_char NV1_text[NVMAXTEXT]; /* text */
u_char NV1_null; /* null terminator */
} NV1;
} nv_U;
};
The standard (9.6/2) only allows 0-length bit-fields as a special case:
As a special case, an unnamed bit-field with a width of zero specifies alignment of the next bit-field at an allocation unit boundary. Only when declaring an unnamed bit-field may the constant-expression be a value equal to zero.
The only use is described in this quote, although I've never encountered it in practical code yet.
For the record, I just tried the following code under VS 2010:
#include <iostream>

struct X {
    int i : 3, j : 5;
};
struct Y {
    int i : 3, : 0, j : 5; // nice syntax huh?
};

int main()
{
    std::cout << sizeof(X) << " - " << sizeof(Y) << std::endl;
}
The output on my machine is indeed: 4 - 8.
struct X { int : 0; };
is undefined behavior in C.
See (emphasis mine):
(C99, 6.7.2.1p2) "The presence of a struct-declaration-list in a struct-or-union-specifier declares a new type, within a translation unit. The struct-declaration-list is a sequence of declarations for the members of the structure or union. If the struct-declaration-list contains no named members, the behavior is undefined"
(C11 has the same wording.)
You can use an unnamed bit-field with 0 width but not if there is no other named member in the structure.
For example:
struct W { int a:1; int :0; }; // OK
struct X { int :0; }; // Undefined Behavior
By the way, for the second declaration, gcc issues a diagnostic (not required by the C Standard) with -pedantic.
On the other hand:
struct X { int :0; };
is defined in GNU C. It is used for example by the Linux kernel (include/linux/bug.h) to force a compilation error using the following macro if the condition is true:
#define BUILD_BUG_ON_ZERO(e) (sizeof(struct { int:-!!(e); }))
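A hedged sketch of how such a macro gets used; only the macro itself comes from the kernel, while struct foo and the size limit are made up for illustration (and note that this relies on the GNU C extension discussed above):
#include <stddef.h>

/* negative bit-field width => compile error when e is true */
#define BUILD_BUG_ON_ZERO(e) (sizeof(struct { int:-!!(e); }))

struct foo { int a; char b; };

static void sanity_check(void)
{
    /* breaks the build if struct foo ever grows beyond 8 bytes (illustrative limit) */
    (void)BUILD_BUG_ON_ZERO(sizeof(struct foo) > 8);
}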
This is from MSDN and not marked as Microsoft Specific, so I guess this is common C++ standard:
An unnamed bit field of width 0 forces alignment of the next bit field to the next type boundary, where type is the type of the member.
The C11 standard now allows the inclusion of zero length bitfields. Here is an example from the C Committee draft (N1570), which I believe illustrates a practical usage.
3.14 memory location
...
4. EXAMPLE A structure declared as
struct {
char a;
int b:5, c:11, :0, d:8;
struct { int ee:8; } e;
}
contains four separate memory locations: The member a, and bit-fields d and e.ee are each separate memory locations, and can be modified concurrently without interfering with each other. The bit-fields b and c together constitute the fourth memory location. The bit-fields b and c cannot be concurrently modified, but b and a, for example, can be.
So including the zero length bitfield in between the bitfields c and d allows the concurrent modification of b and d as well.
I have learned but don't really get unions. Every C or C++ text I go through introduces them (sometimes in passing), but they tend to give very few practical examples of why or where to use them. When would unions be useful in a modern (or even legacy) case? My only two guesses would be programming microprocessors when you have very limited space to work with, or when you're developing an API (or something similar) and you want to force the end user to have only one instance of several objects/types at one time. Are these two guesses even close to right?
Unions are usually used in the company of a discriminator: a variable indicating which of the fields of the union is valid. For example, let's say you want to create your own Variant type:
struct my_variant_t {
int type;
union {
char char_value;
short short_value;
int int_value;
long long_value;
float float_value;
double double_value;
void* ptr_value;
};
};
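The type field would be driven by a set of constants, for example an enum like the following; the names are illustrative and simply match the switch further down, since the original snippet leaves them implicit:
enum variant_type {
    VAR_CHAR,
    VAR_SHORT,
    VAR_INT,
    VAR_LONG,
    VAR_FLOAT,
    VAR_DOUBLE,
    VAR_PTR
};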
Then you would use it like this:
/* construct a new float variant instance */
void init_float(struct my_variant_t* v, float initial_value) {
v->type = VAR_FLOAT;
v->float_value = initial_value;
}
/* Increments the value of the variant by the given int */
void inc_variant_by_int(struct my_variant_t* v, int n) {
switch (v->type) {
case VAR_FLOAT:
v->float_value += n;
break;
case VAR_INT:
v->int_value += n;
break;
...
}
}
This is actually a pretty common idiom, especially in Visual Basic internals.
For a real example see SDL's SDL_Event union. (actual source code here). There is a type field at the top of the union, and the same field is repeated on every SDL_*Event struct. Then, to handle the correct event you need to check the value of the type field.
The benefits are simple: there is one single data type to handle all event types without using unnecessary memory.
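A minimal sketch of that pattern with SDL2 (assuming SDL2 is available; error handling omitted):
#include <SDL2/SDL.h>

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *w = SDL_CreateWindow("demo", 0, 0, 320, 240, 0);

    int running = 1;
    while (running) {
        SDL_Event e;                /* the big union of all event structs */
        while (SDL_PollEvent(&e)) {
            switch (e.type) {       /* discriminator shared by every member */
            case SDL_QUIT:
                running = 0;
                break;
            case SDL_KEYDOWN:
                /* only now is it valid to look at the keyboard member */
                if (e.key.keysym.sym == SDLK_ESCAPE)
                    running = 0;
                break;
            }
        }
    }

    SDL_DestroyWindow(w);
    SDL_Quit();
    return 0;
}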
I find C++ unions pretty cool. It seems that people usually only think of the use case where one wants to change the value of a union instance "in place" (which, it seems, serves only to save memory or perform doubtful conversions).
In fact, unions can be of great power as a software engineering tool, even when you never change the value of any union instance.
Use case 1: the chameleon
With unions, you can regroup a number of arbitrary classes under one denomination, which isn't without similarities with the case of a base class and its derived classes. What changes, however, is what you can and can't do with a given union instance:
struct Batman;
struct BaseballBat;
union Bat
{
Batman brucewayne;
BaseballBat club;
};
ReturnType1 f(void)
{
BaseballBat bb = {/* */};
Bat b;
b.club = bb;
// do something with b.club
}
ReturnType2 g(Bat& b)
{
// do something with b, but how do we know what's inside?
}
Bat returnsBat(void);
ReturnType3 h(void)
{
Bat b = returnsBat();
// do something with b, but how do we know what's inside?
}
The programmer has to be certain of the type of the content of a given union instance when they want to use it. That is the case in function f above. However, if a function were to receive a union instance as an argument, as is the case with g above, it wouldn't know what to do with it. The same applies to functions returning a union instance; see h: how does the caller know what's inside?
If a union instance never gets passed as an argument or as a return value, then it's bound to have a very monotonous life, with spikes of excitement when the programmer chooses to change its content:
Batman bm = {/* */};
BaseballBat bb = {/* */};
Bat b;
b.brucewayne = bm;
// stuff
b.club = bb;
And that's the most (un)popular use case of unions. Another use case is when a union instance comes along with something that tells you its type.
Use case 2: "Nice to meet you, I'm object, from Class"
Suppose a programmer elected to always pair up a union instance with a type descriptor (I'll leave it to the reader's discretion to imagine an implementation of such an object). This defeats the purpose of the union itself if what the programmer wants is to save memory and the size of the type descriptor is not negligible with respect to that of the union. But let's suppose that it's crucial that the union instance can be passed as an argument or as a return value with the callee or caller not knowing what's inside.
Then the programmer has to write a switch control flow statement to tell Bruce Wayne apart from a wooden stick, or something equivalent. It's not too bad when there are only two types of contents in the union but obviously, the union doesn't scale anymore.
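A sketch of such a pairing for the Bat union above, written in C for brevity; the descriptor enum, the struct definitions and the switch are all illustrative:
struct Batman      { int gadgets; };    /* made-up contents */
struct BaseballBat { int weight_oz; };

union Bat {
    struct Batman      brucewayne;
    struct BaseballBat club;
};

enum BatKind { BAT_BRUCEWAYNE, BAT_CLUB };

struct TaggedBat {
    enum BatKind kind;   /* tells the reader which member is valid */
    union Bat    bat;
};

void g(struct TaggedBat *tb)
{
    switch (tb->kind) {
    case BAT_BRUCEWAYNE: /* safe to use tb->bat.brucewayne here */ break;
    case BAT_CLUB:       /* safe to use tb->bat.club here       */ break;
    }
}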
Use case 3:
As the authors of a recommendation for the ISO C++ Standard put it back in 2008,
Many important problem domains require either large numbers of objects or limited memory
resources. In these situations conserving space is very important, and a union is often a perfect way to do that. In fact, a common use case is the situation where a union never changes its active member during its lifetime. It can be constructed, copied, and destructed as if it were a struct containing only one member. A typical application of this would be to create a heterogeneous collection of unrelated types which are not dynamically allocated (perhaps they are in-place constructed in a map, or members of an array).
And now, an example, with a UML class diagram:
The situation in plain English: an object of class A can have objects of any class among B1, ..., Bn, and at most one of each type, with n being a pretty big number, say at least 10.
We don't want to add fields (data members) to A like so:
private:
B1 b1;
.
.
.
Bn bn;
because n might vary (we might want to add Bx classes to the mix), and because this would cause a mess with constructors and because A objects would take up a lot of space.
We could use a wacky container of void* pointers to Bx objects with casts to retrieve them, but that's fugly and so C-style... but more importantly that would leave us with the lifetimes of many dynamically allocated objects to manage.
Instead, what can be done is this:
union Bee
{
B1 b1;
.
.
.
Bn bn;
};
enum BeesTypes { TYPE_B1, ..., TYPE_BN };
class A
{
private:
std::unordered_map<int, Bee> data; // C++11, otherwise use std::map
public:
Bee get(int); // the implementation is obvious: get from the unordered map
};
Then, to get the content of a union instance from data, you use a.get(TYPE_B2).b2 and the like, where a is a class A instance.
This is all the more powerful since unions are unrestricted in C++11. See the document linked to above or this article for details.
One example is in the embedded realm, where each bit of a register may mean something different. For example, a union of an 8-bit integer and a structure with 8 separate 1-bit bitfields allows you to either change one bit or the entire byte.
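A sketch of that register union (the flag names and their positions are illustrative; note that bit-field ordering within a byte is implementation-defined):
#include <stdint.h>

typedef union {
    uint8_t byte;               /* the whole register at once */
    struct {
        uint8_t enable    : 1;  /* individual flag bits */
        uint8_t interrupt : 1;
        uint8_t error     : 1;
        uint8_t reserved  : 5;
    } bits;
} reg8_t;

void reg_example(volatile reg8_t *reg)
{
    reg->byte = 0;          /* reset the entire register in one write */
    reg->bits.enable = 1;   /* or flip just one bit */
}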
Herb Sutter wrote in GOTW about six years ago, with emphasis added:
"But don't think that unions are only a holdover from earlier times. Unions are perhaps most useful for saving space by allowing data to overlap, and this is still desirable in C++ and in today's modern world. For example, some of the most advanced C++ standard library implementations in the world now use just this technique for implementing the "small string optimization," a great optimization alternative that reuses the storage inside a string object itself: for large strings, space inside the string object stores the usual pointer to the dynamically allocated buffer and housekeeping information like the size of the buffer; for small strings, the same space is instead reused to store the string contents directly and completely avoid any dynamic memory allocation. For more about the small string optimization (and other string optimizations and pessimizations in considerable depth), see... ."
And for a less useful example, see the long but inconclusive question gcc, strict-aliasing, and casting through a union.
Well, one example use case I can think of is this:
typedef union
{
struct
{
uint8_t a;
uint8_t b;
uint8_t c;
uint8_t d;
};
uint32_t x;
} some32bittype;
You can then access the 8-bit separate parts of that 32-bit block of data; however, prepare to potentially be bitten by endianness.
This is just one hypothetical example, but whenever you want to split data in a field into component parts like this, you could use a union.
That said, there is also a method which is endian-safe:
uint32_t x;
uint8_t a = (x & 0xFF000000) >> 24;
This is endian-safe because the mask and shift operate on the value of x rather than on its layout in memory, so the compiler generates correct code regardless of endianness.
Some uses for unions:
Provide a general endianness interface to an unknown external host.
Manipulate foreign CPU architecture floating point data, such as accepting VAX G_FLOATS from a network link and converting them to IEEE 754 long reals for processing.
Provide straightforward bit twiddling access to a higher-level type.
union {
    unsigned char byte_v[16];
    long double ld_v;
} ld_bits;
With this declaration, it is simple to display the hex byte values of a long double, change the exponent's sign, determine if it is a denormal value, or implement long double arithmetic for a CPU which does not support it, etc.
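A short sketch of the first of those uses, dumping the raw bytes of a long double through such a union (how many bytes are meaningful depends on the platform's long double format):
#include <stddef.h>
#include <stdio.h>

union ld_bytes {
    unsigned char byte_v[sizeof(long double)];
    long double   ld_v;
};

int main(void)
{
    union ld_bytes u;
    u.ld_v = 3.14159L;
    for (size_t i = 0; i < sizeof u.byte_v; ++i)
        printf("%02x ", u.byte_v[i]);   /* hex dump of the representation */
    printf("\n");
    return 0;
}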
Saving storage space when fields are dependent on certain values:
class person {
string name;
char gender; // M = male, F = female, O = other
union {
date vasectomized; // for males
int pregnancies; // for females
} gender_specific_data;
};
Grep the include files for use with your compiler. You'll find dozens to hundreds of uses of union:
[wally@zenetfedora ~]$ cd /usr/include
[wally@zenetfedora include]$ grep -w union *
a.out.h: union
argp.h: parsing options, getopt is called with the union of all the argp
bfd.h: union
bfd.h: union
bfd.h:union internal_auxent;
bfd.h: (bfd *, struct bfd_symbol *, int, union internal_auxent *);
bfd.h: union {
bfd.h: /* The value of the symbol. This really should be a union of a
bfd.h: union
bfd.h: union
bfdlink.h: /* A union of information depending upon the type. */
bfdlink.h: union
bfdlink.h: this field. This field is present in all of the union element
bfdlink.h: the union; this structure is a major space user in the
bfdlink.h: union
bfdlink.h: union
curses.h: union
db_cxx.h:// 4201: nameless struct/union
elf.h: union
elf.h: union
elf.h: union
elf.h: union
elf.h:typedef union
_G_config.h:typedef union
gcrypt.h: union
gcrypt.h: union
gcrypt.h: union
gmp-i386.h: union {
ieee754.h:union ieee754_float
ieee754.h:union ieee754_double
ieee754.h:union ieee854_long_double
ifaddrs.h: union
jpeglib.h: union {
ldap.h: union mod_vals_u {
ncurses.h: union
newt.h: union {
obstack.h: union
pi-file.h: union {
resolv.h: union {
signal.h:extern int sigqueue (__pid_t __pid, int __sig, __const union sigval __val)
stdlib.h:/* Lots of hair to allow traditional BSD use of `union wait'
stdlib.h: (__extension__ (((union { __typeof(status) __in; int __i; }) \
stdlib.h:/* This is the type of the argument to `wait'. The funky union
stdlib.h: causes redeclarations with either `int *' or `union wait *' to be
stdlib.h:typedef union
stdlib.h: union wait *__uptr;
stdlib.h: } __WAIT_STATUS __attribute__ ((__transparent_union__));
thread_db.h: union
thread_db.h: union
tiffio.h: union {
wchar.h: union
xf86drm.h:typedef union _drmVBlank {
Unions are useful when dealing with byte-level (low level) data.
One of my recent usages was in IP address modeling, which looks like this:
// Composite structure for IP address storage
union
{
// IPv4 @ 32-bit identifier
// Padded 12-bytes for IPv6 compatibility
union
{
struct
{
unsigned char _reserved[12];
unsigned char _IpBytes[4];
} _Raw;
struct
{
unsigned char _reserved[12];
unsigned char _o1;
unsigned char _o2;
unsigned char _o3;
unsigned char _o4;
} _Octet;
} _IPv4;
// IPv6 @ 128-bit identifier
// Next generation internet addressing
union
{
struct
{
unsigned char _IpBytes[16];
} _Raw;
struct
{
unsigned short _w1;
unsigned short _w2;
unsigned short _w3;
unsigned short _w4;
unsigned short _w5;
unsigned short _w6;
unsigned short _w7;
unsigned short _w8;
} _Word;
} _IPv6;
} _IP;
Unions provide polymorphism in C.
An example when I've used a union:
class Vector
{
union
{
double _coord[3];
struct
{
double _x;
double _y;
double _z;
};
};
...
}
This allows me to access my data as an array or as individual elements.
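A small usage sketch of that dual view, written as a C11 struct rather than the class above (the anonymous union/struct members work the same way):
#include <math.h>

struct Vector {
    union {
        double coord[3];
        struct { double x, y, z; };
    };
};

double vector_length(const struct Vector *v)
{
    double s = 0.0;
    for (int i = 0; i < 3; ++i)   /* array view */
        s += v->coord[i] * v->coord[i];
    return sqrt(s);
}

void set_origin(struct Vector *v)
{
    v->x = v->y = v->z = 0.0;     /* named-member view */
}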
I've used a union to have different terms refer to the same value. In image processing, whether I was working with columns, or width, or the size in the X direction, it can become confusing. To alleviate this problem, I use a union so I know which descriptions go together.
union { // dimension from left to right
uint32_t m_width;
uint32_t m_sizeX;
uint32_t m_columns;
};
union { // dimension from top to bottom
uint32_t m_height;
uint32_t m_sizeY;
uint32_t m_rows;
};
The union keyword, while still used in C++03¹, is mostly a remnant of the C days. The most glaring issue is that it only works with POD¹.
The idea of the union, however, is still present, and indeed the Boost libraries feature a union-like class:
boost::variant<std::string, Foo, Bar>
Which has most of the benefits of the union (if not all) and adds:
ability to correctly use non-POD types
static type safety
In practice, it has been demonstrated to be equivalent to a combination of union + enum, and benchmarks show it to be just as fast (while boost::any is more in the realm of dynamic_cast, since it uses RTTI).
¹ Unions were upgraded in C++11 (unrestricted unions), and can now contain objects with destructors, although the user has to invoke the destructor manually (on the currently active union member). It's still much easier to use variants.
A brilliant usage of union is memory alignment, which I found in the PCL (Point Cloud Library) source code. A single data structure in the API can target two architectures: CPUs with SSE support as well as CPUs without SSE support. For example, the data structure for PointXYZ is
typedef union
{
float data[4];
struct
{
float x;
float y;
float z;
};
} PointXYZ;
The 3 floats are padded with an additional float for SSE alignment.
So for
PointXYZ point;
The user can either access point.data[0] or point.x (depending on the SSE support) for accessing, say, the x coordinate.
More details on similar usages are at the following link: PCL documentation, PointT types
From the Wikipedia article on unions:
The primary usefulness of a union is to conserve space, since it provides a way of letting many different types be stored in the same space. Unions also provide crude polymorphism. However, there is no checking of types, so it is up to the programmer to be sure that the proper fields are accessed in different contexts. The relevant field of a union variable is typically determined by the state of other variables, possibly in an enclosing struct.
One common C programming idiom uses unions to perform what C++ calls a reinterpret_cast, by assigning to one field of a union and reading from another, as is done in code which depends on the raw representation of the values.
In the earliest days of C (e.g. as documented in 1974), all structures shared a common namespace for their members. Each member name was associated with a type and an offset; if "wd_woozle" was an "int" at offset 12, then given a pointer p of any structure type, p->wd_woozle would be equivalent to *(int*)(((char*)p)+12). The language required that all members of all structures types have unique names except that it explicitly allowed reuse of member names in cases where every struct where they were used treated them as a common initial sequence.
The fact that structure types could be used promiscuously made it possible to have structures behave as though they contained overlapping fields. For example, given definitions:
struct float1 { float f0;};
struct byte4 { char b0,b1,b2,b3; }; /* Unsigned didn't exist yet */
code could declare a structure of type "float1" and then use "members" b0...b3 to access the individual bytes therein. When the language was changed so that each structure would receive a separate namespace for its members, code which relied upon the ability to access things multiple ways would break. The value of separate namespaces for different structure types was sufficient to justify requiring that such code be changed to accommodate it, but the value of such techniques was sufficient to justify extending the language to continue supporting them.
Code which had been written to exploit the ability to access the storage within a struct float1 as though it were a struct byte4 could be made to work in the new language by adding a declaration: union f1b4 { struct float1 ff; struct byte4 bb; };, declaring objects as type union f1b4; rather than struct float1, and replacing accesses to f0, b0, b1, etc. with ff.f0, bb.b0, bb.b1, etc. While there are better ways such code could have been supported, the union approach was at least somewhat workable, at least with C89-era interpretations of the aliasing rules.
Let's say you have n different types of configurations (each just a set of variables defining parameters). By using an enumeration of the configuration types, you can define a structure that has the ID of the configuration type, along with a union of all the different configuration types.
This way, wherever you pass the configuration, the code can use the ID to determine how to interpret the configuration data, but if the configurations were huge you would not be forced to have parallel structures for each potential type, wasting space.
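A sketch of that layout; the configuration kinds and their fields are made up purely for illustration:
enum config_type { CFG_NETWORK, CFG_DISPLAY };

struct network_cfg { int port; int timeout_ms; };
struct display_cfg { int width; int height; int fullscreen; };

struct config {
    enum config_type type;          /* the ID used to interpret the union */
    union {
        struct network_cfg net;
        struct display_cfg disp;
    } u;
};

void apply_config(const struct config *c)
{
    switch (c->type) {
    case CFG_NETWORK: /* use c->u.net.port, c->u.net.timeout_ms */ break;
    case CFG_DISPLAY: /* use c->u.disp.width, c->u.disp.height  */ break;
    }
}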
One recent boost to the already considerable importance of unions has been given by the Strict Aliasing Rule introduced in recent versions of the C standard.
You can use unions to do type punning without violating the C standard.
This program has unspecified behavior (because I have assumed that float and unsigned int have the same size) but not undefined behavior (see here).
#include <stdio.h>
union float_uint
{
float f;
unsigned int ui;
};
int main()
{
float v = 241;
union float_uint fui = {.f = v};
//May trigger UNSPECIFIED BEHAVIOR but not UNDEFINED BEHAVIOR
printf("Your IEEE 754 float sir: %08x\n", fui.ui);
//This is UNDEFINED BEHAVIOR as it violates the Strict Aliasing Rule
unsigned int* pp = (unsigned int*) &v;
printf("Your IEEE 754 float, again, sir: %08x\n", *pp);
return 0;
}
I would like to add one good practical example of using a union: implementing a formula calculator/interpreter, or using something like it in computation (for example, parts of your computing formulas that can be modified at run time, say when solving equations numerically).
So you may want to define numbers/constants of different types (integer, floating-point, even complex numbers) like this:
struct Number {
    enum NumType { t_int32, t_float, t_double, t_complex };
    NumType num_t;
    union { int ival; float fval; double dval; ComplexNumber cmplx_val; };
};
So you save memory and, more importantly, you avoid any dynamic allocations for a potentially huge number (if you use a lot of run-time defined numbers) of small objects (compared to implementations based on class inheritance/polymorphism). But what's more interesting, you can still use the power of C++ polymorphism (if you're a fan of double dispatching, for example ;) with this type of struct. Just add a "dummy" interface pointer to a parent class of all number types as a field of this struct, pointing to this instance instead of (or in addition to) the raw type, or use good old C function pointers.
struct NumberBase
{
    virtual NumberBase* Add(NumberBase* n);
    ...
};
struct NumberInt: Number
{
    // implement methods assuming Number's union contains an int
    NumberBase* Add(NumberBase* n);
    ...
};
struct NumberDouble: Number
{
    // implement methods assuming Number's union contains a double
    NumberBase* Add(NumberBase* n);
    ...
};
// etc. for all number types, or use templates
struct Number: NumberBase
{
    union { int ival; float fval; double dval; ComplexNumber cmplx_val; };
    NumberBase* num_t;
    void Set(int a)
    {
        ival = a;
        // still kind of a hack; hope it works because derived classes of Number
        // don't add any fields
        num_t = static_cast<NumberInt*>(this);
    }
};
So you can use polymorphism instead of type checks with switch(type), with a memory-efficient implementation (no dynamic allocation of small objects), if you need it, of course.
From http://cplus.about.com/od/learningc/ss/lowlevel_9.htm:
The uses of union are few and far between. On most computers, the size of a pointer and an int are usually the same - this is because both usually fit into a register in the CPU. So if you want to do a quick and dirty cast of a pointer to an int or the other way, declare a union.
union intptr { int i; int * p; };
union intptr x; x.i = 1000;
/* puts 90 at location 1000 */
*(x.p)=90;
Another use of a union is in a command or message protocol where different size messages are sent and received. Each message type will hold different information but each will have a fixed part (probably a struct) and a variable part. This is how you might implement it:
struct head { int id; int response; int size; };
struct msgstring50 { struct head fixed; char message[50]; };
struct msgstring80 { struct head fixed; char message[80]; };
struct msgint10 { struct head fixed; int message[10]; };
struct msgack { struct head fixed; int ok; };
union messagetype {
    struct msgstring50 m50;
    struct msgstring80 m80;
    struct msgint10 i10;
    struct msgack ack;
};
In practice, although the unions are the same size, it makes sense to only send the meaningful data and not wasted space. A msgack is just 16 bytes in size while a msgstring80 is 92 bytes. So when a messagetype variable is initialized, it has its size field set according to which type it is. This can then be used by other functions to transfer the correct number of bytes.
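A sketch of the sending side of that idea, reusing the structs above; send_bytes is a hypothetical transport function standing in for whatever write/send call is actually used:
#include <stddef.h>
#include <string.h>

struct head   { int id; int response; int size; };
struct msgack { struct head fixed; int ok; };

extern int send_bytes(const void *buf, size_t len);  /* hypothetical transport */

int send_ack(int id)
{
    struct msgack m;
    memset(&m, 0, sizeof m);
    m.fixed.id = id;
    m.fixed.response = 1;
    m.fixed.size = (int)sizeof m;    /* receiver uses this to know how much follows */
    m.ok = 1;
    return send_bytes(&m, sizeof m); /* only the meaningful bytes go on the wire */
}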
Unions provide a way to manipulate different kinds of data in a single area of storage without embedding any machine-dependent information in the program.
They are analogous to variant records in Pascal.
As an example such as might be found in a compiler symbol table manager, suppose that a
constant may be an int, a float, or a character pointer. The value of a particular constant
must be stored in a variable of the proper type, yet it is most convenient for table management if the value occupies the same amount of storage and is stored in the same place regardless of its type. This is the purpose of a union - a single variable that can legitimately hold any of one of several types. The syntax is based on structures:
union u_tag {
int ival;
float fval;
char *sval;
} u;
The variable u will be large enough to hold the largest of the three types; the specific size is implementation-dependent. Any of these types may be assigned to u and then used in
expressions, so long as the usage is consistent
I am trying to perform a less-than-32-bit read over the PCI bus to a VME bridge chip (Tundra Universe II), which will then go onto the VME bus and be picked up by the target.
The target VME application only accepts D32 (a data width read of 32bits) and will ignore anything else.
If I use a bit field structure mapped over a VME window (mmap'd into main memory) I CAN read bit fields >24 bits, but anything less fails, i.e.:
struct works {
unsigned int a:24;
};
struct fails {
unsigned int a:1;
unsigned int b:1;
unsigned int c:1;
};
struct main {
works work;
fails fail;
}
volatile *reg = function_that_creates_and_maps_the_vme_windows_returns_address()
This shows that the works struct is read as 32 bits, but a read via the fails struct, e.g. reg->fail.a, is getting factored down to an X-bit read (where X might be 16 or 8?).
So the questions are :
a) Where is this scaled down? Compiler? OS? or the Tundra chip?
b) What is the actual size of the read operation performed?
I basically want to rule out everything but the chip. Documentation on that is on the web, but if it can be proved that the data width requested over the PCI bus is 32 bits then the problem can be blamed on the Tundra chip!
edit:-
Concrete example, code was:-
struct SVersion
{
unsigned title : 8;
unsigned pecversion : 8;
unsigned majorversion : 8;
unsigned minorversion : 8;
} Version;
So now I have changed it to this :-
union UPECVersion
{
struct SVersion
{
unsigned title : 8;
unsigned pecversion : 8;
unsigned majorversion : 8;
unsigned minorversion : 8;
} Version;
unsigned int dummy;
};
And the base main struct :-
typedef struct SEPUMap
{
...
...
UPECVersion PECVersion;
};
So I still have to change all my baseline code
// perform dummy 32bit read
pEpuMap->PECVersion.dummy;
// get the bits out
x = pEpuMap->PECVersion.Version.minorversion;
And how do I know that the second read won't actually do a real read again, as my original code did? (Instead of using the already-read bits via the union!)
Your compiler is adjusting the size of your struct to a multiple of its memory alignment setting. Almost all modern compilers do this. On some processors, variables and instructions have to begin on memory addresses that are multiples of some memory alignment value (often 32-bits or 64-bits, but the alignment depends on the processor architecture). Most modern processors don't require memory alignment anymore - but almost all of them see substantial performance benefit from it. So the compilers align your data for you for the performance boost.
However, in many cases (such as yours) this isn't the behavior you want. The size of your structure, for various reasons, can turn out to be extremely important. In those cases, there are various ways around the problem.
One option is to force the compiler to use different alignment settings. The options for doing this vary from compiler to compiler, so you'll have to check your documentation. It's usually a #pragma of some sort. On some compilers (the Microsoft compilers, for instance) it's possible to change the memory alignment for only a very small section of code. For example (in VC++):
#pragma pack(push) // save the current alignment
#pragma pack(1) // set the alignment to one byte
// Define variables that are alignment sensitive
#pragma pack(pop) // restore the alignment
Another option is to define your variables in other ways. Intrinsic types are not resized based on alignment, so instead of your 24-bit bitfield, another approach is to define your variable as an array of bytes.
Finally, you can just let the compilers make the structs whatever size they want and manually record the size that you need to read/write. As long as you're not concatenating structures together, this should work fine. Remember, however, that the compiler is giving you padded structs under the hood, so if you make a larger struct that includes, say, a works and a fails struct, there will be padded bits in between them that could cause you problems.
On most compilers, it's going to be darn near impossible to create a data type smaller than 8 bits. Most architectures just don't think that way. This shouldn't be a huge problem because most hardware devices that use datatypes of smaller than 8-bits end up arranging their packets in such a way that they still come in 8-bit multiples, so you can do the bit manipulations to extract or encode the values on the data stream as it leaves or comes in.
For all of the reasons listed above, a lot of code that works with hardware devices like this work with raw byte arrays and just encode the data within the arrays. Despite losing a lot of the conveniences of modern language constructs, it ends up just being easier.
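A sketch of that mask-and-shift style on a raw 32-bit word; the field layout here is entirely hypothetical:
#include <stdint.h>

/* hypothetical layout: bits 0..2 = status, bits 8..15 = count */
#define STATUS_MASK  0x00000007u
#define STATUS_SHIFT 0
#define COUNT_MASK   0x0000FF00u
#define COUNT_SHIFT  8

unsigned decode_status(uint32_t raw) { return (raw & STATUS_MASK) >> STATUS_SHIFT; }
unsigned decode_count(uint32_t raw)  { return (raw & COUNT_MASK)  >> COUNT_SHIFT; }

uint32_t encode(unsigned status, unsigned count)
{
    return ((uint32_t)status << STATUS_SHIFT) | ((uint32_t)count << COUNT_SHIFT);
}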
I am wondering about the value of sizeof(struct fails). Is it 1? In this case, if you perform the read by dereferencing a pointer to a struct fails, it looks correct to issue a D8 read on the VME bus.
You can try to add a field unsigned int unused:29; to your struct fails.
The size of a struct is not equal to the sum of the size of its fields, including bit fields. Compilers are allowed, by the C and C++ language specifications, to insert padding between fields in a struct. Padding is often inserted for alignment purposes.
The common method in embedded systems programming is to read the data as an unsigned integer then use bit masking to retrieve the interesting bits. This is due to the above rule that I stated and the fact that there is no standard compiler parameter for "packing" fields in a structure.
I suggest creating an object (class or struct) for interfacing with the hardware. Let the object read the data, then extract the bits as bool members. This puts the implementation as close to the hardware as possible. The remaining software should not care how the bits are implemented.
When defining bit field positions / named constants, I suggest this format:
#define VALUE (1 << BIT_POSITION)
// OR
const unsigned int VALUE = 1 << BIT_POSITION;
This format is more readable and has the compiler perform the arithmetic. The calculation takes place during compilation and has no impact during run-time.
As an example, the Linux kernel has inline functions that explicitly handle memory-mapped IO reads and writes. In newer kernels it's a big macro wrapper that boils down to an inline assembly movl instruction, but in older kernels it was defined like this:
#define readl(addr) (*(volatile unsigned int *) (addr))
#define writel(b,addr) ((*(volatile unsigned int *) (addr)) = (b))
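A usage sketch of those macros; the base pointer and register offset are placeholders for a real mapped device region (e.g. obtained via mmap or ioremap):
#define readl(addr)    (*(volatile unsigned int *) (addr))
#define writel(b,addr) ((*(volatile unsigned int *) (addr)) = (b))

void poke_device(volatile unsigned char *base)  /* hypothetical mapped region */
{
    unsigned int v = readl(base + 0x10);        /* always a full 32-bit read  */
    writel(v | 0x1u, base + 0x10);              /* always a full 32-bit write */
}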
Ian - if you want to be sure of the sizes of the things you're reading/writing, I'd suggest not using structs like this to do it. It's possible the sizeof of the fails struct is just 1 byte - the compiler is free to decide what it should be based on optimizations etc. I'd suggest reading/writing explicitly using ints, or generally the things whose sizes you need to guarantee, and then converting to a union/struct where you don't have those limitations.
It is the compiler that decides what size read to issue. To force a 32 bit read, you could use a union:
union dev_word {
struct dev_reg {
unsigned int a:1;
unsigned int b:1;
unsigned int c:1;
} fail;
uint32_t dummy;
};
volatile union dev_word *vme_map_window();
If reading the union through a volatile-qualified pointer isn't enough to force a read of the whole union (I would think it would be - but that could be compiler-dependent), then you could use a function to provide the required indirection:
volatile union dev_word *real_reg; /* Initialised with vme_map_window() */
union dev_word * const *reg_func(void)
{
static union dev_word local_copy;
static union dev_word * const static_ptr = &local_copy;
local_copy = *real_reg;
return &static_ptr;
}
#define reg (*reg_func())
...then (for compatibility with the existing code) your accesses are done as:
reg->fail.a
The method described earlier of using the gcc flag -fstrict-volatile-bitfields and defining bitfield variables as volatile u32 works, but the total number of bits defined must be greater than 16.
For example:
typedef union{
vu32 Word;
struct{
vu32 LATENCY :3;
vu32 HLFCYA :1;
vu32 PRFTBE :1;
vu32 PRFTBS :1;
};
}tFlashACR;
.
tFLASH* const pFLASH = (tFLASH*)FLASH_BASE;
#define FLASH_LATENCY pFLASH->ACR.LATENCY
.
FLASH_LATENCY = Latency;
causes gcc to generate code
.
ldrb r1, [r3, #0]
.
which is a byte read. However, changing the typedef to
typedef union{
vu32 Word;
struct{
vu32 LATENCY :3;
vu32 HLFCYA :1;
vu32 PRFTBE :1;
vu32 PRFTBS :1;
vu32 :2;
vu32 DUMMY1 :8;
vu32 DUMMY2 :8;
};
}tFlashACR;
changes the resultant code to
.
ldr r3, [r2, #0]
.
I believe the only solution is to:
1) edit/create my main struct as all 32-bit ints (unsigned longs)
2) keep my original bit-field structs
3) for each access I require,
3.1) read the struct member as a 32-bit word, and cast it into the bit-field struct,
3.2) read the bit-field element I require (and for writes, set this bit-field, and write the word back!)
(1) Which is a shame, because then I lose the intrinsic types that each member of the "main/SEPUMap" struct has.
End solution :-
Instead of :-
printf("FirmwareVersionMinor: 0x%x\n", pEpuMap->PECVersion);
This :-
SPECVersion ver = *(SPECVersion*)&pEpuMap->PECVersion;
printf("FirmwareVersionMinor: 0x%x\n", ver.minorversion);
The only problem I have is writing! (Writes are now read/modify/writes!)
// Read - Get current
_HVPSUControl temp = *(_HVPSUControl*)&pEpuMap->HVPSUControl;
// Modify - set to new value
temp.OperationalRequestPort = true;
// Write
volatile unsigned int *addr = reinterpret_cast<volatile unsigned int*>(&pEpuMap->HVPSUControl);
*addr = *reinterpret_cast<volatile unsigned int*>(&temp);
Just have to tidy that code up into a method!
#define writel(addr, data) ( *(volatile unsigned long*)(&addr) = (*(volatile unsigned long*)(&data)) )
I had the same problem on ARM using the GCC compiler, where writes into memory were done byte by byte rather than as a 32-bit word.
The solution is to define the bit-fields using volatile uint32_t (or whatever size is required for the write):
union {
volatile uint32_t XY;
struct {
volatile uint32_t XY_A : 4;
volatile uint32_t XY_B : 12;
};
};
but while compiling you need to add this parameter to gcc or g++:
-fstrict-volatile-bitfields
More details are in the gcc documentation.