C/C++: Force Bit Field Order and Alignment

I read that the order of bit fields within a struct is platform specific. What about if I use different compiler-specific packing options, will this guarantee data is stored in the proper order as they are written? For example:
struct Message
{
    unsigned int version : 3;
    unsigned int type : 1;
    unsigned int id : 5;
    unsigned int data : 6;
} __attribute__ ((__packed__));
On an Intel processor with the GCC compiler, the fields were laid out in memory as they are shown. Message.version was the first 3 bits in the buffer, and Message.type followed. If I find equivalent struct packing options for various compilers, will this be cross-platform?

No, it will not be fully portable. Packing options for structs are compiler extensions, and are themselves not fully portable. In addition to that, C99 §6.7.2.1, paragraph 10 says: "The order of allocation of bit-fields within a unit (high-order to low-order or low-order to high-order) is implementation-defined."
Even a single compiler might lay the bit field out differently depending on the endianness of the target platform, for example.

Bit fields vary widely from compiler to compiler, sorry.
With GCC, big-endian machines lay out the bits big end first, and little-endian machines lay out the bits little end first.
K&R says "Adjacent [bit-]field members of structures are packed into implementation-dependent storage units in an implementation-dependent direction. When a field following another field will not fit ... it may be split between units or the unit may be padded. An unnamed field of width 0 forces this padding..."
Therefore, if you need a machine-independent binary layout, you must do it yourself.
This last statement also applies to non-bitfields due to padding -- however, all compilers seem to have some way of forcing byte packing of a structure, as I see you already discovered for GCC.

Bitfields should be avoided - they aren't very portable between compilers even for the same platform. From the C99 standard, 6.7.2.1/10 - "Structure and union specifiers" (there's similar wording in the C90 standard):
An implementation may allocate any addressable storage unit large enough to hold a bitfield. If enough space remains, a bit-field that immediately follows another bit-field in a structure shall be packed into adjacent bits of the same unit. If insufficient space remains, whether a bit-field that does not fit is put into the next unit or overlaps adjacent units is implementation-defined. The order of allocation of bit-fields within a unit (high-order to low-order or low-order to high-order) is implementation-defined. The alignment of the addressable storage unit is unspecified.
You cannot guarantee whether a bit field will 'span' an int boundary or not, and you can't specify whether a bitfield starts at the low end or the high end of the int (this is independent of whether the processor is big-endian or little-endian).
Prefer bitmasks. Use inlines (or even macros) to set, clear and test the bits.
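For example, a minimal sketch of that approach (the helper names are illustrative, not from any particular library):

#include <cstdint>

// Single-bit helpers over a 16-bit word; 'bit' counts up from the LSB.
inline void set_bit(std::uint16_t &reg, unsigned bit)   { reg = std::uint16_t(reg |  (1u << bit)); }
inline void clear_bit(std::uint16_t &reg, unsigned bit) { reg = std::uint16_t(reg & ~(1u << bit)); }
inline bool test_bit(std::uint16_t reg, unsigned bit)   { return ((reg >> bit) & 1u) != 0; }

Because the shift amounts are spelled out in the code, the layout no longer depends on the compiler's bit-field allocation order.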

Endianness refers to byte order, not bit order. Nowadays it is almost certain that bit order within a byte is fixed. However, when using bitfields, endianness should still be taken into account. See the example below.
#include <stdio.h>

typedef struct tagT {
    int a : 4;
    int b : 4;
    int c : 8;
    int d : 16;
} T;

int main()
{
    char data[] = {0x12, 0x34, 0x56, 0x78};
    T *t = (T *)data;   /* reinterpret the raw bytes as the bit-field struct
                           (fine for this demo, though the resulting layout is
                           implementation-defined) */
    printf("a =0x%x\n", t->a);
    printf("b =0x%x\n", t->b);
    printf("c =0x%x\n", t->c);
    printf("d =0x%x\n", t->d);
    return 0;
}
// big endian: mips24k-linux-gcc (GCC) 4.2.3
a =0x1
b =0x2
c =0x34
d =0x5678
1 2 3 4 5 6 7 8
\_/ \_/ \_____/ \_____________/
a b c d
// little endian: gcc (Ubuntu 4.3.2-1ubuntu11) 4.3.2
a =0x2
b =0x1
c =0x34
d =0x7856
7 8 5 6 3 4 1 2
\_____________/ \_____/ \_/ \_/
d c b a

Most of the time, probably, but don't bet the farm on it, because if you're wrong, you'll lose big.
If you really, really need to have identical binary information, you'll need to create bitfields with bitmasks - e.g. you use an unsigned short (16 bit) for Message, and then make things like versionMask = 0xE000 to represent the three topmost bits, as sketched below.
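A sketch of that idea, carried through for the whole Message (the exact bit positions are illustrative; version sits in the three topmost bits as above, and one bit of the 16 is spare):

#include <cstdint>

// Illustrative layout within a 16-bit word:
//   version : bits 15..13 (mask 0xE000)
//   type    : bit  12     (mask 0x1000)
//   id      : bits 11..7  (mask 0x0F80)
//   data    : bits  5..0  (mask 0x003F), bit 6 spare
inline std::uint16_t packMessage(unsigned version, unsigned type,
                                 unsigned id, unsigned data)
{
    return std::uint16_t(((version & 0x07u) << 13) |
                         ((type    & 0x01u) << 12) |
                         ((id      & 0x1Fu) <<  7) |
                          (data    & 0x3Fu));
}

inline unsigned messageVersion(std::uint16_t m) { return (m >> 13) & 0x07u; }

// Writing the two octets explicitly pins down the byte order on the wire.
inline void writeMessage(std::uint16_t m, unsigned char out[2])
{
    out[0] = static_cast<unsigned char>(m >> 8);   // high octet first
    out[1] = static_cast<unsigned char>(m & 0xFFu);
}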
There's a similar problem with alignment within structs. For instance, Sparc, PowerPC, and 680x0 CPUs are all big-endian, and the common default for Sparc and PowerPC compilers is to align struct members on 4-byte boundaries. However, one compiler I used for 680x0 only aligned on 2-byte boundaries - and there was no option to change the alignment!
So for some structs, the sizes on Sparc and PowerPC are identical, but smaller on 680x0, and some of the members are in different memory offsets within the struct.
This was a problem with one project I worked on, because a server process running on Sparc would query a client and find out it was big-endian, and assume it could just squirt binary structs out on the network and the client could cope. And that worked fine on PowerPC clients, and crashed big-time on 680x0 clients. I didn't write the code, and it took quite a while to find the problem. But it was easy to fix once I did.

Thanks @BenVoigt for your very useful comment starting:
No, they were created to save memory.
The Linux source does use a bit field to match an external structure: /usr/include/linux/ip.h has this code for the first byte of an IP datagram:
struct iphdr {
#if defined(__LITTLE_ENDIAN_BITFIELD)
    __u8 ihl:4,
         version:4;
#elif defined (__BIG_ENDIAN_BITFIELD)
    __u8 version:4,
         ihl:4;
#else
#error "Please fix <asm/byteorder.h>"
#endif
    /* ...remaining header fields omitted... */
};
However, in light of your comment, I'm giving up trying to get this to work for the multi-byte bit field frag_off.

Of course the best answer is to use a class which reads/writes bit fields as a stream. Using the C bit-field structure is just not guaranteed. Not to mention that it is considered unprofessional/lazy/stupid to use it in real-world coding.
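A minimal sketch of such a class (hypothetical interface; MSB-first bit order is an arbitrary choice that you would pin down in your format spec):

#include <cstddef>
#include <cstdint>
#include <vector>

// Appends fields MSB-first into a growing byte buffer.
class BitWriter {
    std::vector<std::uint8_t> buf_;
    std::size_t bitpos_ = 0;                       // total bits written so far
public:
    // Append the low 'nbits' bits of 'value', most significant bit first.
    void write(std::uint32_t value, int nbits) {
        for (int i = nbits - 1; i >= 0; --i) {
            if (bitpos_ % 8 == 0) buf_.push_back(0);
            if ((value >> i) & 1u)
                buf_.back() |= std::uint8_t(0x80u >> (bitpos_ % 8));
            ++bitpos_;
        }
    }
    const std::vector<std::uint8_t> &bytes() const { return buf_; }
};

int main() {
    BitWriter w;
    w.write(5, 3);    // version
    w.write(1, 1);    // type
    w.write(17, 5);   // id
    w.write(42, 6);   // data
    // w.bytes() now holds two octets with a compiler-independent layout.
    return 0;
}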


Unexpected behaviour using bit-fields and unions

I was experimenting with bit-fields and unions and created this:
union REG {
    struct {
        char posX : 7;
        char posY : 7;
        unsigned char dir : 2;
    };
    unsigned short reg;
};
And when I run sizeof( short ) I get 2, but when I run sizeof( REG ) I get 4. That's weird to me, because when I sum the bits I get 7+7+2=16, which is the size in bits of a 2-byte datatype.
I'm currently using the Dev-C++ editor with compiler TDM-GCC 9.9.2 64-bit Debug.
This is my first question, so please tell me if you need more information... Thanks in advance!
Edit: Upon further experimentation I realized the size is the same (2 bytes) when I set the size of posX and posY to 6 bits. But that still puzzles me, because the sum is 14 bits, which is less than 2 bytes...
Edit 2: Thanks to AviBerger I realized that replacing the char/unsigned char datatype with short/unsigned short makes the result of sizeof( REG ) turn into 2. But I still can't figure out why this happens.
From the spec we have:
An implementation may allocate any addressable storage unit large enough to hold a bit-field. If enough space remains, a bit-field that immediately follows another bit-field in a structure shall be packed into adjacent bits of the same unit. If insufficient space remains, whether a bit-field that does not fit is put into the next unit or overlaps adjacent units is implementation-defined. The order of allocation of bit-fields within a unit (high-order to low-order or low-order to high-order) is implementation-defined. The alignment of the addressable storage unit is unspecified.
So the actual behavior depends on what size allocation unit the compiler chooses for the bitfields, and whether it allows fields to span multiple allocation units. This choice is implementation-defined, but a common implementation is to use the declared type of the bit field as the allocation unit, and not to allow crossing allocation-unit boundaries. So when you use (unsigned) char, it uses an 8-bit allocation unit. This means that no two of the bitfields can be combined into a single byte (7+7 > 8 and 7+2 > 8), so it ends up taking 3 allocation units (bytes), which then rounds up to 4 for alignment when combined with a short in the union.
When you change the bitfield size to 6, now the second and third bitfields can fit in a byte (6+2 = 8), so it only takes two allocation units.
When you change the bitfield type to short it uses a 16-bit allocation unit, so all 3 bitfields can fit in one allocation unit.
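You can see all three cases side by side with a quick test (a sketch; the sizes in the comment are what GCC on a typical 32/64-bit target prints, and another implementation could legitimately differ):

#include <cstdio>

// 7+7+2 bits with 8-bit allocation units: no two fields share a byte.
struct CharFields  { char posX : 7; char posY : 7; unsigned char dir : 2; };
// 6+6+2 bits: posY and dir can share the second byte.
struct CharFields6 { char posX : 6; char posY : 6; unsigned char dir : 2; };
// 7+7+2 bits with a 16-bit allocation unit: everything fits in one unit.
struct ShortFields { short posX : 7; short posY : 7; unsigned short dir : 2; };

int main() {
    std::printf("%zu %zu %zu\n",
                sizeof(CharFields), sizeof(CharFields6), sizeof(ShortFields));
    // Typically prints: 3 2 2
    return 0;
}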
There are several points of finesse when working with struct and union. The most common is that fields are generously padded to be aligned to the CPU's word size.
struct {
    char c1;
    char c2;
} s1;
seems like it should be a two-byte structure, but surprisingly often sizeof (s1) will be not 2 but 4, or even 8. This was the case even in the 1980s with 16-bit machines.
This is because some C and C++ compilers align structure elements to two-byte or four-byte boundaries. I have yet to see structure elements aligned to an 8-byte boundary, but we haven't got a 64-bit architecture being that needy, yet.
The solution is to invoke a compilation option to "pack structures". This can be done either on the compiler command line or by including a suitable #pragma before the structure declaration:
#pragma pack(1) // this particular syntax is valid for many compilers
struct {
    char c11;
    char c12;
} s2;
#pragma pack(4) // restore the previous packing (pack(push)/pack(pop) is safer where supported)
Taken from the standard (n4835):
11.4.9 Bit-fields [class.bit]
1 [...] Allocation of bit-fields within a class object is implementation-defined. Alignment of bit-fields is implementation-defined. Bit-fields are packed into some addressable allocation unit. [Note: Bit-fields straddle allocation units on some machines and not on others. Bit-fields are assigned right-to-left on some machines, left-to-right on others. —end note]
As you see the size and alignment are implementation defined. So you might get the expected behaviour on other compilers/platforms, but on your compiler/platform you get different results than you expect.

How do I organize members in a struct to waste the least space on alignment?

[Not a duplicate of Structure padding and packing. That question is about how and when padding occurs. This one is about how to deal with it.]
I have just realized how much memory is wasted as a result of alignment in C++. Consider the following simple example:
#include <iostream>
using std::cout;

struct X
{
    int a;
    double b;
    int c;
};

int main()
{
    cout << "sizeof(int) = " << sizeof(int) << '\n';
    cout << "sizeof(double) = " << sizeof(double) << '\n';
    cout << "2 * sizeof(int) + sizeof(double) = " << 2 * sizeof(int) + sizeof(double) << '\n';
    cout << "but sizeof(X) = " << sizeof(X) << '\n';
}
When using g++ the program gives the following output:
sizeof(int) = 4
sizeof(double) = 8
2 * sizeof(int) + sizeof(double) = 16
but sizeof(X) = 24
That's 50% memory overhead! In a 3-gigabyte array of 134'217'728 Xs, 1 gigabyte would be pure padding.
Fortunately, the solution to the problem is very simple - we simply have to swap double b and int c around:
struct X
{
    int a;
    int c;
    double b;
};
Now the result is much more satisfying:
sizeof(int) = 4
sizeof(double) = 8
2 * sizeof(int) + sizeof(double) = 16
but sizeof(X) = 16
There is, however, a problem: this fix isn't portable across implementations. Yes, under g++ an int is 4 bytes and a double is 8 bytes, but that's not necessarily always true (their alignment doesn't have to be the same either), so under a different environment this "fix" could not only be useless, it could potentially make things worse by increasing the amount of padding needed.
Is there a reliable cross-platform way to solve this problem (minimize the amount of needed padding without suffering from decreased performance caused by misalignment)? Why doesn't the compiler perform such optimizations (swap struct/class members around to decrease padding)?
Clarification
Due to misunderstanding and confusion, I'd like to emphasize that I don't want to "pack" my struct. That is, I don't want its members to be unaligned and thus slower to access. Instead, I still want all members to be self-aligned, but in a way that uses the least memory on padding. This could be solved by using, for example, manual rearrangement as described here and in The Lost Art of Packing by Eric Raymond. I am looking for an automated and as much cross-platform as possible way to do this, similar to what is described in proposal P1112 for the upcoming C++20 standard.
(Don't apply these rules without thinking. See ESR's point about cache locality for members you use together. And in multi-threaded programs, beware false sharing of members written by different threads. Generally you don't want per-thread data in a single struct at all for this reason, unless you're doing it to control the separation with a large alignas(128). This applies to atomic and non-atomic vars; what matters is threads writing to cache lines regardless of how they do it.)
Rule of thumb: largest to smallest alignof() (a worked sketch follows this list). There's nothing you can do that's perfect everywhere, but by far the most common case these days is a sane "normal" C++ implementation for a normal 32 or 64-bit CPU. All primitive types have power-of-2 sizes.
Most types have alignof(T) = sizeof(T), or alignof(T) capped at the register width of the implementation. So larger types are usually more-aligned than smaller types.
Struct-packing rules in most ABIs give struct members their absolute alignof(T) alignment relative to the start of the struct, and the struct itself inherits the largest alignof() of any of its members.
Put always-64-bit members first (like double, long long, and int64_t). ISO C++ of course doesn't fix these types at 64 bits / 8 bytes, but in practice on all CPUs you care about they are. People porting your code to exotic CPUs can tweak struct layouts to optimize if necessary.
then pointers and pointer-width integers: size_t, intptr_t, and ptrdiff_t (which may be 32 or 64-bit). These are all the same width on normal modern C++ implementations for CPUs with a flat memory model.
Consider putting linked-list and tree left/right pointers first if you care about x86 and Intel CPUs. Pointer-chasing through nodes in a tree or linked list has penalties when the struct start address is in a different 4k page than the member you're accessing. Putting them first guarantees that can't be the case.
then long (which is sometimes 32-bit even when pointers are 64-bit, in LLP64 ABIs like Windows x64). But it's guaranteed at least as wide as int.
then 32-bit int32_t, int, float, enum. (Optionally separate int32_t and float ahead of int if you care about possible 8 / 16-bit systems that still pad those types to 32-bit, or do better with them naturally aligned. Most such systems don't have wider loads (FPU or SIMD) so wider types have to be handled as multiple separate chunks all the time anyway).
ISO C++ allows int to be as narrow as 16 bits, or arbitrarily wide, but in practice it's a 32-bit type even on 64-bit CPUs. ABI designers found that programs designed to work with 32-bit int just waste memory (and cache footprint) if int was wider. Don't make assumptions that would cause correctness problems, but for "portable performance" you just have to be right in the normal case.
People tuning your code for exotic platforms can tweak if necessary. If a certain struct layout is perf-critical, perhaps comment on your assumptions and reasoning in the header.
then short / int16_t
then char / int8_t / bool
(for multiple bool flags, especially if read-mostly or if they're all modified together, consider packing them with 1-bit bitfields.)
(For unsigned integer types, find the corresponding signed type in my list.)
A multiple-of-8 byte array of narrower types can go earlier if you want it to. But if you don't know the exact sizes of types, you can't guarantee that int i + char buf[4] will fill an 8-byte aligned slot between two doubles. But it's not a bad assumption, so I'd do it anyway if there was some reason (like spatial locality of members accessed together) for putting them together instead of at the end.
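Putting the list above into code, a sketch of a struct ordered largest-to-smallest alignof() (the assertion encodes the "normal implementation" assumption, not a guarantee):

#include <cstdint>

struct Ordered {
    double        d;      // 64-bit members first
    void         *p;      // then pointer-width members
    long          l;      // 32- or 64-bit depending on ABI
    std::int32_t  i;      // then 32-bit members
    float         f;
    std::int16_t  s;      // then 16-bit
    std::int8_t   c;      // then 8-bit
    bool          flag;
    // only tail padding (if any) remains
};

// Holds on mainstream LP64 and LLP64 ABIs; adjust when porting.
static_assert(sizeof(Ordered) <= 5 * sizeof(double), "unexpected padding");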
Exotic types: x86-64 System V has alignof(long double) = 16, but i386 System V has only alignof(long double) = 4, sizeof(long double) = 12. It's the x87 80-bit type, which is actually 10 bytes but padded to 12 or 16 so it's a multiple of its alignof, making arrays possible without violating the alignment guarantee.
And in general it gets trickier when your struct members themselves are aggregates (struct or union) with a sizeof(x) != alignof(x).
Another twist is that in some ABIs (e.g. 32-bit Windows if I recall correctly) struct members are aligned to their size (up to 8 bytes) relative to the start of the struct, even though alignof(T) is still only 4 for double and int64_t.
This is to optimize for the common case of separate allocation of 8-byte aligned memory for a single struct, without giving an alignment guarantee. i386 System V also has the same alignof(T) = 4 for most primitive types (but malloc still gives you 8-byte aligned memory because alignof(max_align_t) = 8). But anyway, i386 System V doesn't have that struct-packing rule, so (if you don't arrange your struct from largest to smallest) you can end up with 8-byte members under-aligned relative to the start of the struct.
Most CPUs have addressing modes that, given a pointer in a register, allow access to any byte offset. The max offset is usually very large, but on x86 it saves code size if the byte offset fits in a signed byte ([-128 .. +127]). So if you have a large array of any kind, prefer putting it later in the struct after the frequently used members. Even if this costs a bit of padding.
Your compiler will pretty much always make code that has the struct address in a register, not some address in the middle of the struct to take advantage of short negative displacements.
Eric S. Raymond wrote an article The Lost Art of Structure Packing. Specifically the section on Structure reordering is basically an answer to this question.
He also makes another important point:
9. Readability and cache locality
While reordering by size is the simplest way to eliminate slop, it’s not necessarily the right thing. There are two more issues: readability and cache locality.
In a large struct that can easily be split across a cache-line boundary, it makes sense to put 2 things nearby if they're always used together. Or even contiguous to allow load/store coalescing, e.g. copying 8 or 16 bytes with one (unaligned) integer or SIMD load/store instead of separately loading smaller members.
Cache lines are typically 32 or 64 bytes on modern CPUs. (On modern x86, always 64 bytes. And Sandybridge-family has an adjacent-line spatial prefetcher in L2 cache that tries to complete 128-byte pairs of lines, separate from the main L2 streamer HW prefetch pattern detector and L1d prefetching).
Fun fact: Rust allows the compiler to reorder structs for better packing, or other reasons. IDK if any compilers actually do that, though. Probably only possible with link-time whole-program optimization if you want the choice to be based on how the struct is actually used. Otherwise separately-compiled parts of the program couldn't agree on a layout.
(@alexis posted a link-only answer linking to ESR's article, so thanks for that starting point.)
gcc has the -Wpadded warning that warns when padding is added to a structure:
https://godbolt.org/z/iwO5Q3:
<source>:4:12: warning: padding struct to align 'X::b' [-Wpadded]
4 | double b;
| ^
<source>:1:8: warning: padding struct size to alignment boundary [-Wpadded]
1 | struct X
| ^
And you can manually rearrange members so that there is less / no padding. But this is not a cross platform solution, as different types can have different sizes / alignments on different system (Most notably pointers being 4 or 8 bytes on different architectures). The general rule of thumb is go from largest to smallest alignment when declaring members, and if you're still worried, compile your code with -Wpadded once (But I wouldn't keep it on generally, because padding is necessary sometimes).
As for why the compiler can't do it automatically: the standard ([class.mem]/19) guarantees that, because this is a simple struct with only public members, &x.a < &x.c (for some X x;), so the members can't be rearranged.
There really isn't a portable solution in the generic case. Barring the minimal requirements the standard imposes, types can be any size the implementation wants to make them.
To go along with that, the compiler is not allowed to reorder class members to make the layout more efficient. The standard mandates that objects be laid out in their declared order (within each access level), so that's out as well.
You can use fixed width types like
struct foo
{
    int64_t a;
    int16_t b;
    int8_t c;
    int8_t d;
};
and the members will have the same sizes on any platform that supplies those types (though padding and total size can still vary with alignment rules), but it only works with integer types. There are no fixed-width floating-point types, and many standard objects/containers can be different sizes on different platforms.
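If you depend on such a layout, you can at least make the compiler verify your assumptions at build time; a sketch with static_assert and offsetof (the expected numbers assume a typical ABI where alignof(int64_t) == 8):

#include <cstddef>
#include <cstdint>

struct foo {
    std::int64_t a;
    std::int16_t b;
    std::int8_t  c;
    std::int8_t  d;
};

// On i386 System V, alignof(int64_t) is 4 and sizeof(foo) is 12, so
// these would need adjusting there.
static_assert(offsetof(foo, b) == 8, "unexpected offset of b");
static_assert(sizeof(foo) == 16, "unexpected total size");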
Mate, if you have 3 GB of data, you should probably approach the issue in some way other than swapping data members around.
Instead of an 'array of structs', a 'struct of arrays' can be used.
So say
struct X
{
    int a;
    double b;
    int c;
};

constexpr size_t ArraySize = 1'000'000;

X my_data[ArraySize];
becomes
constexpr size_t ArraySize = 1'000'000;

struct X
{
    int a[ArraySize];
    double b[ArraySize];
    int c[ArraySize];
};

X my_data;
Each element is still easily accessible: my_data.a[i] = 5; my_data.b[i] = 1.5; ....
There is no padding (except possibly a few bytes between the arrays). The memory layout is cache-friendly, and the prefetcher handles reading sequential memory blocks from a few separate memory regions.
That's not as unorthodox as it might look at first glance. This approach is widely used for SIMD and GPU programming.
This is the classic Array of Structures (AoS) versus Structure of Arrays (SoA) trade-off.
This is a textbook memory-vs-speed problem. The padding is to trade memory for speed. You can't say:
I don't want to "pack" my struct.
because pragma pack is the tool invented exactly to make this trade the other way: speed for memory.
Is there a reliable cross-platform way
No, there can't be one. Alignment is a strictly platform-dependent issue. The sizes of the various types are a platform-dependent issue. Avoiding padding by reorganizing is platform-dependent squared.
Speed, memory, and cross-platform portability - you can have only two.
Why doesn't the compiler perform such optimizations (swap struct/class members around to decrease padding)?
Because the C++ specifications specifically guarantee that the compiler won't mess up your meticulously organized structs. Imagine you have four floats in a row. Sometimes you use them by name, and sometimes you pass them to a method that takes a float[3] parameter.
You're proposing that the compiler should shuffle them around, potentially breaking all the code written since the 1970s. And for what? Can you guarantee that every programmer ever will actually want to save your 8 bytes per struct? I, for one, am sure that if I have a 3 GB array, I have bigger problems than a GB more or less.
Although the Standard grants implementations broad discretion to insert arbitrary amounts of space between structure members, that's because the authors didn't want to try to guess all the situations where padding might be useful, and the principle "don't waste space for no reason" was considered self-evident.
In practice, almost every commonplace implementation for commonplace hardware will use primitive objects whose size is a power of two, and whose required alignment is a power of two that is no larger than the size. Further, almost every such implementation will place each member of a struct at the first available multiple of its alignment that completely follows the previous member.
Some pedants will squawk that code which exploits that behavior is "non-portable". To them I would reply
C code can be non-portable. Although it strove to give programmers the opportunity to write truly portable programs, the C89 Committee did not want to force programmers into writing portably, to preclude the use of C as a “high-level assembler”: the ability to write machine specific code is one of the strengths of C.
As a slight extension to that principle, the ability of code which need only run on 90% of machines to exploit features common to that 90% of machines--even though such code wouldn't exactly be "machine-specific"--is one of the strengths of C. The notion that C programmers shouldn't be expected to bend over backward to accommodate limitations of architectures which for decades have only been used in museums should be self-evident, but apparently isn't.
You can use #pragma pack(1), but the padding exists in the first place because the compiler is optimizing: accessing a variable through a full, aligned register is faster than extracting it bit by bit.
Specific packing is only useful for things like serialization and inter-compiler compatibility.
As NathanOliver correctly added, this might even fail on some platforms.

Struct Bit Packing and LSB / MSB ambiguity C++

I had to write C++ code for the following packet header:
[Image of the packet format omitted.]
Here is the struct code I wrote for the above Packet Format. I want to know if the uint8_t or the uint16_t bit fields are correct
struct TelemetryTransferFramePrimaryHeader
{
    //-- 6 Octets Long --//

    //-- Master Channel ID (2 octets) --//
    uint16_t TransferFrameVersionNumber : 2;
    uint16_t SpacecraftID : 10;
    uint16_t VirtualChannelID : 3;
    uint16_t OCFFlag : 1;
    //-----------------//
    uint8_t MasterChannelFrameCount;
    uint8_t VirtualChannelFrameCount;
    //-- Transfer Frame Data Field Status (2 octets) --//
    uint16_t TransferFrameSecondaryHeaderFlag : 1;
    uint16_t SyncFlag : 1;
    uint16_t PacketOrderFlag : 1;
    uint16_t SegmentLengthID : 2;
    uint16_t FirstHeaderPointer : 11;
    //-----------------//
};
How do I ensure that the LSB -> MSB order is preserved in the struct?
I keep getting confused, and I've tried reading up, but it ends up confusing me even more.
PS: I am using a 32-bit processor.
Exactly how bits are mapped when using bit fields is implementation-specific. So it's very hard to say for sure if you did it right; we'd need to know the exact CPU and compiler (and compiler version, of course).
In short; don't do this. Bit fields are not very usable for things like this.
Do it manually instead, by declaring the words as needed and setting the bits inside them.
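For example, a sketch of packing just the first two octets (the Master Channel ID) by hand, assuming the Transfer Frame Version Number occupies the two most significant bits and the most significant octet is transmitted first (verify against the actual CCSDS spec):

#include <cstdint>

// Pack the 16-bit Master Channel ID word: TFVN(2) | SpacecraftID(10) | VCID(3) | OCF(1).
inline void packMasterChannelID(std::uint8_t out[2],
                                unsigned tfvn, unsigned scid,
                                unsigned vcid, unsigned ocf)
{
    const std::uint16_t w = std::uint16_t(((tfvn & 0x003u) << 14) |
                                          ((scid & 0x3FFu) <<  4) |
                                          ((vcid & 0x007u) <<  1) |
                                           (ocf  & 0x001u));
    out[0] = std::uint8_t(w >> 8);      // most significant octet first
    out[1] = std::uint8_t(w & 0xFFu);
}

Written this way, the wire layout is fixed by the shifts, not by the compiler's bit-field rules.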
IMHO anyone trying to construct a struct in this way is in a state of sin.
The C99 Standard, for example, says:
An implementation may allocate any addressable storage unit large enough to hold a bitfield. If enough space remains, a bit-field that immediately follows another bit-field in a structure shall be packed into adjacent bits of the same unit. If insufficient space remains, whether a bit-field that does not fit is put into the next unit or overlaps adjacent units is implementation-defined. The order of allocation of bit-fields within a unit (high-order to low-order or low-order to high-order) is implementation-defined. The alignment of the addressable storage unit is unspecified.
Even if you could predict that your compiler would construct bit-fields in units of (say) uint32_t, and that the fields were arranged first field in the LS bits... you still have endianness to deal with!
So... as unwind says... do it by hand!
I agree that you should not do this. However, STMicroelectronics uses bitfields to access the bits of its Cortex-M3/M4 microcontroller registers, so any compiler vendor that wants its users to be able to use the STMicroelectronics Cortex-M3/M4 libraries needs to support allocating bitfields starting at the least significant bit. In my compiler this is the default, but it is also optional, so I could reverse it if I wanted to.

Endianness inside a byte

Recently I'm tracking down a bug that appears when the two sides of the network communication have different endianness. One side has already sent a telegram marking lastSegment while the other side is still waiting for the last segment endlessly.
I read this code:
#ifndef kBigEndian
struct tTelegram
{
    u8 lastSegment : 1;
    u8 reserved : 7;
    u8 data[1];
};
#else
struct tTelegram
{
    u8 reserved : 7;
    u8 lastSegment : 1;
    u8 data[1];
};
#endif
I know endianness matters for multi-byte types, e.g. int, long, etc. But why does it matter in the code above? lastSegment and reserved are inside a single byte.
Is that a bug?
You have 16 bits in your struct. On a 32-bit or 64-bit architecture, depending on the endianness, data may come "before" reserved and lastSegment or it may come "after" when looking at the raw binary. I.e., if we consider 32 bits, your struct may be packed along 32-bit boundaries. It might look like this:
padbyte1 padbyte2 data lastSegment+reserved
or it may look like this
lastSegment+reserved data padbyte1 padbyte2
So when you put those 16 bits over the wire then reinterpret them on the other side, do you know if you're getting data or lastSegment?
Your problem isn't within the byte, it's where data lies in relation to reserved and lastSegment.
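The robust fix is not to overlay a struct on the buffer at all but to extract the flag with an explicit mask; a minimal sketch, assuming the protocol defines lastSegment as the least significant bit of the first octet on the wire:

#include <cstdint>

// buf points at the received telegram bytes.
inline bool isLastSegment(const std::uint8_t *buf)
{
    return (buf[0] & 0x01u) != 0;   // assumption: flag is bit 0 of octet 0
}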
When it comes to bitfields, ordering is not guaranteed even between different compilers running on the same CPU. You could theoretically even get a change of order just by changing flags with the same compiler (though, in fairness, I have to add that I've never actually seen that happen).

Structures in C

I got a structure like this:
struct bar {
    char x;
    char *y;
};
I can assume that on a 32-bit system the padding for the char will make it 4 bytes total, and a pointer on 32-bit is 4 bytes, so the total size will be 8, right?
I know it's all implementation-specific, but I think if it's within 1-4 it should be padded to 4, within 5-8 to 8, and within 9-16 to 16. Is this right? It seems to work.
Would I be right to say that the struct will be 12 bytes on an x64 arch, because pointers are 8 bytes? Or what do you think it should be?
I can assume that on a 32-bit system the padding for the char will make it 4 bytes total, and a pointer on 32-bit is 4 bytes, so the total size will be 8, right?
It's not safe to assume that, but that will often be the case, yes. For x86, fields are usually 32-bit aligned. The reason for this is to increase the system's performance at the cost of memory usage (see here).
Would I be right to say that the struct will be 12 bytes on an x64 arch, because pointers are 8 bytes? Or what do you think it should be?
Similarly, for x64, fields are usually 64-bit/8-byte aligned, so sizeof(bar) would be 16.
As Anders points out, however, all this goes flying out the window once you start playing with alignment via /Zp, the pack directive, or whatever else your compiler supports.
It's a compiler switch; you can't assume anything. If you assume, you may get into trouble.
For instance, in Visual Studio you can decide, using #pragma pack(1), that you want members directly on byte boundaries.
You can't assume anything in general. Every platform decides its own padding rules.
That said, any architecture that uses "natural" alignment, where operands are padded to their own size (necessary and sufficient to avoid straddling naturally-aligned pages, cachelines, etc), will make bar twice the pointer size.
So, given natural alignment rules and nothing more, 8 bytes on 32-bit, 16 bytes on 64-bit.
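Rather than assuming, you can print what your own platform actually does (the figures in the comments are typical, not guaranteed):

#include <cstddef>
#include <cstdio>

struct bar {
    char x;
    char *y;
};

int main() {
    std::printf("sizeof(bar) = %zu, offsetof(bar, y) = %zu\n",
                sizeof(bar), offsetof(bar, y));
    // Typical 32-bit target: sizeof(bar) = 8,  offset = 4
    // Typical 64-bit target: sizeof(bar) = 16, offset = 8
    return 0;
}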
§9.2/12:
Nonstatic data members of a (non-union) class declared without an intervening access-specifier are allocated so that later members have higher addresses within a class object. The order of allocation of nonstatic data members separated by an access-specifier is unspecified (11.1). Implementation alignment requirements might cause two adjacent members not to be allocated immediately after each other; so might requirements for space for managing virtual functions (10.3) and virtual base classes (10.1).
So, it is highly implementation specific as you already mentioned.
Not quite.
Padding depends on the alignment requirement of the next member. The natural alignment of built-in data types is their size.
There is no padding before char members since their alignment requirement is 1 (assuming char is 1 byte).
For example, if a char (again, assume it is one byte) is followed by a short, which is, say, 2 bytes, there may be up to 1 byte of padding because a short must be 2-byte aligned. If a char is followed by a double of size 8, there may be up to 7 bytes of padding because a double is 8-byte aligned. On the other hand, if a short is followed by a double, there may be up to 6 bytes of padding.
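These figures are easy to confirm with offsetof (a sketch; the values in the comments assume a 2-byte short and an 8-byte double with natural alignment):

#include <cstddef>
#include <cstdio>

struct CS { char c; short s; };    // expect 1 byte of padding after c
struct CD { char c; double d; };   // expect 7 bytes of padding after c
struct SD { short s; double d; };  // expect 6 bytes of padding after s

int main() {
    std::printf("CS.s at %zu, CD.d at %zu, SD.d at %zu\n",
                offsetof(CS, s), offsetof(CD, d), offsetof(SD, d));
    // Typically prints: CS.s at 2, CD.d at 8, SD.d at 8
    return 0;
}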
And the size of a structure is a multiple of the alignment of a member with the largest alignment requirement, so there may be tail padding. In the following structure, for instance,
struct baz {
    double d;
    char c;
};
the member with the largest alignment requirement is d; its alignment requirement is 8, which gives sizeof(baz) == 2 * alignof(double) == 16. There are 7 bytes of tail padding after member c.
gcc and other modern compilers support the __alignof() operator. There is also a portable version in Boost.
As others have mentioned, the behaviour can't be relied upon between platforms. However, if you still need to do this, then one thing you can use is BOOST_STATIC_ASSERT() to ensure that if the assumptions are violated you find out at compile time, e.g.
#include <boost/static_assert.hpp>

#if ARCH == x86 // or whatever the platform-specific #define is
BOOST_STATIC_ASSERT(sizeof(bar) == 8);
#elif ARCH == x64
BOOST_STATIC_ASSERT(sizeof(bar) == 16);
#else
// ...
#endif
If alignof() is available, you could also use that to test your assumption.
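With C++11 or later, the same checks need no Boost; a sketch using static_assert with sizeof and alignof directly (the asserted values are the assumptions under test, not guarantees):

#include <cstddef>

struct bar {
    char x;
    char *y;
};

// Under natural alignment, bar is two pointer-widths on both 32- and 64-bit.
static_assert(sizeof(bar) == 2 * sizeof(char *), "unexpected bar layout");
static_assert(alignof(bar) == alignof(char *), "unexpected bar alignment");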