Are POD types always aligned? - c++

For example, if I declare a long variable, can I assume it will always be aligned on a "sizeof(long)" boundary? Microsoft Visual C++ online help says so, but is it standard behavior?
some more info:
a. It is possible to explicitly create a misaligned integer (*bar):
char foo[5];
int * bar = (int *)(&foo[1]);
b. Apparently, #pragma pack() only affects structures, classes, and unions.
c. MSVC documentation states that POD types are aligned to their respective sizes (but whether that is always the case or only the default, and whether it is standard behavior, I don't know)

As others have mentioned, this isn't part of the standard and is left up to the compiler to implement as it sees fit for the processor in question. For example, VC could easily implement different alignment requirements for an ARM processor than it does for x86 processors.
Microsoft VC implements what is basically called natural alignment, up to the size specified by the #pragma pack directive or the /Zp command line option. This means that, for example, any POD type whose size is smaller than or equal to 8 bytes will be aligned based on its size. Anything larger will be aligned on an 8-byte boundary.
If it is important that you control alignment for different processors and different compilers, then you can use a packing size of 1 and pad your structures.
#pragma pack(push)
#pragma pack(1)
struct Example
{
    short data1;    // offset 0
    short padding1; // offset 2
    long data2;     // offset 4
};
#pragma pack(pop)
In this code, the padding1 variable exists only to make sure that data2 is naturally aligned.
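As a sanity check, here is a small compile-time verification of those offsets (my own addition, not part of the original answer; it assumes a C++11 compiler that honours #pragma pack):
#include <cstddef>  // offsetof

#pragma pack(push, 1)
struct Example
{
    short data1;    // offset 0
    short padding1; // offset 2
    long data2;     // offset 4
};
#pragma pack(pop)

// With packing set to 1, the offsets are exactly the declared byte positions.
static_assert(offsetof(Example, padding1) == 2, "padding1 expected at offset 2");
static_assert(offsetof(Example, data2) == 4, "data2 expected at offset 4");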
Answer to a:
Yes, that can easily cause misaligned data. On an x86 processor, this doesn't really hurt much at all. On other processors, this can result in a crash or a very slow execution. For example, the Alpha processor would throw a processor exception which would be caught by the OS. The OS would then inspect the instruction and then do the work needed to handle the misaligned data. Then execution continues. The __unaligned keyword can be used in VC to mark unaligned access for non-x86 programs (i.e. for CE).

By default, yes. However, it can be changed via the pack() #pragma.
I don't believe the C++ Standard makes any requirement in this regard; it leaves this up to the implementation.

C and C++ don't mandate any kind of alignment. But natural alignment is strongly preferred by x86 and is required by most other CPU architectures, and compilers generally do their utmost to keep CPUs happy. So in practice you won't see a compiler generate misaligned data unless you really twist its arm.

Yes, all types are always aligned to at least their alignment requirements.
How could it be otherwise?
But note that the sizeof() of a type is not the same as its alignment.
You can use the following macro to determine the alignment requirements of a type:
#define ALIGNMENT_OF( t ) offsetof( struct { char x; t test; }, test )
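(Since C++11 there is no need for the macro trick: the language has alignof, and <type_traits> has std::alignment_of. A quick sketch of my own; the printed values are implementation-defined, and the comments only show typical results:)
#include <iostream>
#include <type_traits>

int main()
{
    std::cout << alignof(long) << '\n';                    // e.g. 8 on x86-64 Linux, 4 on 32-bit x86
    std::cout << std::alignment_of<double>::value << '\n'; // typically 8
    std::cout << alignof(long double) << '\n';             // 4, 8 or 16 depending on the ABI
}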

Depends on the compiler, the pragmas and the optimisation level. With modern compilers you can also choose time or space optimisation, which could change the alignment of types as well.

Generally it will be, because reading/writing to it is faster that way. But almost every compiler has a switch to turn this off. In gcc it's -malign-???. With aggregates, they are generally aligned and sized based on the alignment requirements of each element within.


How to ensure certain struct layout across compilations?

The C++ standard says nothing about packing and padding of structs, because it is implementation defined.
If it is implementation defined, then, for example, why is it safe to pass a struct to a DLL if that DLL could have been compiled with a different compiler, which could use a different method of struct padding?
Is the struct padding method enforced by the OS's ABI (for example, will the padding be the same on all Windows platforms)?
Or is there a standard method of padding when compiling for a PC (x64/x86_64 systems) that is used by every modern compiler?
If there is nothing that can guarantee the layout of variables, then is it safe to assume that each basic type in C++ (char, all numeric types, and pointers) must be aligned to an address that is a multiple of its size, and that because of this, padding inside a struct can be done by hand without performance problems or UB?
From what I have checked, g++ compiles structs in such a way that it inserts the minimum amount of padding needed to ensure alignment of the next member.
For example:
struct foo
{
    char a;
    // char _padding1[3]; <- inserted by compiler
    uint32_t b;
};
There are 3 bytes of padding after a because that is the minimum amount that will give us a suitably aligned address for b.
Can we take for granted that compilers will do it this way? Or can we force this kind of padding by hand without UB or performance issues?
By hand, I mean:
#pragma pack(1)
struct foo
{
    char a;
    char _padding1[3]; //<- manually adding padding bytes
    uint32_t b;
};
#pragma pack()
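For what it's worth, a hand-padded layout like this can at least be verified at compile time. This is my own sketch, not from the question, and it assumes a compiler that supports #pragma pack and C++11 static_assert:
#include <cstddef>
#include <cstdint>

#pragma pack(1)
struct foo
{
    char a;
    char _padding1[3]; // manually added padding bytes
    std::uint32_t b;
};
#pragma pack()

// If a compiler ever disagrees with the hand-made layout, this fails at build time
// instead of silently producing a different struct.
static_assert(offsetof(foo, b) == 4, "b expected at offset 4");
static_assert(sizeof(foo) == 8, "foo expected to be exactly 8 bytes");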
Just to be clear: I am asking about the behavior of compilers only on PC platforms: Windows, Linux distros, and maybe macOS.
Sorry if my question is in the category of "you dig into this too much"; I just couldn't find a satisfying answer on the Internet. Some people say that it is not guaranteed. Others say that compiling with different compilers on systems that use the same ABI guarantees that the same struct will have the same layout. Others show how to reduce struct padding, assuming that compilers pack structs the way I described above (with the minimum padding required to align each member).
If it is implementation defined, then for example, why is it safe to pass a struct to a DLL
Because the DLL and the caller follow the same application binary interface (ABI), which defines the layout.
By the way, DLLs are a language extension and not part of standard C++.
if this DLL could have been compiled with a different compiler, which could use a different method of struct padding?
If the library and its dependent don't follow an intercompatible ABI, then they cannot work together.
Is the struct padding method enforced by the OS's ABI
Yes, class layout (structs are classes) is defined by the ABI.
For example, will the padding be the same on all Windows platforms
Not quite, since Windows on ARM has a different ABI, for example. But within the same CPU architecture, the layout will be the same on Windows.
Or is there a standard method of padding when compiling for a PC (x64/x86_64 systems) that is used by every modern compiler?
No, there is no universal class layout followed by every OS, even within the x86_64 architecture.
From what I have checked, g++ compiles structs in such a way that it inserts the minimum amount of padding needed to ensure alignment of the next member.
All objects in C++ must be aligned as per the alignment requirement of the type of the object. This guarantee isn't compiler specific. However, the alignment requirements of types - and even the sizes of types - vary across different ABIs.
Bonus info: compilers have language extensions that remove this guarantee.
There are 3 bytes of padding after a because that is the minimum amount that will give us a suitably aligned address for b. Can we take for granted that compilers will do it this way?
In general, no. On some systems alignof(std::uint32_t) == 1, in which case there wouldn't be any need for padding.
Within a single ABI, you can take for granted that the layout is the same, but across multiple systems - which might not follow the same ABI - you cannot take it for granted.
When dealing with binary layout across systems (for example, when reading from a file or network), the standard compliant way is to treat the data as an array of bytes [1], and to copy each sequence of bytes [2] from pre-determined offsets onto fixed-width [3] fundamental objects (not classes whose layout may differ). In practice, you don't need to care about sign representation, although that used to be a problem historically.
If the optimiser does its job, there ideally shouldn't be any performance penalty if the layout of input data matches the native layout. In case it doesn't match, then there may be a cost (compared to a matching layout) that cannot be optimised away.
[1] This isn't sufficient when the byte size differs across systems, but you don't need to worry about that since you care about x86_64 only.
[2] In order to support systems with varying byte endianness, you must interpret the bytes in order of their significance rather than memory order, but you don't need to worry about that since you care about x86_64 only.
[3] I.e. not int, short, long etc., but rather std::int32_t etc.
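A rough sketch of that byte-copying approach (my illustration only; the Decoded struct, the decode function and the field offsets are made-up examples of a pre-agreed format, not anything mandated):
#include <cstdint>
#include <cstring>

// Hypothetical wire format: a 4-byte id at offset 0 followed by a 2-byte value at offset 4.
struct Decoded
{
    std::uint32_t id;
    std::int16_t  value;
};

// Copies each field from its pre-agreed offset in the raw byte buffer.
// Assumes the sender used the same byte order as this machine (fine for x86_64-to-x86_64).
Decoded decode(const unsigned char* bytes)
{
    Decoded d{};
    std::memcpy(&d.id,    bytes + 0, sizeof d.id);    // bytes 0..3
    std::memcpy(&d.value, bytes + 4, sizeof d.value); // bytes 4..5
    return d;
}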
The C and C++ standards were written to describe existing languages. In situations where 99+% of implementations would do things a certain way, and it was obvious that implementations should do things that way absent a compelling reason for doing otherwise, the standards would generally leave open the possibility of implementations doing something unusual.
Consider, for example, given something like:
struct foo {int i; char a,b[4],c,d,e;}; // Assume sizeof (int) is 4
struct foo myFoo;
On most platforms, making foo a three-word type that contains all of the individual bytes packed together may be more efficient than doing anything else. On the other hand, on a platform that uses word-addressed storage but includes instructions to load or store bytes at a specified byte offset from a specified word address, word-aligning the start of b may allow a construct like myFoo.b[i] to be processed by directly using the value of i as an offset onto the word-aligned address of myFoo.b.
The standards were written so that people designing compilers for such platforms could weigh the pros and cons of following normal practice versus deviating from it to better fit the target architecture.
Machines that use word addresses but allow byte-based loads and stores are of course exceptionally rare, and very little code that isn't deliberately written for such machines would gain any value from being compatible with them.
The committees weren't willing to say that such machines should be viewed as archaic and not worth supporting, but that doesn't mean they didn't expect and intend that programs written for commonplace implementations could exploit aspects of behavior that were shared by all commonplace implementations, even if not by some obscure ones.

How do I organize members in a struct to waste the least space on alignment?

[Not a duplicate of Structure padding and packing. That question is about how and when padding occurs. This one is about how to deal with it.]
I have just realized how much memory is wasted as a result of alignment in C++. Consider the following simple example:
#include <iostream>
using std::cout;

struct X
{
    int a;
    double b;
    int c;
};

int main()
{
    cout << "sizeof(int) = " << sizeof(int) << '\n';
    cout << "sizeof(double) = " << sizeof(double) << '\n';
    cout << "2 * sizeof(int) + sizeof(double) = " << 2 * sizeof(int) + sizeof(double) << '\n';
    cout << "but sizeof(X) = " << sizeof(X) << '\n';
}
When using g++ the program gives the following output:
sizeof(int) = 4
sizeof(double) = 8
2 * sizeof(int) + sizeof(double) = 16
but sizeof(X) = 24
That's 50% memory overhead! In a 3-gigabyte array of 134'217'728 Xs 1 gigabyte would be pure padding.
Fortunately, the solution to the problem is very simple - we simply have to swap double b and int c around:
struct X
{
    int a;
    int c;
    double b;
};
Now the result is much more satisfying:
sizeof(int) = 4
sizeof(double) = 8
2 * sizeof(int) + sizeof(double) = 16
but sizeof(X) = 16
There is however a problem: this isn't cross-compatible. Yes, under g++ an int is 4 bytes and a double is 8 bytes, but that's not necessarily always true (their alignment doesn't have to be the same either), so under a different environment this "fix" could not only be useless, but it could also potentially make things worse by increasing the amount of padding needed.
Is there a reliable cross-platform way to solve this problem (minimize the amount of needed padding without suffering from decreased performance caused by misalignment)? Why doesn't the compiler perform such optimizations (swap struct/class members around to decrease padding)?
Clarification
Due to misunderstanding and confusion, I'd like to emphasize that I don't want to "pack" my struct. That is, I don't want its members to be unaligned and thus slower to access. Instead, I still want all members to be self-aligned, but in a way that uses the least memory on padding. This could be solved by using, for example, manual rearrangement as described here and in The Lost Art of Structure Packing by Eric Raymond. I am looking for an automated and as cross-platform as possible way to do this, similar to what is described in proposal P1112 for the upcoming C++20 standard.
(Don't apply these rules without thinking. See ESR's point about cache locality for members you use together. And in multi-threaded programs, beware false sharing of members written by different threads. Generally you don't want per-thread data in a single struct at all for this reason, unless you're doing it to control the separation with a large alignas(128). This applies to atomic and non-atomic vars; what matters is threads writing to cache lines regardless of how they do it.)
Rule of thumb: largest to smallest alignof() (a worked sketch follows after these notes). There's nothing you can do that's perfect everywhere, but by far the most common case these days is a sane "normal" C++ implementation for a normal 32 or 64-bit CPU. All primitive types have power-of-2 sizes.
Most types have alignof(T) = sizeof(T), or alignof(T) capped at the register width of the implementation. So larger types are usually more-aligned than smaller types.
Struct-packing rules in most ABIs give struct members their absolute alignof(T) alignment relative to the start of the struct, and the struct itself inherits the largest alignof() of any of its members.
Put always-64-bit members first (like double, long long, and int64_t). ISO C++ of course doesn't fix these types at 64 bits / 8 bytes, but in practice on all CPUs you care about they are. People porting your code to exotic CPUs can tweak struct layouts to optimize if necessary.
then pointers and pointer-width integers: size_t, intptr_t, and ptrdiff_t (which may be 32 or 64-bit). These are all the same width on normal modern C++ implementations for CPUs with a flat memory model.
Consider putting linked-list and tree left/right pointers first if you care about x86 and Intel CPUs. Pointer-chasing through nodes in a tree or linked list has penalties when the struct start address is in a different 4k page than the member you're accessing. Putting them first guarantees that can't be the case.
then long (which is sometimes 32-bit even when pointers are 64-bit, in LLP64 ABIs like Windows x64). But it's guaranteed at least as wide as int.
then 32-bit int32_t, int, float, enum. (Optionally separate int32_t and float ahead of int if you care about possible 8 / 16-bit systems that still pad those types to 32-bit, or do better with them naturally aligned. Most such systems don't have wider loads (FPU or SIMD) so wider types have to be handled as multiple separate chunks all the time anyway).
ISO C++ allows int to be as narrow as 16 bits, or arbitrarily wide, but in practice it's a 32-bit type even on 64-bit CPUs. ABI designers found that programs designed to work with 32-bit int just waste memory (and cache footprint) if int was wider. Don't make assumptions that would cause correctness problems, but for "portable performance" you just have to be right in the normal case.
People tuning your code for exotic platforms can tweak if necessary. If a certain struct layout is perf-critical, perhaps comment on your assumptions and reasoning in the header.
then short / int16_t
then char / int8_t / bool
(for multiple bool flags, especially if read-mostly or if they're all modified together, consider packing them with 1-bit bitfields.)
(For unsigned integer types, find the corresponding signed type in my list.)
A multiple-of-8 byte array of narrower types can go earlier if you want it to. But if you don't know the exact sizes of types, you can't guarantee that int i + char buf[4] will fill an 8-byte aligned slot between two doubles. But it's not a bad assumption, so I'd do it anyway if there was some reason (like spatial locality of members accessed together) for putting them together instead of at the end.
Exotic types: x86-64 System V has alignof(long double) = 16, but i386 System V has only alignof(long double) = 4, sizeof(long double) = 12. It's the x87 80-bit type, which is actually 10 bytes but padded to 12 or 16 so it's a multiple of its alignof, making arrays possible without violating the alignment guarantee.
And in general it gets trickier when your struct members themselves are aggregates (struct or union) with a sizeof(x) != alignof(x).
Another twist is that in some ABIs (e.g. 32-bit Windows if I recall correctly) struct members are aligned to their size (up to 8 bytes) relative to the start of the struct, even though alignof(T) is still only 4 for double and int64_t.
This is to optimize for the common case of separate allocation of 8-byte aligned memory for a single struct, without giving an alignment guarantee. i386 System V also has the same alignof(T) = 4 for most primitive types (but malloc still gives you 8-byte aligned memory because alignof(max_align_t) = 8). But anyway, i386 System V doesn't have that struct-packing rule, so (if you don't arrange your struct from largest to smallest) you can end up with 8-byte members under-aligned relative to the start of the struct.
Most CPUs have addressing modes that, given a pointer in a register, allow access to any byte offset. The max offset is usually very large, but on x86 it saves code size if the byte offset fits in a signed byte ([-128 .. +127]). So if you have a large array of any kind, prefer putting it later in the struct after the frequently used members. Even if this costs a bit of padding.
Your compiler will pretty much always make code that has the struct address in a register, not some address in the middle of the struct to take advantage of short negative displacements.
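Putting the rules above together, here is a hypothetical before/after sketch. The sizes in the comments assume a typical x86-64 System V ABI; they are ABI facts, not ISO C++ guarantees:
#include <cstdint>

// Members in "natural" declaration order: char, double, int32, int16, char.
// On x86-64 System V this typically ends up 24 bytes (7 padding bytes before 'weight',
// plus tail padding), though none of this is guaranteed by ISO C++.
struct Unordered
{
    char         flag;
    double       weight;
    std::int32_t id;
    std::int16_t kind;
    char         tag;
};

// Same members sorted largest-to-smallest alignment: typically 16 bytes, no padding at all.
struct Ordered
{
    double       weight;
    std::int32_t id;
    std::int16_t kind;
    char         flag;
    char         tag;
};

// ABI facts, not language guarantees, so only enable these where that ABI is known to apply:
// static_assert(sizeof(Unordered) == 24, "");
// static_assert(sizeof(Ordered) == 16, "");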
Eric S. Raymond wrote an article The Lost Art of Structure Packing. Specifically the section on Structure reordering is basically an answer to this question.
He also makes another important point:
9. Readability and cache locality
While reordering by size is the simplest way to eliminate slop, it’s not necessarily the right thing. There are two more issues: readability and cache locality.
In a large struct that can easily be split across a cache-line boundary, it makes sense to put 2 things nearby if they're always used together. Or even contiguous to allow load/store coalescing, e.g. copying 8 or 16 bytes with one (unaligned) integer or SIMD load/store instead of separately loading smaller members.
Cache lines are typically 32 or 64 bytes on modern CPUs. (On modern x86, always 64 bytes. And Sandybridge-family has an adjacent-line spatial prefetcher in L2 cache that tries to complete 128-byte pairs of lines, separate from the main L2 streamer HW prefetch pattern detector and L1d prefetching).
Fun fact: Rust allows the compiler to reorder structs for better packing, or other reasons. IDK if any compilers actually do that, though. Probably only possible with link-time whole-program optimization if you want the choice to be based on how the struct is actually used. Otherwise separately-compiled parts of the program couldn't agree on a layout.
(@alexis posted a link-only answer linking to ESR's article, so thanks for that starting point.)
gcc has the -Wpadded warning that warns when padding is added to a structure:
https://godbolt.org/z/iwO5Q3:
<source>:4:12: warning: padding struct to align 'X::b' [-Wpadded]
4 | double b;
| ^
<source>:1:8: warning: padding struct size to alignment boundary [-Wpadded]
1 | struct X
| ^
And you can manually rearrange members so that there is less / no padding. But this is not a cross-platform solution, as different types can have different sizes / alignments on different systems (most notably, pointers being 4 or 8 bytes on different architectures). The general rule of thumb is to go from largest to smallest alignment when declaring members, and if you're still worried, compile your code with -Wpadded once (but I wouldn't keep it on generally, because padding is sometimes necessary).
As for why the compiler can't do this automatically: the standard ([class.mem]/19) guarantees that, because this is a simple struct with only public members, &x.a < &x.c (for some X x;), so the members can't be rearranged.
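A tiny illustration of that guarantee (my addition): the offsets, and therefore the addresses, of members with the same access control must increase in declaration order.
#include <cstddef>

struct X
{
    int a;
    double b;
    int c;
};

// Members with the same access control are laid out in declaration order,
// so their offsets - and therefore their addresses - must increase.
static_assert(offsetof(X, a) < offsetof(X, b), "a must come before b");
static_assert(offsetof(X, b) < offsetof(X, c), "b must come before c");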
There really isn't a portable solution in the generic case. Barring the minimal requirements the standard imposes, types can be any size the implementation wants to make them.
To go along with that, the compiler is not allowed to reorder class members to make the layout more efficient. The standard mandates that the objects must be laid out in their declared order (per access modifier), so that's out as well.
You can use fixed width types like
#include <cstdint>

struct foo
{
    int64_t a;
    int16_t b;
    int8_t  c;
    int8_t  d;
};
and this will be the same on all platforms, provided they supply those types, but it only works with integer types. There are no fixed-width floating point types and many standard objects/containers can be different sizes on different platforms.
Mate, in case you have 3 GB of data, you should probably approach the issue another way than by swapping data members around.
Instead of an 'array of structs', a 'struct of arrays' could be used.
So say
struct X
{
    int a;
    double b;
    int c;
};
constexpr size_t ArraySize = 1'000'000;
X my_data[ArraySize];
is going to become
constexpr size_t ArraySize = 1'000'000;
struct X
{
    int a[ArraySize];
    double b[ArraySize];
    int c[ArraySize];
};
X my_data;
Each element is still easily accessible: my_data.a[i] = 5; my_data.b[i] = 1.5; ...
There is no padding (except possibly a few bytes between the arrays). The memory layout is cache friendly, and the prefetcher handles reading sequential memory blocks from a few separate memory regions.
That's not as unorthodox as it might look at first glance. This approach is widely used for SIMD and GPU programming.
Array of Structures (AoS) vs. Structure of Arrays (SoA).
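To get a feel for the difference, a small sketch (mine, not from the answer) comparing the total footprint of the two layouts; the 24-byte figure assumes a typical x86-64 ABI:
#include <cstddef>
#include <iostream>

constexpr std::size_t N = 1'000'000;

struct AoS { int a; double b; int c; };          // typically 24 bytes per element (8 of them padding)
struct SoA { int a[N]; double b[N]; int c[N]; }; // 16 bytes per element, no per-element padding

int main()
{
    std::cout << "array of structs: " << sizeof(AoS) * N << " bytes\n";
    std::cout << "struct of arrays: " << sizeof(SoA) << " bytes\n";
}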
This is a textbook memory-vs-speed problem. The padding is to trade memory for speed. You can't say:
I don't want to "pack" my struct.
because pragma pack is the tool invented exactly to make this trade the other way: speed for memory.
Is there a reliable cross-platform way
No, there can't be any. Alignment is a strictly platform-dependent issue. The sizes of different types are a platform-dependent issue. Avoiding padding by reorganizing is platform-dependent squared.
Speed, memory, and cross-platform - you can have only two.
Why doesn't the compiler perform such optimizations (swap struct/class members around to decrease padding)?
Because the C++ specifications specifically guarantee that the compiler won't mess up your meticulously organized structs. Imagine you have four floats in a row. Sometimes you use them by name, and sometimes you pass them to a method that takes a float[3] parameter.
You're proposing that the compiler should shuffle them around, potentially breaking all the code written since the 1970s. And for what? Can you guarantee that every programmer ever will actually want to save your 8 bytes per struct? I, for one, am sure that if I have a 3 GB array, I have bigger problems than a gigabyte more or less.
Although the Standard grants implementations broad discretion to insert arbitrary amounts of space between structure members, that's because the authors didn't want to try to guess all the situations where padding might be useful, and the principle "don't waste space for no reason" was considered self-evident.
In practice, almost every commonplace implementation for commonplace hardware will use primitive objects whose size is a power of two, and whose required alignment is a power of two that is no larger than the size. Further, almost every such implementation will place each member of a struct at the first available multiple of its alignment that completely follows the previous member.
Some pedants will squawk that code which exploits that behavior is "non-portable". To them I would reply
C code can be non-portable. Although it strove to give programmers the opportunity to write truly portable programs, the C89 Committee did not want to force programmers into writing portably, to preclude the use of C as a “high-level assembler”: the ability to write machine specific code is one of the strengths of C.
As a slight extension to that principle, the ability of code which need only run on 90% of machines to exploit features common to that 90% of machines--even though such code wouldn't exactly be "machine-specific"--is one of the strengths of C. The notion that C programmers shouldn't be expected to bend over backward to accommodate limitations of architectures which for decades have only been used in museums should be self-evident, but apparently isn't.
You can use #pragma pack(1), but the very reason padding exists is that the compiler optimizes: accessing a variable aligned to a full register is faster than accessing one packed down to the last bit.
Specific packing is only useful for serialization, intercompiler compatibility, and so on.
As NathanOliver correctly added, this might even fail on some platforms.

Size of Primitive data types

On what exactly does the size of a primitive data type like int depend on?
Compiler
Processor
Development Environment
Or is it a combination of these or other factors?
An explanation on the reason of the same will be really helpful.
EDIT: Sorry for the confusion. I meant to ask about primitive data types like int, not about PODs; I do understand that PODs can include structures, and with structures it is a whole different ball game, with padding coming into the picture.
I have corrected the question; the edit note here should ensure the answers regarding PODs don't look irrelevant.
I think there are two parts to this question:
What sizes primitive types are allowed to be.
This is specified by the C and C++ standards: the types have minimum value ranges they must support, which implicitly places a lower bound on their size in bits (e.g. long must be at least 32 bits to comply with the standard).
The standards do not specify the size in bytes, because the definition of the byte is up to the implementation: char is a byte, but the byte size (the CHAR_BIT macro) may be 16 bits.
The actual size as defined by the implementation.
This, as other answers have already pointed out, is dependent on the implementation: the compiler. And the compiler implementation, in turn, is heavily influenced by the target architecture. So it's plausible to have two compilers running on the same OS and architecture, but having different size of int. The only assumption you can make is the one stated by the standard (given that the compiler implements it).
There also may be additional ABI requirements (e.g. fixed size of enums).
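For the record, you can simply ask the implementation what it chose; a trivial sketch (the printed numbers are whatever your compiler/ABI decided, nothing more):
#include <climits>  // CHAR_BIT
#include <iostream>

int main()
{
    std::cout << "bits per byte:  " << CHAR_BIT << '\n';
    std::cout << "sizeof(short):  " << sizeof(short) << '\n';
    std::cout << "sizeof(int):    " << sizeof(int) << '\n';
    std::cout << "sizeof(long):   " << sizeof(long) << '\n';
    std::cout << "sizeof(void*):  " << sizeof(void*) << '\n';
}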
First of all, it depends on the compiler. The compiler in turn usually depends on the architecture, processor, development environment, etc., because it takes them into account. So you may say it's a combination of all of these. But I would not say that. I would say it's the compiler, since on the same machine you may get different sizes of POD and built-in types if you use different compilers. Also note that your source code is input to the compiler, so it's the compiler that makes the final decision about the sizes of POD and built-in types. However, it's also true that this decision is influenced by the underlying architecture of the target machine. After all, a really useful compiler has to emit efficient code that eventually runs on the machine you target.
Compilers provide options, too. A few of them might affect sizes as well!
EDIT: What the Standards say:
The sizes of char, signed char and unsigned char are defined by the C++ Standard itself! The sizes of all other types are defined by the compiler.
The C++03 Standard, §5.3.3/1, says:
sizeof(char), sizeof(signed char) and sizeof(unsigned char) are 1; the result of sizeof applied to any other fundamental type (3.9.1) is implementation-defined. [Note: in particular, sizeof(bool) and sizeof(wchar_t) are implementation-defined.]
The C99 Standard (§6.5.3.4) likewise defines the sizes of char, signed char and unsigned char to be 1, but leaves the sizes of other types to be defined by the compiler!
EDIT:
I found this C++ FAQ chapter really good - the entire chapter. It's a very tiny chapter, though. :-)
http://www.parashift.com/c++-faq-lite/intrinsic-types.html
Also read the comments below, there are some good arguments!
If you're asking about the size of a primitive type like int, I'd say it depends on the factors you cited.
The compiler/environment couple (where environment often means OS) is surely a part of it, since the compiler can map the various "sensible" sizes onto the built-in types in different ways for various reasons: for example, compilers on x86_64 Windows will usually have a 32-bit long and a 64-bit long long to avoid breaking code written for plain x86; on x86_64 Linux, instead, long is usually 64-bit because it's a more "natural" choice and apps developed for Linux are generally more architecture-neutral (because Linux runs on a much greater variety of architectures).
The processor surely matters in the decision: int should be the "natural size" of the processor, usually the size of its general-purpose registers. This means that it's the type that will work fastest on the current architecture. long, instead, is often thought of as a type which trades performance for an extended range (this is rarely true on regular PCs, but on microcontrollers it's normal).
If instead you're also talking about structs & co. (which, if they respect some rules, are PODs), again the compiler and the processor influence their size, since they are made of built-in types plus the appropriate padding chosen by the compiler to achieve the best performance on the target architecture.
As I commented under @Nawaz's answer, it technically depends solely on the compiler.
The compiler is just tasked with taking valid C++ code, and outputting valid machine code (or whatever language it targets).
So a C++ compiler could decide to make an int have a size of 15, and require it to be aligned on 5-byte boundaries, and it could decide to insert arbitrary padding between the variables in a POD. Nothing in the standard prohibits this, and it could still generate working code.
It'd just be much slower.
So in practice, compilers take some hints from the system they're running on, in two ways:
- the CPU has certain preferences: for example, it may have 32-bit wide registers, so making an int 32 bits wide would be a good idea, and it usually requires variables to be naturally aligned (a 4-byte wide variable must be aligned on an address divisible by 4, for example), so a sensible compiler respects these preferences because it yields faster code.
- the OS may have some influence too, in that if it uses another ABI than the compiler, making system calls is going to be needlessly difficult.
But those are just practical considerations to make life a bit easier for the programmer or to generate faster code. They're not required.
The compiler has the final word, and it can choose to completely ignore both the CPU and the OS. As long as it generates a working executable with the semantics specified in the C++ standard.
It depends on the implementation (compiler).
Implementation-defined behavior means unspecified behavior where each implementation documents how the choice is made.
A struct can also be POD, in which case you can explicitly control potential padding between members with #pragma pack on some compilers.

Is it possible to share a C struct in shared memory between apps compiled with different compilers?

I realize that in general the C and C++ standards give compiler writers a lot of latitude. But in particular they guarantee that POD types like C struct members have to be laid out in memory in the same order that they're listed in the struct's definition, and most compilers provide extensions letting you fix the alignment of members. So if you had a header that defined a struct and manually specified the alignment of its members, then compiled two apps with different compilers using that header, shouldn't one app be able to write an instance of the struct into shared memory and the other app be able to read it without errors?
I am assuming, though, that the size of the types contained is consistent across the two compilers on the same architecture (it has to be the same platform already, since we're talking about shared memory). I realize that this is not always true for some types (e.g. long differs between 64-bit GCC and 64-bit MSVC), but nowadays there are uint16_t, uint32_t, etc. types, and float and double are specified by IEEE standards.
As long as you can guarantee the exact same memory layout, including offsets, and the data types have the same sizes between the 2 compilers then yes this is fine. Because at that point the struct is identical with respect to data access.
Yes, sure. I've done this many times. The problems and solutions are the same whether mixed code is compiled and linked together, or when transmitting struct-formatted data between machines.
In the bad old days, this frequently occurred when integrating MS C with almost anything else: Borland Turbo C, DEC VAX C, Greenhills C.
The easy part is getting the number of bytes for the various data types to agree - for example, short on a 32-bit compiler on one side being the same size as int on a 16-bit compiler at the other end. Since declaring structures from common source code is usually a good thing, a few to-the-point typedefs are helpful:
typedef signed long s32;
typedef signed short s16;
typedef signed char s8;
typedef unsigned long u32;
typedef unsigned short u16;
typedef unsigned char u8;
...
Microsoft C is the most annoying. Its default is to pad members to 16-bit alignment, and maybe more with 64-bit code. Other compilers on x86 don't pad members.
struct {
    int count;
    char type;
    char code;
    char data [100];
} variable;
It might seem like the offset of code should be the byte right after type, but a padding byte might be inserted between them. The fix is usually:
#ifdef _MSC_VER // if it's any Microsoft compiler
#pragma pack(1) // byte align structure members--that is, no padding
#endif
There is also a compiler command line option to do the same.
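A compile-time sanity check along those lines (my addition; Message is just a named version of the anonymous struct above, and the pragma shown is the widely supported syntax, not something the standard requires):
#include <cstddef>

#pragma pack(push, 1)   // honoured by MSVC, gcc and clang
struct Message          // named version of the struct above, for illustration
{
    int  count;
    char type;
    char code;
    char data[100];
};
#pragma pack(pop)

// With 1-byte packing, 'code' must immediately follow 'type' and the struct
// must be exactly the sum of its members; otherwise the build fails loudly.
static_assert(offsetof(Message, code) == offsetof(Message, type) + 1,
              "unexpected padding between type and code");
static_assert(sizeof(Message) == sizeof(int) + 1 + 1 + 100,
              "unexpected padding in Message");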
The way memory is laid out matters, in addition to the data type sizes, if a struct from library 1 compiled by compiler 1 is to be used in library 2 compiled by compiler 2.
It is indeed possible, you just have to make sure that all compilers involved generate the same data structure from the same code. One way to test this is to write a sample program that creates a struct and writes it to a binary file. Open the resulting files in a hex editor and verify that they are the same. Alternatively, you can cast the struct to an array of uint8_t and dump the individual bytes to the screen.
One way to make sure that the data sizes are the same is to use data types like int16_t (from stdint.h) instead of a plain old int which may change sizes between compilers (although this is rare on two compilers running on the same platform).
It's not as difficult as it sounds. There are many pre-compiled libraries out there that can be used with multiple compilers. The key thing is to build a test program that will let you verify that both compilers are treating the structure equally.
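A throwaway test program of the kind described might look like this (my sketch; Record is a made-up example struct): compile it with each compiler and diff the output.
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <cstring>

struct Record
{
    std::int32_t count;
    char         type;
    char         code;
};

int main()
{
    Record r;
    std::memset(&r, 0, sizeof r);   // zero the padding bytes too, so the dump is deterministic
    r.count = 0x11223344;
    r.type = 'T';
    r.code = 'C';

    const unsigned char* p = reinterpret_cast<const unsigned char*>(&r);
    for (std::size_t i = 0; i < sizeof r; ++i)
        std::printf("%02x ", p[i]);   // identical output from both builds => identical layout of Record
    std::printf("\n");
}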
Refer to your compiler manuals.
most compilers provide extensions letting you fix the alignment of members
Are you restricting yourself to those compilers and a mutually compatible #pragma align style? If so, the safety is dictated by their specification.
In the interest of portability, you are possibly better off ditching #pragma align and relying on your ABI, which may provide a "reasonable" standard for compliance of all compilers of your platform.
As the C and C++ standards allow any deterministic struct layout methodology, they're essentially irrelevant.

Determining the alignment of C/C++ structures in relation to its members

Can the alignment of a structure type be found if the alignments of the structure members are known?
Eg. for:
struct S
{
    a_t a;
    b_t b;
    c_t c[];
};
is the alignment of S = max(alignment_of(a), alignment_of(b), alignment_of(c))?
Searching the internet I found that "for structured types the largest alignment requirement of any of its elements determines the alignment of the structure" (in What Every Programmer Should Know About Memory) but I couldn't find anything remotely similar in the standard (latest draft more exactly).
Edited:
Many thanks for all the answers, especially to Robert Gamble who provided a really good answer to the original question and the others who contributed.
In short:
To ensure alignment requirements for structure members, the alignment of a structure must be at least as strict as the alignment of its strictest member.
As for determining the alignment of structure a few options were presented and with a bit of research this is what I found:
- C++ std::tr1::alignment_of
  - not standard yet, but close (Technical Report 1); should be in C++0x
  - the following restriction is present in the latest draft: "Precondition: T shall be a complete type, a reference type, or an array of unknown bound, but shall not be a function type or (possibly cv-qualified) void."
  - this means that my presented use case with the C99 flexible array won't work (this is not that surprising, since flexible arrays are not standard C++)
  - in the latest C++ draft it is defined in terms of a new keyword - alignas (which has the same complete-type requirement)
  - in my opinion, should the C++ standard ever support C99 flexible arrays, the requirement could be relaxed (the alignment of a structure with a flexible array should not change based on the number of array elements)
- C++ boost::alignment_of
  - mostly a tr1 replacement
  - seems to be specialized for void and returns 0 in that case (this is forbidden in the C++ draft)
  - note from the developers: "strictly speaking you should only rely on the value of ALIGNOF(T) being a multiple of the true alignment of T, although in practice it does compute the correct value in all the cases we know about"
  - I don't know if this works with flexible arrays; it should (it might not work in general - it resolves to a compiler intrinsic on my platform, so I don't know how it will behave in the general case)
- Andrew Top presented a simple template solution for calculating the alignment in the answers
  - this seems to be very close to what boost is doing (boost will additionally return the object size as the alignment if it is smaller than the calculated alignment, as far as I can see), so probably the same notice applies
  - this works with flexible arrays
- use Windbg.exe to find out the alignment of a symbol
  - not compile time, compiler specific, didn't test it
- using offsetof on an anonymous structure containing the type
  - see the answers; not reliable, not portable with C++ non-PODs
- compiler intrinsics, e.g. MSVC __alignof
  - works with flexible arrays
- the alignof keyword in the latest C++ draft
If we want to use the "standard" solution, we're limited to std::tr1::alignment_of, but that won't work if you mix your C++ code with C99 flexible arrays.
As I see it there is only 1 solution - use the old struct hack:
struct S
{
    a_t a;
    b_t b;
    c_t c[1]; // "has" more than 1 member; strictly speaking this is undefined behavior in both C and C++ when used this way
};
The diverging c and c++ standards and their growing differences are unfortunate in this case (and every other case).
Another interesting question is (if we can't find out the alignment of a structure in a portable way): what is the strictest alignment requirement possible? There are a couple of solutions I could find:
- boost (internally) uses a union of a variety of types and applies boost::alignment_of to it
- the latest C++ draft contains std::aligned_storage: "The value of default-alignment shall be the most stringent alignment requirement for any C++ object type whose size is no greater than Len"
  - so std::alignment_of<std::aligned_storage<BigEnoughNumber>::type>::value should give us the maximum alignment
  - draft only, not standard yet (if ever); tr1::aligned_storage does not have this property
Any thoughts on this would also be appreciated.
I have temporarily unchecked the accepted answer to get more visibility and input on the new sub-questions
There are two closely related concepts here:
The alignment required by the processor to access a particular object
The alignment that the compiler actually uses to place objects in memory
To ensure alignment requirements for structure members, the alignment of a structure must be at least as strict as the alignment of its strictest member. I don't think this is spelled out explicitly in the standard, but it can be inferred from the following facts (which are spelled out individually in the standard):
Structures are allowed to have padding between their members (and at the end)
Arrays are not allowed to have padding between their elements
You can create an array of any structure type
If the alignment of a structure were not at least as strict as that of each of its members, you would not be able to create an array of structures, since some structure members in some elements would not be properly aligned.
Now, the compiler must ensure a minimum alignment for the structure based on the alignment requirements of its members, but it can also align objects more strictly than required; this is often done for performance reasons. For example, many modern processors will allow access to 32-bit integers at any alignment, but accesses may be significantly slower if they are not aligned on a 4-byte boundary.
There is no portable way to determine the alignment enforced by the processor for any given type because this is not exposed by the language, although since the compiler obviously knows the alignment requirements of the target processor it could expose this information as an extension.
There is also no portable way (at least in C) to determine how a compiler will actually align an object although many compilers have options to provide some level of control over the alignment.
I wrote this type-trait code to determine the alignment of any type (based on the compiler rules already discussed). You may find it useful:
template <class T>
class Traits
{
public:
    struct AlignmentFinder
    {
        char a;
        T b;
    };

    enum { AlignmentOf = sizeof(AlignmentFinder) - sizeof(T) };
};
So now you can go:
std::cout << "The alignment of structure S is: " << Traits<S>::AlignmentOf << std::endl;
The following macro will return the alignment requirement of any given type (even if it's a struct):
#define TYPE_ALIGNMENT( t ) offsetof( struct { char x; t test; }, test )
Note: I probably borrowed this idea from a Microsoft header at some point way back in my past...
Edit: as Robert Gamble points out in the comments, this macro is not guaranteed to work. In fact, it will certainly not work very well if the compiler is set to pack elements in structures. So if you decide to use it, use it with caution.
Some compilers have an extension that allows you obtain the alignment of a type (for example, starting with VS2002, MSVC has an __alignof() intrinsic). Those should be used when available.
As others have mentioned, it's implementation-dependent. Visual Studio 2005 uses 8 bytes as the default structure alignment. Internally, items are aligned by their size - a float has 4-byte alignment, a double uses 8, etc.
You can override the behavior with #pragma pack. GCC (and most compilers) have similar compiler options or pragmas.
It is possible to assume a structure alignment if you know more details about the compiler options that are in use. For example, #pragma pack(1) will force alignment on the byte level for some compilers.
Side note: I know the question was about alignment, but a side issue is padding. For embedded programming, binary data, and so forth -- In general, don't assume anything about structure alignment if possible. Rather use explicit padding if necessary in the structures. I've had cases where it was impossible to duplicate the exact alignment used in one compiler to a compiler on a different platform without adding padding elements. It had to do with the alignment of structures inside of structures, so adding padding elements fixed it.
If you want to find this out for a particular case in Windows, open up windbg:
Windbg.exe -z \path\to\somemodule.dll -y \path\to\symbols
Then, run:
dt somemodule!CSomeType
I don't think memory layout is guaranteed in any way by any C standard. It is very much vendor- and architecture-dependent. There might be ways to do it that work in 90% of cases, but they are not standard.
I would be very glad to be proven wrong, though =)
I agree mostly with Paul Betts, Ryan and Dan. Really, it's up to the developer: you can either keep the default alignment semantics that Robert described (Robert's explanation is just the default behaviour, not by any means enforced or required), or you can set up whatever alignment you want with /Zp[##].
What this means is that if you have a typedef with floats, long doubles, uchars, etc., with various assortments of arrays included, and then another type which has some of these oddly shaped members plus a single byte, and then another odd member, it will simply be aligned according to whatever preference the make/solution file defines.
As noted earlier, using windbg's dt command at runtime you can find out how the compiler laid out the structure in memory.
You can also use any PDB-reading tool, like dia2dump, to extract this info from PDBs statically.
Modified from Peeter Joot's Blog
C structure alignment is based on the biggest native type in the structure, at least generally (an exception is something like using a 64-bit integer on win32, where only 32-bit alignment is required).
If you have only chars and arrays of chars, once you add an int, that int will end up starting on a 4-byte boundary (with possible hidden padding before the int member). Additionally, if the structure isn't a multiple of sizeof(int), hidden padding will be added at the end. The same goes for short and 64-bit types.
Example:
struct blah1 {
    char x;
    char y[2];
};
sizeof(blah1) == 3
struct blah1plusShort {
    char x;
    char y[2];
    // <<< hidden one-byte pad inserted by the compiler here
    // <<< z will start on a 2-byte boundary (if the beginning of the struct is aligned).
    short z;
    char w;
    // <<< hidden one-byte tail pad inserted by the compiler.
    // <<< the total struct size is a multiple of the biggest element.
    // <<< This ensures alignment if used in an array.
};
sizeof(blah1plusShort) == 8
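Those hidden bytes are easy to confirm (my sketch; the asserted numbers assume the typical 2-byte short alignment described above, so they are ABI-specific, not portable guarantees):
#include <cstddef>

struct blah1plusShort
{
    char  x;
    char  y[2];
    short z;
    char  w;
};

// One hidden byte before z (so it lands on a 2-byte boundary) and one hidden tail byte.
static_assert(offsetof(blah1plusShort, z) == 4, "expected z at offset 4");
static_assert(sizeof(blah1plusShort) == 8, "expected 8 bytes total");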
I read this answer after 8 years and I feel that the accepted answer from @Robert is generally right, but mathematically wrong.
To ensure alignment requirements for structure members, the alignment of a structure must be at least as strict as the least common multiple of the alignments of its members. Consider an odd example where the alignment requirements of the members are 4 and 10; in that case the alignment of the structure is LCM(4, 10), which is 20, not 10. Of course, it is odd to see platforms with an alignment requirement that is not a power of 2, and thus for all practical cases the structure alignment is equal to the maximum alignment of its members.
The reason for this is that only if the address of the structure is a multiple of the LCM of its member alignments can the alignment of all the members be satisfied, with the padding between the members and at the end of the structure independent of the start address.
Update: As pointed out by @chqrlie in the comments, the C standard does not allow such odd alignment values. However, this answer still shows why structure alignment is the maximum of its member alignments: when all alignments are powers of 2, the maximum happens to be the least common multiple, and thus the members are always aligned relative to the common multiple address.