If I have a struct in C++, is there no way to safely read/write it to a file that is cross-platform/compiler compatible?
Because if I understand correctly, every compiler 'pads' differently based on the target platform.
No, that is not possible. The reason is the lack of standardization of C++ at the binary level.
Don Box writes (quoting from his book Essential COM, chapter "COM As A Better C++"):
C++ and Portability
Once the decision is made to distribute a C++ class as a DLL, one is faced with one of the fundamental weaknesses of C++, that is, lack of standardization at the binary level. Although the ISO/ANSI C++ Draft Working Paper attempts to codify which programs will compile and what the semantic effects of running them will be, it makes no attempt to standardize the binary runtime model of C++. The first time this problem will become evident is when a client tries to link against the FastString DLL's import library from a C++ development environment other than the one used to build the FastString DLL.
Struct padding is done differently by different compilers. Even with the same compiler, the packing alignment for structs can differ depending on which pragma pack is in effect.
Not only that: if you write two structs whose members are exactly the same, differing only in the order in which they are declared, the size of each struct can be (and often is) different.
For example, see this:
#include <iostream>
using namespace std;

struct A
{
    char c;
    char d;
    int i;
};

struct B
{
    char c;
    int i;
    char d;
};

int main() {
    cout << sizeof(A) << endl;
    cout << sizeof(B) << endl;
}
Compile it with gcc-4.3.4, and you get this output:
8
12
That is, sizes are different even though both structs have the same members!
The bottom line is that the standard doesn't talk about how padding should be done, and so the compilers are free to make any decision and you cannot assume all compilers make the same decision.
If you have the opportunity to design the struct yourself, it should be possible. The basic idea is to design it so that there is no need to insert pad bytes into it. The second trick is that you must handle differences in endianness.
I'll describe how to construct the struct using scalars, but you should be able to use nested structs, as long as you apply the same design to each included struct.
First, a basic fact in C and C++ is that the alignment of a type cannot exceed the size of the type. If it did, then it would not be possible to allocate memory using malloc(N*sizeof(the_type)).
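You can even check that fact at compile time; a minimal sketch, assuming a C++11 compiler (the specific types below are just examples):

#include <cstdint>

// alignof(T) never exceeds sizeof(T); if it did, arrays of T
// (and malloc(N * sizeof(T))) could not keep every element aligned.
static_assert(alignof(std::uint64_t) <= sizeof(std::uint64_t), "unexpected alignment");
static_assert(alignof(double) <= sizeof(double), "unexpected alignment");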
Lay out the struct, starting with the largest types:
struct
{
uint64_t alpha;
uint32_t beta;
uint32_t gamma;
uint8_t delta;
Next, pad out the struct manually, so that the total size ends up being a multiple of the largest type:
uint8_t pad8[3]; // Match uint32_t
uint32_t pad32; // Even number of uint32_t
};
The next step is to decide whether the struct should be stored in little- or big-endian format. The best way is to "swap" all the elements in place before writing or after reading the struct, if the storage format does not match the endianness of the host system.
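For example, here is a minimal sketch of that swap-in-place idea, assuming C++11; the struct tag (Record) and the byteswap helper are names made up for the illustration:

#include <cstdint>
#include <cstring>
#include <cstddef>
#include <utility>

// The record laid out as described above; no implicit padding is needed.
struct Record {
    std::uint64_t alpha;
    std::uint32_t beta;
    std::uint32_t gamma;
    std::uint8_t  delta;
    std::uint8_t  pad8[3];   // match uint32_t
    std::uint32_t pad32;     // even number of uint32_t
};

// Reverse the bytes of an unsigned integer value.
template <typename T>
T byteswap(T value) {
    unsigned char bytes[sizeof(T)];
    std::memcpy(bytes, &value, sizeof(T));
    for (std::size_t i = 0; i < sizeof(T) / 2; ++i)
        std::swap(bytes[i], bytes[sizeof(T) - 1 - i]);
    std::memcpy(&value, bytes, sizeof(T));
    return value;
}

// Call right before writing and right after reading, but only when the
// chosen storage endianness differs from the host endianness.
void swap_record_in_place(Record& r) {
    r.alpha = byteswap(r.alpha);
    r.beta  = byteswap(r.beta);
    r.gamma = byteswap(r.gamma);
    // single-byte members (delta and the pad bytes) need no swapping
}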
No, there's no safe way. In addition to padding, you have to deal with different byte ordering and different sizes of built-in types.
You need to define a file format, and convert your struct to and from that format. Serialization libraries (e.g. boost::serialization, or Google's Protocol Buffers) can help with this.
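As a rough sketch of what "define a file format and convert to it" can look like by hand (the Settings struct, its fields, and the little-endian on-disk order are assumptions made up for this example, not a recommendation of any particular format):

#include <cstdint>
#include <cstddef>
#include <vector>

struct Settings {
    std::uint32_t id;
    std::uint16_t flags;
    std::uint8_t  mode;
};

// Append an unsigned integer to the buffer one byte at a time, least
// significant byte first, so neither host padding nor host byte order
// ever reaches the file.
template <typename T>
void put_le(std::vector<unsigned char>& out, T value) {
    for (std::size_t i = 0; i < sizeof(T); ++i)
        out.push_back(static_cast<unsigned char>(value >> (8 * i)));
}

// The file layout is defined by this function, not by the compiler's
// in-memory layout of Settings.
std::vector<unsigned char> serialize(const Settings& s) {
    std::vector<unsigned char> buf;
    put_le(buf, s.id);
    put_le(buf, s.flags);
    put_le(buf, s.mode);
    return buf;
}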
Long story short, no. There is no platform-independent, Standard-conformant way to deal with padding.
Padding is called "alignment" in the Standard, and it begins discussing it in 3.9/5:
Object types have alignment requirements (3.9.1, 3.9.2). The alignment of a complete object type is an implementation-defined integer value representing a number of bytes; an object is allocated at an address that meets the alignment requirements of its object type.
But it goes on from there and winds off into many dark corners of the Standard. Alignment is "implementation-defined", meaning it can differ across compilers, or even across address models (i.e. 32-bit vs. 64-bit) under the same compiler.
Unless you have truly harsh performance requirements, you might consider storing your data to disc in a different format, like char strings. Many high-performance protocols send everything using strings when the natural format might be something else. For example, a low-latency exchange feed I recently worked on sends dates as strings formatted like this: "20110321" and times are sent similarly: "141055.200". Even though this exchange feed sends 5 million messages per second all day long, they still use strings for everything because that way they can avoid endian-ness and other issues.
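A minimal sketch of that text-based approach (the field widths and the function below are made up for illustration):

#include <cstdio>
#include <cstddef>

// Render a date as "YYYYMMDD" and a time as "HHMMSS.mmm"; fixed-width
// ASCII is identical on every platform, regardless of padding or byte order.
void format_timestamp(char* date_out, std::size_t date_size,
                      char* time_out, std::size_t time_size,
                      int year, int month, int day,
                      int hour, int minute, int second, int millis) {
    std::snprintf(date_out, date_size, "%04d%02d%02d", year, month, day);
    std::snprintf(time_out, time_size, "%02d%02d%02d.%03d",
                  hour, minute, second, millis);
}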
Related question
The C99 standard tells us:
There may be unnamed padding within a structure object, but not at its beginning.
and
There may be unnamed padding at the end of a structure or union.
I am assuming this applies to any of the C++ standards too, but I have not checked them.
Let's assume a C/C++ application (i.e. both languages are used in the application) running on an ARM Cortex-M would store some persistent data on a local medium (a serial NOR-flash chip for instance), and read it back after power cycling, possibly after an upgrade of the application itself in the future. The upgraded application may have been compiled with an upgraded compiler (we assume gcc).
Let's further assume that the developer is lazy (that's not me, of course), and directly streams some plain C or C++ structs to flash, instead of first serializing them as any paranoid experienced developer would do.
In fact, the developer in question is lazy, but not totally ignorant, since he has read the AAPCS (Procedure Call Standard for the Arm Architecture).
His rationale, besides laziness, is the following:
He does not want to pack the structs to avoid misalignment problems in the rest of the application.
The AAPCS specifies a fixed alignment for every single fundamental data type.
The only rational motivation for padding is to achieve proper alignment.
Therefore, he thinks, padding (and therefore member offsetof and total sizeof) is fully determined for any C or C++ struct by the AAPCS.
Therefore, he further reasons, there is no way my application would not be able to interpret some read back data that an earlier version of the same application would have written (assuming, of course, that the offset of the data in flash memory has not changed between writing and reading).
However, the developer has a conscience and he is a little worried:
The C standard does not mention any reason for padding. Achieving proper alignment may be the only rational reason for padding, but compilers are free to pad as much as they want, according to the standard.
How can he be sure that his compiler really follows the AAPCS?
Could his assumptions suddenly be broken by some apparently unrelated compiler flag that he would start using, or by a compiler upgrade?
My question is: how dangerously does that lazy developer live? In other words, how stable is padding in C/C++ structs under the assumptions above?
Conclusion
Two weeks after this question was asked, the only answer that has been received does not really answer the asked question. I have also asked the exact same question on an ARM community forum, but got no answer at all.
I however choose to accept 3246135 as the answer because:
I take the absence of a proper answer as very relevant information for this case. The correctness of solutions to software problems should be obvious. The assumptions made in my question may be true, but I cannot easily prove it. Additionally, if the assumptions are incorrect, the consequences, in the general case, could be catastrophic.
Compared to the risk, the burden on the developer when using the strategy exposed in the answer seems very reasonable. Assuming a constant endianness (which is quite easy to enforce), it is a hundred percent safe (any deviation will generate an error at compile time) and it is much lighter than full-blown serialization. Basically, the strategy exposed in the answer is a mandatory minimum price to pay in order to make one's C/C++ structs persistent independently of any ABI.
If you are a developer asking yourself the question above, please do not be lazy; instead, use the strategy exposed in the accepted answer, or an alternative strategy that guarantees constant padding across software releases.
You can never be 100% sure that the compiler won't introduce padding in some capacity. However, you can mitigate the risks by following a few rules:
Use fixed-size types for all members, e.g. uint32_t, int64_t, etc.
Start each member at an offset that is a multiple of the member's size (or if the member is an array / struct, the size of the largest member).
Avoid bitfields
Note that doing this will likely introduce some explicit padding fields to satisfy alignment.
For example:
struct orig {
int a;
char b;
int c[10];
short d;
char e[15];
long f;
int g;
};
The combined size of this struct's members, assuming sizeof(short) == 2, sizeof(int) == 4, and sizeof(long) == 8, would be 74 bytes. If you take into account likely padding:
struct orig_padded {
int a;
char b;
char pad1[3];
int c[10];
short d;
char e[15];
char pad2[7];
long f;
int g;
char pad3[4];
};
You have a struct size of 88.
With some rearranging we can eliminate the padding between the fields:
struct reordered {
int64_t f;
int32_t a;
int32_t c[10];
int32_t g;
int16_t d;
char b;
char e[15];
};
By ordering the fields in descending order of size, we basically remove padding between the fields and only leave potential padding at the end: the members now occupy 74 contiguous bytes, and on a typical 64-bit ABI the compiler rounds the total size up to 80 so that it is a multiple of int64_t's 8-byte alignment. Note also the use of fixed sizes to avoid some surprises. Then as a safeguard, we add:
static_assert(sizeof(struct reordered) == 80);
So if the compiled size of the struct ever changes, you'll know at compile time.
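If you want an even stronger safeguard, you can also pin down each member's offset. A sketch, assuming the 64-bit layout described above (the expected offsets are simply the cumulative member sizes, since there is no internal padding):

#include <cstddef>   // offsetof

static_assert(offsetof(struct reordered, f) == 0);
static_assert(offsetof(struct reordered, a) == 8);
static_assert(offsetof(struct reordered, c) == 12);
static_assert(offsetof(struct reordered, g) == 52);
static_assert(offsetof(struct reordered, d) == 56);
static_assert(offsetof(struct reordered, b) == 58);
static_assert(offsetof(struct reordered, e) == 59);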
For more details, take a look at The Lost Art of Structure Packing.
Related question
I realize that in general the C and C++ standards give compiler writers a lot of latitude. But in particular they guarantee that POD types like C struct members have to be laid out in memory in the same order that they're listed in the struct's definition, and most compilers provide extensions letting you fix the alignment of members. So if you had a header that defined a struct and manually specified the alignment of its members, then compiled two apps with different compilers using the header, shouldn't one app be able to write an instance of the struct into shared memory and the other app be able to read it without errors?
I am assuming though that the size of the types contained is consistent across two compilers on the same architecture (it has to be the same platform already since we're talking about shared memory). I realize that this is not always true for some types (e.g. long vs. long long in GCC and MSVC 64-bit) but nowadays there are uint16_t, uint32_t, etc. types, and float and double are specified by IEEE standards.
As long as you can guarantee the exact same memory layout, including offsets, and the data types have the same sizes between the two compilers, then yes, this is fine, because at that point the struct is identical with respect to data access.
Yes, sure. I've done this many times. The problems and solutions are the same whether mixed code is compiled and linked together, or when transmitting struct-formatted data between machines.
In the bad old days, this frequently occurred when integrating MS C and almost anything else: Borland Turbo C, DEC VAX C, Greenhills C.
The easy part is getting the number of bytes for various data types to agree. For example, a short on a 32-bit compiler on one side being the same size as an int on a 16-bit compiler on the other end. Since common source code to declare structures is usually a good thing, a number of to-the-point declarations are helpful:
typedef signed long s32;
typedef signed short s16;
typedef signed char s8;
typedef unsigned long u32;
typedef unsigned short u16;
typedef unsigned char u8;
...
Microsoft C is the most annoying. Its default is to pad members to 16-bit alignment, and maybe more with 64-bit code. Other compilers on x86 don't pad members.
struct {
int count;
char type;
char code;
char data [100];
} variable;
It might seem like the offset of code should be the next byte after type, but there might be a padding byte inserted between. The fix is usually
#ifdef _MSC_VER // if it's any Microsoft compiler
#pragma pack(1) // byte align structure members--that is, no padding
#endif
There is also a compiler command line option to do the same.
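A slightly safer variant of the same fix, if your compilers support it (GCC, Clang and MSVC all accept the push/pop form nowadays), limits the packing to just the structs that need it instead of leaking it into every header included afterwards; the struct name wire_message here is just for the example:

#pragma pack(push, 1)   // byte-align the members of the structs below
struct wire_message {
    int  count;
    char type;
    char code;
    char data[100];
};
#pragma pack(pop)       // restore the previous packing for everything else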
In addition to the data type sizes, the way memory is laid out matters if you need a struct from library 1, compiled by compiler 1, to be used in library 2, compiled by compiler 2.
It is indeed possible, you just have to make sure that all compilers involved generate the same data structure from the same code. One way to test this is to write a sample program that creates a struct and writes it to a binary file. Open the resulting files in a hex editor and verify that they are the same. Alternatively, you can cast the struct to an array of uint8_t and dump the individual bytes to the screen.
One way to make sure that the data sizes are the same is to use data types like int16_t (from stdint.h) instead of a plain old int which may change sizes between compilers (although this is rare on two compilers running on the same platform).
It's not as difficult as it sounds. There are many pre-compiled libraries out there that can be used with multiple compilers. The key thing is to build a test program that will let you verify that both compilers are treating the structure equally.
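A minimal sketch of such a test program (the sample struct and its values are placeholders); compile it with each compiler and compare the output byte for byte:

#include <cstdio>
#include <cstdint>
#include <cstddef>

struct sample {
    std::uint32_t a;
    std::uint8_t  b;
    std::uint16_t c;
};

int main() {
    sample s = {0x11223344u, 0x55u, 0x6677u};

    // Dump the raw bytes of the object, padding included.
    const unsigned char* p = reinterpret_cast<const unsigned char*>(&s);
    for (std::size_t i = 0; i < sizeof s; ++i)
        std::printf("%02x ", p[i]);
    std::printf("\nsizeof(sample) = %zu\n", sizeof(sample));
    return 0;
}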
Refer to your compiler manuals.
most compilers provide extensions letting you fix the alignment of members
Are you restricting yourself to those compilers and a mutually compatible #pragma align style? If so, the safety is dictated by their specification.
In the interest of portability, you are possibly better off ditching #pragma align and relying on your ABI, which may provide a "reasonable" standard for compliance of all compilers of your platform.
As the C and C++ standards allow any deterministic struct layout methodology, they're essentially irrelevant.