Memory-layout compatibility between C and C++

I'm building a C++ library which uses many functions and structs defined in a C library. To avoid porting any code to C++, I add the typical conditional preprocessing to the C header files. For example,
//my_struct.h of the C library
#include <complex.h>
#ifdef __cplusplus
extern "C" {
#endif
typedef struct {
    double d1,d2,d3;
#ifdef __cplusplus
    std::complex<double> z1,z2,z3;
    std::complex<double> *pz;
#else
    double complex z1,z2,z3;
    double complex *pz;
#endif
    int i,j,k;
} my_struct;
//Memory allocating + initialization function
my_struct *
alloc_my_struct(double);
#ifdef __cplusplus
}
#endif
The implementation of alloc_my_struct() is compiled in C. It simply allocates memory via malloc() and initializes members of my_struct.
Now when I do the following in my C++ code,
#include "my_struct.h"
...
my_struct *const ms = alloc_my_struct(2.);
I notice that *ms always has the expected memory layout, i.e., any access such as ms->z1 evaluates to the expected value. I find this really cool considering that (correct me if I'm wrong) the memory layout of my_struct during allocation is decided by the C compiler (in my case gcc -std=c11), while during access it is decided by the C++ compiler (in my case g++ -std=c++11).
My question is : Is this compatibility standardized? If not, is there any way around it?
NOTE: I don't have enough knowledge to argue about alignment, padding, and other implementation-defined specifics. But it is noteworthy that the GNU Scientific Library, which is compiled as C, takes the same approach (although its structs do not involve C99 complex numbers) for use in C++. On the other hand, I've done enough research to conclude that C++11 guarantees layout compatibility between C99 double complex and std::complex<double>.

C and C++ do share memory-layout rules: in both languages, structs with the same members are placed in memory the same way on a given platform. Note, however, that extern "C" {} affects only linkage (name mangling and calling conventions), not layout; what keeps the layouts compatible is that the struct is a plain, standard-layout type.
But what your code is doing relies on C++'s std::complex and C99's complex being the same.
On that point, see:
https://gcc.gnu.org/ml/libstdc++/2007-02/msg00161.html
C Complex Numbers in C++?

Your program has undefined behaviour: your definitions of my_struct are not lexically identical.
You're gambling that alignment, padding and various other things will not change between the two compilers, which is bad enough… but since this is UB, anything could happen even when those assumptions hold!

It may not always be identical!
In this case it looks like sizeof(std::complex<double>) is identical to sizeof(double complex).
Also pay attention to the fact that compilers may (or may not) add padding to structs to align them to a specific value, depending on the optimization configuration. That padding may not always be identical, resulting in different structure sizes between C and C++.
Links to related posts:
C/C++ Struct memory layout equivalency
I would add compiler-specific attributes to "pack" the fields,
thereby guaranteeing all the ints are adjacent and compact. This is
less about C vs. C++ and more about the fact that you are likely using
two "different" compilers when compiling in the two languages, even if
those compilers come from a single vendor.
Adding a constructor will not change the layout (though it will make
the class non-POD), but adding access specifiers like private between
the two fields may change the layout (in practice, not only in
theory).
C struct memory layout?
In C, the compiler is allowed to dictate some alignment for every
primitive type. Typically the alignment is the size of the type. But
it's entirely implementation-specific.
Padding bytes are introduced so every object is properly aligned.
Reordering is not allowed.
Possibly every remotely modern compiler implements #pragma pack which
allows control over padding and leaves it to the programmer to comply
with the ABI. (It is strictly nonstandard, though.)
From C99 §6.7.2.1:
12 Each non-bit-field member of a structure or union object is aligned
in an implementation-defined manner appropriate to its type.
13 Within a structure object, the non-bit-field members and the units
in which bit-fields reside have addresses that increase in the order
in which they are declared. A pointer to a structure object, suitably
converted, points to its initial member (or if that member is a
bit-field, then to the unit in which it resides), and vice versa.
There may be unnamed padding within a structure object, but not at its
beginning.

In general, C and C++ have compatible struct layouts, because the layout is dictated by the platform's ABI rules, not just by the language, and (for most implementations) C and C++ follow the same ABI rules for type sizes, data layout, calling conventions etc.
C++11 even defined a new term, standard-layout, which means the type will have a compatible layout to a similar type in C. That means it can't use virtual functions, private data members, multiple inheritance (and a few other things). A C++ standard-layout type should have the same layout as an equivalent C type.
As noted in other answers, your specific code is not safe in general because std::complex<double> and complex double are not equivalent types, and there is no guarantee that they are layout-compatible. However GCC's C++ standard library ensures it will work because std::complex<double> and std::complex<float> are implemented in terms of the underlying C types. Instead of containing two double, GCC's std::complex<double> has a single member of type __complex__ double, which the compiler implements identically to the equivalent C type.
GCC does this specifically to support code like yours, because it's a reasonable thing to want to do.
So combining GCC's special efforts for std::complex with the standard-layout rules and the platform ABI, means that your code will work with that implementation.
This is not necessarily portable to other C++ implementations.
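If you want the build to break loudly when these assumptions stop holding, the reasoning above can be written down as compile-time checks on the C++ side. This is a minimal sketch, assuming the my_struct header from the question; the asserts only verify the standard-layout and size reasoning, they don't add any new guarantee:
#include <complex>
#include <type_traits>
#include "my_struct.h"

// If either assertion fires, the C and C++ views of the data no longer agree.
static_assert(std::is_standard_layout<my_struct>::value,
              "my_struct must be standard-layout to match its C definition");
static_assert(sizeof(std::complex<double>) == 2 * sizeof(double),
              "std::complex<double> is expected to be exactly two doubles");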

Also note that by malloc()ing a struct containing a C++ object (std::complex<double>) you skip its constructor, and this is also UB. Even if you expect the constructor to be empty or to merely zero the value, so that skipping it seems harmless, you can't complain if this breaks. So your program works by pure luck.
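A hedged way around that particular objection is to let C++ create the object and have the C side only fill it in. This sketch assumes alloc_my_struct() can be split into allocation and initialization; init_my_struct() is a hypothetical C-side initializer, not part of the original library:
#include "my_struct.h"

// Hypothetical C-side initializer, split out of alloc_my_struct():
extern "C" void init_my_struct(my_struct *ms, double d);

my_struct *make_my_struct(double d) {
    my_struct *ms = new my_struct(); // constructors run under C++'s rules
    init_my_struct(ms, d);           // C code fills in the members
    return ms;                       // release with delete, not free()
}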

Related

When are type-punned pointers safe in practice?

A colleague of mine is working on C++ code that works with binary data arrays a lot. In certain places, he has code like
char *bytes = ...
T *p = (T*) bytes;
T v = p[i]; // UB
Here, T can sometimes be short or int (assume 16 and 32 bits respectively).
Now, unlike my colleague, I belong to the "no UB if at all possible" camp, while he is more along the lines of "if it works, it's OK". I am having a hard time trying to convince him otherwise.
Given that:
bytes really come from somewhere outside this compilation unit, being read from some binary file.
It's safe to assume that the array really contains integers in the native endianness.
In practice, given mainstream C++ compilers like MSVC 2017 and gcc 4.8, and Intel x64 hardware, is such a thing really safe? I know it wouldn't be if T was, say, float (got bitten by it in the past).
char* can alias other types without breaking the strict aliasing rule.
Your code would be UB only if what p + i points at wasn't originally a T.
char* bytes = (char*) floats;
int *p = (int*) bytes;
int v = p[i]; // UB: the underlying objects are floats
but
char* bytes = (char*) floats;
float *p = (float*) bytes;
float v = p[i]; // OK: the floats are accessed as their real type
If the origin of bytes is "unknown", the compiler cannot exploit the UB for optimization and has to assume we are in the valid case, generating code accordingly.
But how do you guarantee it is unknown? Even outside the TU, something like Link-Time Optimization might make the hidden information available.
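For completeness: the usual way to stay out of this debate entirely is memcpy, which mainstream compilers optimize into a plain load for a small fixed size. A minimal sketch, assuming the bytes really hold native-endian 32-bit integers:
#include <cstring>
#include <cstdint>

// No pointer of the wrong type is ever dereferenced, so no aliasing question arises.
int32_t read_int32(const char *bytes, std::size_t i) {
    int32_t v;
    std::memcpy(&v, bytes + i * sizeof(v), sizeof(v));
    return v;
}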
Type-punned pointers are safe if one uses a construct which is recognized by the particular compiler one is using [i.e. any compiler that is configured to support quality semantics, provided one uses straightforward constructs; neither gcc nor clang qualifies when optimizations are enabled, however, unless one uses -fno-strict-aliasing]. The authors of C89 were certainly aware that many applications required the use of various type-punning constructs beyond those mandated by the Standard, but thought the question of which constructs to recognize was best left as a quality-of-implementation issue. Given something like:
struct s1 { int objectClass; };
struct s2 { int objectClass; double x,y; };
struct s3 { int objectClass; char someData[32]; };
int getObjectClass(void *p) { return ((struct s1*)p)->objectClass; }
I think the authors of the Standard would have intended that the function be usable to read field objectClass of any of those structures [that is pretty much the whole purpose of the Common Initial Sequence rule], but there would be many ways by which compilers might achieve that. Some might recognize function calls as barriers to type-based aliasing analysis, while others might treat pointer casts in such a fashion. Most programs that use type punning would do several things that compilers might interpret as indications to be cautious with optimizations, so there was no particular need for a compiler to recognize any particular one of them. Further, since the authors of the Standard made no effort to forbid implementations that are "conforming" but of such low quality as to be useless, there was no need to forbid compilers that somehow managed not to see any of the indications that storage might be used in interesting ways.
Unfortunately, for whatever reason, there hasn't been any effort by compiler vendors to find easy ways of recognizing common type-punning situations without needlessly impairing optimizations. While handling most cases would be fairly easy if compiler writers hadn't adopted designs that filter out the clearest and most useful evidence before applying optimization logic, both the designs of gcc and clang--and the mentalities of their maintainers--have evolved to oppose such a concept.
As far as I'm concerned, there is no reason why any "quality" implementation should have any trouble recognizing type punning in situations where all operations upon a byte of storage using a pointer converted to a pointer-to-PODS, or anything derived from that pointer, occur before the first time any of the following occurs:
That byte is accessed in conflicting fashion via means not derived from that pointer.
A pointer or reference is formed which will be used sometime in future to access that byte in conflicting fashion, or derive another that will.
Execution enters a function which will do one of the above before it exits.
Execution reaches the start of a bona fide loop [not, e.g. a do{...}while(0);] which will do one of the above before it exits.
A decently-designed compiler should have no problem recognizing those cases while still performing the vast majority of useful optimizations. Further, recognizing aliasing in such cases would be simpler and easier than trying to recognize it only in the cases mandated by the Standard. For those reasons, compilers that can't handle at least the above cases should be viewed as falling in the category of implementations that are of such low quality that the authors of the Standard didn't particularly want to allow, but saw no reason to forbid. Unfortunately, neither gcc nor clang offer any options to behave reasonably except by requiring that they disable type-based aliasing altogether. Unfortunately, the authors of gcc and clang would rather deride as "broken" any code needing features beyond what the Standard requires, than attempt a useful blend of optimization and semantics.
Incidentally, neither gcc nor clang should be relied upon to properly handle any situation in which storage that has been used as one type is later used as another, even when the Standard would require them to do so. Given something like:
union { struct s1 v1; struct s2 v2; } unionArr[100];
void test(int i)
{
    int test = unionArr[i].v2.objectClass;
    unionArr[i].v1.objectClass = test;
}
Both clang and gcc will treat it as a no-op even if it is executed between code which writes unionArr[i].v2.objectClass and code which happens to read member v1.objectClass of the same union object, thus causing them to ignore the possibility that the write to unionArr[i].v2.objectClass might affect v1.objectClass.

Getting bool from C to C++ and back

When designing data structures which are to be passed through a C API which connects C and C++ code, is it safe to use bool? That is, if I have a struct like this:
struct foo {
    int bar;
    bool baz;
};
is it guaranteed that the size and meaning of baz as well as its position within foo are interpreted in the same way by C (where it's a _Bool) and by C++?
We are considering doing this on a single platform (GCC for Debian 8 on a Beaglebone) with both C and C++ code compiled by the same GCC version (as C99 and C++11, respectively). General comments are welcome as well, though.
C's and C++'s bool types are different, but, as long as you stick to the same compiler (in your case, gcc), it should be safe, as this is a reasonably common scenario.
In C++, bool has always been a keyword. C didn't have one until C99, which introduced the keyword _Bool (because people used to typedef or #define bool as int or char in C89 code, so directly adding bool as a keyword would have broken existing code); there is the header stdbool.h which should, in C, #define bool to _Bool. Take a look at yours; GCC's implementation looks like this:
/*
* ISO C Standard: 7.16 Boolean type and values <stdbool.h>
*/
#ifndef _STDBOOL_H
#define _STDBOOL_H
#ifndef __cplusplus
#define bool _Bool
#define true 1
#define false 0
#else /* __cplusplus */
/* Supporting <stdbool.h> in C++ is a GCC extension. */
#define _Bool bool
#define bool bool
#define false false
#define true true
#endif /* __cplusplus */
/* Signal that all the definitions are present. */
#define __bool_true_false_are_defined 1
#endif /* stdbool.h */
Which leads us to believe that, at least in GCC, the two types are compatible (in both size and alignment, so that the struct layout will remain the same).
Also worth noting, the Itanium ABI, which is used by GCC and most other compilers (except Visual Studio, as noted by Matthieu M. in the comments below) on many platforms, specifies that _Bool and bool follow the same rules. This is a strong guarantee. A third hint we can get is from Objective-C's reference manual, which says that for Objective-C and Objective-C++, which respect C's and C++'s conventions respectively, bool and _Bool are equivalent; so I'd pretty much say that, though the standards do not guarantee this, you can assume that yes, they are equivalent.
Edit:
If the standard does not guarantee that _Bool and bool will be compatible (in size, alignment, and padding), what does?
When we say those things are "architecture dependent", we actually mean that they are ABI dependent. Every compiler implements one or more ABIs, and two compilers (or versions of the same compiler) are said to be compatible if they implement the same ABI. Since it is expected to call C code from C++, as this is ubiquitously common, all C++ ABIs I've ever heard of extend the local C ABI.
Since the OP asked about the Beaglebone, we must check the ARM ABI, most specifically the GNU ARM EABI used by Debian. As noted by Justin Time in the comments, the ARM ABI indeed declares C++'s ABI to extend C's, and that _Bool and bool are compatible, both being of size 1, alignment 1, representing a machine's unsigned byte. So the answer to the question: on the Beaglebone, yes, _Bool and bool are compatible.
The language standards say nothing about this (I'm happy to be proven wrong about this, I couldn't find anything), so it can't be safe if we just limit ourselves to language standards. But if you're picky about which architectures you support you can find their ABI documentation to see if it will be safe.
For example, the amd64 ABI document has a footnote for the _Bool type that says:
This type is called bool in C++.
Which I can't interpret in any other way than that it will be compatible.
Also, just musing about this. Of course it will work. Compilers generate code that follows both an ABI and the behavior of the dominant compiler for the platform (where that behavior is outside the ABI). A big thing about C++ is that it can link to libraries written in C, and a thing about libraries is that they can be compiled by any compiler on the same platform (this is why we have ABI documents in the first place). Can there be some minor incompatibility at some point? Sure, but that's something you'd better solve with a bug report to the compiler maker rather than work around in your code. I doubt bool would be something compiler makers would screw up.
The only thing the C standard says on _Bool :
An object declared as type _Bool is large enough to store the values 0
and 1.
Which would mean that _Bool is at least as large as a char (so true/false are guaranteed to be storable).
The exact size is implementation-defined, as Michael said in the comments. You're better off performing some checks of their sizes with the relevant compiler; if those match and you stick with that same compiler, I'd consider it safe.
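If you go the testing route, the check can live in the shared header itself and run at compile time in both languages. A minimal sketch, assuming C11 and C++11:
#include <stdbool.h>  /* in C maps bool to _Bool; a no-op for C++ with GCC's extension */

#ifdef __cplusplus
static_assert(sizeof(bool) == 1, "bool size differs from the C side");
#else
_Static_assert(sizeof(bool) == 1, "bool size differs from the C++ side");
#endif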
As Gill Bates says above, you do have a problem in that sizeof(bool) is compiler-dependent in C. There's no guarantee that the same compiler will treat it the same in C and C++, or that the sizes would be the same on different architectures. The compiler would even be within its rights (according to the C standard) to make _Bool a different size from C++'s bool.
I've personally experienced this when working with the TI OMAP-L138 processor which combines a 32-bit ARM core and a 32-bit DSP core on the same device, with some shared memory accessible by both. The ARM core represented bool as an int (32-bit here), whereas the DSP represented bool as char (8-bit). To solve this, I defined my own type bool32_t for use with the shared memory interface, knowing that a 32-bit value would work for both sides. Of course I could have defined it as an 8-bit value, but I considered it less likely to affect performance if I kept it as the native integer size.
If you do the same as I did then you can 100% guarantee binary compatibility between your C and C++ code. If you don't then you can't. It's really as simple as that. With the same compiler, your odds are very good - but there is no guarantee, and changing compiler options can easily screw you over in unexpected ways.
On a related subject, your int should also be an int16_t, int32_t or another integer type of defined size. (You should include stdint.h for these type definitions.) On the same platform it is highly unlikely that this will differ between C and C++, but using int at all is a code smell for firmware. The exception is in places where you genuinely don't care how long an int is, but it should be clear that interfaces and structures must have it well-defined. It is too easy for programmers to make assumptions (which are frequently incorrect!) about its size, and the results are generally catastrophic when it goes wrong; worse, the code often doesn't go wrong in testing, where you could easily find and fix the problem.
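Putting those two recommendations together, a shared interface header might look like the following sketch; bool32_t is my own name for the workaround type, not anything standard:
#include <stdint.h>

typedef int32_t bool32_t; /* 0 = false, nonzero = true; 32 bits on both sides */

struct shared_foo {
    int32_t  bar;
    bool32_t baz;
};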

Size of C++ types with different compilers

I would like to avoid to fall into the XY trap so here is the original problem:
We have a small program which creates a shared memory segment on the PC. This program creates it by reading its structure from its header file (a bunch of individual and nested struct definitions). Basically just a .h and a .cpp file. This program will be compiled by g++.
We would like to create another program, a shared memory viewer, which displays the layout of this memory in a tree view. For that, we have to parse the previously mentioned header file and compute the offsets to read/manipulate the content of specific parts of the shared memory. We do not want to write a parser if it is not necessary, especially because the header file contains additional declarations and definitions too. This program will be compiled by the same version of g++ as the previous program.
Originally, we wanted to use gccxml in the second program to parse the header file, but it is based on gcc 4.2 and cannot parse included header files which contain C++11 code. Another idea is to use libclang to get the structure of that header file. libclang contains size information too, but I do not know whether the size of the types and the padding/alignment are the same for g++ and clang.
My question is: can you assume that the size of the C++ types and the padding/alignment of the structs will be the same when you compile the code with clang and g++? The environment (PC, OS) is the same. I am afraid we cannot, because the C++ standard does not specify the exact sizes of the types.
Do you know another solution to the original problem?
Short answer: Since clang has as a goal to "be compatible with gcc" (for both C and C++), I would say that you can expect it to generate same offsets and sizes for the same code.
Long answer:
Assuming you are using only basic types (int, short, double, char and pointers to those types), and we're restricting to gcc and clang (and their C++ versions), keeping to the same OS and same bitness (32- or 64-bit on "both sides"), then subject to actual bugs in the compiler, it should have the same structure layout.
Of course, that is a long list of restrictions, and of course the "subject to actual bugs" is a never-ending concern in these cases.
You can make your case a bit easier if you use defined-size types, such as uint32_t rather than int. Conversely, if you put a class member with virtual functions in the structure, you'd be in serious trouble; but that doesn't work very well with shared memory anyway, as it's not guaranteed to be at the same place in different applications.
Be wary of STL functionality - you may not get the same C++ library for the two compilers (you may, or may not, depending on how you installed it).
I would double-check by adding some code to print the offsets and sizes of important members (and run it with both compilers, of course). Don't forget to do this for members deep inside some struct, since it could well be that the overall size of a struct is identical while the contents are at different offsets.
(As others have said, I have seen projects where some code is generated with a script that prints the offsets of the struct members, and this is used as input for other programs in the project)
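A sketch of such a check program, using a hypothetical struct Sample in place of the real shared-memory types; compile it with both g++ and clang++ and diff the output:
#include <cstddef>
#include <cstdio>

struct Inner  { short s; double d; };
struct Sample { int a; char c; Inner inner; };

int main() {
    std::printf("sizeof(Sample)=%zu a=%zu c=%zu inner=%zu inner.d=%zu\n",
                sizeof(Sample),
                offsetof(Sample, a),
                offsetof(Sample, c),
                offsetof(Sample, inner),
                offsetof(Sample, inner) + offsetof(Inner, d)); // nested member too
    return 0;
}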
Actually, in this particular case, you should be fine.
The memory layout of data structures is part of the ABI (Application Binary Interface), and gcc and clang both follow the Itanium ABI on x86 (and x86_64). Therefore, barring bugs, and provided they both compile for x86 or x86_64, they should end up with binary-compatible types.
In the general case, you would typically cheat:
Use a packed data structure: struct X { ... } __attribute__((packed)) __attribute__((aligned(8))); this way you completely control the structure's memory layout
As mentioned by Alf, have one compiler spew the offset of each member and use that to feed the generation of structures for the second compiler
Others?
Sizes of data types vary from platform to platform. Instead of hardcoding them, use the sizeof operator to find out the appropriate size on the target platform, for example,
sizeof(int)
sizeof(char)
sizeof(double)
etc.
If you use fixed width integer types (http://en.cppreference.com/w/cpp/types/integer) in a C-style struct and arrange members in decreasing order of size (i.e. largest members first), it should be pretty safe.
I think I understand your issue. This is what Chrome does:
COMPILE_ASSERT(sizeof(double) == 8, Double_size_not_8);
It assumes the sizes will match but checks just to make sure.
COMPILE_ASSERT is a macro. You can find the definition here, but the short version is it's just what it says: an assert that happens at compile time.
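Since C++11 the same check needs no macro; static_assert is part of the language:
static_assert(sizeof(double) == 8, "Double_size_not_8");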
If the sizes do not match, one way to deal with it is to define your header in bytes only. Instead of, for example,
struct SomeBinaryFileHeader {
    int version;
    int width;
    int height;
};
You might do this
struct SomeBinaryFileHeaderReadWriteVersion {
    uint8_t version_0;
    uint8_t version_1;
    uint8_t version_2;
    uint8_t version_3;
    uint8_t width_0;
    uint8_t width_1;
    uint8_t width_2;
    uint8_t width_3;
    uint8_t height_0;
    uint8_t height_1;
    uint8_t height_2;
    uint8_t height_3;
};
Etc., and then convert from one to the other, which will even work across endianness differences.
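A sketch of the conversion in one direction, assuming the file stores each field least-significant byte first; reassembling with shifts keeps the result independent of the host's endianness:
#include <stdint.h>

/* Rebuild the 32-bit width from the byte-wise header above. */
uint32_t read_width(const struct SomeBinaryFileHeaderReadWriteVersion *h) {
    return (uint32_t)h->width_0
         | ((uint32_t)h->width_1 << 8)
         | ((uint32_t)h->width_2 << 16)
         | ((uint32_t)h->width_3 << 24);
}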

struct member alignment - is it possible to assume no padding

Imagine a struct made up of 32-bit, 16-bit, and 8-bit member values, where the ordering of the members is such that each member is on its natural boundary.
struct Foo
{
    uint32_t a;
    uint16_t b;
    uint8_t c;
    uint8_t d;
    uint32_t e;
};
Member alignment and padding rules are documented for Visual C++. On VC++, sizeof(Foo) for the above struct is predictably 12.
Now, I'm pretty sure the rule is that no assumption should be made about padding and alignment, but in practice, do other compilers on other operating systems make similar guarantees?
If not, is there an equivalent of "#pragma pack(1)" on GCC?
In practice, on any system where the uintXX_t types exist, you will get the desired alignment with no padding. Don't throw in ugly gcc-isms to try to guarantee it.
Edit: To elaborate on why it may be harmful to use attribute packed or aligned, it may cause the whole struct to be misaligned when used as a member of a larger struct or on the stack. This will definitely hurt performance and, on non-x86 machines, will generate much larger code. It also means it's invalid to take a pointer to any member of the struct, since code that accesses the value through a pointer will not be aware that it could be misaligned and thus could fault.
As for why it's unnecessary, keep in mind that attribute is specific to gcc and gcc-workalike compilers. The C standard does not leave alignment undefined or unspecified. It's implementation-defined which means the implementation is required to further specify and document how it behaves. gcc's behavior is, and always has been, to align each struct member on the next boundary of its natural alignment (the same alignment it would have when used outside of a struct, which is necessarily a number that evenly divides the size of the type). Since attribute is a gcc feature, if you use it you're already assuming a gcc-like compiler, but then by assumption you have the alignment you want already.
In general you are correct that it's not a safe assumption, although you will often get the packing you expect on many systems. You may want to use the packed attribute on your types when you use gcc.
E.g.
struct __attribute__((packed)) Blah { /* ... */ };
On systems that actually offer those types, it is highly likely to work. On, say, a 36-bit system those types would not be available in the first place.
GCC provides an attribute, __attribute__((packed)), with a similar effect.
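Whichever mechanism you pick, it's worth asserting the assumption so that a compiler which ignores it fails at compile time rather than silently at run time. A minimal C++11 sketch using the Foo from the question:
#include <cstdint>

struct __attribute__((packed)) Foo {
    std::uint32_t a;
    std::uint16_t b;
    std::uint8_t  c;
    std::uint8_t  d;
    std::uint32_t e;
};

// The natural ordering already gives 12 here; the assert catches surprises.
static_assert(sizeof(Foo) == 12, "unexpected padding in Foo");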

How to limit the impact of implementation-dependent language features in C++?

The following is an excerpt from Bjarne Stroustrup's book, The C++ Programming Language:
Section 4.6:
Some of the aspects of C++’s fundamental types, such as the size of an int, are implementation-defined (§C.2). I point out these dependencies and often recommend avoiding them or taking steps to minimize their impact. Why should you bother? People who program on a variety of systems or use a variety of compilers care a lot because if they don’t, they are forced to waste time finding and fixing obscure bugs. People who claim they don’t care about portability usually do so because they use only a single system and feel they can afford the attitude that ‘‘the language is what my compiler implements.’’ This is a narrow and shortsighted view. If your program is a success, it is likely to be ported, so someone will have to find and fix problems related to implementation-dependent features. In addition, programs often need to be compiled with other compilers for the same system, and even a future release of your favorite compiler may do some things differently from the current one. It is far easier to know and limit the impact of implementation dependencies when a program is written than to try to untangle the mess afterwards.
It is relatively easy to limit the impact of implementation-dependent language features.
My question is: How to limit the impact of implementation-dependent language features? Please mention implementation-dependent language features then show how to limit their impact.
Few ideas:
Unfortunately you will have to use macros to avoid some platform-specific or compiler-specific issues. You can look at the headers of the Boost libraries to see that it can quite easily get cumbersome, for example look at the files:
boost/config/compiler/gcc.hpp
boost/config/compiler/intel.hpp
boost/config/platform/linux.hpp
and so on
The integer types tend to be messy among different platforms; you will have to define your own typedefs or use something like Boost's cstdint.hpp
If you decide to use any library, then do a check that the library is supported on the given platform
Use the libraries with good support and clearly documented platform support (for example Boost)
You can abstract yourself from some C++ implementation specific issues by relying heavily on libraries like Qt, which provide an "alternative" in sense of types and algorithms. They also attempt to make the coding in C++ more portable. Does it work? I'm not sure.
Not everything can be done with macros. Your build system will have to be able to detect the platform and the presence of certain libraries. Many would suggest autotools for project configuration; I, on the other hand, recommend CMake (rather nice language, no more M4)
endianness and alignment might be an issue if you do some low-level meddling (i.e. reinterpret_cast and the like)
throw in a lot of warning flags for the compiler; for gcc I would recommend at least -Wall -Wextra. But there is much more; see the documentation of the compiler or this question.
you have to watch out for everything that is implementation-defined or implementation-dependent. If you want the truth, the whole truth, and nothing but the truth, go to the ISO standard.
Well, the variable sizes issue mentioned above is fairly well known, with the common workaround of providing typedef'd versions of the basic types that have well-defined sizes (normally advertised in the typedef name). This is done using preprocessor macros to give different code visibility on different platforms. E.g.:
#ifdef __WIN32__
typedef int int32;
typedef char char8;
//etc
#endif
#ifdef __MACOSX__
//different typedefs to produce same results
#endif
Other issues are normally solved in the same way too (i.e. using preprocessor tokens to perform conditional compilation)
The most obvious implementation dependency is size of integer types. There are many ways to handle this. The most obvious way is to use typedefs to create ints of the various sizes:
typedef signed short int16_t;
typedef unsigned short uint16_t;
The trick here is to pick a convention and stick to it. Which convention is the hard part: INT16, int16, int16_t, t_int16, Int16, etc. C99 has the stdint.h file which uses the int16_t style. If your compiler has this file, use it.
Similarly, you should be pedantic about using other standard defines such as size_t, time_t, etc.
The other trick is knowing when not to use these typedefs. A loop control variable used to index an array should just be a raw int so the compiler will generate the best code for your processor. for (int32_t i = 0; i < x; ++i) could generate a lot of needless code on a 64-bit processor, just like using int16_t would on a 32-bit processor.
A good solution is to use common headers that define typedef'd types as necessary.
For example, including sys/types.h is an excellent way to deal with this, as is using portable libraries.
There are two approaches to this:
define your own types with a known size and use them instead of built-in types (like typedef int int32 #if-ed for various platforms)
use techniques which are not dependent on the type size
The first is very popular, however the second, when possible, usually results in a cleaner code. This includes:
do not assume a pointer can be cast to an int
do not assume you know the byte size of individual types; always use sizeof to check
when saving data to files or transferring them across network, use techniques which are portable across changing data sizes (like saving/loading text files)
One recent example of this is writing code which can be compiled for both x86 and x64 platforms. The dangerous part here is pointer and size_t size: be prepared for it to be 4 or 8 bytes depending on the platform. When casting or differencing pointers, never cast to int; use intptr_t and similar typedef'd types instead.
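A sketch of that pointer round trip; where intptr_t exists, it is guaranteed to survive the conversion in both directions:
#include <cstdint>

// Returns exactly the pointer it was given, on any platform providing intptr_t.
void *round_trip(void *p) {
    std::intptr_t n = reinterpret_cast<std::intptr_t>(p);
    return reinterpret_cast<void *>(n);
}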
One of the key ways of avoiding dependency on particular data sizes is to read and write persistent data as text, not binary. If binary data must be used, then all read/write operations must be centralised in a few methods, using approaches like the typedefs already described here.
A second thing you can do is to enable all of your compiler's warnings. For example, using the -pedantic flag with g++ will warn you of lots of potential portability problems.
If you're concerned about portability, things like the size of an int can be determined and dealt with without much difficulty. A lot of C++ compilers also support C99 features like the fixed-width integer types: int8_t, uint8_t, int16_t, uint32_t, etc. If yours doesn't support them natively, you can always include <cstdint> or <sys/types.h>, which, more often than not, has those typedefs. <limits.h> has the corresponding limits for all the basic types.
The standard only guarantees the minimum size of a type, which you can always rely on: sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long), with sizeof(char) == 1. char must be at least 8 bits. short and int must be at least 16 bits. long must be at least 32 bits.
Other things that might be implementation-defined include the ABI and name-mangling schemes (the behavior of extern "C++" specifically), but unless you're working with more than one compiler, that's usually a non-issue.
The following is also an excerpt from Bjarne Stroustrup's book, The C++ Programming Language:
Section 10.4.9:
No implementation-independent guarantees are made about the order of construction of nonlocal objects in different compilation units. For example:
// file1.c:
Table tbl1;
// file2.c:
Table tbl2;
Whether tbl1 is constructed before tbl2 or vice versa is implementation-dependent. The order isn’t even guaranteed to be fixed in every particular implementation. Dynamic linking, or even a small change in the compilation process, can alter the sequence. The order of destruction is similarly implementation-dependent.
A programmer may ensure proper initialization by implementing the strategy that the implementations usually employ for local static objects: a first-time switch. For example:
class Zlib {
    static bool initialized;
    static void initialize() { /* initialize */ initialized = true; }
public:
    // no constructor
    void f()
    {
        if (initialized == false) initialize();
        // ...
    }
    // ...
};
If there are many functions that need to test the first-time switch, this can be tedious, but it is often manageable. This technique relies on the fact that statically allocated objects without constructors are initialized to 0. The really difficult case is the one in which the first operation may be time-critical so that the overhead of testing and possible initialization can be serious. In that case, further trickery is required (§21.5.2).
An alternative approach for a simple object is to present it as a function (§9.4.1):
int& obj() { static int x = 0; return x; } // initialized upon first use
First-time switches do not handle every conceivable situation. For example, it is possible to create objects that refer to each other during construction. Such examples are best avoided. If such objects are necessary, they must be constructed carefully in stages.