How to limit the impact of implementation-dependent language features in C++? - c++

The following is an excerpt from Bjarne Stroustrup's book, The C++ Programming Language:
Section 4.6:
Some of the aspects of C++’s fundamental types, such as the size of an int, are implementation-defined (§C.2). I point out these dependencies and often recommend avoiding them or taking steps to minimize their impact. Why should you bother? People who program on a variety of systems or use a variety of compilers care a lot because if they don’t, they are forced to waste time finding and fixing obscure bugs. People who claim they don’t care about portability usually do so because they use only a single system and feel they can afford the attitude that ‘‘the language is what my compiler implements.’’ This is a narrow and shortsighted view. If your program is a success, it is likely to be ported, so someone will have to find and fix problems related to implementation-dependent features. In addition, programs often need to be compiled with other compilers for the same system, and even a future release of your favorite compiler may do some things differently from the current one. It is far easier to know and limit the impact of implementation dependencies when a program is written than to try to untangle the mess afterwards.
It is relatively easy to limit the impact of implementation-dependent language features.
My question is: how do you limit the impact of implementation-dependent language features? Please mention some implementation-dependent language features and show how to limit their impact.

A few ideas:
Unfortunately you will have to use macros to avoid some platform-specific or compiler-specific issues. You can look at the headers of the Boost libraries to see that it can quite easily get cumbersome; for example, look at the files:
boost/config/compiler/gcc.hpp
boost/config/compiler/intel.hpp
boost/config/platform/linux.hpp
and so on
The integer types tend to be messy among different platforms, you will have to define your own typedefs or use something like Boost cstdint.hpp
If you decide to use any library, then do a check that the library is supported on the given platform
Use the libraries with good support and clearly documented platform support (for example Boost)
You can abstract yourself from some C++ implementation specific issues by relying heavily on libraries like Qt, which provide an "alternative" in sense of types and algorithms. They also attempt to make the coding in C++ more portable. Does it work? I'm not sure.
Not everything can be done with macros. Your build system will have to be able to detect the platform and the presence of certain libraries. Many would suggest autotools for project configuration; I, on the other hand, recommend CMake (rather nice language, no more M4)
endianness and alignment might be an issue if you do some low-level meddling (i.e. reinterpret_cast and things like it; "friends" would be a bad word in a C++ context); see the sketch after this list
throw in a lot of warning flags for the compiler; for gcc I would recommend at least -Wall -Wextra. But there is much more; see the documentation of the compiler or this question.
you have to watch out for everything that is implementation-defined and implementation-dependent. If you want the truth, only the truth, nothing but the truth, then go to the ISO standard.
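Here is that sketch for the endianness point: a minimal example of the usual fix, which is to pick one on-the-wire byte order and assemble/disassemble values bytewise, so the host's endianness never matters (function names are illustrative).

#include <cstdint>

// Write a 32-bit value as big-endian bytes; the result is identical on
// little- and big-endian hosts because we never rely on memory layout.
void put_be32(unsigned char* out, std::uint32_t v)
{
    out[0] = static_cast<unsigned char>(v >> 24);
    out[1] = static_cast<unsigned char>(v >> 16);
    out[2] = static_cast<unsigned char>(v >> 8);
    out[3] = static_cast<unsigned char>(v);
}

std::uint32_t get_be32(const unsigned char* in)
{
    return (std::uint32_t(in[0]) << 24) | (std::uint32_t(in[1]) << 16)
         | (std::uint32_t(in[2]) << 8)  |  std::uint32_t(in[3]);
}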

Well, the variable-sizes issue mentioned above is fairly well known, with the common workaround of providing typedef'd versions of the basic types that have well-defined sizes (normally advertised in the typedef name). This is done using preprocessor macros to give different code visibility on different platforms. E.g.:
#ifdef __WIN32__
typedef int int32;
typedef char char8;
//etc
#endif
#ifdef __MACOSX__
//different typedefs to produce same results
#endif
Other issues are normally solved in the same way too (i.e. using preprocessor tokens to perform conditional compilation).

The most obvious implementation dependency is size of integer types. There are many ways to handle this. The most obvious way is to use typedefs to create ints of the various sizes:
typedef signed short int16_t;
typedef unsigned short uint16_t;
The trick here is to pick a convention and stick to it. Picking which convention is the hard part: INT16, int16, int16_t, t_int16, Int16, etc. C99 has the stdint.h file, which uses the int16_t style. If your compiler has this file, use it.
Similarly, you should be pedantic about using other standard defines such as size_t, time_t, etc.
The other trick is knowing when not to use these typedefs. A loop control variable used to index an array should just take a raw int type so the compiler will generate the best code for your processor. for (int32_t i = 0; i < x; ++i) could generate a lot of needless code on a 64-bit processor, just like using int16_t would on a 32-bit processor.
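A short sketch of that distinction, with a hypothetical Record type: fixed-width types where the stored layout matters, natural-width types for loop indices.

#include <cstddef>
#include <cstdint>

struct Record { std::int16_t values[100]; };   // layout matters: fixed width

std::int32_t sum(const Record& r)
{
    std::int32_t total = 0;
    for (std::size_t i = 0; i < 100; ++i)      // index: natural width, not int16_t
        total += r.values[i];
    return total;
}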

A good solution is to use common headers that define typedef'd types as necessary.
For example, including sys/types.h is an excellent way to deal with this, as is using portable libraries.

There are two approaches to this:
define your own types with a known size and use them instead of built-in types (like typedef int int32 #if-ed for various platforms)
use techniques which are not dependent on the type size
The first is very popular, however the second, when possible, usually results in a cleaner code. This includes:
do not assume a pointer can be cast to int
do not assume you know the byte size of individual types; always use sizeof to check
when saving data to files or transferring them across network, use techniques which are portable across changing data sizes (like saving/loading text files)
One recent example of this is writing code that can be compiled for both x86 and x64 platforms. The dangerous part here is pointer and size_t size: be prepared for it to be 4 or 8 depending on the platform. When casting a pointer or taking the difference of pointers, never cast to int; use intptr_t and similar typedef'd types instead.
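A minimal sketch of that advice; the only assumption is a C++ compiler with <cstdint> and <cstddef>:

#include <cstddef>
#include <cstdint>

void pointer_math(const char* begin, const char* end)
{
    std::ptrdiff_t count = end - begin;           // pointer difference: never plain int
    std::uintptr_t bits =
        reinterpret_cast<std::uintptr_t>(begin);  // pointer-as-integer: uintptr_t
    bool aligned16 = (bits % 16) == 0;            // e.g. an alignment check
    (void)count; (void)aligned16;                 // silence unused-variable warnings
}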

One of the key ways of avoiding dependency on particular data sizes is to read and write persistent data as text, not binary. If binary data must be used, then all read/write operations must be centralized in a few methods, using approaches like the typedefs already described here.
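A small sketch of the text approach; stream formatting hides the in-memory size of the integer, so a file written on a 32-bit machine reads back fine on a 64-bit one (the file name is illustrative):

#include <cstdint>
#include <fstream>

void save(std::int64_t value)
{
    std::ofstream out("value.txt");
    out << value << '\n';                    // decimal text, not raw bytes
}

bool load(std::int64_t& value)
{
    std::ifstream in("value.txt");
    return static_cast<bool>(in >> value);   // parses back on any platform
}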
A second thing you can do is enable all your compiler's warnings. For example, using the -pedantic flag with g++ will warn you of lots of potential portability problems.

If you're concerned about portability, things like the size of an int can be determined and dealt with without much difficulty. A lot of C++ compilers also support C99 features like the fixed-width integer types: int8_t, uint8_t, int16_t, uint32_t, etc. If yours doesn't support them natively, you can always include <cstdint> or <sys/types.h>, which, more often than not, have those typedef'd. <limits.h> has the minimum and maximum values for all the basic types.
The standard only guarantees the minimum size of a type, which you can always rely on: sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long). char must be at least 8 bits. short and int must be at least 16 bits. long must be at least 32 bits.
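If code nonetheless relies on more than those minimum guarantees, a static_assert makes the assumption explicit and fails loudly on the odd platform; a sketch (C++11):

#include <climits>

static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");
static_assert(sizeof(int) >= 4, "this code assumes at least a 32-bit int");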
Other things that might be implementation-defined include the ABI and name-mangling schemes (the behavior of extern "C++" specifically), but unless you're working with more than one compiler, that's usually a non-issue.

The following is also an excerpt from Bjarne Stroustrup's book, The C++ Programming Language:
Section 10.4.9:
No implementation-independent guarantees are made about the order of construction of nonlocal objects in different compilation units. For example:
// file1.c:
Table tbl1;
// file2.c:
Table tbl2;
Whether tbl1 is constructed before tbl2 or vice versa is implementation-dependent. The order isn’t even guaranteed to be fixed in every particular implementation. Dynamic linking, or even a small change in the compilation process, can alter the sequence. The order of destruction is similarly implementation-dependent.
A programmer may ensure proper initialization by implementing the strategy that the implementations usually employ for local static objects: a first-time switch. For example:
class Zlib {
    static bool initialized;
    static void initialize() { /* initialize */ initialized = true; }
public:
    // no constructor
    void f()
    {
        if (initialized == false) initialize();
        // ...
    }
    // ...
};
If there are many functions that need to test the first-time switch, this can be tedious, but it is often manageable. This technique relies on the fact that statically allocated objects without constructors are initialized to 0. The really difficult case is the one in which the first operation may be time-critical so that the overhead of testing and possible initialization can be serious. In that case, further trickery is required (§21.5.2).
An alternative approach for a simple object is to present it as a function (§9.4.1):
int& obj() { static int x = 0; return x; } // initialized upon first use
First-time switches do not handle every conceivable situation. For example, it is possible to create objects that refer to each other during construction. Such examples are best avoided. If such objects are necessary, they must be constructed carefully in stages.
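Applying the same function-local-static idiom to the Table example above gives a construct-on-first-use sketch; cross-translation-unit ordering no longer matters because the object is created the first time the function is called:

// file1.c:
Table& tbl1()
{
    static Table t;   // constructed the first time tbl1() is called
    return t;
}

Callers then write tbl1() wherever they previously named the global object tbl1.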

Related

Speed : typedef vs #define in c++

Recently I came across typedefs and #defines. Even though they have similar usage, one of them is a compiler token and the other is a preprocessor token.
This made me wonder about their speeds of operation, as one wants to be as fast as possible in competitive programming.
So, which one is relatively faster? An explanation accompanied with the answer would be great. Would the compiler used make any difference like g++ vs MSVC compiler vs clang compiler?
Use case examples:
typedef long long int ll; and #define ll long long int.
There is no difference in performance, but preprocessor macros are not recommended because they pollute the global scope since unlike typedef they can't be placed in a namespace.
But arguably, ll isn't very expressive; it may make the code less readable. Consider using int64_t from <cstdint>. It's nice because it's more expressive (_t clearly indicates it's a type and its size is exactly 64 bits, and so is future-proof, even when long long is 128 bits), and is relatively concise so no need to typedef anything.
They both take the same amount of time. You won't notice any difference at all.
Also note that if used properly they are identical at runtime. Only compile times might differ slightly, but that's barely anything.
If you care about using C++ features, the better option is the using keyword, which is available since C++11 and specifically designed for this purpose. It is also compatible with templates.
Please note that using and typedef are semantically the same.
https://en.cppreference.com/w/cpp/language/type_alias
No difference in execution time, so you should be fine using either for competitive programming.
I personally avoid #define and use typedef for types, as it is cleaner and more readable.
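A small sketch contrasting the three forms discussed in this thread; all three produce identical machine code, and the differences are purely in scoping:

#define LL_MACRO long long                 // text substitution: no scope, no namespace

namespace competitive {
    typedef long long ll_typedef;          // a real type name; scoped
    using ll_using = long long;            // C++11 alias; same meaning, template-friendly
}

int main()
{
    competitive::ll_typedef a = 1;
    competitive::ll_using   b = 2;
    LL_MACRO c = a + b;                    // the macro leaks into every scope
    return static_cast<int>(c);
}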

Attempting to standardise a may_alias idiom in C++?

From my understanding GCC and Clang both support: __attribute__((may_alias))
So I believe for example if I wanted to define a fallback type for say a 128-bit SSE type that works across GCC/Clang/MSVC on x64/ARM I could do something like:
#ifdef _MSC_VER
#define UNIONALIAS union
#else
#define UNIONALIAS union __attribute__((may_alias))
#endif
UNIONALIAS VI128
{
    uint64_t uint64[2];
    uint32_t uint32[4];
    uint16_t uint16[8];
    uint8_t  uint8[16];
    int64_t  int64[2];
    int32_t  int32[4];
    int16_t  int16[8];
    int8_t   int8[16];
};
Now, as far as I understand it, all current versions of MSVC let you ignore the strict aliasing rules (formally it is still undefined behaviour) and access the different types within that union, getting what you 'would hope for' despite the C++ standard providing no guarantee. With the attribute tag on GCC/Clang this will likely future-proof the code on those compilers despite not being standard C++.
I suspect if something like this became a standard idiom then future versions of MSVC would surely take it into account even if they didn't adopt compatible attribute syntax - maybe they have something planned already? Also maybe any future C++ standard would also make room to allow for a suitable transition if it directly added support for may_alias?
I use a fallback SSE type as an example, but maybe an integer AVX2 one would have been better. The point remains that sometimes you do want to do things like this, and the memcpy route may not be viable due to performance in debug builds; it can also cause a lot of excessive typing and contorted effort.
Apart from it being non-standard (!), is there any major flaw in this approach, or in my understanding of current compiler behaviour in this regard?
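One standard-sanctioned alternative worth noting alongside the union approach: since C++20, std::bit_cast gives well-defined type punning between trivially copyable types of the same size, and on mainstream implementations it is a compiler intrinsic, so it avoids the debug-build cost of a memcpy call. A sketch (the type and function names are illustrative, not from the question):

#include <array>
#include <bit>        // std::bit_cast (C++20)
#include <cstdint>

struct VU64x2 { std::uint64_t v[2]; };    // illustrative 128-bit fallback type

std::array<std::uint32_t, 4> as_u32(const VU64x2& x)
{
    // Well-defined for trivially copyable types of equal size.
    return std::bit_cast<std::array<std::uint32_t, 4>>(x);
}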

Any reason to define types in dll? Like #define MODULE_NAME_INT8 __int8

So I am going through some code at work, and while I am not an expert in C++ yet, I think it is safe to say the code is a bit of a mess. Still, I would like confirmation on something. In the code every type is defined like so:
#define MODULE_NAME_INT8 __int8
The same goes for unsigned, 16-, 32- and 64-bit types, as well as bool, null, true and false. Is there possibly some reason for this? It is in a DLL, and other parts of the application are written in C#, so maybe there is something here I have not thought of?
It really seems like a complete waste and a misuse of #define to me. I mean, the type name is in the define; it is basically just renaming the type. Why clutter the code like this? Am I right, or am I missing something?
I think those defines make sense if this code is a bit old: MSVC 2008 and earlier have neither stdint.h nor cstdint, so the standard type int8_t could not be used, and the programmer decided to hide the Windows-specific __int8 behind a define so that only the define has to change if the code is ported to another platform (or simply a different compiler).
A double underscore before a type name marks it as compiler-specific. Typically you see defines like this one when the code is supposed to be built with different compilers or platforms, so that you can build from the same source code and the types remain the same.
Although this clutters the code, using for example just "int" may be dangerous if you want to build for multiple platforms, because in C++ int has no standard-defined size (it may be 2, 4, or 8 bytes, for example).
C++11 introduced fixed-width types for this purpose: int8_t, int16_t, and so on, but since your code was most probably created long before this, it does not take advantage of them and goes the "old-fashioned" way.
Having a header file with all data types used by the application is very common practice in commercial software.
It's more of a thinking-ahead measure than one to maintain compatibility. The purpose is to make the code easily portable and easy to maintain.
Since there is no guarantee that the code will never have to run on a different architecture that uses different sizes for the common data types, the practice is a very good one.
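A sketch of what such a header usually looks like today, falling back to the compiler-specific names only where <stdint.h> is missing (the module_ names are illustrative, mirroring the MODULE_NAME_ defines above):

#if defined(_MSC_VER) && _MSC_VER < 1600   /* before VS2010: no <stdint.h> */
typedef __int8           module_int8;
typedef unsigned __int8  module_uint8;
typedef __int64          module_int64;
#else
#include <stdint.h>
typedef int8_t   module_int8;
typedef uint8_t  module_uint8;
typedef int64_t  module_int64;
#endif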

Moving from C++ to C

After a few years coding in C++, I was recently offered a job coding in C, in the embedded field.
Putting aside the question of whether it's right or wrong to dismiss C++ in the embedded field, there are some features/idioms in C++ I would miss a lot. Just to name a few:
Generic, type-safe data structures (using templates).
RAII. Especially in functions with multiple return points, e.g. not having to remember to release the mutex on each return point.
Destructors in general. I.e. you write a d'tor once for MyClass, then if a MyClass instance is a member of MyOtherClass, MyOtherClass doesn't have to explicitly deinitialize the MyClass instance - its d'tor is called automatically.
Namespaces.
What are your experiences moving from C++ to C?
What C substitutes did you find for your favorite C++ features/idioms? Did you discover any C features you wish C++ had?
Working on an embedded project, I tried working in all C once, and just couldn't stand it. It was just so verbose that it made it hard to read anything. Also, I liked the optimized-for-embedded containers I had written, which had to turn into much less safe and harder-to-fix #define blocks.
Code that in C++ looked like:
if(uart[0]->Send(pktQueue.Top(), sizeof(Packet)))
pktQueue.Dequeue(1);
turns into:
if(UART_uchar_SendBlock(uart[0], Queue_Packet_Top(pktQueue), sizeof(Packet)))
Queue_Packet_Dequeue(pktQueue, 1);
which many people will probably say is fine but gets ridiculous if you have to do more than a couple "method" calls in a line. Two lines of C++ would turn into five of C (due to 80-char line length limits). Both would generate the same code, so it's not like the target processor cared!
One time (back in 1995), I tried writing a lot of C for a multiprocessor data-processing program. The kind where each processor has its own memory and program. The vendor-supplied compiler was a C compiler (some kind of HighC derivative), their libraries were closed source so I couldn't use GCC to build, and their APIs were designed with the mindset that your programs would primarily be the initialize/process/terminate variety, so inter-processor communication was rudimentary at best.
I got about a month in before I gave up, found a copy of cfront, and hacked it into the makefiles so I could use C++. Cfront didn't even support templates, but the C++ code was much, much clearer.
Generic, type-safe data structures (using templates).
The closest thing C has to templates is to declare a header file with a lot of code that looks like:
TYPE * Queue_##TYPE##_Top(Queue_##TYPE##* const this)
{ /* ... */ }
(The ## here is shorthand; since token pasting only happens inside a macro definition, real code routes these names through a two-level CONCAT macro.)
then pull it in with something like:
#define TYPE Packet
#include "Queue.h"
#undef TYPE
Note that this won't work for compound types (e.g. no queues of unsigned char) unless you make a typedef first.
Oh, and remember, if this code isn't actually used anywhere, then you don't even know if it's syntactically correct.
EDIT: One more thing: you'll need to manually manage instantiation of code. If your "template" code isn't all inline functions, then you'll have to put in some control to make sure that things get instantiated only once so your linker doesn't spit out a pile of "multiple instances of Foo" errors.
To do this, you'll have to put the non-inlined stuff in an "implementation" section in your header file:
#ifdef implementation_##TYPE
/* Non-inlines, "static members", global definitions, etc. go here. */
#endif
And then, in one place in all your code per template variant, you have to:
#define TYPE Packet
#define implementation_Packet
#include "Queue.h"
#undef TYPE
Also, this implementation section needs to be outside the standard #ifndef/#define/#endif litany, because you may include the template header file in another header file, but need to instantiate afterward in a .c file.
Yep, it gets ugly fast. Which is why most C programmers don't even try.
RAII.
Especially in functions with multiple return points, e.g. not having to remember to release the mutex on each return point.
Well, forget your pretty code and get used to all your return points (except the end of the function) being gotos:
TYPE * Queue_##TYPE##_Top(Queue_##TYPE##* const this)
{
    TYPE * result;
    Mutex_Lock(this->lock);
    if(this->head == this->tail)
    {
        result = 0;
        goto Queue_##TYPE##_Top_exit;   /* no trailing colon on a goto */
    }
    /* Figure out `result` for real, then fall through to... */
Queue_##TYPE##_Top_exit:
    Mutex_Unlock(this->lock);           /* unlock, not lock, on the way out */
    return result;
}
Destructors in general.
I.e. you write a d'tor once for MyClass, then if a MyClass instance is a member of MyOtherClass, MyOtherClass doesn't have to explicitly deinitialize the MyClass instance - its d'tor is called automatically.
Object construction has to be explicitly handled the same way.
Namespaces.
That's actually a simple one to fix: just tack a prefix onto every symbol. This is the primary cause of the source bloat that I talked about earlier (since classes are implicit namespaces). The C folks have been living with this, well, forever, and probably won't see what the big deal is.
YMMV
I moved from C++ to C for a different reason (some sort of allergic reaction ;) and there are only a few things that I miss and some things that I gained. If you stick to C99, if you may, there are constructs that let you program quite nicely and safely, in particular:
designated initializers (possibly combined with macros) make initialization of simple classes as painless as constructors
compound literals for temporary variables
for-scope variables may help you do scope-bound resource management, in particular to ensure the unlocking of mutexes or the freeing of arrays, even under early function returns
__VA_ARGS__ macros can be used to have default arguments to functions and to do code unrolling
inline functions and macros that combine well to replace (sort of) overloaded functions
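A small C99 sketch of the first two items on this list (the struct and function are illustrative):

struct point { double x, y; const char* label; };

void move_to(struct point p);              /* assumed to exist elsewhere */

void demo(void)
{
    /* designated initializer: unnamed members are zero-initialized */
    struct point origin = { .label = "origin" };

    /* compound literal: an unnamed temporary built at the call site */
    move_to((struct point){ .x = 1.0, .y = 2.0, .label = "target" });

    move_to(origin);
}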
The difference between C and C++ is the predictability of the code's behavior.
It is easier to predict with great accuracy what your code will do in C; in C++ it might become a bit more difficult to come up with an exact prediction.
The predictability in C gives you better control of what your code is doing, but that also means you have to do more stuff.
In C++ you can write less code to get the same thing done, but (at least for me) I occasionally have trouble knowing how the object code is laid out in memory and what its expected behavior is.
Nothing like the STL exists for C.
There are libraries available that provide similar functionality, but it isn't built in anymore.
I think that would be one of my biggest problems... knowing with which tool I could solve the problem, but not having the tools available in the language I have to use.
In my line of work - which is embedded, by the way - I am constantly switching back & forth between C and C++.
When I'm in C, I miss from C++:
templates (including but not limited to STL containers). I use them for things like special counters, buffer pools, etc. (built up my own library of class templates & function templates that I use in different embedded projects)
very powerful standard library
destructors, which of course make RAII possible (mutexes, interrupt disable, tracing, etc.)
access specifiers, to better enforce who can use (not see) what
I use inheritance on larger projects, and C++'s built-in support for it is much cleaner & nicer than the C "hack" of embedding the base class as the first member (not to mention automatic invocation of constructors, init. lists, etc.) but the items listed above are the ones I miss the most.
Also, probably only about a third of the embedded C++ projects I work on use exceptions, so I've become accustomed to living without them, so I don't miss them too much when I move back to C.
On the flip side, when I move back to a C project with a significant number of developers, there are whole classes of C++ problems that I'm used to explaining to people which go away. Mostly problems due to the complexity of C++, and people who think they know what's going on, but they're really at the "C with classes" part of the C++ confidence curve.
Given the choice, I'd prefer using C++ on a project, but only if the team is pretty solid on the language. Also of course assuming it's not an 8K μC project where I'm effectively writing "C" anyway.
A couple of observations:
Unless you plan to use your C++ compiler to build your C (which is possible if you stick to a well-defined subset of C++), you will soon discover things that your compiler allows in C that would be a compile error in C++.
No more cryptic template errors (yay!)
No (language supported) object oriented programming
Pretty much the same reasons I have for using C++ or a mix of C/C++ rather than pure C. I can live without namespaces, but I use them all the time if the coding standard allows it. The reason is that you can write much more compact code in C++. This is very useful for me: I write servers in C++ which tend to crash now and then, and at that point it helps a lot if the code you are looking at is short and concise. For example, consider the following code:
uint32_t
ScoreList::FindHighScore(
    uint32_t p_PlayerId)
{
    MutexLock lock(m_Lock);
    uint32_t highScore = 0;
    for(int i = 0; i < m_Players.Size(); i++)
    {
        Player& player = m_Players[i];
        if(player.m_Score > highScore)
            highScore = player.m_Score;
    }
    return highScore;
}
In C that looks like:
uint32_t
ScoreList_getHighScore(
    ScoreList* p_ScoreList)
{
    uint32_t highScore = 0;
    Mutex_Lock(p_ScoreList->m_Lock);
    for(int i = 0; i < Array_GetSize(p_ScoreList->m_Players); i++)
    {
        Player* player = p_ScoreList->m_Players[i];
        if(player->m_Score > highScore)
            highScore = player->m_Score;
    }
    Mutex_UnLock(p_ScoreList->m_Lock);
    return highScore;
}
Not a world of difference: one more line of code, but that tends to add up. Normally you try your best to keep it clean and lean, but sometimes you have to do something more complex, and in those situations you value your line count. One more line is one more thing to look at when you try to figure out why your broadcast network suddenly stops delivering messages.
Anyway I find that C++ allows me to do more complex things in a safe fashion.
Yes! I have experience with both of these languages, and what I found is that C++ is the friendlier language: it offers more features. It is fair to say that C++ is a superset of the C language, as it provides additional features like polymorphism, inheritance, operator and function overloading, and user-defined data types, which are not really supported in C. Thousands of lines of code are reduced to a few lines with the help of object-oriented programming; that's the main reason for moving from C to C++.
I think the main reason C++ has a harder time being accepted in the embedded environment is the lack of engineers who understand how to use C++ properly.
Yes, the same reasoning can be applied to C as well, but luckily there aren't that many pitfalls in C to shoot yourself in the foot with. With C++, on the other hand, you need to know when not to use certain features.
All in all, I like C++. I use it on the O/S services layer, drivers, management code, etc.
But if your team doesn't have enough experience with it, it's going to be a tough challenge.
I have had experience with both. When the rest of the team wasn't ready for it, it was a total disaster. On the other hand, it was a good experience.
Certainly, the desire to escape complex/messy syntax is understandable. Sometimes C can appear to be the solution. However, C++ is where the industry support is, including tooling and libraries, so that is hard to work around.
C++ has so many features today, including lambdas.
A good approach is to leverage C++ itself to make your code simpler. Objects are good for isolating things under the hood so that at a higher level, the code is simpler. The core guidelines recommend concrete (simple) objects, so that approach can help.
The level of complexity is under the engineer's control. If multiple inheritance (MI) is useful in a scenario and one prefers that option, then one may use MI.
Alternatively, one can define interfaces, inherit from the interface(s), and contain implementing objects (composition/aggregation) and expose the objects through the interface using inline wrappers. The inline wrappers compile down to nothing, i.e., compile down to simple use of the internal (contained) object, yet the container object appears to have that functionality as if multiple inheritance was used.
C++ also has namespaces, so one should leverage namespaces even if coding in a C-like style.
One can use the language itself to create simpler patterns and the STL is full of examples: array, vector, map, queue, string, unique_ptr,... And one can control (to a reasonable extent) how complex their code is.
So, going back to C is not the way, nor is it necessary. One may use C++ in a C-like way, or use C++ multiple inheritance, or use any option in-between.

Best Practices: Should I create a typedef for byte in C or C++?

Do you prefer to see something like t_byte* (with typedef unsigned char t_byte) or unsigned char* in code?
I'm leaning towards t_byte in my own libraries, but have never worked on a large project where this approach was taken, and am wondering about pitfalls.
If you're using C99 or newer, you should use stdint.h for this. uint8_t, in this case.
C++ didn't get this header until C++11, calling it cstdint. Old versions of Visual C++ didn't let you use C99's stdint.h in C++ code, but pretty much every other C++98 compiler did, so you may have that option even when using old compilers.
As with so many other things, Boost papers over this difference in boost/integer.hpp, providing things like uint8_t if your compiler's standard C++ library doesn't.
I suggest that if your compiler supports it use the C99 <stdint.h> header types such as uint8_t and int8_t.
If your compiler does not support it, create one. Here's an example for VC++, older versions of which do not have stdint.h. GCC does support stdint.h, and indeed most of C99.
One problem with your suggestion is that the signedness of char is implementation-defined, so if you do create a type alias, you should at least be explicit about the sign. There is some merit in the idea, since in C#, for example, a char is 16-bit; but C# has a byte type as well.
Additional note...
There was no problem with your suggestion, you did in fact specify unsigned.
I would also suggest that plain char is used if the data is in fact character data, i.e. is a representation of plain text such as you might display on a console. This will present fewer type agreement problems when using standard and third-party libraries. If on the other hand the data represents a non-character entity such as a bitmap, or if it is numeric 'small integer' data upon which you might perform arithmetic manipulation, or data that you will perform logical operations on, then one of the stdint.h types (or even a type defined from one of them) should be used.
I recently got caught out on a TI C54xx compiler where char is in fact 16-bit. That is why using stdint.h where possible, even if you then use it to define a byte type, is preferable to assuming that unsigned char is a suitable alias.
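A minimal sketch of that approach; byte_t is an illustrative name:

#include <climits>
#include <cstdint>

static_assert(CHAR_BIT == 8, "this byte_t assumes 8-bit chars");
typedef std::uint8_t byte_t;    // exactly 8 bits, unsigned, by definition

On a platform like the TI C54xx mentioned above, uint8_t simply does not exist and the build fails loudly, which is exactly what you want.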
I prefer for types to convey the meaning of the values stored in them. If I need a type describing a byte as it is on my machine, I very much prefer byte_t over unsigned char, which could mean just about anything. (I have been working in a code base that used either signed char or unsigned char to store UTF-8 strings.) The same goes for uint8_t: it could be used as just that, an 8-bit unsigned integer.
With byte_t (as with any other aptly named type), there rarely is a need to look up what it is defined to (and if so, a good editor will take three seconds to look it up for you; maybe ten, if the code base is huge), and just by looking at it it's clear what's stored in objects of that type.
Personally I prefer boost::int8_t and boost::uint8_t.
If you don't want to use boost you could borrow boost/cstdint.hpp.
Another option is to use portable version of stdint.h (link from this answer).
Besides your awkward naming convention, I think that might be okay. Keep in mind boost does this for you, to help with cross-platform-ability:
#include <boost/integer.hpp>
typedef boost::uint8_t byte_t;
Note that types are usually suffixed with _t, as in byte_t.
I prefer to use standard types, unsigned char, uint8_t, etc., so any programmer looking at the source does not have to refer back to headers to grok the code. The more typedefs you use the more time it takes for others to get to know your typing conventions. For structures, absolutely use typedefs, but for primitives use them sparingly.