Usage of 'short' in C++

Why is it that for any numeric input we prefer an int rather than a short, even when the values involved are small?
The size of short is 2 bytes on my x86 and int is 4 bytes; shouldn't the smaller type be better and faster to allocate?
Or am I wrong in saying that short is simply not used?

CPUs are usually fastest when dealing with their "native" integer size. So even though a short may be smaller than an int, the int is probably closer to the native size of a register in your CPU, and is therefore likely to be the more efficient of the two.
In a typical 32-bit CPU architecture, to load a 32-bit value requires one bus cycle to load all the bits. Loading a 16-bit value requires one bus cycle to load the bits, plus throwing half of them away (this operation may still happen within one bus cycle).

A 16-bit short makes sense if you're keeping so many in memory (in a large array, for example) that the 50% reduction in size adds up to an appreciable reduction in memory overhead. They are not faster than 32-bit integers on modern processors, as Greg correctly pointed out.

In embedded systems, the short and unsigned short data types are used for accessing items that require fewer bits than the native integer.
For example, if my USB controller has 16 bit registers, and my processor has a native 32 bit integer, I would use an unsigned short to access the registers (provided that the unsigned short data type is 16-bits).
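A rough sketch of that idea follows; the register name and address are invented for illustration, not taken from any real device map. A fixed-width type from <cstdint> makes the intent explicit:

#include <cstdint>

// Hypothetical memory-mapped 16-bit status register; address and name are
// made up for illustration.
constexpr std::uintptr_t kUsbStatusAddr = 0x40005000u;

inline std::uint16_t read_usb_status()
{
    // volatile stops the compiler from caching or reordering the access;
    // std::uint16_t guarantees a 16-bit access where unsigned short may not.
    return *reinterpret_cast<volatile const std::uint16_t*>(kUsbStatusAddr);
}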
Most of the advice from experienced users (see news:comp.lang.c++.moderated) is to use the native integer size unless a smaller data type must be used. The problem with using short to save memory is that the values may exceed the limits of short. It can also be a performance hit on some 32-bit processors, which have to fetch the 32 bits surrounding the 16-bit variable and discard the unwanted half.
My advice is to work on the quality of your programs first, and only worry about optimization if it is warranted and you have extra time in your schedule.

Using type short does not guarantee that the actual values will be smaller than those of type int. It allows them to be smaller, and ensures that they are no bigger. Note too that short must be at least as large as type char.
The original question above contains actual sizes for the processor in question, but when porting code to a new environment, one can only rely on weak relative assumptions without verifying the implementation-defined sizes.
The C header <stdint.h> -- or, from C++, <cstdint> -- defines types of specified size, such as uint8_t for an unsigned integral type exactly eight bits wide. Use these types when attempting to conform to an externally-specified format such as a network protocol or binary file format.
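For instance, a minimal sketch of a wire-format header using exact-width types; the field names and layout are invented for illustration, not any real protocol:

#include <cstdint>

struct WireHeader {
    std::uint8_t  version;
    std::uint8_t  flags;
    std::uint16_t length;     // exactly 16 bits; byte order is still yours to fix
    std::uint32_t sequence;   // exactly 32 bits
};

// Serialize field by field so struct padding and host byte order never leak
// into the external format (big-endian chosen here as the wire order).
inline void serialize(const WireHeader& h, std::uint8_t out[8])
{
    out[0] = h.version;
    out[1] = h.flags;
    out[2] = static_cast<std::uint8_t>(h.length >> 8);
    out[3] = static_cast<std::uint8_t>(h.length);
    out[4] = static_cast<std::uint8_t>(h.sequence >> 24);
    out[5] = static_cast<std::uint8_t>(h.sequence >> 16);
    out[6] = static_cast<std::uint8_t>(h.sequence >> 8);
    out[7] = static_cast<std::uint8_t>(h.sequence);
}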

The short type is very useful if you have a big array full of them and int is just too big.
Given a large enough array, the memory saving is significant compared with an array of ints (see the sketch below).
UTF-16 text is also commonly stored in arrays of 16-bit units (although other encodings exist).
On embedded devices, space still matters, and short can be very beneficial.
Last but not least, some transmission protocols insist on 16-bit fields, so you still need them there.
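A minimal sketch of the memory argument, assuming a typical platform with a 2-byte short and a 4-byte int (the element count is arbitrary):

#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    // Illustrative only: with a 2-byte short and 4-byte int, the short
    // buffer occupies roughly half the memory of the int buffer.
    constexpr std::size_t n = 1000000;
    std::vector<short> samples_short(n);
    std::vector<int>   samples_int(n);

    std::cout << "short buffer: " << n * sizeof(short) << " bytes\n"
              << "int buffer:   " << n * sizeof(int)   << " bytes\n";
}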

It depends on the situation. On x86 or x64, for example, you should pick the most suitable type rather than defaulting to int; in some cases int is faster than short. The first answer above already covers this.

Related

What is a good example of leveraging the differences between int*_t, int_fast*_t and int_least*_t in C++11?

According to online documentation, there are differences between these fixed-width integer types. For int*_t the width is fixed to whatever * is. Yet for the other two types, the adjectives fastest and smallest are used to request the fastest or the smallest instance provided by the underlying data model.
What are the objective meanings of "the fastest" or the "smallest"? What is an example in where this would be advantageous or even necessary?
There is no objective meaning to "fastest"; it's basically a judgement call by the compiler writer. Typically, it means expanding smaller values to the native register width of the architecture, but that's not always fastest (e.g. a 1 billion entry array would probably be processed quicker if it were 8 bit values, but uint_fast8_t might be a 32 bit value because the CPU register manipulation goes faster for that size).
"smallest" usually means "the same size as the bits requested", but on weird architectures with limited size values to choose from (e.g. old Crays had everything as a 64 bit type), int_least16_t would work (and seamlessly become a 64 bit value), while the compiler would likely error out on int16_t (because it's impossible to make a true 16 bit integer value there).
Point is, if you're relying on overflow behaviors, you need to use an exact fixed width type. Otherwise, you should probably default to least types for maximum portability, switching to fast types in hot code paths, but profiling would be needed to determine if it really makes any difference.
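One way to see what your own toolchain actually does is simply to print the sizes; the output is implementation-specific, so treat this as a probe rather than a guarantee:

#include <cstdint>
#include <iostream>

int main()
{
    // Apart from the exact-width type, what each alias resolves to is
    // implementation-defined, so the printed sizes depend on your toolchain.
    std::cout << "int16_t:       " << sizeof(std::int16_t)       << " bytes\n"
              << "int_least16_t: " << sizeof(std::int_least16_t) << " bytes\n"
              << "int_fast16_t:  " << sizeof(std::int_fast16_t)  << " bytes\n";
}

On x86-64 with glibc this commonly prints 2, 2, and 8, since int_fast16_t is defined there as a long; other implementations choose differently.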

Difference between uint8_t, uint_fast8_t and uint_least8_t

The C99 standard introduces the following datatypes. The documentation can be found here for the AVR stdint library.
uint8_t means it's an 8-bit unsigned type.
uint_fast8_t means it's the fastest unsigned int with at least 8 bits.
uint_least8_t means it's an unsigned int with at least 8 bits.
I understand uint8_t, and I understand what uint_fast8_t is (though I don't know how it's implemented at the register level).
1. Can you explain the meaning of "an unsigned int with at least 8 bits"?
2. How do uint_fast8_t and uint_least8_t help increase efficiency/code space compared to uint8_t?
uint_least8_t is the smallest type that has at least 8 bits.
uint_fast8_t is the fastest type that has at least 8 bits.
You can see the differences by imagining exotic architectures. Imagine a 20-bit architecture. Its unsigned int has 20 bits (one register), and its unsigned char has 10 bits. So sizeof(int) == 2, but using char types requires extra instructions to cut the registers in half. Then:
uint8_t: is undefined (no 8 bit type).
uint_least8_t: is unsigned char, the smallest type that is at least 8 bits.
uint_fast8_t: is unsigned int, because in my imaginary architecture, a half-register variable is slower than a full-register one.
uint8_t means: give me an unsigned int of exactly 8 bits.
uint_least8_t means: give me the smallest type of unsigned int which has at least 8 bits. Optimize for memory consumption.
uint_fast8_t means: give me an unsigned int of at least 8 bits. Pick a larger type if it will make my program faster, because of alignment considerations. Optimize for speed.
Also, unlike the plain int types, the signed versions of the above stdint.h types are guaranteed to use two's complement format.
The theory goes something like:
uint8_t is required to be exactly 8 bits but it's not required to exist. So you should use it where you are relying on the modulo-256 assignment behaviour* of an 8 bit integer and where you would prefer a compile failure to misbehaviour on obscure architectures.
uint_least8_t is required to be the smallest available unsigned integer type that can store at least 8 bits. You would use it when you want to minimise the memory use of things like large arrays.
uint_fast8_t is supposed to be the "fastest" unsigned type that can store at least 8 bits; however, it's not actually guaranteed to be the fastest for any given operation on any given processor. You would use it in processing code that performs lots of operations on the value.
The practice is that the "fast" and "least" types aren't used much.
The "least" types are only really useful if you care about portability to obscure architectures with CHAR_BIT != 8 which most people don't.
The problem with the "fast" types is that "fastest" is hard to pin down. A smaller type may mean less load on the memory/cache system but using a type that is smaller than native may require extra instructions. Furthermore which is best may change between architecture versions but implementers often want to avoid breaking ABI in such cases.
From looking at some popular implementations it seems that the definitions of uint_fastn_t are fairly arbitrary. glibc seems to define them as being at least the "native word size" of the system in question, taking no account of the fact that many modern processors (especially 64-bit ones) have specific support for fast operations on items smaller than their native word size. iOS apparently defines them as equivalent to the fixed-size types. Other platforms may vary.
All in all if performance of tight code with tiny integers is your goal you should be bench-marking your code on the platforms you care about with different sized types to see what works best.
* Note that unfortunately modulo-256 assignment behaviour does not always imply modulo-256 arithmetic, thanks to C's integer promotion misfeature.
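A small sketch of that footnote: the stored value wraps modulo 256, but the promoted intermediate result does not.

#include <cstdint>
#include <iostream>

int main()
{
    std::uint8_t a = 200, b = 100;

    std::uint8_t sum = a + b;   // stored value wraps: 300 % 256 == 44
    std::cout << +sum << '\n';  // prints 44 (+ promotes for printing)

    // The expression itself is evaluated in int after integer promotion,
    // so the intermediate result does not wrap:
    if (a + b > 255)
        std::cout << "promoted arithmetic did not wrap\n";  // this prints
}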
Some processors cannot operate as efficiently on smaller data types as on large ones. For example, given:
uint32_t foo(uint32_t x, uint8_t y)
{
    x += y;
    y += 2;
    x += y;
    y += 4;
    x += y;
    y += 6;
    x += y;
    return x;
}
If y were a uint32_t, a compiler for the ARM Cortex-M3 could simply generate:
add r0,r0,r1,asl #2 ; x+=(y<<2)
add r0,r0,#12 ; x+=12
bx lr ; return x
but since y is uint8_t the compiler would have to instead generate:
add r0,r0,r1 ; x+=y
add r1,r1,#2 ; Compute y+2
and r1,r1,#255 ; y=(y+2) & 255
add r0,r0,r1 ; x+=y
add r1,r1,#4 ; Compute y+4
and r1,r1,#255 ; y=(y+4) & 255
add r0,r0,r1 ; x+=y
add r1,r1,#6 ; Compute y+6
and r1,r1,#255 ; y=(y+6) & 255
add r0,r0,r1 ; x+=y
bx lr ; return x
The intended purpose of the "fast" types was to allow compilers to replace smaller types which couldn't be processed efficiently with faster ones. Unfortunately, the semantics of "fast" types are rather poorly specified, which in turn leaves murky questions of whether expressions will be evaluated using signed or unsigned math.
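As a hedged illustration of that intent, the same routine can be written with the parameter widened to uint_fast8_t. Whether this actually removes the masking depends entirely on what the implementation chooses for uint_fast8_t, and results can differ near the 8-bit wrap-around point, which is part of the murkiness just mentioned.

#include <stdint.h>

// If the implementation maps uint_fast8_t to a full-register type, the
// compiler no longer has to mask y back to 8 bits after each increment;
// if it maps it to an 8-bit type (as several common C libraries do),
// nothing changes.
uint32_t foo_fast(uint32_t x, uint_fast8_t y)
{
    x += y;
    y += 2;
    x += y;
    y += 4;
    x += y;
    y += 6;
    x += y;
    return x;
}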
1. Can you explain the meaning of "an unsigned int with at least 8 bits"?
That ought to be obvious. It means that it's an unsigned integer type, and that its width is at least 8 bits. In effect this means that it can hold at least the numbers 0 through 255, it definitely cannot hold negative numbers, and it may be able to hold numbers higher than 255.
Obviously you should not use any of these types if you plan to store any number outside the range 0 through 255 (and you want it to be portable).
2. How do uint_fast8_t and uint_least8_t help increase efficiency/code space compared to uint8_t?
uint_fast8_t is the type the implementation considers fastest with at least 8 bits, so use it when your requirement is that the code be fast. uint_least8_t, on the other hand, is required to have no smaller candidate, so use it when size is the concern.
And of course you use uint8_t only when you absolutely require exactly 8 bits. Using uint8_t may make the code non-portable, since uint8_t is not required to exist (because such a small integer type does not exist on certain platforms).
The "fast" integer types are defined to be the fastest integer types available with at least the required number of bits (8 in your case).
A platform can define uint_fast8_t as uint8_t, in which case there will be absolutely no difference in speed.
The reason the alias exists is that some platforms are slower when not using their native word length.
As the name suggests, uint_least8_t is the smallest type that has at least 8 bits, and uint_fast8_t is the fastest type that has at least 8 bits. uint8_t has exactly 8 bits, but it is not guaranteed to exist on all platforms, although such platforms are extremely uncommon.
In most cases, uint_least8_t = uint_fast8_t = uint8_t = unsigned char. The only exception I have seen is the C2000 DSP from Texas Instruments: it is 32-bit, but its minimum data width is 16 bits. It does not have uint8_t; you can only use uint_least8_t and uint_fast8_t, which are defined as unsigned int (16 bits).
I use the fast types (uint_fast8_t) for local variables and function parameters, and the exact types (uint8_t) in arrays and structures that are used frequently, where the memory footprint matters more than the few cycles saved by not having to clear or sign-extend the upper bits.
This works great, except with MISRA checkers, which go nuts over the fast types. The trick is to use the fast types through derived types that can be defined differently for MISRA builds and normal ones.
I think these types are great for writing portable code that is efficient on both low-end microcontrollers and big application processors. The improvement might not be huge, or might be totally negligible with good compilers, but it's better than nothing.
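A sketch of that pattern, with a struct and function invented purely for illustration:

#include <cstddef>
#include <cstdint>

// Exact-width types where layout and footprint matter, "fast" types for
// working values.
struct SensorSample {
    std::uint8_t  channel;   // packed tightly in large arrays
    std::uint8_t  flags;
    std::uint16_t value;
};

std::uint32_t sum_channel(const SensorSample* samples, std::size_t count,
                          std::uint_fast8_t wanted_channel)
{
    std::uint32_t total = 0;
    for (std::size_t i = 0; i < count; ++i)
        if (samples[i].channel == wanted_channel)
            total += samples[i].value;
    return total;
}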
Some guessing in this thread.
"fast": The compiler should place "fast" type vars in IRAM (local processor RAM) which requires fewer cycles to access and write than vars stored in the hinterlands of RAM. "fast" is used if you need quickest possible action on a var, such as in an Interrupt Service Routine (ISR). Same as declaring a function to have an IRAM_ATTR; this == faster access. There is limited space for "fast" or IRAM vars/functions, so only use when needed, and never persist unless they qualify for that. Most compilers will move "fast" vars to general RAM if processor RAM is all allocated.

When to use `short` over `int`?

There are many questions that ask about the difference between the short and int integer types in C++, but practically, when do you choose short over int?
(See Eric's answer for more detailed explanation)
Notes:
Generally, int is set to the 'natural size' - the integer form that the hardware handles most efficiently
When a short is used in arithmetic operations (including when array elements are read for computation), it is promoted to int, so this can introduce a speed penalty when processing short integers
Using short can conserve memory if it is narrower than int, which can be important when using a large array
Your program will use more memory in a 32-bit int system compared to a 16-bit int system
Conclusion:
Use int unless conserving memory is critical, or your program uses a lot of memory (e.g. many arrays). In that case, use short.
You choose short over int when:
Either
You want to decrease the memory footprint of the values you're storing (for instance, if you're targeting a low-memory platform),
You want to increase performance by increasing the number of values that can be packed into a single memory page (reducing page faults when accessing your values) and/or into the memory caches (reducing cache misses), and profiling has revealed that there are performance gains to be had here,
Or you are sending data over a network or storing it to disk, and want to decrease your footprint (to take up less disk space or network bandwidth). Although for these cases, you should prefer types which specify exactly the size in bits rather than int or short, which can vary based on platform (as you want a platform with a 32-bit short to be able to read a file written on a platform with a 16-bit short). Good candidates are the types defined in stdint.h.
And:
You have a numeric value which does not need to take on any values that can't be stored in a short on your target platform (for a 16-bit short, this is -32768 to 32767, or 0 to 65535 for a 16-bit unsigned short).
Your target platform (or one of your target platforms) uses less memory for a short than for an int. The standard only guarantees that short is not larger than int, so implementations are allowed to use the same size for short and int.
Note:
chars can also be used as arithmetic types. An answer to "When should I use char instead of short or int?" would read very similarly to this one, but with different numbers (-128 to 127 for an 8-bit char, 0 to 255 for an 8-bit unsigned char)
In reality, you likely don't actually want to use the short type specifically. If you want an integer of specific size, there are types defined in <cstdint> that should be preferred, as, for example, an int16_t will be 16 bits on every system, whereas you cannot guarantee the size of a short will be the same across all targets your code will be compiled for.
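A sketch of that advice, using an invented value range and a compile-time check of the assumption:

#include <cstdint>
#include <limits>

// If a value is documented to stay within 0..40000 (an illustrative limit),
// spell the storage type as std::uint16_t and check the assumption at
// compile time, rather than relying on what short happens to be.
constexpr std::uint32_t kMaxExpectedValue = 40000;

static_assert(kMaxExpectedValue <= std::numeric_limits<std::uint16_t>::max(),
              "value range no longer fits in 16 bits");

using sample_t = std::uint16_t;   // used in large arrays / on-disk records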
In general, you don't prefer short over int.
The int type is the processor's native word size
Usually, an int is the processor's word size.
For example, with a 32-bit word size processor, an int would be 32 bits. The processor is most efficient using 32 bits. Assuming that short is 16-bit, the processor still fetches 32 bits from memory. So there is no gain; in fact it can take longer, because the processor may have to shift the bits into the correct position within a 32-bit word.
Choosing a smaller data type
There are standardized data types that are bit specific in length, such as uint16_t. These are preferred to the ambiguous types of char, short, and int. These width specific data types are usually used for accessing hardware, or compressing space (such as message protocols).
Choosing a smaller range
The short data type is based on range not bit width. On a 32-bit system, both short and int may have the same 32-bit length.
One reason for using short is that the value will never go past a given range. This is usually a fallacy, because programs change and the data type could overflow.
Summary
Presently, I do not use short anymore. I use uint16_t when I access 16-bit hardware devices. I use unsigned int for quantities, including loop indices. I use uint8_t, uint16_t and uint32_t when size matters for data storage. The short data type is ambiguous for data storage, since it is a minimum. With the advent of stdint header files, there is no longer any need for short.
If you don't have any specific constraints imposed by your architecture, I would say you can always use int. The type short is meant for specific systems where memory is a precious resource.

Is using a non-32-bit integer reasonable? [duplicate]

Possible Duplicate:
The importance of using a 16bit integer
If today's processors perform (under standard conditions) 32-bit operations -- then is using a "short int" reasonable? Because in order to perform an operation on that data, it will convert it to a 32-bit (from 16-bit) integer, perform the operations, and then go back to 16-bit -- I think. So what is the point?
In essence my questions are as follows:
What (if any) performance gain/hindrance does using a smaller ranged integer bring? Like, if instead of using a standard 32-bit integer for storage, I use a 16-bit short integer.
"and then go back to 16-bit" -- Am I correct here? See above.
Are all integer data stored as 32-bit integer space on CPU/RAM?
The answer to your first question should also clarify the last one: if you need to store large numbers of 16-bit ints, you save half the amount of memory required for 32-bit ints, with whatever "fringe benefits" that may come along with it, such as using the cache more efficiently.
Most CPUs these days have separate instructions for 16-bit vs. 32-bit operations, along with instructions to read and write 16-bit values from and to memory. Internally, the ALU may be performing a 32-bit operation, but the result for the upper half does not make it back into the registers.
The processor doesn't need to "expand" a value to work with it. It just pads the unused spaces with zeroes and ignores them when performing calculations. So, actually, it is faster to operate on a short int than a long int, although with today's fast CPUs it is very hard to notice even a bit of difference (pun intended).
The machine doesn't really convert. When changing the size of a value, it either pads zeroes to the left or totally ignores extra bits to the left that won't fit in the target memory region.
No, and this is usually the reason people use short int values for purposes where the range of a long int just isn't needed. The memory allocated is different for each length of int: a short int takes up less memory than a long int. One step in optimization is to change long int values to short int values when the range does not exceed that of a short int, meaning the value would never use the extra bits allocated to a long int. The memory saved by such an optimization can be quite significant when dealing with many array elements or many objects of the same struct or class.
Different int sizes are stored with different amounts of bits in both the RAM and the internal processor cache. This is also true of float, double, and long double, although long double is mainly for 64-bit systems and most compilers just ignore the long if running on 32-bit machines because a 64-bit value in a 32-bit accumulator & ALU will be 'mowed down' during any calculation and would likely never receive anything but zeros for the first 32 bits.
What (if any) performance gain/hindrance does using a smaller ranged integer bring? Like, if instead of using a standard 32-bit integer for storage, I use a 16-bit short integer.
It uses less memory. Under normal circumstances, it will use half as much.
"and then go back to 16-bit" -- Am I correct here? See above.
It only converts between 16 and 32 bits if that is needed by your code, which you failed to show.
Are all integer data stored as 32-bit integer space on CPU/RAM?
No. 32-bit processors can address and work directly with values up to 32 bits. Many operations can be done on 8 and 16-bit values as well.
No, it is not reasonable unless you have some sort of (very tight) memory constraint; otherwise you should use int.
You don't gain performance, just memory. In fact you can lose performance, for the reason you just described: registers may need to strip out the upper bits.
See above.
In the CPU it depends on the architecture; in RAM a short is stored as 16 bits.
What (if any) performance gain/hindrance does using a smaller ranged integer bring? Like, if instead of using a standard 32-bit integer for storage, I use a 16-bit short integer.
Performance comes from cache locality. The more data you fit in cache, the faster your program runs. This is more relevant if you have lots of short values.
"and then go back to 16-bit" -- Am I correct here?
I'm not so sure about this. I would have expected that the CPU can optimize multiple operations in parallel, and you get bigger throughput if you can pack data into 16 bits. It may also be that this can happen at the same time as other 32-bit operations. I am speculating here, so I'll stop!
Are all integer data stored as 32-bit integer space on CPU/RAM?
No. The various integer datatypes have a specific size. However, you may encounter padding inside structs when you use char and short in particular.
Speed efficiency is not the only concern. Obviously you have storage benefits, as well as intrinsic behaviour (for example, I have written performance-specific code that exploits the integer overflow of an unsigned short just so that I don't have to do any modulo; see the sketch below). You also have the benefit of using specific data sizes for reading and writing binary data. There's probably more that I haven't mentioned, but you get the point =)
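A minimal sketch of that overflow trick (the counter type here is invented for illustration):

#include <cstdint>

// A 16-bit sequence counter that wraps from 65535 back to 0 by itself, so no
// explicit '% 65536' is needed. Wrap-around of unsigned types is well defined.
struct SequenceCounter {
    std::uint16_t next = 0;
    std::uint16_t take() { return next++; }
};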

What platforms have something other than 8-bit char?

Every now and then, someone on SO points out that char (aka 'byte') isn't necessarily 8 bits.
It seems that 8-bit char is almost universal. I would have thought that for mainstream platforms, it is necessary to have an 8-bit char to ensure its viability in the marketplace.
Both now and historically, what platforms use a char that is not 8 bits, and why would they differ from the "normal" 8 bits?
When writing code, and thinking about cross-platform support (e.g. for general-use libraries), what sort of consideration is it worth giving to platforms with non-8-bit char?
In the past I've come across some Analog Devices DSPs for which char is 16 bits. DSPs are a bit of a niche architecture I suppose. (Then again, at the time hand-coded assembler easily beat what the available C compilers could do, so I didn't really get much experience with C on that platform.)
char is also 16 bit on the Texas Instruments C54x DSPs, which turned up for example in OMAP2. There are other DSPs out there with 16 and 32 bit char. I think I even heard about a 24-bit DSP, but I can't remember what, so maybe I imagined it.
Another consideration is that POSIX mandates CHAR_BIT == 8. So if you're using POSIX you can assume it. If someone later needs to port your code to a near-implementation of POSIX, that just so happens to have the functions you use but a different size char, that's their bad luck.
In general, though, I think it's almost always easier to work around the issue than to think about it. Just write CHAR_BIT instead of 8. If you want an exact 8-bit type, use int8_t. Your code will noisily fail to compile on implementations which don't provide one, instead of silently using a size you didn't expect. At the very least, if I hit a case where I had a good reason to assume 8-bit chars, I'd assert it.
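A sketch of that "fail noisily" approach:

#include <climits>
#include <cstdint>

// If the code genuinely depends on 8-bit bytes, say so once, at compile time.
static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");

// Using std::int8_t at all has a similar effect, since an implementation
// without a true 8-bit type simply does not define it.
std::int8_t one_octet = 0;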
When writing code, and thinking about cross-platform support (e.g. for general-use libraries), what sort of consideration is it worth giving to platforms with non-8-bit char?
It's not so much that it's "worth giving consideration" to something as it is playing by the rules. In C++, for example, the standard says all bytes will have "at least" 8 bits. If your code assumes that bytes have exactly 8 bits, you're violating the standard.
This may seem silly now -- "of course all bytes have 8 bits!", I hear you saying. But lots of very smart people have relied on assumptions that were not guarantees, and then everything broke. History is replete with such examples.
For instance, most early-90s developers assumed that a particular no-op CPU timing delay taking a fixed number of cycles would take a fixed amount of clock time, because most consumer CPUs were roughly equivalent in power. Unfortunately, computers got faster very quickly. This spawned the rise of boxes with "Turbo" buttons -- whose purpose, ironically, was to slow the computer down so that games using the time-delay technique could be played at a reasonable speed.
One commenter asked where in the standard it says that char must have at least 8 bits. It's in section 5.2.4.2.1. This section defines CHAR_BIT, the number of bits in the smallest addressable entity, and gives it a minimum value of 8. It also says:
Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.
So any number equal to 8 or higher is suitable for substitution by an implementation into CHAR_BIT.
Machines with 36-bit architectures have 9-bit bytes. According to Wikipedia, machines with 36-bit architectures include:
Digital Equipment Corporation PDP-6/10
IBM 701/704/709/7090/7094
UNIVAC 1103/1103A/1105/1100/2200,
A few of which I'm aware:
DEC PDP-10: variable, but most often 7-bit chars packed 5 per 36-bit word, or else 9 bit chars, 4 per word
Control Data mainframes (CDC-6400, 6500, 6600, 7600, Cyber 170, Cyber 176 etc.) 6-bit chars, packed 10 per 60-bit word.
Unisys mainframes: 9 bits/byte
Windows CE: simply doesn't support the `char` type at all -- requires 16-bit wchar_t instead
There is no such thing as a completely portable code. :-)
Yes, there may be various byte/char sizes. Yes, there may be C/C++ implementations for platforms with highly unusual values of CHAR_BIT and UCHAR_MAX. Yes, sometimes it is possible to write code that does not depend on char size.
However, almost no real code is completely standalone. For example, you may be writing code that sends binary messages over a network (the exact protocol is not important). You may define structures containing the necessary fields, and then you have to serialize them. Just binary-copying a structure into an output buffer is not portable: in general you know neither the byte order of the platform nor the alignment of structure members, so the structure holds the data but does not describe how the data should be serialized.
OK. You may perform byte-order transformations and move the structure members (e.g. a uint32_t or similar) into the buffer using memcpy. Why memcpy? Because on many platforms it is not possible to write a 32-bit (or 16-bit, or 64-bit -- no difference) value when the target address is not aligned properly.
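A sketch of that memcpy-based approach, assuming big-endian ("network order") wire order purely as an example; the helper name is hypothetical:

#include <cstdint>
#include <cstring>

// Fix the byte order explicitly and use std::memcpy so unaligned destinations
// are safe.
inline void put_u32_be(std::uint32_t value, unsigned char* out)
{
    const unsigned char bytes[4] = {
        static_cast<unsigned char>(value >> 24),
        static_cast<unsigned char>(value >> 16),
        static_cast<unsigned char>(value >> 8),
        static_cast<unsigned char>(value)
    };
    std::memcpy(out, bytes, sizeof bytes);   // no alignment requirement on out
}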
So, you have already done a lot to achieve portability.
And now the final question. We have a buffer, and the data in it is sent to a TCP/IP network. Such a network assumes 8-bit bytes. The question is: what type should the buffer be? What if your chars are 9-bit? Or 16-bit? Or 24-bit? Maybe each char corresponds to one 8-bit byte sent to the network, and only 8 of its bits are used? Or maybe multiple network bytes are packed into 24/16/9-bit chars? That's a real question, and it is hard to believe there is a single answer that fits all cases. A lot depends on the socket implementation for the target platform.
So, what am I getting at? Code can usually be made portable to a certain extent relatively easily, and it's very important to do so if you expect to use it on different platforms. However, improving portability beyond that point takes a lot of effort and often gives little, because real code almost always depends on other code (the socket implementation in the example above). I am sure that for about 90% of code, the ability to work on platforms with bytes other than 8 bits is almost useless, because it uses an environment that is bound to 8-bit bytes. Just check the byte size with a compile-time assertion. You will almost certainly have to rewrite a lot for a highly unusual platform anyway.
But if your code is highly "standalone" -- why not? You may write it in a way that allows different byte sizes.
It appears that you can still buy an IM6100 (i.e. a PDP-8 on a chip) out of a warehouse. That's a 12-bit architecture.
Many DSP chips have 16- or 32-bit char. TI routinely makes such chips for example.
The C and C++ programming languages, for example, define byte as "addressable unit of data large enough to hold any member of the basic character set of the execution environment" (clause 3.6 of the C standard). Since the C char integral data type must contain at least 8 bits (clause 5.2.4.2.1), a byte in C is at least capable of holding 256 different values. Various implementations of C and C++ define a byte as 8, 9, 16, 32, or 36 bits
Quoted from http://en.wikipedia.org/wiki/Byte#History
Not sure about other languages though.
http://en.wikipedia.org/wiki/IBM_7030_Stretch#Data_Formats
Defines a byte on that machine to be variable length
The DEC PDP-8 family had a 12 bit word although you usually used 8 bit ASCII for output (on a Teletype mostly). However, there was also a 6-BIT character code that allowed you to encode 2 chars in a single 12-bit word.
For one thing, Unicode characters need more than 8 bits. As someone mentioned earlier, the C spec defines data types by their minimum sizes. Use sizeof and the values in limits.h if you want to interrogate your data types and discover exactly what size they are for your configuration and architecture.
For this reason, I try to stick to data types like uint16_t when I need a data type of a particular bit length.
Edit: Sorry, I initially misread your question.
The C spec says that a char object is "large enough to store any member of the execution character set". limits.h lists a minimum size of 8 bits, but the definition leaves the max size of a char open.
Thus, a char is at least as long as the largest character in your architecture's execution set (typically rounded up to the nearest 8-bit boundary). If your architecture has longer opcodes, your char size may be longer.
Historically, the x86 platform's opcode was one byte long, so char was initially an 8-bit value. Current x86 platforms support opcodes longer than one byte, but char is kept at 8 bits in length since that's what programmers (and the large volumes of existing x86 code) expect.
When thinking about multi-platform support, take advantage of the types defined in stdint.h. If you use (for instance) a uint16_t, then you can be sure that this value is an unsigned 16-bit value on whatever architecture, whether that 16-bit value corresponds to a char, short, int, or something else. Most of the hard work has already been done by the people who wrote your compiler/standard libraries.
If you need to know the exact size of a char because you are doing some low-level hardware manipulation that requires it, I typically use a data type that is large enough to hold a char on all supported platforms (usually 16 bits is enough) and run the value through a convert_to_machine_char routine when I need the exact machine representation. That way, the platform-specific code is confined to the interface function and most of the time I can use a normal uint16_t.
what sort of consideration is it worth giving to platforms with non-8-bit char?
Magic numbers occur, e.g. when shifting; most of these can be handled quite simply by using CHAR_BIT and e.g. UCHAR_MAX instead of 8 and 255 (or similar). Hopefully your implementation defines those :)
Those are the "common" issues.
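A sketch of that substitution (the helper below is hypothetical):

#include <climits>
#include <cstddef>

// Split a value into chars using CHAR_BIT and UCHAR_MAX rather than
// hard-coded 8 and 255, so the loop still means what it says on a platform
// with wider chars.
inline void to_chars_le(unsigned long value, unsigned char* out, std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i) {
        out[i] = static_cast<unsigned char>(value & UCHAR_MAX);
        value >>= CHAR_BIT;
    }
}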
Another, more indirect issue: say you have:
typedef unsigned char uchar;   // "uchar" here is assumed shorthand for unsigned char

struct xyz {
    uchar baz;
    uchar blah;
    uchar buzz;
};
this might "only" take (best case) 24 bits on one platform, but might take e.g. 72 bits elsewhere. If each uchar held "bit flags", each uchar only had 2 "significant" bits or flags that you were currently using, and you only organized them into 3 uchars for "clarity", then it might be relatively "more wasteful" on a platform with 24-bit uchars. Nothing bitfields can't solve, but they have other things to watch out for.
In this case, a single enum might be a way to get the "smallest" sized integer you actually need. Perhaps not a real example, but stuff like this "bit" me when porting / playing with some code.
Just the fact that a uchar is thrice as big as what is "normally" expected means 100 such structures might waste a lot of memory on some platforms, where "normally" it is not a big deal. So things can still be "broken", or in this case "waste a lot of memory very quickly", due to an assumption that a uchar is "not very wasteful" on one platform, relative to the RAM available, compared with another platform.
The problem might be more prominent for ints as well, or other types: e.g. you have some structure that needs 15 bits, so you stick it in an int, but on some other platform an int is 48 bits or whatever. "Normally" you might break it into 2 uchars, but with a 24-bit uchar you'd only need one, so an enum might be a better "generic" solution. It depends on how you are accessing those bits, though :)
So there might be "design flaws" that rear their head, even if the code still works/runs fine regardless of the size of a uchar or uint. There are things like this to watch out for, even though there are no "magic numbers" in your code.
Hope this makes sense :)
The weirdest one I saw was the CDC computers. 6 bit characters but with 65 encodings. [There were also more than one character set -- you choose the encoding when you install the OS.]
If a 60-bit word ended with 12, 18, 24, 30, 36, 40, or 48 bits of zero, that was the end-of-line character (e.g. '\n').
Since the 00 (octal) character was : in some code sets, that meant BNF that used ::= was awkward if the :: fell in the wrong column. [This long preceded C++ and other common uses of ::.]
ints used to be 16 bits (PDP-11, etc.). Going to 32-bit architectures was hard. People are getting better: hardly anyone assumes a pointer will fit in a long any more (you don't, right?). Or file offsets, or timestamps, or ...
8 bit characters are already somewhat of an anachronism. We already need 32 bits to hold all the world's character sets.
The Univac 1100 series had two operational modes: 6-bit FIELDATA and 9-bit 'ASCII' packed 6 or 4 characters respectively into 36-bit words. You chose the mode at program execution time (or compile time.) It's been a lot of years since I actually worked on them.