I've seen several libraries and some C++ header files that provide compiler-independent types, but I don't quite understand why they are compiler independent.
For example:
int Number; // Not compiler independent
typedef unsigned int U32;
U32 Number2; // Now this is compiler independent
Is the above true? If so, why? I don't quite understand why the use of a typedef would mean that the size of Number2 is the same across compilers.
Elaborating on the comment,
Proposition : Use a typedef for compiler independence.
Rationale : Platform independence is a Good Thing
Implementation:
#ifdef _MSC_VER
#if _MSC_VER < 1400
typedef int bar;
#elif _MSC_VER < 1600
typedef char bar;
#else
typedef bool bar;
#endif
#else
#error "Unknown compiler"
#endif
The preprocessor macro chain is the important part, not the typedef.
Disclaimer: I haven't compiled it!
I'm assuming that you meant for the types to be the same, i.e. unsigned int Number.
But no, these are exactly the same. Both declarations, Number and Number2, have the same type. Neither is more compiler independent than the other.
However, the point of using a typedef like this is so that the developers of the library can easily change the integer type used by all functions that use U32. If, for example, they are on a system where an unsigned int is not 32 bits but an unsigned long is, they could change the typedef to:
typedef unsigned long U32;
In fact, it's possible to use the build system to conditionally change the typedef depending on the target platform.
However, if you want a nice standardised way to ensure that the type is a 32-bit unsigned integer type, I recommend using std::uint32_t from the <cstdint> header. Note that this type is not guaranteed to exist if you're on a machine with no 32-bit integer type. Instead, you can use std::uint_least32_t, which will give you the smallest integer type with at least 32 bits.
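For example, a quick sketch of the difference (the variable names are just illustrative):

#include <cstdint>
#include <iostream>

int main() {
    std::uint32_t exact = 0;        // exactly 32 bits, no padding; only defined if such a type exists
    std::uint_least32_t least = 0;  // smallest unsigned type with at least 32 bits; always defined

    std::cout << sizeof exact << " " << sizeof least << "\n";
}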
As stated in the comments, the shown typedef is not compiler independent.
If you want a compiler-independent way to get fixed sizes, you might want to use <cstdint>.
This header comes with your compiler and guarantees you a minimum size, but no maximum for the bigger types (64-bit, 128-bit).
If you want to be completely sure about all the sizes of your types, you need to check it.
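For example, a couple of compile-time checks (just a sketch; the particular assumptions are illustrative) will fail the build if the sizes aren't what you expect:

#include <climits>
#include <cstdint>

// The build fails if these assumptions do not hold on the target platform.
static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");
static_assert(sizeof(std::uint_least32_t) == 4, "this code assumes uint_least32_t is exactly 4 bytes");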
In most implementations, I've seen uint32_t defined as
typedef unsigned int uint32_t;
But as I understand it, ints are not always guaranteed to be 4 bytes across all systems. So if the system has non-4-byte integers, how does uint32_t guarantee 4 bytes?
An implementation is required to define uint32_t correctly (or not at all if that's not possible).
If unsigned int meets the requirements (32 bits wide, no padding bits), the implementation can define it as
typedef unsigned int uint32_t;
If it doesn't, but unsigned long does meet the requirements, it can define it as:
typedef unsigned long uint32_t;
Or it can use a different unsigned type if that's appropriate.
The <stdint.h> header has to be compatible with the compiler that it's used with. If you took a <stdint.h> header that unconditionally defined uint32_t as unsigned int, and used it with a compiler that makes unsigned int 16 bits, the result would be a non-conforming implementation.
Compatibility can be maintained either by tailoring the header to the compiler, or by writing the header so that it adapts to the characteristics of the compiler.
As a programmer, you don't have to worry about how correctness is maintained (though of course there's nothing wrong with being curious). You can rely on it being done correctly -- or you can complain to the provider about a serious bug.
Each C or C++ implementation that defines uint32_t defines it in a way that works for that implementation. It may use typedef unsigned int uint32_t; only if unsigned int is satisfactory for uint32_t in that implementation.
The fact that typedef unsigned int uint32_t; appears in <stdint.h> in one C or C++ implementation does not mean it will appear in <stdint.h> for any other C or C++ implementation. An implementation in which unsigned int were not suitable for uint32_t would have to provide a different <stdint.h>. <stdint.h> is part of the implementation and changes when the implementation changes.
Does uint32_t guarantee 32 bits?
Yes.
If CHAR_BIT == 16, uint32_t would be 2 "bytes". A "byte" is not always 8 bits in C.
The size of int is not a major issue concerning the implementation of uintN_t.
uintN_t (N = 8, 16, 32, 64) are optional, non-padded types that exist independently of one another, when and if the system can support them. It is extremely common to find them implemented, especially the larger ones.
intN_t is similarly optional and must be 2's complement.
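Because the exact-width types are optional, their limit macros are only defined when the corresponding type exists, so you can test for them in the preprocessor. A short sketch (the u32 name is just illustrative):

#include <cstdint>

#ifdef UINT32_MAX
typedef std::uint32_t u32;        // the exact-width type exists on this implementation
#else
typedef std::uint_least32_t u32;  // fall back to the always-present least-width type
#endif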
Since ints and longs and other integer types may be different sizes on different systems, why not have stouint8_t(), stoint64_t(), etc. so that portable string to int code could be written?
Because typing that would make me want to chop off my fingers.
Seriously, the basic integer types are int and long, and the std::stoX functions are just very simple wrappers around strtol etc. Note that C doesn't provide strtoi32 or strtoi64, or anything else that a std::stouint32_t could wrap.
If you want something more complicated you can write it yourself.
I could just as well ask "why do people use int and long, instead of int32_t and int64_t everywhere, so the code is portable?" and the answer would be because it's not always necessary.
But the actual reason is probably that no one ever proposed it for the standard. Things don't just magically appear in the standard; someone has to write a proposal, justify adding them, and convince the rest of the committee to add them. So the answer to most "why isn't this thing I just thought of in the standard?" questions is that no one proposed it.
Because it's usually not necessary.
stoll and stoull return results of type long long and unsigned long long respectively. If you want to convert a string to int64_t, you can just call stoll() and store the result in your int64_t object; the value will be implicitly converted.
This assumes that long long is the widest signed integer type. Like C (starting with C99), C++ permits extended integer types, some of which might be wider than [unsigned] long long. C provides conversion functions strtoimax and strtoumax (operating on intmax_t and uintmax_t, respectively) in <inttypes.h>. For whatever reason, C++ doesn't provide wrappers for these functions (the logical names would be stoimax and stoumax).
But that's not going to matter unless you're using a C++ compiler that provides an extended integer type wider than [unsigned] long long, and I'm not aware that any such compilers actually exist. For any types no wider than 64 bits, the existing functions are all you need.
For example:
#include <iostream>
#include <string>
#include <cstdint>

int main() {
    const char *s = "0xdeadbeeffeedface";
    uint64_t u = std::stoull(s, nullptr, 0); // base 0: the "0x" prefix selects hexadecimal
    std::cout << u << "\n";
}
What is the best practice for exporting a packed structure containing booleans?
I ask this because I'm trying to find the best way to do that. Currently I do:
#ifndef __cplusplus
#if __STDC_VERSION__ >= 199901L
#include <stdbool.h> //size is 1.
#else
typedef enum {false, true} bool; //sizeof(int)
#endif
#endif
Now, in the above, the size of a boolean can be 1 or sizeof(int).
So in a structure like:
#pragma pack(push, 1)
typedef struct
{
    long unsigned int sock;
    const char* address;
    bool connected;
    bool blockmode;
} Sock;
#pragma pack(pop)
the alignment is different when using pre-C99 C compared to C99 and C++. If I export it as an integer, then languages where a boolean is size 1 have alignment problems and need to pad the structure.
I was wondering if it would be best to typedef a bool as a char in the case of pre-C99, but it just doesn't feel right.
Any better ideas?
It depends on what you're looking for: preserve space but run a few extra instructions, or waste a few bytes but run faster.
If you're looking to be fast, but can "waste" a few bytes of space (i.e. a single value for each boolean flag, see the sizeof bool discussion), your current approach is superior. That is because it can load and compare the boolean values directly without having to mask them out of a packed field (see next).
If you're looking to preserve space then you should look into C bitfields:
struct Sock {
    ...
    int connected:1; // For 2 flags, you could also use char here.
    int blockmode:1;
};
or roll your own "flags" and set bits in integer values:
#define SOCKFLAGS_NONE      0
#define SOCKFLAGS_CONNECTED (1<<0)
#define SOCKFLAGS_BLOCKMODE (1<<1)

struct Sock {
    ...
    int flags; // For 2 flags, you could also use char here.
};
Both examples lead to more or less the same code, which masks bits and shifts values around (extra instructions) but is packed more densely than plain bool values.
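For illustration, the kind of masking code this implies looks roughly like this (a minimal sketch that repeats the flag definitions so it stands alone):

#define SOCKFLAGS_CONNECTED (1<<0)
#define SOCKFLAGS_BLOCKMODE (1<<1)

struct Sock {
    int flags;
};

int main(void) {
    struct Sock s = { 0 };

    s.flags |= SOCKFLAGS_CONNECTED;   /* set a flag   */
    s.flags &= ~SOCKFLAGS_BLOCKMODE;  /* clear a flag */

    if (s.flags & SOCKFLAGS_CONNECTED) {
        /* handle the connected case */
    }
    return 0;
}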
IMHO, using #pragma pack is more pain (in the long term) than gain (in the short term).
It is compiler-specific, non-standard, and non-portable.
I understand the embedded-systems and protocol scenarios. With a little extra effort, the code can be written pragma-free.
I too want to pack my structure as much as possible and lay out the members widest-first, as you did. However, I do not mind losing 2 bytes if that allows my code to be standard-compliant and portable.
I would do the following three things:
Declare the flags as bool (you already did) and assign true/false
Put them as last members of the struct (you already did)
Use bitfields (as suggested by fellow stackers)
Combining these:
typedef struct Sock
{
    long unsigned int sock;
    const char* address;
    bool connected : 1;
    bool blockmode : 1;
} Sock;
In the pre-C99 case, it is risky to typedef char bool;. That will silently break code like:
bool x = (foo & 0x100);
which is supposed to set x to be true if that bit is set in foo. The enum has the same problem.
In my code I actually do typedef unsigned char bool; but then I am careful to write !! everywhere that an expression is converted to this bool. It's not ideal.
In my experience, using flags in an integral type leads to fewer issues than using bool in your structure, or bitfields, for C90.
Is there a way to do this?
#if sizeof(int) == 4
typedef unsigned int Integer32;
#else
typedef unsigned long Integer32;
#endif
or do you have to just #define the integer size and compile in different headers?
If you need exact sizes you can use the intXX_t and uintXX_t variants, where XX is 8, 16, 32, or 64.
If you need types that are at least some size, use int_leastXX_t and uint_leastXX_t;
if you need fast, use int_fastXX_t and uint_fastXX_t.
You get these from <stdint.h>, which came in with C99. If you don't have C99 it's a little harder. You can't use sizeof(int) because the preprocessor doesn't know about types. So use INT_MAX, UINT_MAX, etc. (from <limits.h>) to figure out whether a particular type is large enough for what you need.
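For the pre-C99 case, a sketch of that limits-based approach, reusing the Integer32 name from the question:

#include <limits.h>

#if UINT_MAX >= 0xFFFFFFFF
typedef unsigned int Integer32;   /* unsigned int has at least 32 value bits here */
#else
typedef unsigned long Integer32;  /* unsigned long is guaranteed to be at least 32 bits */
#endif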
I can't find the answer anywhere. The question is (I think) simple. Let's assume I have a class like this:
class Sth {
private:
    long u;
public:
    void set(long u)
    {
        this->u = u;
    }
};
and I run it like this:
Sth s;
CORBA::Long val = 5;
s.set(val);
Can I do this? Will everything be ok?
This should be fine. According to the IBM reference, an IDL long is in the range -2^31 to 2^31-1 and is at least 32 bits in size.
It should convert natively to long in C++. The standard doesn't define a size, only the minimum ranges these types can hold. CORBA::Long is a typedef of long, which may change between platforms.
You could use an int_least32_t (from <stdint.h>, or <cstdint>), which is a native type guaranteed to be at least 32 bits wide and typedef'd to the appropriate compiler native type.
It depends on the implementation of the IDL-to-C++ mapping you are using. In the new IDL-to-C++11 mapping, an IDL long maps to an int32_t.
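If you want the member's width to be independent of what long happens to be on the platform, a minimal sketch (assuming you control the class; the plain long below merely stands in for CORBA::Long, which your ORB's headers would provide):

#include <cstdint>

class Sth {
private:
    std::int32_t u;  // fixed 32-bit storage, matching the IDL-to-C++11 mapping of IDL long
public:
    void set(std::int32_t value) { u = value; }
    std::int32_t get() const { return u; }
};

int main() {
    Sth s;
    long val = 5;  // stands in for CORBA::Long here
    s.set(static_cast<std::int32_t>(val));  // explicit narrowing makes the intent visible
    return s.get() == 5 ? 0 : 1;
}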