I was wondering what is the difference between uint32_t and uint32, and when I looked in the header files it had this:
types.h:
/** @brief 32-bit unsigned integer. */
typedef unsigned int uint32;
stdint.h:
typedef unsigned uint32_t;
This only leads to more questions:
What is the difference between
unsigned varName;
and
unsigned int varName;
?
I am using MinGW.
unsigned and unsigned int are synonymous, much like unsigned short [int] and unsigned long [int].
uint32_t is a type that's (optionally) defined by the C standard. uint32 is just a name you made up, although it happens to be defined as the same thing.
There is no difference.
unsigned int = uint32 = uint32_t = unsigned in your case, and unsigned int = unsigned always.
unsigned and unsigned int are synonymous for historical reasons; they both mean "unsigned integer of the most natural size for the CPU architecture/platform", which is often (but by no means always) 32 bits on modern platforms.
<stdint.h> is a standard header in C99 that is supposed to give type definitions for integers of particular sizes, with the uint32_t naming convention.
The <types.h> that you're looking at appears to be non-standard and presumably belongs to some framework your project is using. Its uint32 typedef is compatible with uint32_t. Whether you should use one or the other in your code is a question for your manager.
There is absolutely no difference between unsigned and unsigned int.
Whether that type is a good match for uint32_t is implementation-dependent, though; an int could be "shorter" than 32 bits.
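If you want to be sure those assumptions hold on your particular toolchain (MinGW here), a compile-time check is a cheap safeguard. A minimal sketch, assuming C++11 or later (in C you could use _Static_assert instead); note that the second check can legitimately fail on platforms where uint32_t is defined as unsigned long:
#include <climits>
#include <cstdint>
#include <type_traits>

// These fail to compile on a platform where the assumption breaks.
static_assert(sizeof(unsigned) * CHAR_BIT == 32,
              "unsigned int is not 32 bits on this platform");
static_assert(std::is_same<unsigned, std::uint32_t>::value,
              "uint32_t is not unsigned int on this platform");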
In most implementations, I've seen uint32_t defined as
typedef unsigned int uint32_t;
But as I understand it, ints are not guaranteed to be 4 bytes across all systems. So if the system has non-4-byte ints, how does uint32_t guarantee 32 bits?
An implementation is required to define uint32_t correctly (or not at all if that's not possible).
If unsigned int meets the requirements (32 bits wide, no padding bits), the implementation can define it as
typedef unsigned int uint32_t;
If it doesn't, but unsigned long does meet the requirements, it can define it as:
typedef unsigned long uint32_t;
Or it can use a different unsigned type if that's appropriate.
The <stdint.h> header has to be compatible with the compiler that it's used with. If you took a <stdint.h> header that unconditionally defined uint32_t as unsigned int, and used it with a compiler that makes unsigned int 16 bits, the result would be a non-conforming implementation.
Compatibility can be maintained either by tailoring the header to the compiler, or by writing the header so that it adapts to the characteristics of the compiler.
As a programmer, you don't have to worry about how correctness is maintained (though of course there's nothing wrong with being curious). You can rely on it being done correctly -- or you can complain to the provider about a serious bug.
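To illustrate how a header can adapt to the compiler, here is a hedged sketch, not how any particular <stdint.h> actually does it, that picks a 32-bit unsigned type from the limits macros (the name uint32_t_sketch is made up for this illustration; checking the max value establishes the width but not the absence of padding bits):
#include <climits>

#if UINT_MAX == 0xFFFFFFFFu
typedef unsigned int uint32_t_sketch;   // unsigned int is 32 bits here
#elif ULONG_MAX == 0xFFFFFFFFul
typedef unsigned long uint32_t_sketch;  // fall back to unsigned long
#else
// no suitable type: a conforming header would simply not define uint32_t
#endif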
Each C or C++ implementation that defines uint32_t defines it in a way that works for that implementation. It may use typedef unsigned int uint32_t; only if unsigned int is satisfactory for uint32_t in that implementation.
The fact that typedef unsigned int uint32_t; appears in <stdint.h> in one C or C++ implementation does not mean it will appear in <stdint.h> for any other C or C++ implementation. An implementation in which unsigned int were not suitable for uint32_t would have to provide a different <stdint.h>. <stdint.h> is part of the implementation and changes when the implementation changes.
Does uint32_t guarantee 32 bits?
Yes.
If CHAR_BIT == 16, uint32_t would be 2 "bytes". A "byte" is not always 8 bits in C.
The size of int is not a major issue concerning the implementation of uintN_t.
uintN_t (N = 8,16,32,64) are optional non-padded types that independently exist when and if the system can support them. It is extremely common to find them implemented, especially the larger ones.
intN_t is similarly optional and must be 2's complement.
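Because the exact-width types are optional, portable code can detect them through the corresponding limit macros, which are defined exactly when the type is. A small sketch (packet_word is a hypothetical name for illustration):
#include <cstdint>

#ifdef UINT32_MAX
typedef std::uint32_t packet_word;        // the exact-width type exists here
#else
typedef std::uint_least32_t packet_word;  // guaranteed fallback: at least 32 bits
#endif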
Why is this piece of code needed?
typedef struct corr_id_{
    unsigned int size:8;
    unsigned int valueType:8;
    unsigned int classId:8;
    unsigned int reserved:8;
} CorrId;
I did some investigation around it and found that this way we are limiting the memory consumption to just what we need.
For example:
#include <iostream>

typedef struct corr_id_new{
    unsigned int size;
    unsigned int valueType;
    unsigned int classId;
    unsigned int reserved;
} CorrId_NEW;

typedef struct corr_id_{
    unsigned int size:8;
    unsigned int valueType:8;
    unsigned int classId:8;
    unsigned int reserved:8;
} CorrId;

int main(){
    CorrId_NEW Obj1;
    CorrId Obj2;
    std::cout << sizeof(Obj1) << std::endl;
    std::cout << sizeof(Obj2) << std::endl;
}
Output:
16
4
I want to understand the real use case of such scenarios. Why can't we declare the struct something like this:
typedef struct corr_id_new{
    unsigned _int8 size;
    unsigned _int8 valueType;
    unsigned _int8 classId;
    unsigned _int8 reserved;
} CorrId_NEW;
Does this have something to do with compiler optimizations? Or, what are the benefits of declaring the structure that way?
I want to understand the real use case of such scenarios.
For example, the status register of some CPU might pack several one-bit flags and a few wider fields into a single 32-bit word.
In order to represent it via a structure, you could use a bit-field:
struct CSR
{
    unsigned N: 1;
    unsigned Z: 1;
    unsigned C: 1;
    unsigned V: 1;
    unsigned : 20;
    unsigned I: 1;
    unsigned : 2;
    unsigned M: 5;
};
You can see here that the fields are not multiples of 8 bits, so you can't use int8_t or anything similar.
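To show why this layout is convenient, here is a hedged usage sketch. It relies on the CSR struct above; how the raw register value actually reaches your program is hardware-specific and glossed over here, and the mapping of bits to fields is implementation-defined, so this only works when the compiler's bit-field layout matches the hardware:
#include <cstdint>
#include <cstring>

static_assert(sizeof(CSR) == sizeof(std::uint32_t), "CSR must occupy one 32-bit word");

void handle_status(std::uint32_t raw)      // raw 32-bit register value
{
    CSR csr;
    std::memcpy(&csr, &raw, sizeof csr);   // view the word through the bit-field

    if (csr.N) { /* negative flag set */ }
    if (csr.Z && csr.C) { /* zero and carry together */ }

    csr.I = 1;                             // e.g. mask interrupts
    std::memcpy(&raw, &csr, sizeof csr);   // pack back into a plain word
}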
Let's look at a simple scenario:
struct student{
    unsigned int age:8;       // 8 bits is enough to store a student's age (up to 255 years)
    unsigned int roll_no:16;  // max roll_no can be 2^16, which is large enough
    unsigned int classId:4;   // class ID can be 4 bits long (0-15), as per need
    unsigned int reserved:4;  // reserved
};
In the above case, all of the fields fit into a single 32-bit word.
If you used plain ints instead, it would take 4 × 32 bits.
If we stored age as a 32-bit integer, it could hold values from 0 to 2^32 - 1. But a person's age is at most around 150 (even for somebody still studying at that age), which needs at most 8 bits to store, so why waste the remaining 24 bits?
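A quick check of the claim; this relies on the struct student definition above, and the exact result is implementation-defined, but 4 is what you typically get:
#include <iostream>

int main() {
    std::cout << sizeof(student) << '\n';  // typically prints 4: all four fields share one 32-bit word
}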
You are right, the last structure definition with unsigned _int8 is almost equivalent to the definition using :8. Almost, because byte order can make a difference here, so you might find that the memory layout is reversed in the two cases.
The main purpose of the :8 notation is to allow the use of fractional bytes, as in
struct foo {
    uint32_t a:1;
    uint32_t b:2;
    uint32_t c:3;
    uint32_t d:4;
    uint32_t e:5;
    uint32_t f:6;
    uint32_t g:7;
    uint32_t h:4;
};
To minimize padding, I strongly suggest learning the padding rules yourself; they are not hard to grasp. Once you know them, you can tell that your version with unsigned _int8 does not add any padding. Or, if you don't feel like learning those rules, just use __attribute__((__packed__)) on your struct, but that may introduce a severe performance penalty.
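As a hedged illustration of that last point (the packed attribute is a GCC/Clang extension; MSVC uses #pragma pack instead, and the name corr_id_bytes is made up here):
#include <cstdint>

struct __attribute__((__packed__)) corr_id_bytes {
    std::uint8_t size;
    std::uint8_t valueType;
    std::uint8_t classId;
    std::uint8_t reserved;
};
// sizeof(corr_id_bytes) is 4 with or without the attribute in this particular case,
// since uint8_t members need no padding anyway -- which is the point made above.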
It's often used with pragma pack to create bitfields with labels, e.g.:
#pragma pack(1)
struct eg {
    unsigned int one : 4;
    unsigned int two : 8;
    unsigned int three : 16;
};
Can be cast for whatever purpose to an int32_t, and vice versa. This might be useful when reading serialized data that follows a (language agnostic) protocol -- you extract an int and cast it to a struct eg to match the fields and field sizes defined in the protocol. You could also skip the conversion and just read an int sized chunk into such a struct, point being that the bitfield sizes match the protocol field sizes. This is extremely common in network programming -- if you want to send a packet following the protocol, you just populate your struct, serialize, and transmit.
Note that pragma pack is not standard C, but it is recognized by various common compilers. Without pragma pack, however, the compiler is free to place padding between fields, reducing its usefulness for the purposes described above.
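A hedged sketch of the round trip described above. It assumes struct eg from the previous snippet really does occupy 32 bits on the target (the static_assert guards that), uses memcpy instead of a pointer cast to stay out of strict-aliasing trouble, and ignores byte order on the wire:
#include <cstdint>
#include <cstring>

static_assert(sizeof(eg) == sizeof(std::uint32_t), "eg must pack into one 32-bit word");

std::uint32_t serialize(const eg &fields)
{
    std::uint32_t wire = 0;
    std::memcpy(&wire, &fields, sizeof wire);   // copy the packed bits into an integer
    return wire;                                // ready to transmit
}

eg deserialize(std::uint32_t wire)
{
    eg fields;
    std::memcpy(&fields, &wire, sizeof fields); // reinterpret the received word as the bit-field
    return fields;
}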
In §18.4 of the C++ standard it specifies:
typedef 'signed integer type' intmax_t;
By the standard(s) on a platform with a 64-bit long int and a 64-bit long long int which should this "signed integer type" be?
Note that long int and long long int are distinct fundamental types.
The C++ standard says:
The header defines all functions, types, and macros the same as 7.18 in the C standard.
and in 7.18 of the C standard (N1548) it says:
The following type designates a signed integer type capable of representing any value of
any signed integer type:
intmax_t
It would seem that in this case both long int and long long int qualify?
Is that the correct conclusion? That either would be a standard-compliant choice?
Yes, your reasoning is correct. Most real-world implementations choose the lowest-rank type satisfying the conditions.
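You can check what a given implementation picked without reading its headers; a small sketch (C++11, purely illustrative):
#include <climits>
#include <cstdint>
#include <iostream>
#include <type_traits>

int main() {
    if (std::is_same<std::intmax_t, long>::value)
        std::cout << "intmax_t is long (" << sizeof(std::intmax_t) * CHAR_BIT << " bits)\n";
    else if (std::is_same<std::intmax_t, long long>::value)
        std::cout << "intmax_t is long long (" << sizeof(std::intmax_t) * CHAR_BIT << " bits)\n";
    else
        std::cout << "intmax_t is some other signed type\n";
}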
Well, assuming the GNU C library is correct (from /usr/include/stdint.h):
/* Largest integral types. */
#if __WORDSIZE == 64
typedef long int intmax_t;
typedef unsigned long int uintmax_t;
#else
__extension__
typedef long long int intmax_t;
__extension__
typedef unsigned long long int uintmax_t;
#endif
I am programming in Linux, which is new to me. I am working on a project to design a 'layer 7' network protocol, and we have these packets that contain resources. Depending on the type of resource, the length of that resource would be different. I am kind of new to C/C++, and am not sure I understand unions all that well. The idea was that I would be able to make a "generic resource" type: depending on what resource it was, I could just cast a void* as a pointer to this typedef structure and then access the data contained in it as anything I please, and it would take care of the 'casting'. Anyway, here is what I came up with:
typedef struct _pktresource
{
    unsigned char Type;            // The type of the resource.
    union {
        struct {                   // This is used for variable length data.
            unsigned short Size;
            void *Data;
        };
        void *ResourceData;        // Just a generic pointer to the data.
        unsigned char Byte;
        char SByte;
        short Int16;
        unsigned short UInt16;
        int Int32;
        unsigned int UInt32;
        long long Int64;
        unsigned long long UInt64;
        float Float;
        double Double;
        unsigned int Time;
    };
} pktresource, *ppktresource;
The principle behind this was simple. But when I do something like
pktresource.Size = XXXX
It starts out 4 bytes into the structure instead of 1 byte. Am I failing to grasp a major concept here? Because it feels like I am.
EDIT: Forgot to mention, when I reference
pktresource.Type
It starts at the beginning like its supposed to.
EDIT: Correction was to add pragma statements for proper alignment. After fix, the code looks like:
#pragma pack(push)
#pragma pack(1)
typedef struct _pktresource
{
    unsigned char Type;               // The type of the resource.
    union {
        struct {                      // This is used for variable length data.
            unsigned short Size;
            unsigned char Data[];
        };
        unsigned char ResourceData[]; // Just a generic pointer to the data.
        unsigned char Byte;
        char SByte;
        short Int16;
        unsigned short UInt16;
        int Int32;
        unsigned int UInt32;
        long long Int64;
        unsigned long long UInt64;
        float Float;
        double Double;
        unsigned int Time;
    };
} pktresource, *ppktresource;
#pragma pack(pop)
Am I failing to grasp a major concept here?
You're missing knowledge of structure alignment. Basically, it forces certain fields to be aligned by > 1 byte boundaries depending on their size. You can use #pragma to override this behavior, but that can cause interoperability issues if the structure is used anywhere outside your application.
I think the problem is alignment. By default most compilers align fields to the word size of the machine / OS, in this case 32 bits / 4 bytes. So, since you have that unsigned char Type field up front, the compiler is pushing the Size field to the next 4-byte boundary.
Try
#pragma pack(1)
ahead of your structure definitions.
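To see exactly where the compiler put things, offsetof makes the padding visible. A minimal sketch with hypothetical struct names, simplified from the question's layout; the offsets in the comments are what you would typically get on a desktop target, not a guarantee:
#include <cstddef>
#include <iostream>

struct packet_plain {
    unsigned char  Type;
    unsigned short Size;   // usually lands at offset 2, after 1 byte of padding
};

#pragma pack(push, 1)
struct packet_packed {
    unsigned char  Type;
    unsigned short Size;   // with packing, lands at offset 1
};
#pragma pack(pop)

int main() {
    std::cout << offsetof(packet_plain,  Size) << '\n';  // typically 2
    std::cout << offsetof(packet_packed, Size) << '\n';  // 1
}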
I don't know what compiler you are using, but that's good old-fashioned C code that's been regularly in use for network programming since before most of these rude kids on StackOverflow were born.
What is the difference between unsigned long and UINT64?
I think they are the same, but I'm not sure.
The definition of UINT64 is :
typedef unsigned __int64 UINT64;
(by using StdAfx.h)
UINT64 is specific and declares your intent. You want a type that is an unsigned integer that is exactly 64 bits wide. That this may be equal to an unsigned long on some platforms is coincidence.
The C++ standard does not define the sizes of each of the types (besides char), so the size of unsigned long is implementation defined. In most cases I know of, though, unsigned long is an unsigned 32 bit type, while UINT64 (which is an implementation type, not even mentioned in the standard) is a 64 bit unsigned integer in VS.
The unsigned long type size could change depending on the architecture of the system you are on, while the assumption is that UINT64 is definitely 64 bits wide. Have a look at http://en.wikipedia.org/wiki/C_variable_types_and_declarations#Size
See http://msdn.microsoft.com/en-us/library/s3f49ktz(VS.90).aspx
You want to see the difference between unsigned long and unsigned __int64.
A long is typically 32 bits (but this may vary per architecture) and a UINT64 is always 64 bits. A native data type which is sometimes 64 bits long is long long int.
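If you want to see what your particular toolchain does, here is a quick sketch; uint64_t from <cstdint> is used as a portable stand-in for the Windows-specific UINT64:
#include <climits>
#include <cstdint>
#include <iostream>

int main() {
    std::cout << "unsigned long:      " << sizeof(unsigned long)      * CHAR_BIT << " bits\n";
    std::cout << "unsigned long long: " << sizeof(unsigned long long) * CHAR_BIT << " bits\n";
    std::cout << "uint64_t:           " << sizeof(std::uint64_t)      * CHAR_BIT << " bits\n";
    // On 64-bit Windows (MSVC/MinGW) unsigned long is 32 bits; on most 64-bit
    // Linux/macOS targets it is 64 bits. uint64_t is 64 bits wherever it exists.
}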