What does ': number' after a struct field mean? [duplicate] - c++

Possible Duplicate:
What does ‘unsigned temp:3’ means
I came across some code like this that I am not sure about:
unsigned long byte_count : 32;
unsigned long byte_count2 : 28;
What does the : mean here?

That is a bit field:
a data structure used in computer programming. It consists of a number of adjacent computer memory locations which have been allocated to hold a sequence of bits, stored so that any single bit or group of bits within the set can be addressed. A bit field is most commonly used to represent integral types of known, fixed bit-width...

Note that, in C, this is also non-standard: bit-fields must be of type _Bool (C99), int, signed int, or unsigned int, although GCC allows any integer type. (In C++, any integral or enumeration type is permitted.) The declared type affects the alignment of the field, the alignment of any subsequent field, and the overall size of the struct containing the bit-field.
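For illustration, here is a small, hypothetical C++ sketch (the struct name Counters and the main routine are made up, not from the question) showing how such bit-fields are declared and how they pack into the containing struct; the exact sizeof is implementation-defined:

#include <iostream>

// Hypothetical struct packing two counters into bit-fields, mirroring the
// snippet from the question.
struct Counters {
    unsigned long byte_count  : 32;  // field occupies 32 bits
    unsigned long byte_count2 : 28;  // field occupies 28 bits
};

int main() {
    Counters c{};
    c.byte_count  = 0xFFFFFFFFul;     // largest value that fits in 32 bits
    c.byte_count2 = (1ul << 28) - 1;  // largest value that fits in 28 bits
    std::cout << "sizeof(Counters) = " << sizeof(Counters) << '\n';
    std::cout << c.byte_count << ' ' << c.byte_count2 << '\n';
}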

Related

How does int64_t work on computers whose machine word size is 32 bits? [duplicate]

Possible Duplicate:
How does a 32 bit processor support 64 bit integers?
So it can be realised as two int32_t values. Is this true?
Also, I hear that it depends on the compiler. Are there other variants for realising it? It seems that int64_t can't simply be an int32_t (that would mean both are 4 bytes).
So int64_t can be realised as two int32_t. Is this true?
Yes, it is true.
An int64_t consists of 8 octets. Two int32_t consist of 8 octets. There are exactly as many bits, and therefore exactly as many representable states, so you can map a pair of int32_t values into a single int64_t value and back.
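As a rough sketch (the function and variable names are just for illustration, and unsigned types are used to keep the shifts well-defined), here is how one 64-bit value maps to and from two 32-bit halves:

#include <cstdint>
#include <iostream>

// Combine two 32-bit halves into one 64-bit value.
std::uint64_t pack64(std::uint32_t hi, std::uint32_t lo) {
    return (static_cast<std::uint64_t>(hi) << 32) | lo;
}

int main() {
    std::uint64_t v  = pack64(0x12345678u, 0x9ABCDEF0u);
    std::uint32_t lo = static_cast<std::uint32_t>(v);        // low 32 bits
    std::uint32_t hi = static_cast<std::uint32_t>(v >> 32);  // high 32 bits
    std::cout << std::hex << v << " -> " << hi << ' ' << lo << '\n';
}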

Bit width of unsigned and signed short int [duplicate]

Possible Duplicate:
What does the C++ standard state the size of int, long type to be?
Why is the 'typical bit width' of unsigned and signed short int data types classed as 'range'? Does this mean they are likely to be any number of bytes? If so, why, when the 'typical range' is predictable (0 to 65,535 and -32,768 to 32,767) as with other data types?
It's both sensible and intuitive to describe the possible values of an integer in terms of its numerical range.
I realise that it's tempting to focus on implementation details, like "how many bits there are" or "how many bytes it takes up", but we're not in the 1970s any more. We're not creating machine instructions on punchcards. C++ and C are abstractions. Think in terms of semantics and in terms of behaviours and you'll find your programming life much easier.
The author of the information you're looking at is following that rule.
Why is the 'typical bit width' of unsigned and signed short int data types classed as 'range'?
In math, "range" is (depending on context) synonymous with "interval". An interval is a set of numbers lying between two endpoints (a minimum and a maximum value). The set of values of each integer type is such an interval, and as such may be referred to as a range.
The minimum range that signed short must cover, as specified by the C11 standard, is [−32,767, +32,767], and unsigned short must cover at least [0, 65,535].
Does this mean they are likely to be any number of bytes?
That does not follow from "range", but the number of bytes is indeed implementation-defined. At least 16 bits are required to represent the minimum range, which takes one or two bytes depending on the size of a byte (itself at least 8 bits).
What number of bytes is "likely" depends on what system one is likely to use.
If so why
Because that allows the language to be usable on a wide variety of CPU architectures, which have different byte sizes, different representations for signed integers, and different instruction sets supporting different register widths.
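If it helps, here is a small sketch (standard headers only, nothing assumed beyond <limits> and <climits>) that prints the byte size and actual range of short on whatever implementation you compile it with:

#include <climits>
#include <iostream>
#include <limits>

int main() {
    // Only the minimum ranges are guaranteed; the actual width is implementation-defined.
    std::cout << "CHAR_BIT (bits per byte): " << CHAR_BIT << '\n';
    std::cout << "sizeof(short): " << sizeof(short) << " bytes\n";
    std::cout << "short range: " << std::numeric_limits<short>::min()
              << " to " << std::numeric_limits<short>::max() << '\n';
    std::cout << "unsigned short range: 0 to "
              << std::numeric_limits<unsigned short>::max() << '\n';
}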

What exactly is a bit vector in C++? [duplicate]

Possible Duplicate:
C/C++ Bit Array or Bit Vector
So, I was reading a question in Cracking the Coding Interview: 5th Edition where it says to implement a bit vector with 4 billion bits. It defines a bit vector as an array that compactly stores boolean values by using an array of ints, where each int stores a sequence of 32 bits, or boolean values. I am somewhat confused by this definition. Can someone explain to me what exactly the above statement means?
I couldn't really understand the question marked as a duplicate, since there is no associated example. The second answer there does have an example, but it isn't really understandable. It would be great if any of you could add an example, even for a small value only. Thanks!
The bool type is at least 1 byte, which means it is at least 8 bits.
An int, on a 32-bit system, is 32 bits.
So with an array of ints you get 32 booleans per 4 bytes, instead of the 32 bytes minimum you would need with an array of bool.
Within each int you store and retrieve those 32 booleans using basic bit operations: &, | and ~ (plus shifts), as sketched below.
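Here is a minimal sketch of such a bit vector (the class name BitVector and its interface are made up for this answer, not taken from the book): 100 booleans fit in four 32-bit words instead of at least 100 bytes.

#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

// Minimal bit vector backed by an array of 32-bit words.
class BitVector {
    std::vector<std::uint32_t> words_;
public:
    explicit BitVector(std::size_t nbits) : words_((nbits + 31) / 32, 0) {}
    void set(std::size_t i)   { words_[i / 32] |=  (std::uint32_t{1} << (i % 32)); }
    void clear(std::size_t i) { words_[i / 32] &= ~(std::uint32_t{1} << (i % 32)); }
    bool get(std::size_t i) const { return (words_[i / 32] >> (i % 32)) & 1u; }
};

int main() {
    BitVector bv(100);  // 100 booleans stored in only 4 * 4 = 16 bytes
    bv.set(3);
    bv.set(64);
    std::cout << bv.get(3) << ' ' << bv.get(4) << ' ' << bv.get(64) << '\n';  // prints 1 0 1
}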

Should I use int or long in C++? [duplicate]

Possible Duplicate:
Difference between long and int data types [duplicate]
While the size of int depends on the CPU, long seems to be 32 bit (?). But it seems so intuitive to use int for numbers where size doesn't really matter, like in a for loop.
It's also confusing that C++ has both long and __int32. What is the second for then?
Question: What number types should I use in what situations?
Neither int nor long has a fixed size (or any fixed representation at all); they only have to be able to hold specific value ranges (and long can't be smaller than int).
For specific sizes, there are types like int32_t etc. (which may well be the same types under the hood).
And __int32 isn't standard C++, but a compiler-specific extension (e.g. MSVC).
The standard specifies that long is not shorter than int (C++ standard §3.9.1).
C++11 introduced fixed-width integer types such as int32_t (in <cstdint>).
Note that int is 32 bits even on many 64-bit architecture/compiler combinations (the 64-bit versions of gcc and MSVC both use 32 bits, as far as I know). On the other hand, long will typically be 64 bits on a 64-bit compiler (not on Windows, though).
These are only guidelines though, you always have to look into your compiler's manual to find out how these datatypes are defined.
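A quick sketch you can compile yourself to see what your implementation does (the printed sizes will differ between platforms; the static_assert only checks the guarantee the standard actually gives):

#include <cstdint>
#include <iostream>

int main() {
    // int and long have implementation-defined sizes; int32_t/int64_t are exact widths.
    std::cout << "sizeof(int)     = " << sizeof(int) << '\n';
    std::cout << "sizeof(long)    = " << sizeof(long) << '\n';
    std::cout << "sizeof(int32_t) = " << sizeof(std::int32_t) << '\n';
    std::cout << "sizeof(int64_t) = " << sizeof(std::int64_t) << '\n';
    static_assert(sizeof(long) >= sizeof(int), "long is never shorter than int");
}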

meaning of known data types [duplicate]

Possible Duplicate:
What does a type followed by _t (underscore-t) represent?
Does anyone know what the 't' in time_t, uint8_t, etc. stands for? Is it "type"?
Second, why declare these kinds of new types? For instance size_t: couldn't it just be an int?
Yes, the t is for Type.
The reason for defining the new types is so they can change in the future. As 64-bit machines have become the norm, it's possible for implementations to change the bit-width of size_t to 64 bits instead of just 32. It's a way to future-proof your programs. Some small embedded processors only handle 16 bit numbers well. Their size_t might only be 16 bits wide.
An especially important one might be ptrdiff_t, which represents the difference between two pointers. If the pointer size changes (say to 64 or 128 bits) sometime in the future, your program should not care.
Another reason for the typedefs is stylistic. While size_t might just be defined by
typedef unsigned int size_t;
using the name size_t clearly shows that the variable is meant to be the size of something (a container, a region of memory, etc.).
I think it stands for type - a type which is possibly a typedef of some other type. So when we see int, we can assume it is not a typedef of any other type, but when we see uint32_t, it is most likely a typedef of some type. This is not a rule, just my observation, though there is one exception: wchar_t is not a typedef of any other type, yet it has _t.
Yes, it probably stands for type or typedef, or something like that.
The idea behind those typedefs is that you are specifying exactly that the variable is not a generic int, but the size of an object/the number of seconds since the UNIX epoch/whatever; also, the standard makes specific guarantees about the characteristics of those types.
For example, size_t is guaranteed to contain the size of the biggest object you can create in C - and a type that can do this can change depending on the platform (on Win32 unsigned long is ok, on Win64 you need unsigned long long, while on some microcontrollers with really small memory an unsigned short may suffice).
As for the various [u]intNN_t, they are fixed-size integer types: while for "plain" int/short/long/... the standard does not mandate a specific size, you will often need a type that, wherever you compile your program, is guaranteed to be of a specific size (e.g. if you are reading a binary file); those typedefs are the solution to that need. (By the way, there are also typedefs for the "fastest integer of at least some size", for when you just need a minimum guaranteed range.)
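To make that concrete, here is a small sketch (plain standard headers, nothing project-specific) showing size_t used for sizes, an exact-width type, and the "at least"/"fastest" variants mentioned above:

#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    // size_t: the type of sizeof and of container sizes.
    std::vector<int> v(10);
    std::size_t n = v.size();

    // Exact-width type: exactly 32 bits wherever it is provided.
    std::uint32_t exact = 0xFFFFFFFFu;

    // "At least" / "fastest" variants when only a minimum range is needed.
    std::int_least16_t at_least_16 = 1000;
    std::int_fast32_t  fast_32     = 100000;

    std::cout << n << ' ' << exact << ' ' << at_least_16 << ' ' << fast_32 << '\n';
}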