This question already has answers here:
C/C++ Bit Array or Bit Vector
(5 answers)
Closed 7 years ago.
So, I was reading a question in Cracking the Coding Interview, 5th Edition, which asks you to implement a bit vector with 4 billion bits. It defines a bit vector as an array that compactly stores boolean values by using an array of ints, where each int stores a sequence of 32 bits, or boolean values. I am somewhat confused by this definition. Can someone explain to me what exactly the above statement means?
I couldn't really understand the question that was marked as a duplicate, since there is no associated example. The second answer does have an example, but it's not really understandable. It would be great if any of you could add an example, even for a small value only. Thanks!
The bool type is at least 1 byte, which means it occupies at least 8 bits.
An int, on a typical 32-bit system, is 32 bits.
So one int gives you 32 booleans in 4 bytes, instead of the 32-byte minimum you would use with 32 bool objects.
You can store and retrieve those 32 booleans in an int with the basic bit operations: &, | and ~.
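As a small illustration (my own sketch, not from the book; all names are made up): a minimal bit vector that packs 32 boolean values into each 32-bit word, using only shifts together with &, | and ~:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// A minimal bit-vector sketch: each uint32_t stores 32 boolean values.
// Logical bit i lives in word i/32, at position i%32 within that word.
struct BitVector {
    std::vector<uint32_t> words;

    explicit BitVector(std::size_t nbits) : words((nbits + 31) / 32, 0) {}

    void set(std::size_t i)   { words[i / 32] |=  (uint32_t(1) << (i % 32)); }
    void clear(std::size_t i) { words[i / 32] &= ~(uint32_t(1) << (i % 32)); }
    bool get(std::size_t i) const {
        return (words[i / 32] >> (i % 32)) & 1u;
    }
};
```

Stored this way, 4 billion bits take roughly 500 MB, versus about 4 GB for an array of 4 billion bool objects at one byte each.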
Related
This question already has answers here:
How does a 32 bit processor support 64 bit integers?
(5 answers)
Closed 2 years ago.
So an int64_t can be realised as two int32_t values. Is this true?
Also, I hear that this depends on the compiler. Are there any other variants for the realisation? It seems that an int64_t can't simply be an int32_t, since an int32_t weighs only 4 bytes.
So int64_t can be realised as two int32_t. Is this true?
Yes, it is true.
An int64_t consists of 8 octets, and two int32_t also consist of 8 octets. There are exactly as many bits, and therefore they can represent exactly as many states, so you can map one pair of int32_t values into a single int64_t value and back.
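A sketch of that mapping (the helper names are mine): pack two int32_t values into the high and low halves of an int64_t, then recover them:

```cpp
#include <cstdint>

// Pack two int32_t values into one int64_t: one in the high 32 bits,
// one in the low 32 bits. Casting through unsigned types avoids
// shifting negative values, which would be undefined behaviour.
int64_t pack(int32_t hi, int32_t lo) {
    return (int64_t)(((uint64_t)(uint32_t)hi << 32) | (uint32_t)lo);
}

// Recover the original pair from the packed value.
void unpack(int64_t v, int32_t* hi, int32_t* lo) {
    *hi = (int32_t)((uint64_t)v >> 32);
    *lo = (int32_t)((uint64_t)v & 0xFFFFFFFFu);
}
```

This round-trips every pair, which is exactly the "same number of states" argument above in code form.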
This question already has answers here:
Is the size of C "int" 2 bytes or 4 bytes?
(13 answers)
Does "Undefined Behavior" really permit *anything* to happen? [duplicate]
(9 answers)
Closed 5 years ago.
Today, while experimenting with a C++ program, I tried to get a random value from a garbage value with code that read
#include <iostream>
using namespace std;

int main() {
    int x, r;     // x is never initialized
    r = x % 7;    // reading x here is undefined behaviour
    cout << r;
}
Needless to say, this method didn't work, since x was being used without initialization. However, when I looked at the variable watch, I saw that the garbage value of x was -846... and the same value for r. This confused me: how could an integer hold such an insanely huge garbage value? Normally C++ integers are ±32,767, yet the 7+ digit value I saw was never in this range. What could be the reason for this very large value, if integers can hold only small values?
According to this answer, the minimum (emphasis mine) range of an int is -32,767 to 32,767. The actual limits are implementation-defined, and are most likely about -2 billion to +2 billion.
To check the integer limits on your device, you could use the header <limits> as follows:
#include <limits>

int imin = std::numeric_limits<int>::min(); // minimum value (same as INT_MIN)
int imax = std::numeric_limits<int>::max(); // maximum value (same as INT_MAX)
Normally C++ integers are ±32,767
This is false on most systems (except embedded targets like Arduino). On most current C++ systems, int uses 32 bits; the standard only mandates at least 16 bits.
You should use the <limits> standard C++11 (or better) header.
BTW, your program is a typical example of undefined behaviour, so you need to be very scared (and you should not expect any particular concrete behaviour).
Integers are usually at least 32 bits on most architectures you are likely to use,
so they can hold around ±2 billion.
Integer size is usually 4 bytes on most operating systems (platforms). A 32-bit int can therefore hold values from -2,147,483,648 to +2,147,483,647 (the positive bound is one smaller because of two's complement representation).
Even on 64-bit architectures, which are the common ones today, int is typically still 4 bytes; if you need the larger range, a 64-bit type such as int64_t holds values from -2^63 to 2^63 - 1.
Take a look at this link that shows the size of integer on different platforms: http://ivbel.blogspot.co.il/2012/02/size-of-primitive-types-in-c-language.html
This question already has answers here:
C++ 2-bit bitfield arrays possible?
(2 answers)
Closed 6 years ago.
I'm writing a container similar to bitset, but for ternary logic, and I have to make it so that one trit (the ternary analogue of a bit) occupies only two bits. I don't know how I can do this. Can you give me some ideas?
Struct members in C and C++ may be declared to occupy a given number of bits.
See, for example, here.
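As a small sketch of that suggestion (the struct and field names are my own), a group of 2-bit bit-fields can each hold values 0 to 3, which is enough for a ternary digit 0, 1 or 2:

```cpp
#include <cassert>

// Four 2-bit fields packed together. Each field holds values 0..3,
// so each can store one trit (0, 1 or 2). Values assigned to a
// bit-field wrap modulo 2^width, i.e. modulo 4 here.
struct TritPack {
    unsigned t0 : 2;
    unsigned t1 : 2;
    unsigned t2 : 2;
    unsigned t3 : 2;
};
```

For a large array of trits you would more likely pack them four per byte with shifts and masks, since bit-fields cannot be indexed; but the bit-field form shows the declaration syntax the answer refers to.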
The right question for Google is "c++ multibit arrays".
The right answer is here -> https://stackoverflow.com/a/25384425/2743554
There seems to be nothing in libstdc++/Boost for managing packed arrays of N-bit values.
This question already has answers here:
Why is 0 < -0x80000000?
(6 answers)
Closed 7 years ago.
Why is 1 not greater than -0x80000000? I know it has something to do with overflow, but can someone explain why? Isn't 0x80000000 a constant, as I think it is?
assert(1 > -0x80000000);
The assert triggers in C++. Why is that?
I am grateful for some of the answers provided. But does the C++ standard require that the constant be stored in a 32-bit integer? Why doesn't the compiler recognize that 0x80000000 isn't going to fit in a 32-bit signed integer and use 64 bits for it? I mean, the largest 32-bit int is 0x7FFFFFFF, and 0x80000000 is obviously larger than that. Why does the compiler still use 32 bits for it?
According to the C and C++ standards, -0x80000000 is not an integer constant. It's an expression, like 3 + 5: the constant 0x80000000, operated upon by the unary negation operator. A hexadecimal constant takes the first type from the list int, unsigned int, long, unsigned long (and so on) that can represent it. On platforms where both int and long are 32 bits, 0x80000000 is not representable as an int or a long, but is representable as an unsigned int. And negating an unsigned integer is (perhaps surprisingly) done in unsigned arithmetic, so the negation here effectively has no effect: -0x80000000 is still 2147483648u, and 1 > 2147483648u is false.
One way to fix this is to use a suffix that gives the literal a type you know can represent and retain the value correctly, which means your expression can be fixed like so:
assert(1 > -0x80000000L);
or
assert(1 > -0x80000000LL);
This is basically about using a standard suffix in C++ for your integer expression.
The only three standard suffixes for integer types in C++ are u, l and ll, along with the uppercase variations U, L and LL, which mean the same thing as their lowercase counterparts.
This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
What does ‘unsigned temp:3’ means
I came across some code like this that I am not sure about:
struct S {
    unsigned long byte_count  : 32;
    unsigned long byte_count2 : 28;
};
What does the : mean here?
That is a bit field:
a data structure used in computer programming. It consists of a number of adjacent computer memory locations which have been allocated to hold a sequence of bits, stored so that any single bit or group of bits within the set can be addressed. A bit field is most commonly used to represent integral types of known, fixed bit-width...
In C, this is also non-standard: C99 requires bit-fields to be of type _Bool, signed int or unsigned int, though GCC allows any integer type as an extension. In C++, any integral or enumeration type is allowed. The declared type affects the alignment of the field, the alignment of any subsequent field, and the overall size of the struct containing the bit-field.
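Putting the two declarations from the question into a struct (the struct name is assumed) shows how the widths behave: a value assigned to a bit-field is reduced modulo 2^width, so byte_count keeps 32 bits of any value and byte_count2 keeps only the low 28:

```cpp
#include <cassert>

// byte_count occupies 32 bits of the struct, byte_count2 occupies 28.
// (unsigned long as a bit-field type is a GCC extension in C, but is
// standard in C++.)
struct Header {
    unsigned long byte_count  : 32;
    unsigned long byte_count2 : 28;
};
```

A quick way to see the truncation: assign (1 << 28) + 5 to byte_count2 and read back 5, because the bit above position 27 does not fit in the field.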