How to get the bit length of an integer in C++? [duplicate] - c++

This question already has answers here:
Minimum number of bits to represent a given `int`
(9 answers)
Closed 4 months ago.
This question is not a duplicate of Count the number of set bits in a 32-bit integer. See comment by Daniel S. below.
--
Let's say there is a variable int x;. Its size is 4 bytes, i.e. 32 bits.
Then I assign a value to this variable, x = 4567 (in binary 10001 11010111), so in memory it looks like this:
00000000 00000000 00010001 11010111
Is there a way to get the length of the bits that matter? In my example, that length is 13 (the 13 low-order bits shown above; the original post marked them in bold).
If I use sizeof(x) it returns 4, i.e. 4 bytes, which is the size of the whole int. How do I get the minimum number of bits required to represent the integer without the leading 0s?

unsigned bits, var = (x < 0) ? -x : x;
for(bits = 0; var != 0; ++bits) var >>= 1;
This should do it for you.

Warning: math ahead. If you are squeamish, skip ahead to the TL;DR.
What you are really looking for is the highest bit that is set. Let's write out what the binary number 10001 11010111 actually means:
x = 1 * 2^(12) + 0 * 2^(11) + 0 * 2^(10) + ... + 1 * 2^1 + 1 * 2^0
where * denotes multiplication and ^ is exponentiation.
You can write this as
2^12 * (1 + a)
where 0 < a < 1 (to be precise, a = 0/2 + 0/2^2 + ... + 1/2^11 + 1/2^12).
If you take the logarithm (base 2), let's denote it by log2, of this number you get
log2(2^12 * (1 + a)) = log2(2^12) + log2(1 + a) = 12 + b.
Since a < 1 we can conclude that 1 + a < 2 and therefore b < 1.
In other words, if you take the log2(x) and round it down you will get the most significant power of 2 (in this case, 12). Since the powers start counting at 0, the number of bits is one more than this power, namely 13. So:
TL;DR:
The minimum number of bits needed to represent the number x is given by
numberOfBits = floor(log2(x)) + 1

You're looking for the most significant bit that's set in the number. Let's ignore negative numbers for a second. How can we find it? Well, let's see how many bits we need to set to zero before the whole number is zero.
00000000 00000000 00010001 11010111    (start: 4567)
00000000 00000000 00010001 11010110    (bit 0 cleared)
00000000 00000000 00010001 11010100    (bit 1 cleared)
00000000 00000000 00010001 11010000    (bit 2 cleared)
00000000 00000000 00010001 11010000    (bit 3 was already 0)
00000000 00000000 00010001 11000000    (bit 4 cleared)
00000000 00000000 00010001 11000000    (bit 5 was already 0)
00000000 00000000 00010001 10000000    (bit 6 cleared)
...
00000000 00000000 00010000 00000000    (bits 7-11 handled)
00000000 00000000 00000000 00000000    (bit 12 cleared)
Done! After 13 bits, we've cleared them all. Now how do we do this? Well, the expression 1 << pos is the 1 bit shifted over pos positions. So we can check if (x & (1 << pos)) and if true, remove it: x -= (1 << pos). We can also do this in one operation: x &= ~(1 << pos). ~ gets us the complement: all ones with the pos bit set to zero instead of the other way around. x &= y copies the zero bits of y into x.
Now how do we deal with signed numbers? The easiest is to just ignore the sign and convert to unsigned first: unsigned xu = x;

Many processors provide an instruction for calculating the number of leading zero bits directly (e.g. x86 has lzcnt / bsr and ARM has clz). Usually C++ compilers provide an intrinsic for accessing one of these instructions. The number of leading zeros can then be used to calculate the bit length.
In GCC, the intrinsic is called __builtin_clz. It counts the number of leading zeros for a 32 bit integer.
However, there is one caveat about __builtin_clz. When the input is 0, the result is undefined. Therefore we need to take care of this special case. This is done in the following function with (x == 0) ? 32 : ..., which gives the result 32 when x is 0:
uint32_t count_of_leading_0_bits(uint32_t x) {
    return (x == 0) ? 32 : __builtin_clz(x);
}
The bit length can then be calculated from the number of leading zeros:
uint32_t bitlen(uint32_t x) {
    return 32 - count_of_leading_0_bits(x);
}
Note that other C++ compilers have different intrinsics for counting the number of leading zero bits, but a quick search on the internet will turn them up. See How to use MSVC intrinsics to get the equivalent of this GCC code? for an MSVC equivalent.

The portable modern way since C++20 should probably use std::countl_zero, like
#include <bit>

int bit_length(unsigned x)
{
    return (8 * sizeof x) - std::countl_zero(x);
}
Both gcc and clang emit a single bsr instruction on x86 for this code (with a branch on zero), so it should be pretty much optimal.
Note that std::countl_zero only accepts unsigned arguments though, so deciding how to handle your original int parameter is left as an exercise for the reader.

Related

How is a max signed byte -1? [duplicate]

This question already has answers here:
What is “two's complement”?
(24 answers)
Closed 1 year ago.
I am really curious and confused:
how is 0xFFFF (0b1111111111111111) or 0xFF (0b11111111) equal to -1?
if 0b01111111 is 127 and the first bit indicates that it is a positive number, then shouldn't 0b11111111 be -127?
Am I missing something???
Two's complement form is commonly used to represent signed integers. To swap the sign of a number this way, invert all the bits and add 1. It has the advantage that there is only one representation of zero.
01111111 = 127
To get -127 flip the bits:
10000000
And add 1:
10000001
To negate 1:
00000001
11111110 // flip
11111111 // + 1
With 0:
00000000
11111111 // flip
00000000 // + 1 (the carry bit is discarded and it's still 0)
And just to show it works going the other way:
With -127:
10000001
01111110 // flip
01111111 // + 1 and you are back to +127.

How is this size alignment working

I am not able to understand the code below with respect to the comment provided. What does this code do, and what would be the equivalent code for 8-aligned?
/* segment size must be 4-aligned */
attr->options.ssize &= ~3;
Here, ssize is of unsigned int type.
Since 4 in binary is 100, any value aligned to 4-byte boundaries (i.e. a multiple of 4) will have the last two bits set to zero.
3 in binary is 11, and ~3 is the bitwise negation of those bits, i.e., ...1111100. Performing a bitwise AND with that value will keep every bit the same, except the last two, which will be cleared (bit & 1 == bit, and bit & 0 == 0). This gives us the next lower or equal value that is a multiple of 4.
To do the same operation for 8 (1000 in binary), we need to clear out the lowest three bits. We can do that with the bitwise negation of the binary 111, i.e., ~7.
Alignment to any power of two (1, 2, 4, 8, 16, 32, ...) can be done with a simple AND operation.
This gives the size rounded down:
size &= ~(alignment - 1);
or if you want to round up:
size = (size + alignment-1) & ~(alignment-1);
The "alignment-1", as long as it's a value that is a power of two, will give you "all ones" up to the bit just under the power of two. ~ inverts all the bits, so you get ones for zeros and zeros for ones.
You can check that something is a power of two by:
bool power_of_two = !(alignment & (alignment - 1));
This works because, for example 4:
4 = 00000100
4-1 = 00000011
& --------
0 = 00000000
or for 16:
16 = 00010000
16-1 = 00001111
& --------
0 = 00000000
If we use 5 instead:
5 = 00000101
5-1 = 00000100
& --------
4 = 00000100
So not a power of two!
Perhaps a more understandable comment would be
/* make segment size 4-aligned
by zeroing two least significant bits,
effectively rounding down */
Then at least for me, an immediate question pops to mind: should it really be rounded down, when it is a size? Wouldn't rounding up be more appropriate:
attr->options.ssize = (attr->options.ssize + 3) & ~3;
As already said in other answers, to make it 8-aligned, 3 bits need to be zeroed, so use 7 instead of 3. So, we might make it into a function:
unsigned size_align(unsigned size, unsigned bit_count_to_zero)
{
    unsigned bits = (1 << bit_count_to_zero) - 1;
    return (size + bits) & ~bits;
}
~3 is the bit pattern ...111100. When you do a bitwise AND with that pattern, it clears the bottom two bits, i.e. rounds down to the nearest multiple of 4.
~7 does the same thing for 8-aligned.
The code ensures the bottom two bits of ssize are cleared, guaranteeing that ssize is a multiple of 4. Equivalent code for 8-aligned would be
attr->options.ssize &= ~7;
number = number & ~3;
The number is rounded down to the nearest multiple of 4 that is less than or equal to number.
Ex:
if number is 0, 1, 2 or 3, number is rounded down to 0
similarly, if number is 4, 5, 6 or 7, number is rounded down to 4
But if this is related to memory alignment, the memory must be aligned upwards and not downwards.

Are bitwise operations going to help me to serialize some bools?

I'm not used to binary files, and I'm trying to get the hang of it. I managed to store some integers and unsigned char, and read them without too much pain. Now, when I'm trying to save some booleans, I see that each of my bool takes exactly 1 octet in my file, which seems logical since a lone bool is stored in a char-sized data (correct me if I'm wrong!).
But since I'm going to have 3 or 4 bools to serialize, I figure it is a waste to store them like this: 00000001 00000001 00000000, for instance, when I could have 00000110. I guess to obtain this I should use bitwise operations, but I'm not very good with them... so could somebody tell me:
How to store up to 8 bools in a single octet using bitwise manipulations?
How to give proper values to (up to 8 bools) from a single octet using bitwise manipulation?
(And, bonus question, does anybody can recommend a simple, non-mathematical-oriented-mind like mine, bit manipulation tutorial if this exists? Everything I found I understood but could not put into practice...)
I'm using C++ but I guess most C-syntaxic languages will use the same kind of operation.
To store bools in a byte:
bool flag; // value to store
unsigned char b = 0; // all false
int position; // ranges from 0..7
b = b | (flag << position);
To read it back:
flag = (b & (1 << position));
The easy way is to use std::bitset which allows you to use indexing to access individual bits (bools), then get the resulting value as an integer. It also allows the reverse.
#include <bitset>
#include <iostream>

int main() {
    std::bitset<8> s;
    s[1] = s[2] = true; // 0b_0000_0110
    std::cout << s.to_ulong() << '\n';
}
Without wrapping in fancy template/pre-processor machinery:
Set bit 3 in var: var |= (1 << 3)
Set bit n in var: var |= (1 << n)
Clear bit n in var: var &= ~(1 << n)
Test bit n in var (the !! ensures the result is 0 or 1): !!(var & (1 << n))
Try reading this in order.
http://www.cprogramming.com/tutorial/bitwise_operators.html
http://www-graphics.stanford.edu/~seander/bithacks.html#ConditionalSetOrClearBitsWithoutBranching
Some people will think that the 2nd link is way too hardcore, but once you master simple manipulation, it will come in handy.
Basic stuff first:
The only combination of bits that means false is 00000000; all the others mean true, e.g. 00001000, 01010101.
00000000 = 0 (decimal), 00000001 = 2^0, 00000010 = 2^1, 00000100 = 2^2, ..., 10000000 = 2^7
There is a big difference between the operators (&&, ||) and (&, |): the first pair gives the result of the logical operation between the two numbers, for example:
00000000 && 00000000 = false,
01010101 && 10101010 = true
00001100 || 00000000 = true,
00000000 || 00000000 = false
The second pair makes a bitwise operation (the logic operation between each bit of the numbers):
00000000 & 00000000 = 00000000 = false
00001111 & 11110000 = 00000000 = false
01010101 & 10101001 = 00000001 = true
00001111 | 11110000 = 11111111 = true
00001100 | 00000011 = 00001111 = true
To work with this and play with the bits, you only need to know some basic tricks:
To set a bit to 1 you apply the operation | with an octet that has a 1 in that position and zeros in the rest.
For example: we want the first bit of the octet A to be 1, so we do: A | 00000001
To set a bit to 0 you apply the operation & with an octet that has a 0 in that position and ones in the rest.
For example: we want the last bit of the octet A to be 0, so we do: A & 01111111
To get the Boolean value that a bit holds you apply the operation & with an octet that has a 1 in that position and zeros in the rest.
For example: we want to see the value of the third bit of the octet A, so we do: A & 00000100; if A was XXXXX1XX we get 00000100 = true, and if A was XXXXX0XX we get 00000000 = false.
You can always serialize bitfields. Something like:
struct bools
{
    bool a : 1;
    bool b : 1;
    bool c : 1;
    bool d : 1;
};
has a sizeof of 1 on the major compilers (though bit-field layout is implementation-defined)

A bit column shift

How can I shift a column in 8x8 area? For example, I have this one 64-bit unsigned integer as follows:
#include <boost/cstdint.hpp>
int main()
{
/** In binary:
*
* 10000000
* 10000000
* 10000000
* 10000000
* 00000010
* 00000010
* 00000010
* 00000010
*/
boost::uint64_t b = 0x8080808002020202;
}
Now, I want to shift the first column down, let's say by four rows, after which it becomes this:
/** In binary:
*
* 00000000
* 00000000
* 00000000
* 00000000
* 10000010
* 10000010
* 10000010
* 10000010
*/
b == 0x82828282;
Can this be done relatively fast with only bit-wise operators, or what?
My best guess is this:
((b & 0x8080808080808080) >> (4 * 8)) | (b & ~0x8080808080808080)
The idea is to isolate the column bits and shift only them.
Can this be done relatively fast with only bit-wise operators, or what?
Yes.
How you do it will depend on how "generic" you want to make the solution. Always first column? Always shift by 4?
Here's an idea:
The first 4 bytes represent the top 4 rows. Exploit that, loop over the top 4.
Mask out the first column using 0x8, to see if the bit is set.
Shift that bit down by 4 bytes (>> (4*8)); of course it'll need to be in a uint64 to do that.
bitwise-OR (|) it against the new byte.
You can probably do better, by avoiding looping and writing more code.
There might be a SIMD instruction for this. You'd have to turn on those instructions in your VC++ settings, and of course they won't work on architectures other than AMD/Intel processors.
In this case, you want to split the value into two pieces, the first column and the other columns. Shift the first column by the appropriate amount, then combine them back together.
b = ((b & 0x8080808080808080) >> (8*4)) | (b & 0x7f7f7f7f7f7f7f7f);
Complete guess since I don't have a compiler or the Boost libs available:
Given b, col (counting 1 to 8 from right), and shift (distance of shift)
In your example, col would be 8 and shift would be 4.
boost::uint64_t flags = 0x0101010101010101;
boost::uint64_t mask = flags << (col - 1);
boost::uint64_t eraser = ~mask;
boost::uint64_t data = b & mask;
data = data >> (8 * shift);
b = (b & eraser) | data;

Weird output for bitwise NOT

I am trying to take one's complement of 0 to get 1 but I get 4294967295. Here is what I have done:
unsigned int x = 0;
unsigned int y= ~x;
cout << y;
My output is 4294967295 but I expect 1, why is this so? By the way, I am doing this in C++.
Why do you expect 1? Bit-wise complement flips all the bits.
00000000000000000000000000000000 = 0
|
bitwise NOT
|
v
11111111111111111111111111111111 = 4294967295
Perhaps you are thinking of a logical NOT. In C++ this is written as !x.
You have to look at this in binary to understand exactly what is happening.
unsigned int x = 0 is 00000000 00000000 00000000 00000000 in memory.
The ~x statement flips all bits, meaning the above turns into:
11111111 11111111 11111111 11111111
which converts to 4294967295 in decimal form.
XOR will allow you flip only certain bits. If you only want to flip the least significant bit, use x ^ 1 instead.
Where did you get the expectation of 1 from?
Your understanding of bitwise operations is clearly lacking; it would be prudent to work through them first before posting here...
You're not confusing it with !, which is a logical NOT, are you?
A ~ bitwise complement, or bitwise NOT, operation flips all the bits from 1 to 0 and vice versa, so for example, a 1 is
00000000 00000000 00000000 00000001
doing a ~ bitwise NOT on that flips it to
11111111 11111111 11111111 11111110
which gives you the maximum value of the integer datatype less 1 on a 32-bit system.
Here is a worthy link which shows you how to do bit-twiddling.
An integer is more than just 1 bit (it's 4 bytes, or 32 bits). By NOT-ing it, you're flipping everything, so in this case 00000... becomes 11111...
~ flips all of the bits in the input. Your input is an unsigned int, which has 32 bits, all of which are 0. Flipping each of those 0-bits gives you 32 1-bits instead, which is binary for that large number.
If you only want to flip the least significant bit, you can use y = x ^ 1 - that is, use XOR instead.
You can use
unsigned int y= !x;
to get y = 1;