Unitary number for the "&" bitwise operator in C++ [closed]

I have a question; I would appreciate your help in understanding it. Imagine I define the following number
c = 0x3FFFFFFF
and a = an arbitrary integer number Q. My question is: why is a &= c always equal to Q, and why does it not change? For example, if I take a = 10, the result of a &= c is 10; if a = 256, the result of a &= c is 256. Could you please explain why? Thanks a lot.

Both a and c are integer types, composed of 32 bits on a typical machine. The first bit of an integer is the sign bit: it is 0 for a positive number and 1 for a negative number. 0x3FFFFFFF is a special value: its first two bits are 0, and all of its other bits are 1. Since 1 & 1 = 1 and 1 & 0 = 0, whenever the number a is positive and no greater than c, a & 0x3FFFFFFF is still a itself.

a &= c is the same as a = a & c, which computes the bitwise AND of a and c and then assigns that value back to a (just in case you've mistaken what that operator does).
Now c contains almost only 1s. Then just think about what each bit becomes: 1 & x is always x. Since you tried only such low numbers, none of them will change.
Try with a = 0xFFFFFFFF (or any value with the top two bits set) and you will get a different result.

You have not tested a &= c; with all possible values of a, so you are incorrect to assert that it never changes the value of a.
a &= c; sets a to a value in which each bit is set if the bits in the same position in a and in c are both set. If the two bits are not both set, the bit in the result is clear.
In 0x3FFFFFFF, the 30 least significant bits are set. When this is used in a &= c; with any number in which higher bits are set, such as 0xC0000000, those higher bits will be cleared.
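A minimal sketch illustrating both answers above (the variable names are just for this example):

#include <cstdint>
#include <iostream>

int main() {
    const std::uint32_t c = 0x3FFFFFFF; // bits 0..29 set, bits 30..31 clear

    std::uint32_t small = 256;      // fits entirely within the low 30 bits
    small &= c;                     // unchanged
    std::cout << small << '\n';     // prints 256

    std::uint32_t big = 0xC0000000; // only bits 30 and 31 set
    big &= c;                       // both of those bits are cleared
    std::cout << big << '\n';       // prints 0
}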

If you know about the bitwise & ("and") operation and how it works, then there should be no question about this. Say you have two numbers a and b, each n bits long. Look:
a => a_(n-1) a_(n-2) a_(n-3) ... a_i ... a_2 a_1 a_0
b => b_(n-1) b_(n-2) b_(n-3) ... b_i ... b_2 b_1 b_0
Where a_0 and b_0 are the least significant bits and a_(n-1) and b_(n-1) are the most significant bits of a and b respectively.
Now, take a look at the & operation on two single binary bits.
1 & 1 = 1
1 & 0 = 0
0 & 1 = 0
0 & 0 = 0
So, the result of the & operation is 1 only when both bits are 1. If at least one bit is 0, then the result is 0.
Now, for n-bit numbers, the & is applied bit by bit:
(a & b)_i = (a_i & b_i), where `i` runs from 0 to `n-1`
For example, if a and b are both 4-bit numbers and a = 5, b = 12, then
a = 5 => a = 0101
b = 12 => b = 1100
if c = (a & b), then c_i = (a_i & b_i) for i = 0..3; here all numbers are 4 bits wide (bits 0..3)
now, c = c_3 c_2 c_1 c_0
so c_3 = a_3 & b_3
c_2 = a_2 & b_2
c_1 = a_1 & b_1
c_0 = a_0 & b_0
a 0 1 0 1
b 1 1 0 0
-------------
c 0 1 0 0 (that means c = 4)
therefore, c = a & b = 5 & 12 = 4
Now, what would happen if all of the bits in one number are 1s?
Let's see.
0 & 1 = 0
1 & 1 = 1
so if one bit is fixed at 1, then the result is the same as the other bit.
if a = 5 (0101) and b = 15 (1111), then
a 0 1 0 1 (5)
b 1 1 1 1 (15)
------------------
c 0 1 0 1 (5, which is equal to a=5)
So, if one of the numbers has all of its bits set to 1, then the & result is the same as the other number. Actually, for a = any 4-bit value, you will get the result a, since b is 4 bits long and all 4 of its bits are 1s.
Now another issue arises when a > 15, i.e. when a exceeds 4 bits.
For the above example, expand the bit size by 1 and change the value of a to 25.
a = 25 (11001) and b = 15 (01111). b is still the same as before except for the size, so its Most Significant Bit (MSB) is 0. Now,
a 1 1 0 0 1 (25)
b 0 1 1 1 1 (15)
----------------------
c 0 1 0 0 1 (9, not equal to a=25)
So, it is clear that we have to keep every single bit at 1 if we want to get the other number as the result of the & operation.
Now it is time to analyze the scenario you posted.
Here, a &= c is the same as a = a & c.
We assumed that you are using 32-bit integer variables.
You set c = 0x3FFFFFFF, meaning c = (2^30) - 1 = 1073741823
a = 0000 0000 0000 0000 0000 0000 0000 1010 (10)
& c = 0011 1111 1111 1111 1111 1111 1111 1111 (1073741823)
----------------------------------------------------------------
a = 0000 0000 0000 0000 0000 0000 0000 1010 (10, which is equal to a=10)
and
a = 0000 0000 0000 0000 0000 0001 0000 0000 (256)
& c = 0011 1111 1111 1111 1111 1111 1111 1111 (1073741823)
----------------------------------------------------------------
a = 0000 0000 0000 0000 0000 0001 0000 0000 (256, which is equal to a=256)
but, if a > c, say a=0x40000000 (1073741824, c+1 in base 10), then
a = 0100 0000 0000 0000 0000 0000 0000 0000 (1073741824)
& c = 0011 1111 1111 1111 1111 1111 1111 1111 (1073741823)
----------------------------------------------------------------
a = 0000 0000 0000 0000 0000 0000 0000 0000 (0, which is not equal to the previous a=1073741824)
So, your assumption (that the value of a after executing the statement a &= c is the same as the previous a) is true only if 0 <= a <= c
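The bit-by-bit view above is easy to reproduce in code; here is a small sketch (std::bitset is used purely for printing the bit patterns):

#include <bitset>
#include <iostream>

int main() {
    unsigned a = 5, b = 12;
    std::cout << std::bitset<4>(a) << " (a = 5)\n";  // 0101
    std::cout << std::bitset<4>(b) << " (b = 12)\n"; // 1100
    std::cout << std::bitset<4>(a & b)               // 0100
              << " (a & b = " << (a & b) << ")\n";   // a & b = 4
}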

Related

How to check if the number of lines in a text file is a power of 2?

I'm making a program that simulates something like a March Madness tournament in which 64 teams play against each other. I've used fstream to read in the teams from the file. The problem I'm having is that I must make sure that the number of teams in the file is always a power of 2, or it won't run properly. How would I implement a function that checks whether the number of lines is a power of 2?
Check if there is exactly one bit set in the number of teams (std::popcount is C++20, from the <bit> header, and takes an unsigned type):
bool powerOfTwo = std::popcount(numberOfTeams) == 1;
A trick with binary numbers is that if n is a power of 2, or 0, then n & (n - 1) is 0. Otherwise it's not.
i.e.
bool isPowerOfTwo(int n) {
    return (n & (n - 1)) == 0; // note: this also returns true for n == 0
}
This works because:
Powers of 2 have a single 1 bit in binary and the rest of the bits are 0s.
To subtract 1 from a number in binary, you flip the lowest 1 bit and all the bits below it.
If that was the only 1 bit in the entire number, then all the bits that didn't change were 0s. Otherwise they weren't all 0s.
Example:
0000 0000 0010 0000 (32)
& 0000 0000 0001 1111 (31)
= 0000 0000 0000 0000 (0) (so 32 is a power of 2)
0000 0000 0110 0100 (100)
& 0000 0000 0110 0011 (99)
= 0000 0000 0110 0000 (96) (so 100 is not a power of 2)
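Putting it together for the original file-reading problem, a minimal sketch (the file name and the line-counting loop are assumptions for illustration, not part of the original program):

#include <fstream>
#include <iostream>
#include <string>

// True when n is a power of two; 0 is explicitly excluded here.
bool isPowerOfTwo(unsigned n) {
    return n != 0 && (n & (n - 1)) == 0;
}

int main() {
    std::ifstream in("teams.txt"); // hypothetical file name
    unsigned lines = 0;
    for (std::string line; std::getline(in, line); )
        ++lines;
    std::cout << lines
              << (isPowerOfTwo(lines) ? " is" : " is not")
              << " a power of 2\n";
}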

How does 1 left-shifted by 31 (1 << 31) work to get the maximum int value? Here are my thoughts and some explanations I found online

I'm fairly new to bit manipulation and I'm trying to figure out how (1 << 31) - 1 works.
First, I know that 1 << 31 is
10000000000000000000000000000000
and I know it's the bit pattern of the minimum int value, but when I tried to figure out (1 << 31) - 1, I found an explanation stating that it's just
10000000000000000000000000000000 - 1 = 01111111111111111111111111111111
I was almost tempted to believe it since it's really straightforward. But is this what's really happening? If it's not, why does it happen to be right?
My original thought was that the real process should be: the two's complement representation of -1 is
11111111111111111111111111111111
so (1 << 31) - 1 = (1 << 31) + (-1) =
(1)01111111111111111111111111111111
where the leftmost 1 is discarded, leaving us with the maximum value of int.
I'm really confused about which one is right.
It's both! 1 << 31 is:
1000 0000 0000 0000 0000 0000 0000 0000
Subtracting 1 gives:
0111 1111 1111 1111 1111 1111 1111 1111
One of the nice features of the two's complement layout of signed numbers is that addition and subtraction are exactly the same operations as they are for unsigned numbers. So 10000...000 represents a negative number in two's complement, the most negative number, which is -2,147,483,648 in this case, and subtracting 1 from it wraps around to the largest positive number, 2,147,483,647. But two's complement numbers are arranged so that we can pretend the operand is unsigned instead, making the subtraction uncomplicated: subtracting 1 from 10000...000 simply drops the leading 1 to a 0 and borrows a string of 1s, just as in decimal you get a string of 9s: 10000 - 1 = 9999.
It's also true that mathematically, (a - b) is the same as (a + (-b)), so we can do (1 << 31) + (-1) instead:
  1000 0000 0000 0000 0000 0000 0000 0000   (1 << 31)
+ 1111 1111 1111 1111 1111 1111 1111 1111   (-1)
-------------------------------------------
1 0111 1111 1111 1111 1111 1111 1111 1111
  0111 1111 1111 1111 1111 1111 1111 1111   (truncated to 32 bits)
A 1 is carried out of the high end, and lost once the result is truncated back into a 32-bit integer.
Either way, that pattern, with a single 0 at the high end and the rest filled with 1s, is the representation of the maximum positive value for a two's complement integer of any width.
There are other ways to generate that pattern if you prefer, such as ~(1 << 31), or (-1 >>> 1) (where >>> means logical shift right), which is agnostic of the width of the integer.
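A small sketch of the equivalent computations, done on an unsigned type here since shifting a 1 into the sign bit of a signed int is undefined behavior before C++20 (this assumes a 32-bit unsigned int):

#include <climits>
#include <cstdint>
#include <iostream>

int main() {
    std::uint32_t highBit = 1u << 31;       // 1000...000, the sign-bit pattern
    std::uint32_t maxPattern = highBit - 1; // 0111...111 after the borrow ripples down

    std::cout << maxPattern << '\n';        // 2147483647
    std::cout << INT_MAX << '\n';           // same value
    std::cout << ~(1u << 31) << '\n';       // same pattern via complement
    std::cout << (~0u >> 1) << '\n';        // all-ones shifted right once (a logical shift on unsigned)
}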

Why is "i & (i ^ (i - 1))" equivalent to "i & (-i)"

I had this in part of the code. Could anyone explain how i & (i ^ (i - 1)) could be reduced to i & (-i)?
i ^ (i - 1) turns the lowest 1 bit of i and all the bits below it into 1s.
For example, if i has the binary representation abc...de10...0, then i - 1 will be abc...de01...1 (see Why does (x-1) toggle all the bits from the rightmost set bit of x?). The part above the lowest 1 bit is unchanged when subtracting 1 from i, so XORing the two values gives 0 in that part, while the remaining bits will be 1 because i and i - 1 differ there. After that, i & (i ^ (i - 1)) extracts the lowest 1 bit of i.
-i inverts all the bits above the lowest 1 bit of i (because in two's complement -i == ~i + 1) while leaving that bit and the zeros below it intact, so i & (-i) gives the same result as the above.
For example:
20 = 0001 0100
19 = 0001 0011
20 ^ 19 = 0000 0111 = 7
20 & 7 = 0000 0100
-20 = 1110 1100
20 & -20 = 0000 0100
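A quick sketch confirming the equivalence over a range of values:

#include <cassert>
#include <iostream>

int main() {
    for (int i = 1; i <= 1000; ++i) {
        int viaXor = i & (i ^ (i - 1)); // lowest set bit, the long way
        int viaNeg = i & (-i);          // lowest set bit, the short way
        assert(viaXor == viaNeg);
    }
    std::cout << (20 & -20) << '\n'; // prints 4, the lowest set bit of 20
}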
See also
Why n bitwise and -n always return the right most bit (last bit)
What does (number & -number) mean in bit programming?

2s Complement of a negative zero

I have a problem. You know two's complement: you can get the negative of a positive number by inverting the bits and adding one, e.g. with 8 bits:
121 = 0111 1001
1st= 1000 0110
+ 0000 0001
---------
1000 0111 --> -121
So now, if we have a -0:
a zero looks, in 8 bits, like
0000 0000
so a minus 0 should look like
1111 1111 + 0000 0001
= 1 0000 0000
but that is 512,
so I think that I've misunderstood something.
To expand on my previous comment on the question:
1111 1111 + 0000 0001 in 8 bits is 0000 0000; the ninth bit is lost because there is no place for it.
And yes, the complement of a negative is a positive:
-121 = 1000 0111
1st = 0111 1000
+ 0000 0001
---------
0111 1001 --> 121
Think of the values as a circle: at one point there is 0, and adding 1 at a time you go up to the opposite point of the circle (bit pattern 1000 0000 in 8 bits), where the sign switches and the absolute value begins to decrease, e.g. 127 + 1 = -128 and -128 + 1 = -127; as you continue to add 1, the value goes back toward 0 and the circle is completed.
So given a number of bits n, you only have that many bits, no more, and if you want the value to be signed you really have only n-1 bits for the magnitude, as the most significant bit is used for the sign (0 -> +; 1 -> -)
1 0000 0000b is 256, not 512. Truncated to 8 bits, it's 0.
This is because with two's complement, zero is zero. There is no positive or negative zero.
Compare this to one's complement or sign bit, where positive zero and negative zero are different values.
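A sketch of the wrap-around using an 8-bit unsigned type, where the discarded ninth bit falls out naturally:

#include <cstdint>
#include <iostream>

// Two's complement negation: invert the bits, then add one.
std::uint8_t negate(std::uint8_t x) {
    return static_cast<std::uint8_t>(~x + 1); // the carry out of bit 7 is discarded
}

int main() {
    std::cout << +negate(121) << '\n'; // 135, bit pattern 1000 0111, i.e. -121 as signed
    std::cout << +negate(0) << '\n';   // 0: 1111 1111 + 1 wraps around to 0000 0000
}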

Can I set a sequence of bits without unsetting the previous values?

I've got a sequence of bits, say
0110 [1011] 1111
Let's say I want to set that middle nybble to 0111 as the new value.
Using a positional masking approach with AND or OR, I seem to have no choice but to first reset the original value to 0000, because if I try ANDing or ORing against the original value of 1011, I'm not going to come out with the desired result of 0111.
Is there another logical operator I should be using to get the desired effect? Or am I locked into 2 operations every time?
The result, after kind assistance, was:
inline void foo(Clazz* parent, const Uint8& material, const bool& x, const bool& y, const bool& z)
{
    Uint8 offset = x | (y << 1) | (z << 2);            // 0-7
    Uint64 positionMask = Uint64(255) << (offset * 8); // 255 = length of each entry (8 bits), 8 bits per material entry
    Uint64 value = Uint64(material) << (offset * 8);   // cast before shifting so the shift happens in 64 bits
    parent->childType &= ~positionMask;                // clear the given range
    parent->childType |= value;                        // write the new value into it
}
...I'm sure this will see further improvement, but this is the (semi-)readable version.
If you happen to already know the current values of the bits, you can XOR:
0110 1011 1111
^ 0000 1100 0000
= 0110 0111 1111
(where the 1100 needs to be computed first as the XOR between the current bits and the desired bits).
This is, of course, still 2 operations. The difference is that you could precompute the first XOR in certain circumstances.
Other than this special case, there is no other way. You fundamentally need to represent 3 states: set to 1, set to 0, don't change. You can't do this with a single binary operand.
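A sketch of that precomputed-XOR idea, using the nybble values from the example above (the binary literals with digit separators require C++14):

#include <bitset>
#include <iostream>

int main() {
    unsigned current = 0b0110'1011'1111; // 0110 [1011] 1111
    unsigned desired = 0b0110'0111'1111; // 0110 [0111] 1111
    unsigned flip = current ^ desired;   // 0000 1100 0000, can be precomputed

    current ^= flip;                     // a single XOR applies the change
    std::cout << std::bitset<12>(current) << '\n'; // 011001111111
}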
You may want to use bit fields (and perhaps unions, if you want to be able to access your structure as a set of bit fields and as an int at the same time), something along the lines of:
struct foo
{
    unsigned int o1:4;
    unsigned int o2:4;
    unsigned int o3:4;
};

foo bar;
bar.o2 = 0b0111;
Not sure if it translates into more efficient machine code than your clear/set...
Well, there's an assembly instruction in MMIX for this:
SETL $1, 0x06BF ; 0110 1011 1111
SETL $2, 0x0070 ; 0000 0111 0000
SETL rM, 0x00F0 ; set mask register
MUX $1,$2,$1 ; result is 0110 0111 1111
But in C++ here's what you're probably thinking of as 'unsetting the previous value'.
int S = 0x6BF; // starting value: 0110 1011 1111
int M = 0x0F0; // value mask: 0000 1111 0000
int V = 0x070; // value: 0000 0111 0000
int N = (S&~M) | V; // new value: 0110 0111 1111
But since the intermediate result 0110 0000 1111 from (S&~M) is never stored in a variable anywhere, I wouldn't really call it 'unsetting' anything. It's just a bitwise boolean expression. Any boolean expression with the same truth table will work. Here's another one:
N = ((S^V) & M) ^ S; // corresponds to Oli Charlesworth's answer
The related truth tables:
S M V | S&~M  (S&~M)|V | S^V  (S^V)&M  ((S^V)&M)^S
0 0 0 |  0       0     |  0      0         0
0 0 1 |  0       1     |  1      0         0        *
0 1 0 |  0       0     |  0      0         0
0 1 1 |  0       1     |  1      1         1
1 0 0 |  1       1     |  1      0         1
1 0 1 |  1       1     |  0      0         1        *
1 1 0 |  0       0     |  1      1         0
1 1 1 |  0       1     |  0      0         1
(the two final columns, (S&~M)|V and ((S^V)&M)^S, agree in every row not marked '*')
The rows marked with '*' don't matter because they won't occur (a bit in V will never be set when the corresponding mask bit is not set). Except for those rows, the truth tables for the expressions are the same.
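For completeness, here is the clear-then-set expression wrapped in a small helper (the function name is just for this sketch):

#include <cstdint>
#include <iostream>

// Replace the bits of 'value' selected by 'mask' with the corresponding bits of 'bits'.
std::uint32_t setMaskedBits(std::uint32_t value, std::uint32_t mask, std::uint32_t bits) {
    return (value & ~mask) | (bits & mask); // (S & ~M) | V
}

int main() {
    std::cout << std::hex << setMaskedBits(0x6BF, 0x0F0, 0x070) << '\n'; // 67f, i.e. 0110 0111 1111
}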