I have a very simple question.
Why does a number XOR'ed with 0 give the number itself?
Can someone please demonstrate this with an example?
Let's say I have the number 5:
5 ^ 0 ==> ?
I thought the answer would be just the last bit of 5 XOR'ed with 0, but the answer is still 5.
0 is false, and 1 is true.
By definition, the XOR operation A XOR B is "A or B, but not (A and B)". So, since B is false, the result is A.
Also, the XOR truth table shows that it outputs true whenever the inputs differ:
A  B | A XOR B
0  0 |    0
0  1 |    1
1  0 |    1
1  1 |    0
As you can see, whatever the value of A, XORing it with 0 gives back the bit itself.
So, as you say:
5 = 101, 0 = 000
Performing the XOR operation on the individual bits:
101
000
----
101 = 5.
Hence, the result of X^0 is X itself.
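A minimal C++ check of the identity, over a few arbitrarily chosen values:
#include <cassert>
#include <initializer_list>

int main() {
    for (int x : {0, 1, 5, 255, -7})
        assert((x ^ 0) == x); // XOR with 0 leaves every bit unchanged
    return 0;
}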
The XOR is applied to every bit of the number, not just the last one:
00000101 // = 5
00000000 // = 0
--------
00000101 // = 5
Bit-wise operations operate on all the bits in a number, not just on the last bit.
So if you perform a bit-wise operation on a 32-bit integer, all 32 bits are affected; the integer 5 is 0.....0000101 (32 bits). If you need just the last bit of the result after the XOR operation, apply a binary AND with 1:
console.log((5 ^ 0) & 1); // prints 1, the last bit of 5
console.log((6 ^ 0) & 1); // prints 0, the last bit of 6
Related
I stumbled upon this simple line of code, and I cannot figure out what it does. I understand what it does in separate parts, but I don't really understand it as a whole.
// We have an integer(32 bit signed) called i
// The following code snippet is inside a for loop declaration
// in place of a simple incrementor like i++
// for(;;HERE){}
i += (i&(-i))
If I understand correctly, it applies the binary AND operator to i and negative i, and then adds that number to i. I first thought this would be an optimized way of calculating the absolute value of an integer; however, as I've since learned, C++ does not store negative integers simply by flipping a sign bit, but please correct me if I'm wrong.
Assuming two's complement representation, and assuming i is not INT_MIN, the expression i & -i results in the value of the lowest bit set in i.
If we look at the value of this expression for various values of i:
0 00000000: i&(-i) = 0
1 00000001: i&(-i) = 1
2 00000010: i&(-i) = 2
3 00000011: i&(-i) = 1
4 00000100: i&(-i) = 4
5 00000101: i&(-i) = 1
6 00000110: i&(-i) = 2
7 00000111: i&(-i) = 1
8 00001000: i&(-i) = 8
9 00001001: i&(-i) = 1
10 00001010: i&(-i) = 2
11 00001011: i&(-i) = 1
12 00001100: i&(-i) = 4
13 00001101: i&(-i) = 1
14 00001110: i&(-i) = 2
15 00001111: i&(-i) = 1
16 00010000: i&(-i) = 16
We can see the pattern.
Extrapolating that to i += (i&(-i)), assuming i is positive, it adds the value of the lowest set bit to i. For values that are a power of two, this just doubles the number.
For other values, it rounds the number up by the value of that lowest bit. Repeating this in a loop, you eventually end up with a power of 2. As for what such an increment could be used for, that depends on the context in which the expression is used.
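A minimal C++ sketch of that behaviour, assuming two's complement and a positive starting value:
#include <cstdio>

int main() {
    int i = 5;                  // arbitrary positive starting value
    while (i & (i - 1)) {       // non-zero while i is not a power of two
        i += i & (-i);          // add the lowest set bit
        std::printf("%d\n", i); // prints 6, then 8
    }
    return 0;
}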
What I know about the A XOR B operation is that the output is 1 if A != B, and 0 if A == B. However, I have no insight into this operation when A and B are not single binary digits.
For example, if A = 1 and B = 3, then A XOR B = 2; also, if A = 2 and B = 3, then A XOR B = 1. Is there any pattern to the XOR operation for multi-bit values?
I have a good understanding of Boolean mathematics, so I already understand how XOR works on single bits. What I am asking is: how do you, for example, predict the outcome of A XOR B without going through the manual calculation, if A and B are not single bits? Let's pretend that 2 XOR 3 = 1 is not just a mathematical artifact.
Thanks!
Just look at the binary representations of the numbers, and perform the following rules on each bit:
0 XOR 0 = 0
0 XOR 1 = 1
1 XOR 0 = 1
1 XOR 1 = 0
So, 1 XOR 3 is:
1 = 001
3 = 011
XOR = 010 = 2
To convert a (decimal) number to binary, repeatedly divide by two until you get to 0; the remainders, read in reverse order, form the binary number.
To convert it back to decimal, add up the power of two for each position in the binary number that holds a 1 (the right-most position corresponds to the 0-th power). For example:
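Here is a short C++ sketch of both directions (toBinary and fromBinary are my own names):
#include <iostream>
#include <string>

// Decimal -> binary: collect the remainders of repeated division
// by two, in reverse order.
std::string toBinary(unsigned n) {
    if (n == 0) return "0";
    std::string bits;
    for (; n > 0; n /= 2)
        bits.insert(bits.begin(), char('0' + n % 2));
    return bits;
}

// Binary -> decimal: each '1' contributes the power of two of its
// position (the right-most position is the 0-th power).
unsigned fromBinary(const std::string& bits) {
    unsigned n = 0;
    for (char c : bits)
        n = n * 2 + (c - '0');
    return n;
}

int main() {
    std::cout << toBinary(5) << '\n';       // 101
    std::cout << fromBinary("101") << '\n'; // 5
    return 0;
}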
XOR on integers and other data is simply XOR of the individual bits:
A: 0|0|0|1 = 1
B: 0|0|1|1 = 3
=======
A^B: 0|0|1|0 = 2
^-- Each column is a single bit xor
When you use bit operations on numbers that are more than one bit, it simply performs the operation on each corresponding bit in the inputs, and that becomes the corresponding bit in the output. So:
A = 1 = 00000001
B = 3 = 00000011
--------
result= 00000010 = 2
A = 2 = 00000010
B = 3 = 00000011
--------
result= 00000001 = 1
The result has a 0 bit wherever the input bits were the same, a 1 bit wherever they were different.
You use the same method when performing AND and OR on integers.
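A quick C++ check of these examples (AND and OR included for comparison):
#include <iostream>

int main() {
    std::cout << (1 ^ 3) << '\n'; // XOR: 001 ^ 011 = 010 -> 2
    std::cout << (2 ^ 3) << '\n'; // XOR: 010 ^ 011 = 001 -> 1
    std::cout << (2 & 3) << '\n'; // AND: 010 & 011 = 010 -> 2
    std::cout << (2 | 3) << '\n'; // OR:  010 | 011 = 011 -> 3
    return 0;
}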
I'm trying to understand how an if condition works with bitwise operators.
A way to check if a number is even or odd can be done by:
#include <iostream>
#include <string>
using namespace std;
string test()
{
    int i = 8; // a number
    if (i & 1)
        return "odd";
    else
        return "even";
}

int main()
{
    cout << test();
    return 0;
}
The part I don't understand is how the if condition works. In this case, if i = 8, then the if statement is doing 1000 & 1, which I thought would give back 1000, which equals 8.
If i = 7, then the if statement should be doing 111 & 1, which I thought would give back 111, which equals 7.
Why is it the case that if(8) returns "even" and if(7) returns "odd"? I guess I want to understand what the if statement checks to be true and what to be false when dealing with bitwise operators.
Just a thought now that I've written this question down: is it because it's actually doing
for 8: 1000 & 0001, which gives 0
for 7: 0111 & 0001, which gives 1?
Yes, you are right in the last part. Binary & and | are performed bit by bit. Since
1 & 1 == 1
1 & 0 == 0
0 & 1 == 0
0 & 0 == 0
we can see that:
8 & 1 == 1000 & 0001 == 0000
and
7 & 1 == 0111 & 0001 == 0001
Your test function does correctly compute whether a number is even or odd though, because a & 1 tests whether there is a 1 in the ones place, which there is only for odd numbers.
Actually, in C, C++ and other major programming languages, the & operator performs an AND operation on each bit of its integral operands. The nth bit of a bitwise AND is equal to 1 if and only if the nth bit of both operands is equal to 1.
For example:
8 & 1 =
1000 - 8
0001 - 1
----
0000 - 0
7 & 1 =
0111 - 7
0001 - 1
----
0001 - 1
7 & 5 =
0111 - 7
0101 - 5
----
0101 - 5
For this reason, doing a bitwise AND between an even number and 1 will always equal 0, because only odd numbers have their least significant bit equal to 1.
if(x) in C++ converts x to boolean. An integer is considered true iff it is nonzero.
Thus, all if(i & 1) is doing is checking to see if the least-significant bit is set in i. If it is set, i&1 will be nonzero; if it is not set, i&1 will be zero.
The least significant bit is set in an integer iff that integer is odd, so thus i&1 is nonzero iff i is odd.
What you say the code is doing is actually how bit-wise operators are supposed to work. In your example of (8 & 1):
1000 & 0001 = 0000
because in the first value, the last bit is set to 0, while in the second value, the last bit is set to 1. 0 & 1 = 0.
0111 & 0001 = 0001
In both values, the last bit is set to 1, so the result is 1 since 1 & 1 = 1.
The expression i & 1, where i is an int, has type int. Its value is 1 or 0, depending on the value of the low bit of i. In the statement if(i & 1), the result of that expression is converted to bool, following the usual rule for integer types: 0 becomes false and non-zero becomes true.
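A small sketch generalizing the test() function above (parity is a name I'm introducing here):
#include <iostream>

// (i & 1) evaluates to 1 for odd i and 0 for even i;
// if() then treats 0 as false and non-zero as true.
const char* parity(int i) { return (i & 1) ? "odd" : "even"; }

int main() {
    for (int i = 6; i <= 9; ++i)
        std::cout << i << " is " << parity(i) << '\n';
    return 0;
}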
I'm having trouble with an algorithm.
I have a byte used for IO of which certain bits can be set with a method called XorAndXor.
The algorithm works as follows:
newValue = (((currentValue XOR xorMask1) AND andMask) XOR xorMask2)
The description reads:
If both xor-masks have the same value then this function inserts the bits of the xor-mask into the bit locations where the and-mask is 1. The other bits remain unchanged.
So what I expect from this function is that when I have the following byte: 00101101, and I use 01000000 for both xor-masks and as the and-mask, only the second bit would be set to 1 and the result would be 01101101.
However, when doing the math and going through the functions, the result is 00000000.
What am I doing wrong, or is there something about this function that I don't understand? It has been a while since I did this kind of low-level programming, so I don't really know whether this is a commonly used technique, or why and how you should use it.
Let me just ask this simple question: Is there a way to use this function effectively to set (or unset/change) a single bit (without asking specifically for the current value)?
For example: The current value is 00101101 (I don't know this), but I just want to make sure the second bit is set, so the result must be 01101101.
Important info: in my documentation PDF, it seems there is a little space between XOR and the first xorMask1, so this may be where a ~ or ! or some other negation sign was lost, possibly due to some weird encoding issue. So I will test whether the function does what the documentation says or what the function declaration says. Hold on to your helmets, I will post back with the results (drums please)...
00101101
XOR 01000000
-------------
01101101
AND 01000000
-------------
01000000
XOR 01000000
-------------
00000000
The documentation is not right. This wouldn't be the first time I've seen an implementation drift away from its original behavior while no one bothered to update the documentation.
I only did a quick check, so I might be wrong, but the following would be consistent with the documentation:
newValue = (((currentValue XOR xorMask1) AND ~andMask) XOR xorMask2)
00101101
XOR 01100100
-------------
01001001
AND 10011011
-------------
00001001
XOR 01100100
-------------
01101101
Here's the logic table for the expression New = ((Curr XOR Xor1) AND ~And) XOR Xor2, where Xor1 == Xor2:
CURR: 0 1 0 1 0 1 0 1
XOR1: 0 0 1 1 0 0 1 1
AND: 0 0 0 0 1 1 1 1
XOR2: 0 0 1 1 0 0 1 1
-----------------------
NEW: 0 1 0 1 0 0 1 1
---v--- ---v---
same as same as
current xor mask
where where
AND = 0 AND = 1
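A minimal C++ sketch of that corrected formula (xorAndXorDocumented is my own name for it; the 0b literals require C++14):
#include <cstdint>
#include <cstdio>

// Documented behavior: where andMask is 1, take the xor-mask bit;
// elsewhere keep the current bit (xorMask used for both XOR steps).
std::uint8_t xorAndXorDocumented(std::uint8_t current,
                                 std::uint8_t xorMask,
                                 std::uint8_t andMask) {
    return std::uint8_t(((current ^ xorMask) & ~andMask) ^ xorMask);
}

int main() {
    std::uint8_t r = xorAndXorDocumented(0b00101101, 0b01100100, 0b01100100);
    std::printf("%02X\n", r); // 6D == 0110 1101, matching the walk-through
    return 0;
}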
I've been studying this for a while now, and I think I see what others have missed. The XOR-AND-XOR process is useful for setting multiple bits without disturbing the others. For example, say we have a byte that we want to set to 1x1x xxx0, where the x's are bits we don't care about. We use the XOR mask to turn bits on and the AND mask to turn bits off; the bits we don't care about are left at each mask's identity value (0 for an XOR mask, since x XOR 0 = x, and 1 for an AND mask, since x AND 1 = x). So given our desired value, our masks look like this:
XOR: 10100000
AND: 01011110
If our mystery byte reads 10010101, the math then follows:
10010101
10100000 XOR
00110101 =
01011110 AND
00010100 =
10100000 XOR
10110100 =
The bits we want on are on, and the bits we want off are off, regardless of their prior state.
This is a nifty bit of logic for managing multiple bits.
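Here is a small C++ sketch of the process as described (xorAndXor is my own name for it; 0b literals require C++14):
#include <cstdint>
#include <cstdio>

// Force bits on where xorMask is 1, force bits off where both masks
// are 0, and leave the don't-care bits (andMask = 1) alone.
std::uint8_t xorAndXor(std::uint8_t value,
                       std::uint8_t xorMask, std::uint8_t andMask) {
    return std::uint8_t(((value ^ xorMask) & andMask) ^ xorMask);
}

int main() {
    std::uint8_t v = 0b10010101;
    // Target pattern 1x1x xxx0: turn bits 7 and 5 on, bit 0 off.
    std::uint8_t r = xorAndXor(v, 0b10100000, 0b01011110);
    std::printf("%02X\n", r); // B4 == 1011 0100, as in the walk-through
    return 0;
}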
EDIT: the last XOR is for toggling. If there is a bit that you know needs to change, but not what to, make that bit 1 in the second xor-mask only. So let's say we want to toggle the third bit; our masks will be:
XOR1 10100000
AND 01011110
XOR2 10100100
The last operation would then change to
00010100 =
10100100 XOR
10110000 =
and the third bit is toggled.
To answer your very simple question, this is how to set a bit:
value |= 0x100;
This is how to clear a bit:
value &= ~0x100;
In this example 0x100 is 000100000000 in binary, so it's setting/clearing bit 8 (counting from the right).
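For completeness, XOR toggles a bit. A tiny sketch of all three operations (the values here are arbitrary):
#include <cstdio>

int main() {
    unsigned value = 0;
    value |= 0x100;             // set bit 8
    value &= ~0x100u;           // clear bit 8
    value ^= 0x100;             // toggle bit 8 (sets it again here)
    std::printf("%X\n", value); // prints 100
    return 0;
}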
Others have already pointed out how your code sample just doesn't do what it claims to, so I won't elaborate on that any further.
XOR is the binary exclusive or and yields 1 only if exactly one of the two input bits is 1, therefore:
00101101 XOR 01000000 = 01101101
01101101 AND 01000000 = 01000000
01000000 XOR 01000000 = 00000000
p|q|r|s|p^q|(p^q)&r|((p^q)&r)^s|
0|0|0|0| 0 | 0 | 0 |
0|0|0|1| 0 | 0 | 1 |
0|0|1|0| 0 | 0 | 0 |
0|0|1|1| 0 | 0 | 1 |
0|1|0|0| 1 | 0 | 0 |
0|1|0|1| 1 | 0 | 1 |
0|1|1|0| 1 | 1 | 1 |
0|1|1|1| 1 | 1 | 0 |
1|0|0|0| 1 | 0 | 0 |
1|0|0|1| 1 | 0 | 1 |
1|0|1|0| 1 | 1 | 1 |
1|0|1|1| 1 | 1 | 0 |
1|1|0|0| 0 | 0 | 0 |
1|1|0|1| 0 | 0 | 1 |
1|1|1|0| 0 | 0 | 0 |
1|1|1|1| 0 | 0 | 1 |
Check this table against your input bits to find the output, and change your masks accordingly to get the output you need.
Make yourself a truth table and follow a 1 and a 0 through the process.
Anything Xor 0 will be left unchanged (1 Xor 0 is 1 ; 0 Xor 0 is 0)
Anything Xor 1 will be flipped (1 Xor 1 is 0; 0 Xor 1 is 1)
When Anding, everything goes to 0 except where there is a 1 bit in the And mask - those stay unchanged
So your first Xor can only change the second bit from the left, because that's where you have a 1 in the mask. It flips that bit from 0 to 1. The And leaves that bit alone and sets all the others to 0. The second Xor flips your 1 back to 0 and leaves all the others unchanged.
Result: all zeroes like you said.
Is your question what combination of Xor and And will give you the behaviour the documentation describes? To turn on just one bit, use a bitwise Or where the mask has just that bit set to 1 and the others 0. To turn off just one bit, use a bitwise And where the mask has just that bit 0 and the others 1. If you wanted to turn 2 bits on and 3 bits off at once, this kind of mask trickery saves a lot of "if"-ing, but if you just want to affect one bit, do it the simple way and ignore this function, which appears to be written not-quite-right.
XOR is the logical exclusive or. Which means one or the other, but not both and not neither.
Here is the truth table from Wikipedia.
A | B | Output
---------------
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0
currentValue XOR xorMask1:
00101101 xor 01000000 = 01101101
result AND andMask:
01101101 and 01000000 = 01000000
result XOR xorMask2:
01000000 xor 01000000 = 00000000
I'm not good at English, so I can't ask this better, but please see below:
if byte in binary is 1 0 0 0 0 0 0 0 then result is 1
if byte in binary is 1 1 0 0 0 0 0 0 then result is 2
if byte in binary is 1 1 1 0 0 0 0 0 then result is 3
if byte in binary is 1 1 1 1 0 0 0 0 then result is 4
if byte in binary is 1 1 1 1 1 0 0 0 then result is 5
if byte in binary is 1 1 1 1 1 1 0 0 then result is 6
if byte in binary is 1 1 1 1 1 1 1 0 then result is 7
if byte in binary is 1 1 1 1 1 1 1 1 then result is 8
But if, for example, the byte in binary is 1 1 1 0 * * * * (where the *'s can be anything), then the result is 3.
I want to determine how many contiguous bits are set from left to right, with one operation.
The results don't need to be the numbers 1-8; anything that distinguishes the cases is fine.
I think it's possible in one or two operations, but I don't know how.
If you don't know a solution as short as 2 operations, please write that too, and I'll stop trying.
Easiest non-branching solution I can think of:
y=~x
y|=y>>4
y|=y>>2
y|=y>>1
Invert x, and extend the leftmost 1-bit (which corresponds to the leftmost 0-bit in the non-inverted value) to the right. This will give distinct values (not 1-8 though, but it's pretty easy to map them).
110* ****
turns into
001* ****
001* **1*
001* 1*1*
0011 1111
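A C++ sketch of this version on a byte (leadingOnesClass is my own name; each distinct result identifies one class):
#include <cstdio>

// Returns a distinct value per count of contiguous leading 1-bits.
unsigned char leadingOnesClass(unsigned char x) {
    unsigned char y = ~x;  // leftmost 0-bit of x becomes leftmost 1-bit
    y |= y >> 4;           // smear that 1-bit to the right
    y |= y >> 2;
    y |= y >> 1;
    return y;              // e.g. 110* **** -> 0011 1111
}

int main() {
    std::printf("%02X\n", leadingOnesClass(0b11000000)); // 3F
    std::printf("%02X\n", leadingOnesClass(0b11100000)); // 1F
    return 0;
}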
EDIT:
As pointed out in a different answer, using a precomputed lookup table is probably the fastest. Given only 8 bits of input, it's quite feasible in terms of memory consumption.
EDIT:
Heh, whoops, my bad... You can skip the invert and do ANDs instead:
x&=x>>4
x&=x>>2
x&=x>>1
here
110* ****
gives
110* **0*
110* 0*0*
1100 0000
As you can see all values beginning with 110 will result in the same output (1100 0000).
EDIT:
Actually, the 'and' version relies on sign extension when right-shifting negative numbers, which is implementation-defined behavior in C. It will usually do the right thing if you use a signed 8-bit type (i.e. char rather than unsigned char in C), but since the behavior is implementation-defined, it might not always work.
I'd second a lookup table... otherwise you can also do something like:
#include <intrin.h> // MSVC header that declares _BitScanReverse

unsigned long inverse_bitscan_reverse(unsigned long value)
{
    unsigned long bsr = 0;
    // Invert and keep only the low byte, so the inverted upper bits of
    // the 32-bit value don't win the scan. BSR returns the index of the
    // highest set bit (x86 bsr instruction).
    _BitScanReverse(&bsr, ~value & 0xFF);
    return bsr; // 7 - bsr would be the count of leading 1-bits
}
EDIT: Note that you have to be careful of the special case where "value" has no zeroed bits in its low byte, since the value scanned above would then be 0. See the documentation for _BitScanReverse.