The C++ Xor is ^. So if I:
a ^ b
it should do a XOR b
However, when the values are 4246661 and 0, so
4246661 ^ 0
it prints 4246661, when it really should be 0.
Am I missing something?
EDIT: Wow, I was going off of an online XOR calculator which was giving me weird results... sorry.
XOR result is 1 if one and only one of the two values is 1, meaning:
0 XOR 0 is 0
0 XOR 1 is 1
1 XOR 0 is 1
1 XOR 1 is 0
So (4246661 XOR 0), which is (0b10000001100110010000101 XOR 0b0), results in 0b10000001100110010000101... no problem here!
Anything XOR 0 results in that same Anything.
Doing an exclusive or of any number with 0 yields that same number.
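To see this in running code, here is a minimal C++ sketch (my own illustration, not part of the original answer):
#include <iostream>

int main() {
    int a = 4246661;
    std::cout << (a ^ 0) << '\n';  // prints 4246661: XOR with 0 leaves every bit unchanged
    std::cout << (a ^ a) << '\n';  // prints 0: XOR of a value with itself clears every bit
    return 0;
}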
bitwise:
1 OR 0 = 1
1 EOR 0 = 1
1 EOR 1 = 0
with numbers:
nbr OR 0 = nbr
nbr EOR 0 = nbr
What I know about the A XOR B operation is that the output is 1 if A != B, and 0 if A == B. However, I have no insight about this operation when A and B are not binary.
For example, if A = 1, B = 3, then A XOR B = 2; also, if A = 2, B = 3, then A XOR B = 1. Is there any pattern to the XOR operation for non-binary values?
I have a good understanding of boolean mathematics, so I already understand how XOR works. What I am asking is how you would, for example, predict the outcome of A XOR B without going through the manual calculation, if A and B are not binary. Let's pretend that 2 XOR 3 = 1 is not just a mathematical artifact.
Thanks!
Just look at the binary representations of the numbers, and perform the following rules on each bit:
0 XOR 0 = 0
0 XOR 1 = 1
1 XOR 0 = 1
1 XOR 1 = 0
So, 1 XOR 3 is:
1 = 001
3 = 011
XOR = 010 = 2
To convert a (decimal) number to binary, repeatedly divide by two until you reach 0; the remainders, read in reverse order, are the binary digits.
Another way to do the conversion: repeatedly subtract the largest power of two that is no bigger than the number until you reach 0, and set to 1 each position in the binary number corresponding to a power you subtracted (the right-most position corresponds to the 0-th power).
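As a rough C++ sketch of the division method (my own illustration; the helper name to_binary is made up):
#include <iostream>
#include <string>

// Repeatedly divide by two; the remainders, read in reverse, are the bits.
std::string to_binary(unsigned n) {
    if (n == 0) return "0";
    std::string bits;
    while (n > 0) {
        bits.insert(bits.begin(), char('0' + n % 2));  // prepend the current remainder
        n /= 2;
    }
    return bits;
}

int main() {
    std::cout << to_binary(1) << '\n';      // 1
    std::cout << to_binary(3) << '\n';      // 11
    std::cout << to_binary(1 ^ 3) << '\n';  // 10, i.e. 2
    return 0;
}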
XOR on integers and other data is simply XOR of the individual bits:
A: 0|0|0|1 = 1
B: 0|0|1|1 = 3
=======
A^B: 0|0|1|0 = 2
^-- Each column is a single bit xor
When you use bit operations on numbers that are more than one bit, it simply performs the operation on each corresponding bit in the inputs, and that becomes the corresponding bit in the output. So:
A = 1 = 00000001
B = 3 = 00000011
--------
result= 00000010 = 2
A = 2 = 00000010
B = 3 = 00000011
--------
result= 00000001 = 1
The result has a 0 bit wherever the input bits were the same, a 1 bit wherever they were different.
You use the same method when performing AND and OR on integers.
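A short C++ sketch of this (my own addition; std::bitset is used only to display the bit patterns):
#include <bitset>
#include <iostream>

int main() {
    unsigned a = 1, b = 3;
    std::cout << std::bitset<8>(a ^ b) << " = " << (a ^ b) << '\n';  // 00000010 = 2
    std::cout << std::bitset<8>(a & b) << " = " << (a & b) << '\n';  // 00000001 = 1
    std::cout << std::bitset<8>(a | b) << " = " << (a | b) << '\n';  // 00000011 = 3
    return 0;
}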
How does the XOR logical operator work on more than two values?
For instance, in an operation such as 1 ^ 3 ^ 7?
0 0 0 1 // 1
0 0 1 1 // 3
0 1 1 1 // 7
-------
0 1 0 1 // 5
for some reason yields 0 1 0 1, whereas I thought it should have yielded 0 1 0 0, since XOR is only true when strictly one of the operands is true.
Because ^ is a binary (two-operand) operator, and its associativity here is left-to-right.
First 1 ^ 3 is evaluated
0 0 0 1 // 1
0 0 1 1 // 3
-------
0 0 1 0 // 2
The result is 2, then this number is the first operand of the last xor operation (2 ^ 7)
0 0 1 0 // 2
0 1 1 1 // 7
-------
0 1 0 1 // 5
The result is 5.
1 ^ 3 ^ 7 is not a function of three arguments, it is: (1 ^ 3) ^ 7 which equals 2 ^ 7 which equals 5.
Though actually this ^ operator is associative: each bit in the result will be set if and only if an odd number of the operands had the bit set.
XOR works bitwise, XORing each position separately
XOR is commutative, so a^b = b^a
XOR is associative, so (a^b)^c = a^(b^c)
Using this, you can count the number of ones in each bit position: the result bit is set exactly when an odd number of the operands have a one in that position.
Counting ones this way yields 0101 in binary, which is 5.
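A quick C++ check of the parity rule (my own sketch, not from the answer):
#include <bitset>
#include <iostream>

int main() {
    int r = 1 ^ 3 ^ 7;  // evaluated as (1 ^ 3) ^ 7
    // A result bit is 1 exactly when an odd number of operands have a 1 there:
    // bit 0 is set in all three operands (odd), bit 1 in two (even), bit 2 in one (odd).
    std::cout << std::bitset<4>(r) << " = " << r << '\n';  // 0101 = 5
    return 0;
}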
The expression is parsed as (1 ^ 3) ^ 7 so you first get
0001 ^ 0011
which is 0010. The rest is
0010 ^ 0111
which is 0101
^ is a binary operator. It doesn't work on all three numbers at once, i.e. it's (1^3)^7, which is:
1 ^ 3 == 2
2 ^ 7 == 5
I'd need to perform a bitwise operation (or a series of them) so that:
0 1 = 0
1 1 = 1
1 0 = 0
so far AND (&) works fine but I also need that
0 0 = 1
and here AND (&) is not the correct one.
I'm using it in a jquery grep function that reads:
jQuery.grep(json, function (e, index) {
    return (e.value & (onoff << 3)) != 0;
});
where onoff could be either 1 or 0 and e.value is a representation of a 4-bit string (e.g. it could be "1001"). In the example above I'm testing the first bit on the left (<< 3).
Can this be done with a series of AND, OR, XOR?
This is just XNOR(a, b), which is equal to NOT(XOR(a, b)), i.e. exclusive OR with the output inverted. In C and C-like languages this would be:
!(a ^ b)
or in your specific case:
return !((e.value >> 3) ^ onoff);
Having said that, you could just test for equality:
return (e.value >> 3) == onoff;
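A C++ analogue of that test (my own sketch; the names e_value and onoff simply mirror the question's variables and are only for illustration):
#include <iostream>

int main() {
    unsigned e_value = 0b1001;  // a sample 4-bit value like the question's "1001"
    unsigned onoff = 1;

    bool keep_xnor = !((e_value >> 3) ^ onoff);  // XNOR of the leftmost bit with onoff
    bool keep_eq = (e_value >> 3) == onoff;      // the equivalent equality test

    std::cout << keep_xnor << ' ' << keep_eq << '\n';  // prints 1 1
    return 0;
}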
This looks roughly like XOR, which has the following truth table:
0 0 = 0
0 1 = 1
1 0 = 1
1 1 = 0
Now you want the opposite, meaning you want 1 when both inputs have the same value. And this leads us to NOT XOR (XNOR):
0 0 = 1
0 1 = 0
1 0 = 0
1 1 = 1
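A tiny C++ sketch of that NOT XOR over single bits (my own illustration):
#include <iostream>

int main() {
    for (int a = 0; a <= 1; ++a) {
        for (int b = 0; b <= 1; ++b) {
            // !(a ^ b) is 1 when the bits are equal and 0 when they differ (XNOR)
            std::cout << a << " XNOR " << b << " = " << !(a ^ b) << '\n';
        }
    }
    return 0;
}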
The question seems pretty well formulated:
I have a virtual machine which implements only AND, XOR, SHL and SHR, yet I have to do an "OR 0x01" operation.
First of all, getting a correct bitwise computation for the following two variables is sufficient, because together they cover all four bit combinations:
A=0101
B=0011
We want
0101
0011
A or B
0111
for xor we get
0101
0011
A xor B
0110
for and we get
0101
0011
A and B
0001
so if we connect them with an XOR, we are done:
(A xor B) xor (A and B)
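A quick C++ check of that identity (my own sketch, not part of the answer):
#include <cassert>
#include <iostream>

int main() {
    unsigned a = 0b0101, b = 0b0011;
    unsigned or_from_xor_and = (a ^ b) ^ (a & b);  // OR built from XOR and AND only
    assert(or_from_xor_and == (a | b));            // the identity holds for any integers
    std::cout << or_from_xor_and << '\n';          // prints 7, i.e. 0b0111
    return 0;
}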
I would just start with
a xor b = ((not a) and b) or (a and (not b))
and unleash some boolean algebra on that until it looks like
a or b = <expression using only and, xor>
Admittedly, this is probably more work to actually do than going the "try every conceivable bit combination" route, but then you did ask for homework solution ideas. :)
The truth tables, as summarized on Wikipedia, and (gasp) basic CS 101 stuff, De Morgan's Law...
AND
0 & 0 0
0 & 1 0
1 & 0 0
1 & 1 1
OR
0 | 0 0
0 | 1 1
1 | 0 1
1 | 1 1
XOR
0 ^ 0 0
0 ^ 1 1
1 ^ 0 1
1 ^ 1 0
A Shift Left involves shifting the bits across from right to left, suppose:
+-+-+-+-+-+-+-+-+
|7|6|5|4|3|2|1|0|
+-+-+-+-+-+-+-+-+
|0|0|0|0|0|1|0|0| = 0x4 hexadecimal or 4 decimal or 100 in binary
+-+-+-+-+-+-+-+-+
Shift Left by 2 places becomes
+-+-+-+-+-+-+-+-+
|7|6|5|4|3|2|1|0|
+-+-+-+-+-+-+-+-+
|0|0|0|1|0|0|0|0| = 0x10 hexadecimal or 16 decimal or 10000 in binary
+-+-+-+-+-+-+-+-+
Shift Right by 1 place becomes
+-+-+-+-+-+-+-+-+
|7|6|5|4|3|2|1|0|
+-+-+-+-+-+-+-+-+
|0|0|0|0|1|0|0|0| = 0x8 hexadecimal or 8 decimal or 1000 in binary
+-+-+-+-+-+-+-+-+
Then it is a matter of combining the bit-wise operations according to the truth table above...
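In C++ the shifts above look like this (my own illustrative sketch):
#include <iostream>

int main() {
    unsigned x = 0x4;            // 100 in binary, 4 decimal
    unsigned left = x << 2;      // 10000 in binary, 0x10, 16 decimal
    unsigned right = left >> 1;  // 1000 in binary, 0x8, 8 decimal
    std::cout << left << ' ' << right << '\n';  // prints 16 8
    return 0;
}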
I would just expand De Morgan's law: A or B = not(not A and not B). You can compute not by XORing with all 1 bits.
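A sketch of that approach in C++ (my own; NOT is built from XOR against an all-ones mask, and the mask assumes 32-bit unsigned):
#include <cassert>

// NOT expressed with XOR only: flip every bit by XORing with all ones.
unsigned bit_not(unsigned x) { return x ^ 0xFFFFFFFFu; }  // assumes 32-bit unsigned

int main() {
    unsigned a = 0b0101, b = 0b0011;
    // De Morgan: A or B = not(not A and not B)
    unsigned or_via_demorgan = bit_not(bit_not(a) & bit_not(b));
    assert(or_via_demorgan == (a | b));  // 0b0111, i.e. 7
    return 0;
}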
I want to generate a pseudo-random bool stream based on a modulo operation on another stream of integers (say X), so the operation would be
return (X % 2);
The only problem is that X is a stream of integers that always ends in 1, so for instance it would be something like 1211, 1221, 1231, 1241... Is there a way for me to disregard the last bit (without using string manipulation) so the test doesn't always pass or always fail?
How about (X / 10) % 2 then?
If you'd otherwise be happy to use the last bits, use the penultimate bits instead:
return (x & 0x2) >> 1;
So say the next number from your stream is 23:
1 0 1 1 1 // 23 in binary
& 0 0 0 1 0 // 0x2 in binary
-----------
0 0 0 1 0
Shifting that right by one bit (>> 1) gives 1. With 25, the answer would be 0:
1 1 0 0 1
& 0 0 0 1 0
-----------
0 0 0 0 0
return (x % 20 / 10);
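For comparison, a small C++ sketch of the suggested approaches (my own, using a made-up sample value from the described stream):
#include <iostream>

int main() {
    int x = 1231;  // sample value: the stream always ends in the decimal digit 1

    bool a = (x / 10) % 2;    // parity of the tens digit: 1231 / 10 = 123, 123 % 2 = 1
    bool b = (x & 0x2) >> 1;  // the penultimate binary bit of x
    bool c = x % 20 / 10;     // also the parity of the tens digit: 1231 % 20 = 11, 11 / 10 = 1

    std::cout << a << ' ' << b << ' ' << c << '\n';  // prints 1 1 1 for this sample
    return 0;
}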