how to calculate bitwise OR using AND, XOR and shift? - bit-manipulation

The question seems pretty well formulated
I have a virtual machine which implements only AND, XOR, SHL and SHR, yet I have to do a "OR 0x01" operation.

First of all, having a correct bitwise computation for the following two variables is sufficient, because their bits cover all four input combinations:
A=0101
B=0011
We want
0101
0011
A or B
0111
for xor we get
0101
0011
A xor B
0110
for and we get
0101
0011
A and B
0001
so if we connect them with an xor we are done.
(A xor B) xor (A and B)

I would just start with
a xor b = ((not a) and b) or (a and (not b))
and unleash some boolean algebra on that until it looks like
a or b = <expression using only and, xor>
Admittedly, this is probably more work to actually do than going the "try every conceivable bit combination" route, but then you did ask for homework solution ideas. :)

The truth tables, as summarized on Wikipedia (and, gasp, basic CS 101 stuff), plus De Morgan's Law....
AND
0 & 0 = 0
0 & 1 = 0
1 & 0 = 0
1 & 1 = 1
OR
0 | 0 = 0
0 | 1 = 1
1 | 0 = 1
1 | 1 = 1
XOR
0 ^ 0 = 0
0 ^ 1 = 1
1 ^ 0 = 1
1 ^ 1 = 0
A Shift Left moves each bit toward the most significant end (right to left). Suppose:
+-+-+-+-+-+-+-+-+
|7|6|5|4|3|2|1|0|
+-+-+-+-+-+-+-+-+
|0|0|0|0|0|1|0|0| = 0x4 hexadecimal or 4 decimal or 100 in binary
+-+-+-+-+-+-+-+-+
Shift Left by 2 places becomes
+-+-+-+-+-+-+-+-+
|7|6|5|4|3|2|1|0|
+-+-+-+-+-+-+-+-+
|0|0|0|1|0|0|0|0| = 0x10 hexadecimal or 16 decimal or 10000 in binary
+-+-+-+-+-+-+-+-+
Shift Right by 1 place becomes
+-+-+-+-+-+-+-+-+
|7|6|5|4|3|2|1|0|
+-+-+-+-+-+-+-+-+
|0|0|0|0|1|0|0|0| = 0x8 hexadecimal or 8 decimal or 1000 in binary
+-+-+-+-+-+-+-+-+
Then it is a matter of combining the bit-wise operations according to the truth table above...

I would just expand DeMorgan's law: A or B = not(not A and not B). You can compute not by XORing with all 1 bits.

Related

Unitary number for “&” bitwise operator in c++ [closed]

I have a question; I would appreciate it if you helped me to understand it. Imagine I define the following number
c= 0x3FFFFFFF
and a = an arbitrary integer number Q. My question is: why is a &= c always equal to Q, unchanged? For example, if I take a=10 then the result of a &= c is 10; if a=256 the result of a &= c is 256. Could you please explain why? Thanks a lot.
Both a and c are integer types, composed of 32 bits in a computer. The first (most significant) bit of an integer is the sign bit: 0 for a positive number, 1 for a negative one. 0x3FFFFFFF is a special value: its first two bits are 0 and all the other bits are 1. Since 1 & 1 = 1 and 1 & 0 = 0, when the number a is positive and no larger than c, a & 0x3FFFFFFF is still a itself.
a &= c is the same as a = a & c, which computes the bitwise AND of a and c and then assigns that value back to a - just in case you've mistaken what that operator does.
Now c consists almost entirely of 1 bits. Then just think what each bit becomes: 1 & x will always be x. Since you only try such low numbers, none of them will change.
Try a value of a with bits set above bit 29, such as a = 0x40000000, and you will get a different result.
You have not tested a &= c; with all possible values of a and are incorrect to assert it does not change the value of a in all cases.
a &= c; sets a to a value in which each bit is set if the two bits in the same position in a and in c are both set. If the two bits are not both set, the bit in the result is clear.
In 0x3FFFFFFF, the 30 least significant bits are set. When this is used in a &= c; with any number in which higher bits are set, such as 0xC0000000, the higher bits will be cleared.
If you know how the bitwise & ("and") operation works, then there should be no question about this. Say you have two numbers a and b, each n bits long. Look,
a => a_(n-1) a_(n-2) a_(n-3) ... a_i ... a_2 a_1 a_0
b => b_(n-1) b_(n-2) b_(n-3) ... b_i ... b_2 b_1 b_0
Where a_0 and b_0 are the least significant bits and a_(n-1) and b_(n-1) are the most significant bits of a and b respectively.
Now, take a look at the & operation on two single binary bits.
1 & 1 = 1
1 & 0 = 0
0 & 1 = 0
0 & 0 = 0
So, the result of the & operation is 1 only when both bits are 1. If at least one bit is 0, then the result is 0.
Now, for n-bits long number,
(a & b)_i = a_i & b_i; where `i` runs from 0 to `n-1`
For example, if a and b both are 4 bits long numbers and a = 5, b = 12, then
a = 5 => a = 0101
b = 12 => b = 1100
if c = (a & b), c_i = (a_i & b_i) for i=0..3, here all numbers are 4 bits(0..3)
now, c = c_3 c_2 c_1 c_0
so c_3 = a_3 & b_3
c_2 = a_2 & b_2
c_1 = a_1 & b_1
c_0 = a_0 & b_0
a 0 1 0 1
b 1 1 0 0
-------------
c 0 1 0 0 (that means c = 4)
therefore, c = a & b = 5 & 12 = 4
Now, what would happen, if all of the bits in one number are 1s?
Let's see.
0 & 1 = 0
1 & 1 = 1
so if one bit is fixed at 1, then the result is the same as the other bit.
if a = 5 (0101) and b = 15 (1111), then
a 0 1 0 1 (5)
b 1 1 1 1 (15)
------------------
c 0 1 0 1 (5, which is equal to a=5)
So, if either number has all of its bits set to 1, then the & result is the same as the other number. Actually, for a = any 4-bit value, you will get a as the result, since b is 4 bits long and all 4 bits are 1s.
Now another issue arises when a > 15, i.e. when a no longer fits in 4 bits.
For the above example, expand the bit size by 1 (to 5 bits) and change the value of a to 25.
a = 25 (11001) and b = 15 (01111). b is the same as before except for the width, so its Most Significant Bit (MSB) is 0. Now,
a 1 1 0 0 1 (25)
b 0 1 1 1 1 (15)
----------------------
c 0 1 0 0 1 (9, not equal to a=25)
So, it is clear that we have to keep every single bit to 1 if we want to get the other number as the result of the & operation.
Now it is time to analyze the scenario you posted.
Here, a &= c is the same as a = a & c.
We assumed that you are using 32-bit integer variables.
You set c = 0x3FFFFFFF means c = (2^30) - 1 or c = 1073741823
a = 0000 0000 0000 0000 0000 0000 0000 1010 (10)
& c = 0011 1111 1111 1111 1111 1111 1111 1111 (1073741823)
----------------------------------------------------------------
a = 0000 0000 0000 0000 0000 0000 0000 1010 (10, which is equal to a=10)
and
a = 0000 0000 0000 0000 0000 0001 0000 0000 (256)
& c = 0011 1111 1111 1111 1111 1111 1111 1111 (1073741823)
----------------------------------------------------------------
a = 0000 0000 0000 0000 0000 0001 0000 0000 (256, which is equal to a=256)
but, if a > c, say a=0x40000000 (1073741824, c+1 in base 10), then
a = 0100 0000 0000 0000 0000 0000 0000 0000 (1073741824)
& c = 0011 1111 1111 1111 1111 1111 1111 1111 (1073741823)
----------------------------------------------------------------
a = 0000 0000 0000 0000 0000 0000 0000 0000 (0, which is not equal to a=1073741824)
So, your assumption (that the value of a after executing the statement a &= c is the same as the previous a) is true only if a <= c

Invert (flip) last n bits of a number with only bitwise operations

Given a binary integer, how can I invert (flip) last n bits using only bitwise operations in c/c++?
For example:
// flip last 2 bits
0110 -> 0101
0011 -> 0000
1000 -> 1011
You can flip last n bits of your number with
#define flipBits(n,b) ((n)^((1u<<(b))-1))
for example flipBits(0x32, 4) will flip the last 4 bits and result will be 0x3d
this works because if you think how XOR works
0 ^ 0 => 0
1 ^ 0 => 1
bits aren't flipped
0 ^ 1 => 1
1 ^ 1 => 0
bits are flipped
(1u<<(b))-1
this part builds a mask covering the last b bits
for example, if b is 4 then 1<<4 is 0b10000, and subtracting 1 gives our mask 0b1111; we can then XOR this with our number to get the desired output.
works for C and C++

Convert an 8 bit data to 3 bit data

PROGRAMMING LANGUAGE: C
I've a 8 bit data with only 3 bit used, for example:
0110 0001
Where 0 indicate unused bit that are always set to 0 and 1 indicate bits that change.
I want to convert this 0110 0001 8 bit to 3 bit that indicate this 3 used bits.
For example
0110 0001 --> 111
0010 0001 --> 011
0000 0000 --> 000
0100 0001 --> 101
How I can do that with minimal operations?
You can achieve this with a couple of bitwise operations:
((a >> 4) & 6) | (a & 1)
Assuming you start from xYYx xxxY, where x is a bit you don't care about and Y a bit to keep:
the right shift by 4 of a will result in xYYx, then masking with 6 (binary 110) will make sure only the second and third bits are retained, resulting in YY0 and preventing stray x bits from messing up.
a & 1 selects the LSB, resulting in Y.
the two parts, YY0 and Y are combined using a | bitwise or, resulting in YYY.
Now you have the 3 bits you asked for. But keep in mind that you can't address single bits, so the result will still be byte-aligned, as 00000YYY
You can get the k'th bit of n: (where n is 0110 0001)
(n & ( 1 << k )) >> k
(More details about that at StackOverflow)
so you use that to get bits 0, 5 and 6 and just add their weighted values:
r = bit0 + bit5*2 + bit6*4

What does Xor -> And -> Xor do?

I'm having trouble with an algorithm.
I have a byte used for IO of which certain bits can be set with a method called XorAndXor.
The algorithm works as follows:
newValue = (((currentValue XOR xorMask1) AND andMask) XOR xorMask2)
The description reads:
If both xor-masks have the same value then this function inserts the
bits of the xor-mask into the bit locations where the and-mask is
1. The other bits remain unchanged.
So what I expect from this function is when I have the following byte: 00101101 and I use 01000000 for both xor-masks and as the and-mask, that only the second bit would be set to 1 and the result would be 01101101.
However, when doing the math and going through the functions, the result is 00000000.
What am I doing wrong or is there something about this function that I don't understand? This kind of low level programming has been a while so I don't really know if this is a methodology used often and why and how you should use it.
Let me just ask this simple question: Is there a way to use this function effectively to set (or unset/change) a single bit (without asking specifically for the current value)?
For example: The current value is 00101101 (I don't know this), but I just want to make sure the second bit is set, so the result must be 01101101.
Important Info In my documentation PDF, it seems there is a little space between XOR and the first xorMask1, so this may be where a ~ or ! or some other negation sign might have been and it could very well be lost due to some weird encoding issues. So I will test the function if it does what the documentation says or what the function declaration says. Hold on to your helmets, will post back with the results (drums please)....
00101101
XOR 01000000
-------------
01101101
AND 01000000
-------------
01000000
XOR 01000000
-------------
00000000
The documentation is not right. This wouldn't be the first time I've seen an implementation that totally drifted from its documentation, with no one bothering to update the docs.
I did a quick check, so I might be wrong, but the following would be consistent with the documentation:
newValue = (((currentValue XOR xorMask1) AND ~andMask) XOR xorMask2)
00101101
XOR 01100100
-------------
01001001
AND 10011011
-------------
00001001
XOR 01100100
-------------
01101101
here's the logic table for the expression New = ((Curr XOR Xor1) AND ~And) XOR Xor2, where Xor1 == Xor2
CURR: 0 1 0 1 0 1 0 1
XOR1: 0 0 1 1 0 0 1 1
AND: 0 0 0 0 1 1 1 1
XOR2: 0 0 1 1 0 0 1 1
-----------------------
NEW: 0 1 0 1 0 0 1 1
---v--- ---v---
same as same as
current xor mask
where where
AND = 0 AND = 1
I've been studying this for a while now, and I think I see what others are not. The XOR AND XOR process is useful for setting multiple bits without disturbing the others. For example, given a byte, suppose we want to force it to 1x1x xxx0, where the x's are bits we don't care about. Using the XOR AND XOR process, we use the XOR mask to turn bits on and the AND mask to turn bits off; for the bits we don't care about, each mask keeps its neutral value (0 for an XOR mask [x XOR 0 = x] and 1 for an AND mask [x AND 1 = x]). So given our desired value, our masks look like this:
XOR: 10100000
AND: 01011110
If our mystery bit reads 10010101, the math then follows:
10010101
10100000 XOR
00110101 =
01011110 AND
00010100 =
10100000 XOR
10110100 =
The bits we want on are on, and the bits we want off are off, regardless of their prior state.
This is a nifty bit of logic for managing multiple bits.
EDIT: the last XOR is for toggling. If there is a bit that you know needs to change, but not to what value, make it a 1 there. So let's say we want to toggle the third bit; our masks will be:
XOR1 10100000
AND 01011110
XOR2 10100100
The last interaction would then change to
00010100 =
10100100 XOR
10110000 =
and the third bit is toggled.
To answer your very simple question, this is how to set a bit:
value |= 0x100;
This is how to clear a bit:
value &= ~0x100;
In this example 0x100 is 000100000000 in binary, so it's setting/clearing bit 8 (counting from the right).
Others have already pointed out how your code sample just doesn't do what it claims to, so I won't elaborate on that any further.
The XOR is binary exclusive or and returns 1 only when exactly one of the two bits is set, therefore:
00101101 XOR 01000000 = 01101101
01101101 AND 01000000 = 01000000
01000000 XOR 01000000 = 00000000
p|q|r|s|p^q|(p^q)&r|((p^q)&r)^s|
0|0|0|0| 0 | 0 | 0 |
0|0|0|1| 0 | 0 | 1 |
0|0|1|0| 0 | 0 | 0 |
0|0|1|1| 0 | 0 | 1 |
0|1|0|0| 1 | 0 | 0 |
0|1|0|1| 1 | 0 | 1 |
0|1|1|0| 1 | 1 | 1 |
0|1|1|1| 1 | 1 | 0 |
1|0|0|0| 1 | 0 | 0 |
1|0|0|1| 1 | 0 | 1 |
1|0|1|0| 1 | 1 | 1 |
1|0|1|1| 1 | 1 | 0 |
1|1|0|0| 0 | 0 | 0 |
1|1|0|1| 0 | 0 | 1 |
1|1|1|0| 0 | 0 | 0 |
1|1|1|1| 0 | 0 | 1 |
Check this table against your input bit values to see the output, and change your masks accordingly to suit the output you need.
Make yourself a truth table and follow a 1 and a 0 through the process.
Anything Xor 0 will be left unchanged (1 Xor 0 is 1 ; 0 Xor 0 is 0)
Anything Xor 1 will be flipped (1 Xor 1 is 0; 0 Xor 1 is 1)
When Anding, everything goes to 0 except where there is a 1 bit in the And mask - those stay unchanged
So your first Xor can only change the second bit from the left, because that's where you have a 1 in the mask. It flips that bit from 0 to 1. The And leaves that bit alone and sets all the others to 0. The second Xor flips your 1 back to 0 and leaves all the others unchanged.
Result: all zeroes like you said.
Is your question what combination of Xor and And will give you the behaviour the documentation says? To turn on just one bit, use a bitwise Or where the mask has just that bit 1 and the others are zero. To turn off just one bit, use a bitwise And where the mask has just that bit 0 and the others are 1. It's laborious and there's a lot of testing, so if you wanted to turn 2 bits on and 3 bits off, this kind of trickery saves a lot of "if"-ing, but if you just want to affect one bit, do it the simple way and ignore this function, which appears to be written not-quite-right.
XOR is the logical exclusive or. Which means one or the other, but not both and not neither.
Here is the truth table from Wikipedia.
Input
A | B Output
---------------
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0
currentValue XOR xorMask1 =
00101101 xor 01000000 = 01101101
01101101 AND andMask =
01101101 and 01000000 = 01000000
01000000 XOR xorMask2 =
01000000 xor 01000000 = 00000000

Can I set a sequence of bits without unsetting the previous values?

I've got a sequence of bits, say
0110 [1011] 1111
Let's say I want to set that middle nybble to 0111 as the new value.
Using a positional masking approach with AND or OR, I seem to have no choice but to first unset the original value to 0000, because if I trying ANDing or ORing against that original value of 1011, I'm not going to come out with the desired result of 0111.
Is there another logical operator I should be using to get the desired effect? Or am I locked into 2 operations every time?
The result after kindly assistance was:
inline void foo(Clazz* parent, const Uint8& material, const bool& x, const bool& y, const bool& z)
{
    Uint8 offset = x | (y << 1) | (z << 2);             // entry index (0-7)
    Uint64 positionMask = Uint64(255) << (offset * 8);  // 8-bit field mask; widen to 64 bits before shifting to avoid overflow
    Uint64 value = Uint64(material) << (offset * 8);    // new entry, shifted into position
    parent->childType &= ~positionMask;                 // clear the given 8-bit range
    parent->childType |= value;                         // set the new value
}
...I'm sure this will see further improvement, but this is the (semi-)readable version.
If you happen to already know the current values of the bits, you can XOR:
0110 1011 1111
^ 0000 1100 0000
= 0110 0111 1111
(where the 1100 needs to be computed first as the XOR between the current bits and the desired bits).
This is, of course, still 2 operations. The difference is that you could precompute the first XOR in certain circumstances.
Other than this special case, there is no other way. You fundamentally need to represent 3 states: set to 1, set to 0, don't change. You can't do this with a single binary operand.
You may want to use bit fields (and perhaps unions if you want to be able to access your structure as a set of bit fields and as an int at the same time) , something along the lines of:
struct foo
{
unsigned int o1:4;
unsigned int o2:4;
unsigned int o3:4;
};
foo bar;
bar.o2 = 0b0111;
Not sure if it translates into more efficient machine code than your clear/set...
Well, there's an assembly instruction in MMIX for this:
SETL $1, 0x06BF ; 0110 1011 1111
SETL $2, 0x0070 ; 0000 0111 0000
SETL rM, 0x00F0 ; set mask register
MUX $1,$2,$1 ; result is 0110 0111 1111
But in C++ here's what you're probably thinking of as 'unsetting the previous value'.
int S = 0x6BF; // starting value: 0110 1011 1111
int M = 0x0F0; // value mask: 0000 1111 0000
int V = 0x070; // value: 0000 0111 0000
int N = (S&~M) | V; // new value: 0110 0111 1111
But since the intermediate result 0110 0000 1111 from (S&~M) is never stored in a variable anywhere I wouldn't really call it 'unsetting' anything. It's just a bitwise boolean expression. Any boolean expression with the same truth table will work. Here's another one:
N = ((S^V) & M) ^ S; // corresponds to Oli Charlesworth's answer
The related truth tables:
  S M V   (S & ~M) | V   ((S ^ V) & M) ^ S
  0 0 0        0                 0
* 0 0 1        1                 0
  0 1 0        0                 0
  0 1 1        1                 1
  1 0 0        1                 1
* 1 0 1        1                 1
  1 1 0        0                 0
  1 1 1        1                 1
The rows marked with '*' don't matter because they won't occur (a bit in V will never be set when the corresponding mask bit is not set). Except for those rows, the truth tables for the expressions are the same.