When does (i-3)&3 evaluate to true?
I found it here:
https://github.com/thewizard6296/SPOJ_CODE/blob/c03c5900b6a22389a3acf2ddf3ff76f230bcb358/CZ_PROB1.cpp
Consider what "3" is in 4-bit binary.
0011
For some number x, when is the expression "x & 3" true (i.e., non-zero)? It is true when x has a 1 in either of its two least significant bit positions. When does a number not have a 1 in either of those positions? Consider multiples of 4:
   4:        8:       12:      etc...
  0011      0011      0011
& 0100    & 1000    & 1100
  ----      ----      ----
  0000      0000      0000
TL;DR (i - 3) & 3 evaluates to true when (i - 3) is not a multiple of 4.
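For example, a minimal sketch (the loop bounds are arbitrary) that prints which values of i make the expression true:

#include <iostream>

int main() {
    // (i - 3) & 3 is non-zero exactly when (i - 3) is not a multiple of 4,
    // i.e. when i is not of the form 4k + 3.
    for (int i = 3; i < 15; ++i) {
        std::cout << i << ": " << (((i - 3) & 3) ? "true" : "false") << '\n';
    }
}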
Related
I'm making a program that simulates something like a March Madness tournament where 64 teams play against each other. I've used fstream to read in the teams from the file. The problem I'm having is that I must make sure the number of teams in the file is always a power of 2, or the program won't run properly. How would I implement a function that checks whether the number of lines is a power of 2?
Check if there is exactly one bit set in the number of teams. This uses std::popcount from C++20's <bit> header, which takes an unsigned type:
#include <bit>

bool powerOfTwo = std::popcount(static_cast<unsigned>(numberOfTeams)) == 1;
A trick with binary numbers is that if n is a power of 2, or 0, then n & (n - 1) is 0. Otherwise it's not.
i.e.
bool isPowerOfTwo(int n) {
    // Note: also returns true for n == 0; add "n != 0 &&" if that matters.
    return (n & (n - 1)) == 0;
}
This works because:
Powers of 2 have a single 1 bit in binary and the rest of the bits are 0s.
To subtract 1 from a number in binary, you flip the lowest 1 bit and all the bits below it.
If that was the only 1 bit in the entire number, then every bit that didn't change was a 0, so the bitwise and is 0. Otherwise, some unchanged bit was still a 1, and it survives the and.
Example:
0000 0000 0010 0000 (32)
& 0000 0000 0001 1111 (31)
= 0000 0000 0000 0000 (0) (so 32 is a power of 2)
0000 0000 0110 0100 (100)
& 0000 0000 0110 0011 (99)
= 0000 0000 0110 0000 (96) (so 100 is not a power of 2)
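Applied to the original question, a minimal sketch, assuming one team per line and a hypothetical file name teams.txt:

#include <fstream>
#include <iostream>
#include <string>

// Counts the lines in the file and checks that the count is a power of 2.
// The file name and the "one team per line" rule are assumptions.
bool hasPowerOfTwoTeams(const std::string& path) {
    std::ifstream in(path);
    unsigned count = 0;
    std::string line;
    while (std::getline(in, line)) {
        ++count;
    }
    // n & (n - 1) == 0 also accepts 0, so exclude an empty file explicitly.
    return count != 0 && (count & (count - 1)) == 0;
}

int main() {
    std::cout << (hasPowerOfTwoTeams("teams.txt") ? "ok" : "not a power of 2") << '\n';
}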
In C++ I wrote:
bool ret_is_syscall = (ret_inst_data & 0x00000000000000FF) == 0x000000000000050f;
but CLion says the condition is always false. Why?
I am trying to check whether the last 4 hex digits are 0x050f.
Masking with 0xFF leaves only 8 bits available to look at, but 0x50f takes up 11 bits. So the comparison can never be true.
If you are only interested in the last 4 bits, use a mask of 0x0f instead:
bool ret_is_syscall = (ret_inst_data & 0x000000000000000f) == 0x000000000000000f;
Otherwise, you need a mask of at least 0x7FF (11 bits) in order to compare with 0x50f:
bool ret_is_syscall = (ret_inst_data & 0x00000000000007ff) == 0x000000000000050f;
If you are interested in the last 4 hex digits (16 bits), use a mask of 0xffff instead:
bool ret_is_syscall = (ret_inst_data & 0x000000000000ffff) == 0x000000000000050f;
If you want to check that the least-significant 2 bytes of ret_inst_data have exactly the value 0x050F, then you need to use a mask of 0xFFFF:
bool ret_is_syscall = (ret_inst_data & 0xFFFF) == 0x050F;
As for why your original comparison is incorrect, let's look at just the least-significant two bytes of the numbers involved.
0x050F has the bit pattern 0000 0101 0000 1111
0x00FF has the bit pattern 0000 0000 1111 1111
If we bitwise and those two patterns together, we get the bit pattern
0000 0101 0000 1111
& 0000 0000 1111 1111
---------------------
0000 0000 0000 1111
The binary 0000 0000 0000 1111 is 0x000F in hex.
As you can see, because the second-least-significant byte of 0x00FF is all 0s, a bitwise and between 0x00FF and any number produces a result whose second-least-significant byte is all 0s. Since the second-least-significant byte of 0x050F is not all 0s, your comparison can never be true.
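Putting both answers together, a minimal sketch; the value of ret_inst_data here is made up purely for illustration:

#include <cstdint>
#include <iostream>

int main() {
    uint64_t ret_inst_data = 0x123456789abc050f;  // made-up example value

    bool always_false = (ret_inst_data & 0xFF) == 0x050F;    // mask too narrow
    bool correct      = (ret_inst_data & 0xFFFF) == 0x050F;  // 16-bit mask

    std::cout << always_false << ' ' << correct << '\n';  // prints: 0 1
}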
I have a buffer of data in unsigned char, where each pair of bytes forms a 12-bit value.
I found out that my system is little endian. Printed to the console, the first byte in each pair gives numbers from 0 to 255; the second byte always gives low numbers between 1 and 8 (it is measured data, so values using up to 4 bits would be possible too).
I tried to shift them together so that I get a ushort holding the correct 12-bit number.
Sadly, at the moment I am totally confused about the endianness and about what I have to shift how far in which direction.
I tried e.g. this:
ushort value = 0;
value = (ushort) firstByte << 8 | (ushort) secondByte << 4;
Sadly, the value of value is quite often bigger than 12 bits can hold.
Where is the mistake?
It depends on how the bits are packed within the two bytes exactly, but the solution for the most likely packing would be:
value = firstByte | (secondByte << 8);
This assumes that the second byte contains the 4 most significant bits (bits 8..11), while the first byte contains the 8 least significant bits (bits 0..7).
Note: the above solution assumes that firstByte and secondByte are sensible unsigned types (e.g. uint8_t). If they are not (e.g. if you have used char or some other possibly signed type), then you'll need to add some masking:
value = (firstByte & 0xff) | ((secondByte & 0xf) << 8);
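As a concrete sketch of the above, assuming the packing just described (low byte first) and made-up buffer contents:

#include <cstdint>
#include <iostream>

int main() {
    // Two bytes per sample: buffer[0] holds bits 0..7, buffer[1] holds bits 8..11.
    uint8_t buffer[2] = {0x9A, 0x05};  // made-up measured data

    uint16_t value = buffer[0] | (buffer[1] << 8);

    std::cout << value << '\n';  // 0x59A = 1434, which always fits in 12 bits
}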
I think the main issue may not be with the shifts alone. If the input values occupy more bits than they are supposed to, they'll produce an over-large result unless the excess bits are "and'd" out.
Picture the following:
0000 0000 1001 0010 << 8 | 0000 0000 0000 1101 << 4
1001 0010 0000 0000 | 0000 0000 1101 0000
You should notice the first problem here: the 4 lowest bits are never used, and the result occupies 16 bits when you only wanted twelve. This should be modified like so:
(these are new numbers to demonstrate something else)
0000 1101 1001 0010 << 4 | (0000 0000 0000 1101 & 0000 0000 0000 1111)
1101 1001 0010 0000 | 0000 0000 0000 1101
This will create the following value:
1101 1001 0010 1101
Here, you should note that the value still occupies more than 12 bits. If your numbers never extend past the original 8-bit and 4-bit sizes, you can ignore this. Otherwise, you have to 'and' the result with a 12-bit mask to eliminate the leftmost 4 bits:
  1101 1001 0010 1101
& 0000 1111 1111 1111
= 0000 1001 0010 1101
Such mask values can be written as 0b binary literals, built with the (1 << bits) - 1 pattern, or expressed in various other forms.
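For example, all of these produce the same 12-bit mask:

#include <cstdint>

// Three equivalent ways to build a 12-bit mask.
constexpr uint16_t MASK_HEX    = 0x0FFF;
constexpr uint16_t MASK_BINARY = 0b0000111111111111;  // binary literal (C++14)
constexpr uint16_t MASK_SHIFT  = (1u << 12) - 1;      // the 2^bits - 1 pattern

static_assert(MASK_HEX == MASK_BINARY && MASK_BINARY == MASK_SHIFT, "all equal");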
I'm fairly new to bit manipulation and I'm trying to figure out how (1 << 31) - 1 works.
First I know that 1 << 31 is
10000000000000000000000000000000
and I know it's actually the two's-complement representation of the minimum int value, but when I tried to figure out (1 << 31) - 1, I found an explanation stating that it's just
10000000000000000000000000000000 - 1 = 01111111111111111111111111111111
I was almost tempted to believe it since it's really straightforward. But is that what's really happening? And if it's not, why does it happen to give the right answer?
My original thought was that the real process should be: the two's-complement representation of -1 is
11111111111111111111111111111111
then (1 << 31) - 1 =
(1)01111111111111111111111111111111
the leftmost 1 is discarded, leaving the maximum value of int.
I'm really confused about which one is right.
It's both! 1 << 31 is:
1000 0000 0000 0000 0000 0000 0000 0000
Subtracting 1 gives:
0111 1111 1111 1111 1111 1111 1111 1111
One of the nice features of the two's-complement layout of signed numbers is that addition and subtraction are exactly the same operations as for unsigned numbers. The pattern 10000...000 represents the largest negative number in two's complement, -2,147,483,648 in this case, and subtracting 1 from it wraps around to the largest positive number, 2,147,483,647. But because two's-complement numbers are arranged so that we can pretend the value is unsigned, the subtraction itself is uncomplicated: subtracting 1 from 10000...000 simply drops the leading 1 to a 0 and borrows a string of 1s, just as in decimal you get a string of 9s: 10000 - 1 = 9999.
It's also true that mathematically, (a - b) is the same as (a + (-b)), so we can do (1 << 31) + (-1) instead:
  1000 0000 0000 0000 0000 0000 0000 0000   (1 << 31)
+ 1111 1111 1111 1111 1111 1111 1111 1111   (-1)
-----------------------------------------
1 0111 1111 1111 1111 1111 1111 1111 1111
  0111 1111 1111 1111 1111 1111 1111 1111   (truncate)
A 1 is carried out of the high end, and lost once the result is truncated back into a 32-bit integer.
Either way, that pattern, with a single 0 at the high end, then filled by 1s, is the representation of the maximum positive value for a two's complement integer of any width.
There are other ways to generate that pattern if you prefer, such as ~(1 << 31), and (-1 >>> 1) (where >>> means logical shift right) which is agnostic of the width of the integer.
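A small sketch of those alternatives; note that shifting a signed 1 into bit 31 is undefined behavior in C++, so the unsigned forms are used here:

#include <cstdint>
#include <climits>

// Three ways to build the 0111...1 pattern for a 32-bit integer.
// Shifting a *signed* 1 into bit 31 is undefined behavior in C++,
// so the unsigned literal 1u is used instead.
constexpr uint32_t a = (1u << 31) - 1;
constexpr uint32_t b = ~(1u << 31);
constexpr uint32_t c = static_cast<uint32_t>(-1) >> 1;  // logical shift: width-agnostic

static_assert(a == b && b == c && a == static_cast<uint32_t>(INT_MAX), "all equal INT_MAX");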
I have a problem: with two's complement you get the negative of a positive number by inverting the bits and adding one, e.g.
8 Bit
121 = 0111 1001
1st = 1000 0110
    + 0000 0001
      ---------
      1000 0111 --> -121
So now what if we have a -0?
As 8 bits, a zero looks like
0000 0000
so a minus 0 should look like
1111 1111 + 0000 0001
= 1 0000 0000
but that is 512,
so I think that I've misunderstood something.
To expand on my previous comment on the question:
1111 1111 + 0000 0001 in 8 bits is 0000 0000; the ninth bit is lost because there is no place for it.
And yes, the complement of a negative is a positive:
-121 = 1000 0111
 1st = 0111 1000
     + 0000 0001
       ---------
       0111 1001 --> 121
Think of the values as a circle: at one point there is 0, and adding 1 at a time you go up to the opposite point of the circle (the pattern 1000 0000, reached after 128 steps in 8 bits). At that point the sign switches, the value becomes -128, and as you continue to add 1 the absolute value decreases (-128 + 1 = -127) until the value comes back to 0 and the circle is complete.
So given a number of bits, you only have that many bits, no more; and if you want the values to be signed, you really have only x-1 bits for the magnitude, as the most significant bit is used for the sign (0 -> +; 1 -> -).
1 0000 0000b is 256, not 512. Truncated to 8 bits, it's 0.
This is because with two's complement, zero is zero. There is no positive or negative zero.
Compare this to one's complement or sign bit, where positive zero and negative zero are different values.
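A quick sketch confirming this, applying invert-and-add-one to zero in 8 bits:

#include <cstdint>
#include <iostream>

int main() {
    uint8_t zero = 0b00000000;
    // Invert and add one: 1111 1111 + 1 = 1 0000 0000, and the ninth bit
    // is discarded because uint8_t only holds 8 bits.
    uint8_t negated = static_cast<uint8_t>(~zero + 1);

    std::cout << static_cast<int>(negated) << '\n';  // prints 0: -0 is just 0
}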