I don't understand the unit of SpriteKit physics bodies' categoryBitMask - Swift 3

I don't understand the UInt32 values used for categoryBitMask in SpriteKit. They are very confusing to me because they are written like "0b01 << 1" or "0x01" or something like that. I understand how they are used in SpriteKit, but I don't understand what the notation means.

You have to understand a little about binary, which can be tricky when you first encounter it. The code that is confusing you is shorthand for working with binary numbers.
For example, in 0b01 << 1 the 0b prefix means the number that follows is written in binary. This is helpful because if you just saw, for example, 1001, you might think it is the number "one thousand and one". But if you write 0b1001, you know that 1001 here is a binary number.
Similarly, 0x is a prefix used to express hexadecimal numbers (because x kinda sounds like "hex" I guess). I personally prefer to use the 0b prefix when dealing with bitmasks in SpriteKit, but that's just me.
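As a quick illustration you can paste into a Playground, the same value can be written with any of the three notations (plain decimal, 0b binary, 0x hexadecimal):
let plain = 9         // decimal literal
let binary = 0b1001   // the same value, written in binary
let hex = 0x9         // the same value, written in hexadecimal
plain == binary && binary == hex // true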
Now on to bit shifting. << is the bitwise left shift operator in Swift. It moves all of the bits in a number to the left by a certain number of places. For example, the decimal number 3 is written as 0011 in binary.
3 = 0011 // the number 3 is 0011 in binary
3 << 1 = 0110 = 6 // take 0011 and shift all bits one place to the left
3 << 2 = 1100 = 12 // take 0011 and shift all bits two places to the left
The example you gave was 0b01 << 1. On its own, 0b01 is the binary representation of the decimal number 1. By entering 0b01 << 1 you are telling the program to take the binary number 01 (more explicitly written as 0b01) and shift its bits one place to the left, which results in the binary number 10 (more explicitly written as 0b10), which is the number 2 in decimal.
You can actually write 0b01 as 0b1 for short and it means the same thing since you are just omitting the leading zero.
Try entering the following in an Xcode Playground and look at the results you get:
0b1 // decimal 1
0b1 << 1 // decimal 2
0b1 << 2 // decimal 4
0b1 << 3 // decimal 8
0b1 << 4 // decimal 16
You can play around with this, bit shifting by greater amounts, and you will see that the numbers continue to double. It turns out that this is a handy way for SpriteKit to implement bitmasks for its physics engine.
UPDATE:
You can define up to 32 different categories to use for physics bitmasks. Here they all are with their corresponding decimal values:
0b1 // 1
0b1 << 1 // 2
0b1 << 2 // 4
0b1 << 3 // 8
0b1 << 4 // 16
0b1 << 5 // 32
0b1 << 6 // 64
0b1 << 7 // 128
0b1 << 8 // 256
0b1 << 9 // 512
0b1 << 10 // 1,024
0b1 << 11 // 2,048
0b1 << 12 // 4,096
0b1 << 13 // 8,192
0b1 << 14 // 16,384
0b1 << 15 // 32,768
0b1 << 16 // 65,536
0b1 << 17 // 131,072
0b1 << 18 // 262,144
0b1 << 19 // 524,288
0b1 << 20 // 1,048,576
0b1 << 21 // 2,097,152
0b1 << 22 // 4,194,304
0b1 << 23 // 8,388,608
0b1 << 24 // 16,777,216
0b1 << 25 // 33,554,432
0b1 << 26 // 67,108,864
0b1 << 27 // 134,217,728
0b1 << 28 // 268,435,456
0b1 << 29 // 536,870,912
0b1 << 30 // 1,073,741,824
0b1 << 31 // 2,147,483,648
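To show how these values are typically used, here is a minimal SpriteKit sketch. The category names and the player node are made up for illustration; categoryBitMask, contactTestBitMask and collisionBitMask are the real SKPhysicsBody properties:

import SpriteKit

// Illustrative category names; one unique bit per kind of object.
struct PhysicsCategory {
    static let none:   UInt32 = 0
    static let player: UInt32 = 0b1       // 1
    static let enemy:  UInt32 = 0b1 << 1  // 2
    static let coin:   UInt32 = 0b1 << 2  // 4
    static let wall:   UInt32 = 0b1 << 3  // 8
}

let player = SKSpriteNode(color: .blue, size: CGSize(width: 32, height: 32))
player.physicsBody = SKPhysicsBody(rectangleOf: player.size)
player.physicsBody?.categoryBitMask = PhysicsCategory.player
// Report contacts with enemies or coins (bits combined with |):
player.physicsBody?.contactTestBitMask = PhysicsCategory.enemy | PhysicsCategory.coin
// Physically collide only with walls:
player.physicsBody?.collisionBitMask = PhysicsCategory.wall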

Related

How is 4<<1<<2 equal to 32?

Since 1<<2 is 4,
4<<1<<2 should be 4<<4, which is 64,
but it is showing 32.
I am new to bit manipulation, so please let me know where I am going wrong.
The expression is evaluated from left to right.
4 << 1 << 2
is equivalent to
(4 << 1) << 2
which is the same as
8 << 2
which equals
32
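You can verify this yourself; here is a tiny sketch (in Swift, but the arithmetic is the same in any C-family language):
let step1 = 4 << 1          // 8
let step2 = step1 << 2      // 32
let chained = 4 << 1 << 2   // 32, same as (4 << 1) << 2
let grouped = 4 << (1 << 2) // 64, only when the right-hand shift is forced to happen first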

How to Concatenate two BitFields

I have two separate bitfields that make up an "Identity" field and that are 11 + 18 bits in length (29 bits total).
In the bitfield they are of the expected size:
header a;
memset(a.arr, 0, sizeof(a.arr));
a = {0xA0,0xA0,0xA0,0xA0,0xA0,0xA0,0xA0,0xA0,0xA0}; // 1010 0000
cout << hex << a.BID << endl; // 010 0000 1010 -> 20a
cout << hex << a.IDEX << endl; // 00 1010 0000 1010 0000 -> a0a0
and what I need to do is combine these fields into a 29-bit segment, e.g. 010 0000 1010 00 1010 0000 1010 0000.
When attempting to concatenate the two bitfields however the result is not what I expect:
int BID = a.BID;
int IDEX = a.IDEX;
int result = (BID<<11) | IDEX;
cout << BID << endl;
printf("%x %d",result, result); // -> 10f0a0 (21 bits) where I expect 828A0A0 (29 bits)
It's important for me to have all 29 bits, because within this 29-bit field there are various subfields, and I was going to take this output and put it through another bit-field to resolve those subfields.
Would you be able to assist in how I could combine BID and IDEX mentioned above into one combined bitfield of 29 bits? Unfortunately there are two bits in between the BID and IDEX fields, and another in the header, that are ignored, which is why I cannot just set my bitfield to 29 bits.
You should shift by 18 bits (the width of the IDEX field) first and then do the OR. For example:
int result = (BID<<18) | IDEX;
Otherwise you overwrite part of the result: shifting BID by only 11 bits leaves it overlapping the 18-bit IDEX value, so the OR mixes the two fields together.
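As a quick numeric check, here is a sketch in Swift using the values from your output (the shift-and-OR works the same way in C):
let bid: UInt32 = 0x20A   // the 11-bit BID value printed above
let idex: UInt32 = 0xA0A0 // the 18-bit IDEX value printed above

let wrong = (bid << 11) | idex // fields overlap: 0x10F0A0, the result you saw
let right = (bid << 18) | idex // BID sits above all 18 IDEX bits: 0x828A0A0

print(String(wrong, radix: 16)) // "10f0a0"
print(String(right, radix: 16)) // "828a0a0"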

0 < res <= (1 << 31) -1 - What does this mean?

This statement checks whether a number is 32 bits.
0 < res <= (1 << 31) - 1
I can't seem to understand how. Can someone help me understand this bit-shift syntax?
Well, let's begin with an example:
1 in binary is 1
2 in binary is 10
4 in binary is 100
We can see that appending a 0 to the right-hand end of a binary number multiplies it by 2, and in most languages we can do this with the syntax number << 1, which shifts all the bits one place to the left.
Conversely, number >> 1 shifts the bits one place to the right, dropping the right-most bit and halving the value.
So 1 << 31 means 1 doubled 31 times, which is 2^31 (a 1 followed by 31 zeros, i.e. the 32nd bit set). Subtracting 1 gives 2^31 - 1, the largest value a signed 32-bit integer can hold, so the check 0 < res <= (1 << 31) - 1 verifies that res is positive and fits in a signed 32-bit integer.
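Here is a small sketch of that check (in Swift, where Int is 64 bits on modern platforms so 1 << 31 does not overflow; res is just a made-up value):
let limit = (1 << 31) - 1      // 2_147_483_647
let res = 123                  // hypothetical value being tested
let inRange = res > 0 && res <= limit
print(limit == Int(Int32.max)) // true
print(inRange)                 // true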

Why do these two functions for printing the binary representation of an integer have the same output?

I have two functions that print a 32-bit number in binary.
The first one divides the number into bytes and starts printing from the last byte (from the 25th bit of the whole integer).
The second one is more straightforward and starts from the 1st bit of the number.
It seems to me that these functions should have different outputs, because they process the bits in different orders. However, the outputs are the same. Why?
#include <stdio.h>

void printBits(size_t const size, void const * const ptr)
{
    unsigned char *b = (unsigned char *) ptr;
    unsigned char byte;
    int i, j;

    for (i = size - 1; i >= 0; i--)
    {
        for (j = 7; j >= 0; j--)
        {
            byte = (b[i] >> j) & 1;
            printf("%u", byte);
        }
    }
    puts("");
}

void printBits_2(unsigned *A)
{
    for (int i = 31; i >= 0; i--)
    {
        printf("%u", (A[0] >> i) & 1u);
    }
    puts("");
}

int main()
{
    unsigned a = 1014750;
    printBits(sizeof(a), &a); // -> 00000000000011110111101111011110
    printBits_2(&a);          // -> 00000000000011110111101111011110
    return 0;
}
Both of your functions print the binary representation of the number from the most significant bit to the least significant bit. Today's PCs (and the majority of other computer architectures) use the so-called little-endian format, in which multi-byte values are stored with the least significant byte first.
That means that 32-bit value 0x01020304 stored on address 0x1000 will look like this in the memory:
+--------++--------+--------+--------+--------+
|Address || 0x1000 | 0x1001 | 0x1002 | 0x1003 |
+--------++--------+--------+--------+--------+
|Data || 0x04 | 0x03 | 0x02 | 0x01 |
+--------++--------+--------+--------+--------+
Therefore, on Little Endian architectures, printing value's bits from MSB to LSB is equivalent to taking its bytes in reversed order and printing each byte's bits from MSB to LSB.
This is the expected result when:
1) You use both functions to print a single integer, in binary.
2) Your C++ implementation is on a little-endian hardware platform.
Change either one of these factors (with printBits_2 appropriately adjusted), and the results will be different.
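If you want to see that byte order for yourself, here is a small sketch (in Swift; in C you would do the same thing printBits already does, casting the value's address to unsigned char *):
var value: UInt32 = 0x01020304
let bytes = withUnsafeBytes(of: &value) { Array($0) }
print(bytes) // [4, 3, 2, 1] on a little-endian machine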
They don't process the bits in different orders. Here's a visual:
Bytes:    4                      |   3                      |   2                      |   1
Bits:     8  7  6  5  4  3  2  1 |   8  7  6  5  4  3  2  1 |   8  7  6  5  4  3  2  1 |   8  7  6  5  4  3  2  1
Bits:    32 31 30 29 28 27 26 25 |  24 23 22 21 20 19 18 17 |  16 15 14 13 12 11 10  9 |   8  7  6  5  4  3  2  1
The fact that the output is the same from both of these functions tells you that your platform uses Little-Endian encoding, which means the most significant byte comes last.
The first two rows show how the first function works on your program, and the last row shows how the second function works.
However, the first function will not produce the expected result on platforms that use big-endian encoding; there it would output the bits in the order shown in the third row below:
Bytes:    4                      |   3                      |   2                      |   1
Bits:     8  7  6  5  4  3  2  1 |   8  7  6  5  4  3  2  1 |   8  7  6  5  4  3  2  1 |   8  7  6  5  4  3  2  1
Bits:     8  7  6  5  4  3  2  1 |  16 15 14 13 12 11 10  9 |  24 23 22 21 20 19 18 17 |  32 31 30 29 28 27 26 25
For the printBits function, the uint32 pointer is cast to an unsigned char pointer:
unsigned char *b = (unsigned char*) ptr;
The outer loop then walks the bytes from b[size-1] down to b[0], and the inner loop prints each byte's bits from most significant to least significant. On a little-endian processor, b[size-1] is the most significant byte of the uint32 value, so this method prints the value MSB first. (On a big-endian processor, b[0] would hold the most significant byte, and the same loop would print the bytes in reverse order.)
As for printBits_2, you are using
unsigned *A
i.e. a pointer to an unsigned int. Its loop runs from bit 31 down to bit 0, so it prints the uint32 value in binary MSB first regardless of byte order.

How does this implementation of bitset::count() work?

Here's the implementation of std::bitset::count with MSVC 2010:
size_t count() const
{   // count number of set bits
    static char _Bitsperhex[] = "\0\1\1\2\1\2\2\3\1\2\2\3\2\3\3\4";
    size_t _Val = 0;
    for (int _Wpos = _Words; 0 <= _Wpos; --_Wpos)
        for (_Ty _Wordval = _Array[_Wpos]; _Wordval != 0; _Wordval >>= 4)
            _Val += _Bitsperhex[_Wordval & 0xF];
    return (_Val);
}
Can someone explain to me how this works? What's the trick with _Bitsperhex?
_Bitsperhex contains the number of set bits in a hexadecimal digit, indexed by the digit.
digit: 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111
value:    0    1    1    2    1    2    2    3    1    2    2    3    2    3    3    4
index:    0    1    2    3    4    5    6    7    8    9    A    B    C    D    E    F
The function retrieves one hex digit at a time from the value it's working with by ANDing with 0xF (binary 1111), looks up the number of set bits in that digit, adds it to the running total, and then shifts the value right by 4 bits to move on to the next digit.
_Bitsperhex is a 16 element integer array that maps a number in [0..15] range to the number of 1 bits in the binary representation of that number. For example, _Bitsperhex[3] is equal to 2, which is the number of 1 bits in the binary representation of 3.
The rest is easy: each word in the internal array _Array is interpreted as a sequence of 4-bit values, and each 4-bit value is fed through the _Bitsperhex table to count its bits.
It is a slightly different implementation of the lookup table-based method described here: http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetTable. At the link they use a table of 256 elements and split 32-bit words into four 8-bit values.
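Here is the same nibble-lookup technique as a standalone sketch (written in Swift with made-up names, not the MSVC internals):

// Number of set bits in each 4-bit value 0x0 ... 0xF.
let bitsPerNibble = [0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4]

func popcount(_ value: UInt32) -> Int {
    var v = value
    var total = 0
    while v != 0 {
        total += bitsPerNibble[Int(v & 0xF)] // count the low nibble via the table
        v >>= 4                              // move on to the next nibble
    }
    return total
}

print(popcount(0b1011_0110)) // 5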