Calculating Morton code - C++

I am trying to interleave (for calculating a Morton code) two signed long numbers, say x and y (32 bits each), with values:
case 1:
x = 10; //1010
y = 10; //1010
result will be:
11001100
case 2:
x = -10;
y = 10;
Their binary representations are:
x = 1111111111111111111111111111111111111111111111111111111111110110
y = 1010
For interleaving, I am considering only the 32-bit representation, where I can interleave the 31st bit of x with the 31st bit of y, using the following code:
signed long long x_y = 0;
for (int i = 31; i >= 0; i--)
{
    unsigned long long xbit = ((unsigned long) x) & (1 << i);
    x_y |= (xbit << i);
    unsigned long long ybit = ((unsigned long) y) & (1 << i);
    if (i != 0)
    {
        x_y |= (x_y << (i - 1));
    }
    else
    {
        (x_y = x_y << 1) |= ybit;
    }
}
The above code works fine if we have x positive and y negative, but case 2 is failing. Please help me: what is going wrong?
The negative numbers use 64 bits, whereas positive numbers use 32 bits. Correct me if I am wrong.

I think the code below works according to your requirement.
A Morton code is 64 bits, and we are making a 64-bit number from two 32-bit numbers by interleaving.
Since the numbers are signed, we have to handle negative numbers as follows:
if (x < 0) // the value is represented as two's complement, so it uses all 64 bits
{
    value = x; // 'value' is 32 bits, so this keeps only the lower 32 bits
    cout << value;
    value &= ~(1 << 31); // clear the sign bit; it does not contribute to the real value
}
Do the same for y.
The following code does the interleaving:
unsigned long long x_y_copy = 0; // your Morton code being built up
unsigned long long mort = 0;     // scratch variable, holds one bit at a time
// loop over each bit of the two 32-bit numbers, starting from the MSB
for (int i = 31; i >= 0; i--)
{
    // reset mort to 0, because the shifting below would otherwise lose data
    mort = 0;
    // take bit i from x
    int xbit = ((unsigned long) x) & (1 << i);
    mort = (mort | xbit) << (i + 1); // shift it into the odd position
    // OR the formed code into the copy, so the value is preserved for appending
    x_y_copy |= mort;
    mort = 0;
    // take bit i from 'y' as well
    int ybit = ((unsigned long) y) & (1 << i);
    mort = (mort | ybit) << i; // shift it into the even position
    x_y_copy |= mort;
}
// This is important when 'y' is negative: the sign bit of 'y' was cleared
// by the first code above, so when bit 31 of 'y' is moved into the Morton
// code, a 0 lands in bit 62, which then has to be set back to 1.
if (y < 0)
{
    x_y_copy |= 4611686018427387904ULL; // 4611686018427387904 = pow(2, 62)
}
I hope this helps.:)
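For reference, a branch-free way to get the same interleaving is the classic shift-and-mask "bit spreading" sequence. The sketch below is my own illustration (the names expand_bits and morton2d are not from the question); casting each input to uint32_t keeps exactly its low 32 bits, sign bit included, so negative values need no special case:
#include <cstdint>
#include <iostream>

// Spread the 32 bits of v so that bit i lands at bit 2*i.
uint64_t expand_bits(uint32_t v)
{
    uint64_t x = v;
    x = (x | (x << 16)) & 0x0000FFFF0000FFFFULL;
    x = (x | (x << 8))  & 0x00FF00FF00FF00FFULL;
    x = (x | (x << 4))  & 0x0F0F0F0F0F0F0F0FULL;
    x = (x | (x << 2))  & 0x3333333333333333ULL;
    x = (x | (x << 1))  & 0x5555555555555555ULL;
    return x;
}

// x's bits go to the odd positions, y's to the even positions.
uint64_t morton2d(int32_t x, int32_t y)
{
    // The casts keep exactly the low 32 bits of each input,
    // including the sign bit, so negatives need no special case.
    return (expand_bits((uint32_t)x) << 1) | expand_bits((uint32_t)y);
}

int main()
{
    std::cout << std::hex
              << morton2d(10, 10)  << "\n"   // cc (= 11001100, case 1)
              << morton2d(-10, 10) << "\n";  // case 2 from the question
}
Each masking step doubles the gap between the bits, so five steps spread 32 bits across 64.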

Related

stretch mask - bit manipulation

I want to stretch a mask in which every bit represents 4 bits of the stretched mask.
I am looking for an elegant bit manipulation to do the stretch in C++ and SystemC,
for example:
input:
mask (32 bits) = 0x0000CF00
output:
stretched mask (128 bits) = 0x00000000 00000000 FF00FFFF 00000000
and just to clarify the example, let's look at the hex digit C:
0xC = 1100 after stretching: 1111111100000000 = 0xFF00
Doing this in an elegant way is not easy.
The simplest approach may be a loop with bit shifts:
sc_biguint<128> result = 0;
for (int i = 31; i >= 0; i--) { // walk the mask from MSB to LSB
    result <<= 4;               // make room for the next nibble
    if (bit_test(var, i)) {
        result |= 0xF;          // a set mask bit becomes 1111
    }
}
Here's a way of stretching a 16-bit mask into 64 bits where every bit represents 4 bits of stretched mask:
uint64_t x = 0x000000000000CF00LL;
x = (x | (x << 24)) & 0x000000ff000000ffLL;
x = (x | (x << 12)) & 0x000f000f000f000fLL;
x = (x | (x << 6)) & 0x0303030303030303LL;
x = (x | (x << 3)) & 0x1111111111111111LL;
x |= x << 1;
x |= x << 2;
It starts off with the mask in the bottom 16 bits. Then it moves the top 8 bits of the mask into the top 32 bits, like this:
0000000000000000 0000000000000000 0000000000000000 ABCDEFGHIJKLMNOP
becomes
0000000000000000 00000000ABCDEFGH 0000000000000000 00000000IJKLMNOP
Then it solves the similar problem of stretching a mask from the bottom 8 bits of a 32 bit word, to the top and bottom 32-bits simultaneously:
000000000000ABCD 000000000000EFGH 000000000000IJKL 000000000000MNOP
Then it does it for 4 bits inside 16 and so on until the bits are spread out:
000A000B000C000D 000E000F000G000H 000I000J000K000L 000M000N000O000P
Then it "smears" them across 4 bits by ORing the result with itself twice:
AAAABBBBCCCCDDDD EEEEFFFFGGGGHHHH IIIIJJJJKKKKLLLL MMMMNNNNOOOOPPPP
You could extend this to 128 bits by adding an extra first step where you shift by 48 bits and mask with a 128-bit constant:
x = (x | (x << 48)) & 0x000000000000ffff000000000000ffff;
You'd also have to stretch the other constants out to 128 bits just by repeating the bit patterns. However (as far as I know) there is no way to declare a 128-bit constant in C++, but perhaps you could do it with macros or something (see this question). You could also make a 128-bit version just by using the 64-bit version on the top and bottom 16 bits separately.
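As a sketch of that last suggestion (the function names and the high/low-pair representation are mine, since standard C++ has no 128-bit integer type):
#include <cstdint>

// The 64-bit stretch from above, packaged as a function.
uint64_t stretch16(uint64_t x)
{
    x = (x | (x << 24)) & 0x000000ff000000ffULL;
    x = (x | (x << 12)) & 0x000f000f000f000fULL;
    x = (x | (x << 6))  & 0x0303030303030303ULL;
    x = (x | (x << 3))  & 0x1111111111111111ULL;
    x |= x << 1;
    x |= x << 2;
    return x;
}

// Stretch a 32-bit mask into 128 bits, returned as two 64-bit words.
void stretch32(uint32_t mask, uint64_t &hi, uint64_t &lo)
{
    lo = stretch16(mask & 0xFFFF); // bottom 16 bits -> low word
    hi = stretch16(mask >> 16);    // top 16 bits -> high word
}
// stretch32(0x0000CF00, hi, lo) gives hi == 0, lo == 0xff00ffff00000000,
// matching the expected output from the question.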
If loading the masking constants turns out to be a difficulty or bottleneck you can generate each one from the previous one using shifting and masking:
uint64_t m = 0x000000ff000000ffLL;
m &= m >> 4; m |= m << 16; // gives 0x000f000f000f000fLL
m &= m >> 2; m |= m << 8; // gives 0x0303030303030303LL
m &= m >> 1; m |= m << 4; // gives 0x1111111111111111LL
Does this work for you?
#include <stdio.h>

long long Stretch4x(int input)
{
    long long output = 0;
    while (input & -input)        // while any bit is still set
    {
        int b = (input & -input); // isolate the lowest set bit, 2^k
        long long s = 0;
        input &= ~b;              // clear that bit in the input
        s = b * 15;               // 15 = 0xF: a full nibble starting at bit k
        while (b >>= 1)
        {
            s <<= 3;              // k more shifts by 3 move the nibble to bit 4*k
        }
        output |= s;
    }
    return output;
}

int main(void)
{
    int input = 0xCF00;
    printf("0x%0x ==> 0x%0llx\n", input, Stretch4x(input));
    return 0;
}
Output:
0xcf00 ==> 0xff00ffff00000000
The other solutions are good. However, most of them are more C than C++. This solution is pretty straightforward: it uses std::bitset and sets four bits for each input bit.
#include <bitset>
#include <iostream>

std::bitset<128>
starch_32 (const std::bitset<32> &input)
{
    std::bitset<128> output;
    for (size_t i = 0; i < input.size(); ++i) {
        // If `input[i]` is set, set `output[i*4 .. i*4+3]`.
        if (input.test (i)) {
            const size_t output_index = i * 4;
            output.set (output_index);
            output.set (output_index + 1);
            output.set (output_index + 2);
            output.set (output_index + 3);
        }
    }
    return output;
}

// Example with 0xC.
int main() {
    std::bitset<32> input{0b1100};
    auto result = starch_32 (input);
    std::cout << "0x" << std::hex << result.to_ullong() << "\n";
}
On x86 you could use the PDEP intrinsic to move the 16 mask bits into the correct nibble (into the low bit of each nibble, for example) of a 64-bit word, and then use a couple of shift + or to smear them into the rest of the word:
uint64_t x = _pdep_u64(m, 0x1111111111111111ULL); // _pdep_u64 is in <immintrin.h>, requires BMI2
x |= x << 1;
x |= x << 2;
You could also replace those two OR and two shift by a single multiplication by 0xF which accomplishes the same smearing.
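A sketch of that multiplication variant (the function name is mine; assumes a BMI2-capable x86 target): each nibble of the PDEP result is 0 or 1, so multiplying by 0xF turns every 0001 nibble into 1111 with no carries between nibbles.
#include <immintrin.h> // _pdep_u64, requires BMI2
#include <cstdint>

uint64_t stretch16_pdep(uint16_t m)
{
    // deposit mask bit k into bit 4*k, then smear it across its nibble
    return _pdep_u64(m, 0x1111111111111111ULL) * 0xF;
}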
Finally, you could consider a SIMD approach: solutions such as samgak's above should map naturally to SIMD.

turning off leftmost non zero bit of a number

How can I turn off the leftmost non-zero bit of a number in O(1)?
For example:
n = 366 (base 10) = 101101110 (base 2)
After turning the leftmost non-zero bit off, the number looks like this: 001101110
n will always be >0
Well, if you insist on O(1) under any circumstances, the Intel intrinsic _bit_scan_reverse(), defined in immintrin.h, does a hardware find of the most significant non-zero bit in an int.
Though the operation has a loop as its functional equivalent, I believe it's constant time, given that its latency is fixed at 3 (as per the Intel Intrinsics Guide).
The function returns the index of the most significant non-zero bit, so a simple
n = n & ~(1 << _bit_scan_reverse(n));
should do.
This intrinsic is undefined for n == 0, so you have got to watch out there. I'm following the assumption in your original post that n > 0.
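Putting that together, a minimal sketch (assuming an x86 target and a compiler that exposes this intrinsic through immintrin.h):
#include <immintrin.h> // _bit_scan_reverse
#include <cstdio>

int main(void)
{
    int n = 366;                       // 0b101101110
    n &= ~(1 << _bit_scan_reverse(n)); // clear the highest set bit
    printf("%d\n", n);                 // prints 110 (0b001101110)
    return 0;
}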
Write n = 2^x + y, where y < 2^x; then x = floor(log2(n)).
Your highest set bit is x, so in order to reset that bit:
number &= ~(1 << x);
Another approach:
#include <iostream>

int highestOneBit(int i) {
    // smear the highest set bit into all lower positions...
    i |= (i >> 1);
    i |= (i >> 2);
    i |= (i >> 4);
    i |= (i >> 8);
    i |= (i >> 16);
    // ...then subtract everything below the highest bit
    return i - (i >> 1);
}

int main() {
    int n = 32767;
    int z = highestOneBit(n); // returns the highest set bit, i.e. 2^x
    std::cout << (n & (~z)); // resets the highest set bit
    return 0;
}
Check out this question, for a possibly faster solution, using a processor instruction.
However, an O(lg N) solution is:
int cmsb(int x)
{
    int orig = x;
    unsigned int count = 0;
    while (x >>= 1) {
        ++count;
    }
    // count is now the index of the highest set bit of the original value
    return orig & ~(1 << count);
}
If ANDN is not supported and LZCNT is supported, the fastest O(1) way to do it is not something along the lines of n = n & ~(1 << _bit_scan_reverse(n)); but rather...
int reset_highest_set_bit(int x)
{
    const int mask = 0x7FFFFFFF; // 011111111[...]
    return x & (mask >> __builtin_clz(x));
}
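With C++20 you can also do this portably without intrinsics: std::bit_floor rounds down to the largest power of two not greater than its argument, which is exactly the leftmost set bit. A minimal sketch for unsigned values (the function name is mine):
#include <bit>     // std::bit_floor (C++20)
#include <cstdint>

uint32_t clear_msb(uint32_t n)
{
    return n & ~std::bit_floor(n); // e.g. bit_floor(366) == 256
}
// clear_msb(366) == 110, i.e. 0b101101110 -> 0b001101110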

How to split an unsigned long int (32 bit) into 8 nibbles?

I am sorry if my question is confusing, but here is an example of what I want to do.
Let's say I have an unsigned long int = 1265985549.
In binary I can write this as 01001011011101010110100000001101.
Now I want to split this binary 32-bit number into groups of 4 bits like this, and work separately on those 4-bit groups:
0100 1011 0111 0101 0110 1000 0000 1101
any help would be appreciated.
You can get a 4-bit nibble at position k using bit operations, like this:
uint32_t nibble(uint32_t val, int k) {
    return (val >> (4 * k)) & 0x0F;
}
Now you can get the individual nibbles in a loop, like this:
uint32_t val = 1265985549;
for (int k = 0; k != 8; k++) {
    uint32_t n = nibble(val, k);
    cout << n << endl;
}
short nibble0 = (i >> 0) & 15;
short nibble1 = (i >> 4) & 15;
short nibble2 = (i >> 8) & 15;
short nibble3 = (i >> 12) & 15;
etc
Based on the comment explaining the actual use for this, here's another way to count how many nibbles have an odd parity: (not tested)
// compute parities of nibbles
x ^= x >> 2;
x ^= x >> 1;
x &= 0x11111111;
// add the parities
x = (x + (x >> 4)) & 0x0F0F0F0F;
int count = x * 0x01010101 >> 24;
The first part is just a regular "xor all the bits" type of parity calculation (where "all bits" refers to all the bits in a nibble, not in the entire integer), the second part is based on this bitcount algorithm, skipping some steps that are unnecessary because certain bits are always zero and so don't have to be added.
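Wrapped up as a runnable sketch (the test harness is mine; note the value must be unsigned so the shifts are well-defined):
#include <cstdint>
#include <iostream>

// count how many nibbles of x have odd parity
int odd_parity_nibbles(uint32_t x)
{
    x ^= x >> 2;                     // fold each nibble's parity...
    x ^= x >> 1;                     // ...into its lowest bit
    x &= 0x11111111;                 // keep one parity bit per nibble
    x = (x + (x >> 4)) & 0x0F0F0F0F; // add nibble pairs into bytes
    return (x * 0x01010101) >> 24;   // sum all the bytes into the top byte
}

int main()
{
    // 0x4B75680D has nibble bit-counts 1,3,3,2,2,1,0,3 -> 5 odd-parity nibbles
    std::cout << odd_parity_nibbles(0x4B75680D) << "\n"; // prints 5
}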

Binary-Decimal Negative bit set

How can I tell if a binary number is negative?
Currently I have the code below. It works fine converting to Binary. When converting to decimal, I need to know if the left most bit is 1 to tell if it is negative or not but I cannot seem to figure out how to do that.
Also, instead of making my Bin2 function print 1's and 0's, how can I make it return an integer? I didn't want to store it in a string and then convert to an int.
EDIT: I'm using 8 bit numbers.
int Bin2(int value, int Padding = 8)
{
    for (int I = Padding; I > 0; --I)
    {
        if (value & (1 << (I - 1)))
            std::cout << '1';
        else
            std::cout << '0';
    }
    return 0;
}

int Dec2(int Value)
{
    //bool Negative = (Value & 10000000);
    int Dec = 0;
    for (int I = 0; Value > 0; ++I)
    {
        if (Value % 10 == 1)
        {
            Dec += (1 << I);
        }
        Value /= 10;
    }
    //if (Negative) (Dec -= (1 << 8));
    return Dec;
}

int main()
{
    Bin2(25);
    std::cout << "\n\n";
    std::cout << Dec2(11001);
}
You are checking for negative value incorrectly. Do the following instead:
bool Negative = (value & 0x80000000); // works only where int is 32 bits
Or maybe just compare it with 0:
bool Negative = (value < 0);
Why don't you just compare it to 0? That should work fine, and you almost certainly can't do this more efficiently than the compiler.
I am entirely unclear if this is what the OP is looking for, but it's worth a toss:
If you know you have a value in a signed int that is supposed to represent a signed 8-bit value, you can pull it apart, store it in a signed 8-bit variable, then promote it back to a native signed int, like this:
#include <stdio.h>

int main(void)
{
    // signed integer, value is 245. 8-bit signed value is (-11)
    int num = 0xF5;
    // pull out the low 8 bits, storing them in a signed char
    signed char ch = (signed char)(num & 0xFF);
    // now let the signed char promote to a signed int
    int res = ch;
    // finally print both
    printf("%d ==> %d\n", num, res);
    // do it again for an 8-bit positive value,
    // this time with just direct casts
    num = 0x70;
    printf("%d ==> %d\n", num, (int)((signed char)(num & 0xFF)));
    return 0;
}
Output
245 ==> -11
112 ==> 112
Is that what you're trying to do? In short, the code above takes the 8 bits sitting at the bottom of num, treats them as a signed 8-bit value, then promotes them to a signed native int. The result is that you now "know" not only whether the 8 bits were a negative number (since res will be negative if they were); you also get the 8-bit signed number as a native int in the process.
On the other hand, if all you care about is whether the 8th bit is set in the input int, and is supposed to denote a negative value state, then why not just :
int IsEightBitNegative(int val)
{
    return (val & 0x80) != 0;
}

Check value of least significant bit (LSB) and most significant bit (MSB) in C/C++

I need to check the value of the least significant bit (LSB) and most significant bit (MSB) of an integer in C/C++. How would I do this?
//int value;
int LSB = value & 1;
Alternatively (which is not theoretically portable, but practically it is - see Steve's comment)
//int value;
int LSB = value % 2;
Details:
The second formula is simpler. The % operator is the remainder operator. A number's LSB is 1 iff it is an odd number and 0 otherwise, so we check the remainder of dividing by 2. The logic of the first formula is this: the number 1 in binary is:
0000...0001
If you binary-AND this with an arbitrary number, all the bits of the result will be 0 except the last one because 0 AND anything else is 0. The last bit of the result will be 1 iff the last bit of your number was 1 because 1 & 1 == 1 and 1 & 0 == 0
This is a good tutorial for bitwise operations.
HTH.
You can do something like this:
#include <iostream>
int main(int argc, char **argv)
{
    int a = 3;
    std::cout << (a & 1) << std::endl;
    return 0;
}
This way you AND your variable with the LSB, because
3: 011
1: 001
in 3-bit representation. So being AND:
AND
-----
0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1
You will be able to know if LSB is 1 or not.
Edit: finding the MSB.
First of all, read the Endianness article to agree on what MSB means; in the following lines we assume big-endian notation.
To find the MSB, the snippet below applies a right shift repeatedly until the MSB ends up in the LSB position, where it can be ANDed with 1.
Consider the following code:
#include <iostream>
#include <limits.h>
int main(int argc, char **argv)
{
    unsigned int a = 128; // we want to find the MSB of this 32-bit unsigned int
    int MSB = 0;          // this variable will hold the MSB we're looking for
    // sizeof(unsigned int) = 4 bytes, and 1 byte = 8 bits,
    // so 4 bytes are 4 * 8 = 32 bits.
    // We have to perform the right shift 32 times to bring the
    // MSB into the LSB position.
    for (int i = sizeof(unsigned int) * 8; i > 0; i--) {
        MSB = (a & 1); // in the last iteration this holds the MSB value
        a >>= 1;       // perform the 1-bit right shift
    }
    // this prints '0', because the 32-bit representation of
    // unsigned int 128 is:
    // 00000000000000000000000010000000
    std::cout << "MSB: " << MSB << std::endl;
    return 0;
}
If you print MSB outside of the loop you will get 0.
If you change the value of a:
unsigned int a = UINT_MAX; // found in <limits.h>
MSB will be 1, because its 32-bit representation is:
UINT_MAX: 11111111111111111111111111111111
However, if you do the same thing with a signed integer things will be different.
#include <iostream>
#include <limits.h>
int main(int argc, char **argv)
{
    int a = -128; // we want to find the MSB of this 32-bit signed int
    int MSB = 0;  // this variable will hold the MSB we're looking for
    // sizeof(int) = 4 bytes, and 1 byte = 8 bits,
    // so 4 bytes are 4 * 8 = 32 bits.
    // We have to perform the right shift 32 times to bring the
    // MSB into the LSB position.
    for (int i = sizeof(int) * 8; i > 0; i--) {
        MSB = (a & 1); // in the last iteration this holds the MSB value
        a >>= 1;       // perform the 1-bit right shift
    }
    // this prints '1', because the 32-bit two's complement
    // representation of int -128 is:
    // 11111111111111111111111110000000
    std::cout << "MSB: " << MSB << std::endl;
    return 0;
}
As I said in the comment below, the MSB of a positive integer is always 0, while the MSB of a negative integer is always 1.
You can check INT_MAX 32-bit representation:
INT_MAX: 01111111111111111111111111111111
Now, why does the loop use sizeof()? If you simply write the loop as I did in the comment (sorry for the missing = there):
for (; a != 0; a >>= 1)
    MSB = a & 1;
you will always get 1 (for non-zero a), because C++ won't consider the 'zero-pad bits' above the highest 1 (since you specified a != 0 as the exit condition). For example, for 32-bit integers we have:
int 7 : 00000000000000000000000000000111
                                     ^ this will be your fake MSB,
                                       without considering the full size
                                       of the variable.
int 16: 00000000000000000000000000010000
                                   ^ fake MSB
int LSB = value & 1;
int MSB = (value >> (sizeof(value) * 8 - 1)) & 1;
Others have already mentioned:
int LSB = value & 1;
for getting the least significant bit. But there is a cheatier way to get the MSB than has been mentioned. If the value is a signed type already, just do:
int MSB = value < 0;
If it's an unsigned quantity, cast it to the signed type of the same size, e.g. if value was declared as unsigned, do:
int MSB = (int)value < 0;
Yes, officially, not portable, undefined behavior, whatever. But on every two's complement system and every compiler for them that I'm aware of, it happens to work; after all, the high bit is the sign bit, so if the signed form is negative, then the MSB is 1, if it's non-negative, the MSB is 0. So conveniently, a signed test for negative numbers is equivalent to retrieving the MSB.
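A minimal demonstration of the cast trick, with the same portability caveat:
#include <iostream>

int main()
{
    unsigned value = 0x80000000u; // only the high bit set
    int MSB = (int)value < 0;     // sign test standing in for the MSB
    std::cout << MSB << "\n";     // prints 1 on two's-complement targets
    return 0;
}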
LSB is easy. Just x & 1.
The MSB is a bit trickier, as bytes may not be 8 bits, sizeof(int) may not be 4, and there might be padding bits to the right.
Also, with a signed integer, do you mean the sign bit or the most significant value bit?
If you mean the sign bit, life is easy. It's just x < 0.
If you mean the most significant value bit, to be completely portable:
int rack = 1;
int mask = 1;

while (rack < INT_MAX)
{
    rack <<= 1;
    mask <<= 1;
    rack |= 1;
}

return x & mask;
That's a long-winded way of doing it. In reality,
x & (1 << (sizeof(int) * CHAR_BIT - 2));
will be quite portable enough and your ints won't have padding bits.