Clear all but the most significant one bit

For an unsigned integer j, the operation j&(-j) clears all but the least significant 1 bit, where -j is the negative of j treating j as a signed integer. https://en.wikipedia.org/wiki/Find_first_set
Is there a similar simple operation that clears all but the most significant 1 bit?
An obvious solution is using clz (count leading zeros) operation which is present in nearly all contemporary processors. There is also a question Previous power of 2 with an answer that is said to work even faster than clz on old AMD processors. See also What is fastest method to calculate a number having only bit set which is the most significant digit set in another number?
My question is whether there is something simpler than using clz which may not be easily accessible in certain languages. Note that I need the most significant 1 bit itself, not its position (logarithm).

To locate the position of the highest "1" bit in a word, I use the BSR assembly instruction:
static inline uint32_t bsr(uint32_t x) {
    uint32_t rc = 0;
    /* Note: BSR leaves the destination undefined when x == 0. */
    __asm__("bsr %1,%0" : "=r"(rc) : "rm"(x));
    return rc;
}
It returns the position of the highest "1" bit; for example, bsr(5) == 2.
If you need this code for a 64-bit x, use the bsrq instruction.
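If clz/bsr is not available, a common portable fallback is to smear the highest set bit into all lower positions and then strip everything below the top bit. A minimal sketch in standard C (the function name is mine, not from the answer above):

#include <stdint.h>

static inline uint32_t msb_only(uint32_t x) {
    /* Propagate the highest set bit into every lower position... */
    x |= x >> 1;
    x |= x >> 2;
    x |= x >> 4;
    x |= x >> 8;
    x |= x >> 16;
    /* ...then keep only the top bit of the resulting mask. */
    return x - (x >> 1);
}

For example, msb_only(5) == 4, whereas bsr(5) == 2: this returns the bit itself rather than its position, and it returns 0 for input 0.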

crc32 hash default/invalid value?

I am building a simple string ID system, using crc32 to generate 32-bit integer handles from my strings. I'd like the hash inside my StringID wrapper class to default to an invalid value; is there a value that crc32 will never generate? Will I have to use a separate flag?
Clarification: I am not interested in language specific answers. I'd simply like to know if there is an integer outside of the crc32 range that can be used to represent an unhashed value.
Thanks!
Is there a value that crc32 will never generate?
No, it will generate any/all values in the range of a 32-bit integer.
Will I have to use a separate flag?
Not necessarily.
If you decide that (e.g.) 0x00000000 means "CRC not set" and non-zero is the CRC value; then after calculating the CRC (but before storing it or checking the stored value) you can do if(CRCvalue == 0) CRCvalue = 0xFFFFFFFF;.
This weakens the CRC by an extremely tiny amount. Specifically, for 2 random pieces of data, for pure CRC32 there's 1 chance in 4294967296 of the CRCs matching, and with "zero means unset" there's 1 chance in 4294967295.000000000232830643654 of the CRCs matching.
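A minimal sketch of that remapping inside a wrapper (the StringID shape and the crc32_of_string() routine are assumptions based on the question, not code from it):

#include <stdint.h>

uint32_t crc32_of_string(const char *s);  /* whatever CRC-32 routine you already use */

struct StringID {
    uint32_t hash = 0;  /* 0 is reserved to mean "unhashed" */

    StringID() = default;
    explicit StringID(const char *s) {
        uint32_t h = crc32_of_string(s);
        hash = (h == 0) ? 0xFFFFFFFFu : h;  /* fold the one colliding value away */
    }
    bool valid() const { return hash != 0; }
};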
There is an easy demonstration of the fact that crc32 can generate any value. The CRC is the remainder of division mod P (where P is the generator polynomial) in a Galois field (which really is a field, just as the real or complex numbers are). You can subtract from your polynomial its remainder mod P (subtraction is a XOR operation, so adding and subtracting are indeed the same thing), leaving a multiple of P with remainder 0. You can then add to that multiple any of the 2^32 possible crc32 values (since they are already remainders of divisions, their crc32 is just themselves) to obtain any of the 2^32 possible results.
It is common practice to append as many zero bits as necessary to complete a full 32-bit word (this appears as a multiplication by the constant x^32), and then subtract (XOR) the remainder from that, making the result a multiple of the modulus (remember that addition and subtraction are the same thing, a XOR operation) and so making crc32(pol) = 0x00000000.
Edit (easier to see): Indeed, each of the 2^32 possible crc32 values, when divided by the generator polynomial, gives itself as the remainder (each has lower degree than the generator polynomial, much as the numbers 0 .. N-1 are their own remainders when doing arithmetic modulo N on integers), so all of them are possible results of the crc32() operator.
The crc operation, as implemented in many places, is not that simple: some implementations initialize the remainder register to 0xffffffff and look for 0xffffffff at termination (indeed, crc32 does this). If you do the maths, you'll see the reason: initializing the register to 0xffffffff is equivalent to having a previous remainder of 0xffffffff in a longer string, and looking for 0xffffffff at the end is like appending 0xffffffff to the original string. This has the effect of concatenating the bit string 0xffffffff before and after your string, making the remainder sensitive to strings of zeros appended before or after the crc32-calculated string (which would otherwise leave a plain remainder unchanged). Anyway, this modification doesn't alter the original algorithm of calculating a polynomial remainder, so any of the 2^32 values are also possible in this case.
No. A CRC-32 can be any 32-bit value. You'll need to indicate an invalid index somewhere else.
My spoof code allows you to choose bit locations in a message to modify and the desired CRC, and will solve for which of those locations to flip to get exactly that CRC.

Who defines the sign inversion pattern for integers?

Twos complement means that simply inverting all bits of a number i gets me -i-1:
~0 is -1
~01000001 is 10111110
~65 is -66
etc. To switch the sign of an integer, I have to use the actual minus sign.
int i = 65; int j = -i;
cout << j; // -65
Where is that actual behavior defined, and whose responsibility is it to ensure the twos complement pattern (to make a number negative invert all bits and add 1) is followed? I don't even know if this is a hardware or compiler operation.
It's usually done by the CPU hardware.
Some CPUs have an instruction for calculating the negative of a number. In the x86 architecture, it's the NEG instruction.
If not, it can be done using the multiplication operator, multiplying the number by -1. But many programmers take advantage of the identity that you discovered, and complement the number and then add 1. See
How to convert a positive number to negative in assembly
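The identity is easy to verify directly; all three forms below print -65 on a two's complement machine (which C++20 in fact guarantees):

#include <iostream>

int main() {
    int j = 65;
    std::cout << -j     << '\n';  // the actual minus sign
    std::cout << ~j + 1 << '\n';  // complement, then add 1
    std::cout << 0 - j  << '\n';  // what a NEG instruction effectively computes
}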
The reasons for this are simple: consistency with 0 and with addition.
You want addition to work the same for positive and negative numbers without special cases; in particular, incrementing -1 by 1 must yield 0.
The only bit sequence where the classic overflowing increment produces the value 0 is the all-1s sequence: increment it by 1 and you get all zeros. So that is your -1: all 1s, i.e. the bitwise negation of 0.
Now we have (assuming 8 bit integers, incrementing by 1 each line)
-2: 11111110 = ~1
-1: 11111111 = ~0
0: 00000000 = ~-1
+1: 00000001 = ~-2
If you don't like this behavior, you need to handle special cases in addition, and you'll have +0 and -0. Most likely, such a CPU would be a lot slower.
If your question is how
int i = -j;
is implemented, that depends on your compiler and CPU and optimization. Usually it will be optimized together with other operations you specify. But don't be surprised if this ends up being performed as
int i = 0 - j;
Since this probably takes 1-2 CPU ticks to compute (e.g. one XOR of a register with itself to get 0, then a SUB operation to compute 0 - j), it will rarely be a bottleneck. Loading j and storing the result i somewhere in memory will be far more expensive. In fact, some CPUs (MIPS?) even have a built-in register that is always zero; then you don't need a special instruction for negation, you simply subtract j from $zero, usually in one tick.
Current Intel CPUs are said to recognize such xor operations and handle them in 0 ticks via register-renaming optimizations (i.e. they let the next instruction use a new register that is zero). You do have neg on amd64, but a fast xor rax,rax is useful in other situations, too.
C arithmetic is defined in terms of values. When the code is:
int i = 65;
int j = -i;
the compiler will emit whatever CPU instructions are required to give j the value of -65, regardless of the bit representation.
Historically, not all systems used 2's complement. The C compiler would choose a system of negative numbers that leads to the most efficient output for the target CPU, based on the CPU's capabilities.
However, 2's complement is a very common choice because it leads to the simplest algorithms for doing arithmetic. For example, the same instruction can be used for +, -, and * with signed and unsigned integers.
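A small illustration of that point: under 2's complement, the signed and unsigned sums have identical bit patterns, so the hardware needs only one add instruction (values chosen to avoid signed overflow, which C and C++ leave undefined):

#include <assert.h>
#include <stdint.h>

int main(void) {
    int32_t a = -5, b = 7;
    uint32_t ua = (uint32_t)a, ub = (uint32_t)b;
    /* Reinterpreting the signed sum gives exactly the wrapped unsigned sum. */
    assert((uint32_t)(a + b) == ua + ub);
    return 0;
}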

32 bit CRC with some inputs set to zero. Is this less accurate than dummy data?

Sorry if I should be able to answer this simple question myself!
I am working on an embedded system with a 32-bit CRC done in hardware for speed. A utility exists that I cannot modify; it takes 3 inputs (words) and returns a CRC.
If a standard 32-bit CRC was implemented, would generating a CRC from one 32-bit word of actual data and two 32-bit words consisting only of zeros produce a less reliable CRC than if I just made up/set some random values for the last two words?
Depending on the CRC/polynomial, my limited understanding of CRC would say the more data you put in, the less accurate it is. But doesn't zeroed data reduce accuracy when performing the shifts?
Using zeros will be no different than some other value you might pick. The input word will be just as well spread among the CRC bits either way.
I agree with Mark Adler that zeros are mathematically no worse than other numbers. However, if the utility you can't change does something bad like set the initial CRC to zero, then choose non-zero pad words. An initial CRC=0 + Data=0 + Pads=0 produces a final CRC=0. This is technically valid, but routinely getting CRC=0 is undesirable for data integrity checking. You could compensate for a problem like this with non-zero pad characters, e.g. pad = -1.
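To make the failure mode concrete, here is a sketch of a deliberately "bad" bitwise CRC-32 (reflected form, polynomial 0xEDB88320, initial value 0, no final XOR; this is an assumption for illustration, not necessarily how the utility in the question behaves). An all-zero message yields an all-zero CRC:

#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC-32 with initial value 0 and no final XOR. */
uint32_t crc32_init0(const uint8_t *p, size_t n) {
    uint32_t crc = 0;
    while (n--) {
        crc ^= *p++;
        for (int k = 0; k < 8; ++k)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return crc;
}

int main(void) {
    uint8_t msg[12] = {0};  /* one zero data word plus two all-zero pad words */
    printf("%08x\n", crc32_init0(msg, sizeof msg));  /* prints 00000000 */
    return 0;
}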

Hash 16-bit integer to a 256-bit space efficiently

It sounds weird to be going bigger, but that's what I'm trying to do. I want to take the entire sequence of 16-bit integers and hash each one in such a way that it maps to 256-bit space uniformly.
The reason for this is that I'm trying to put a subset of the 16-bit number space into a 256-bit bloom filter, for fast membership testing.
I could use some well-known hashing function on each integer, but I'm looking for an extremely efficient implementation (just a few instructions) so that this runs well in a GPU shader program. I feel like the fact that the hash input is known to be only 16 bits can inform how the hash function is designed, but I am failing to see the solution.
Any ideas?
EDITS
Based on the responses, my original question is confusing. Sorry about that. I will try to restate it with a more concrete example:
I have a subset S1 of n numbers from the set S, which is in the range (0, 2^16-1). I need to represent this subset S1 with a 256-bit bloom filter constructed with a single hashing function. The reason for the bloom filter is a space consideration. I've chosen a 256-bit bloom filter because it fits my space requirements, and has a low enough probability of false positives. I'm looking to find a very simple hashing function that can take a number from set S and represent it in 256 bits such that each bit has roughly equal probability of being 1 or 0.
The reason for the requirement of simplicity in the hashing function is that this hashing function is going to have to run thousands of times per pixel, so anywhere where I can trim instructions is a win.
If you multiply (using uint32_t) a 16-bit value by a prime (or for that matter any odd number) p between 2^31 and 2^32, then you "probably" smear the results fairly evenly across the 32-bit space. Then you might want to add another prime value, to prevent 0 mapping to 0 (you want each bit to have an equal probability of being 0 or 1; under an ideal hash only about one input in 2^256 would map to all zeros, and since there are only 2^16 inputs that means you want none of them to map to all zeros).
So that's how to expand 16 bits to 32 with one operation (plus whatever instructions are needed to load the constant). Use four different values p1 ... p4 to get 256 bits, and run some tests with different p values to find good ones (i.e. those that produce not too many more false positives than what you expect for your Bloom filter given the size of the set you're encoding and assuming an ideal hashing function). For example I'm pretty sure -1 is a bad p-value.
No matter how good the values, you'll see some correlations, though: for example, as I've described it above, the lowest bit of all 4 separate values will be equal, which is a pretty serious dependency. So you probably want a couple more "mixing" operations. For example you might say that each byte of the final output shall be the XOR of two of the bytes of what I've described (and not two least-significant bytes!), just to get rid of the simple arithmetic relations.
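A sketch of that scheme (the four constants here are just arbitrary odd multipliers in [2^31, 2^32); treat them as placeholders to be tested and tuned as described above):

#include <stdint.h>

/* Expand a 16-bit key into 4 x 32 = 256 bits of hash material. */
void hash16_to_256(uint16_t key, uint32_t out[4]) {
    static const uint32_t p[4] = {
        0x9E3779B1u, 0x85EBCA77u, 0xC2B2AE3Du, 0xB5297A4Du
    };
    for (int i = 0; i < 4; ++i)
        out[i] = p[i] * key + p[(i + 3) % 4];  /* multiply to smear, add so 0 doesn't map to 0 */
    /* One cheap extra mixing step, as suggested above, to break the
       simple arithmetic relations between the four words. */
    out[0] ^= out[2] >> 16;
    out[1] ^= out[3] >> 16;
}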
Unless I've misunderstood the question, though, this is not how a Bloom filter usually works. Usually you want your hash to produce an exact fixed number of set bits for each input, and all the arithmetic to compute the false positive rate relies on this. That's why for a Bloom filter 256 bits in size you'd normally have k 8-bit hashes, not one 256-bit hash. k is normally rather less than half the size of the filter in bits (the optimal value is the number of bits per value in the filter, times ln(2) which is about 0.7). So normally you don't want the probability of each bit being 1 to be anything like as high as 0.5.
The reason is that once you've ORed as few as 4 such 256-bit values together, almost all the bits in your filter are set (15 in 16 of them). So you're looking at a lot of false positives already.
But if you've done the math and you're happy with a single hash function producing a variable number of set bits averaging half of them, then fair enough. Or is the double-occurrence of the number 256 just a coincidence, because k happens to be 32 for the set size you have chosen and you're actually using the 256-bit hash as 32 8-bit hashes?
[Edit: your comment clarifies this, but anyway k should not get so high that you need 256 bits of hash in total. Clearly there's no point in this case using a Bloom filter with more than 16 bits per value (i.e. fewer than 16 values), since using the same amount of space you could just list the values, and have a false positive rate of 0. A filter with 16 bits per value gives a false positive rate of something like 1 in 2200. Even there, optimal k is only 11 (16 times ln(2)), that is you should set 11 bits in the filter for each value in the set. If you expect the sets to be bigger than 16 values then you want to set fewer bits for each element, and you'll get a higher false positive rate.]
I believe there is some confusion in the question as posed. I will first try to clear up any inconsistencies I've noticed above.
OP originally states that he is trying to map a smaller space into a larger one. If this is truly the case, then the use of the bloom filter algorithm is unnecessary. Instead, as has been suggested in the comments above, the identity function is the only "hash" function necessary to set and test each bit. However, I make the assertion that this is not really what the OP is looking for. If so, then the OP must be storing 2^256 bits in memory (based on how the question is stated) in order for the space of 16-bit integers (i.e. 2^16) to be smaller than his set size; this is an unreasonable amount of memory to be using and is highly unlikely to be the case.
Therefore, I make the assumption that the problem constraints are as follows: we have a 256-bit bit vector in which we want to map the space of 16-bit integers. That is, we have 256 bits available to map 2^16 possible different integers. Thus, we are not actually mapping into a larger space, but, instead, a much smaller space. Similarly, it does appear (again, as previously pointed out in the comments above) that the OP is requesting a single hash function. If this is the case, there is clear misunderstanding about how bloom filters work.
Bloom filters typically use a set of independent hash functions to reduce false positives. Without going into too much detail, every input to the bloom filter runs through all n hash functions, and the resulting index in the bit vector is tested for each function. If all of the tested indices are set to 1, then the value may be in the set (collisions or overlap across all n hash functions will produce false positives). Moreover, if any of the indices is set to 0, then the value is absolutely not in the set. With this in mind, it is important to notice that an entirely saturated bloom filter has no benefit: every query will report that the item is in the set.
Hash Function Concerns
Now, back to the OP's original question. It is likely going to be best to use known hashing algorithms (since these are mathematically difficult to design and "rolling your own" typically doesn't end well). If you are worried about efficiency down to clock cycles, implement the algorithm yourself in the appropriate assembly language for your architecture to reduce running time for each hash function. Remember, algorithmically, hash functions should run in O(1) time, so they should not contribute too much overhead if implemented properly. To start you off, I would recommend considering the modified Bernstein hash. I have written a version for your specific case below (mostly for example purposes):
unsigned char modified_bernstein(short key)
{
    unsigned ret = key & 0xff;
    ret = 33 * ret ^ ((unsigned short) key >> 8);  /* cast avoids sign extension of negative keys */
    return ret % 256;  /* modulo math keeps the result in range */
}
The Bernstein method I have adapted generally runs as a function of the number of bytes of the input. Since a short type is 2 bytes (16 bits), I have removed any variables and loops from the algorithm and simply performed some bit twiddling to get at each byte. Finally, an unsigned char can return a value in the range [0, 256), which forces the hash function to return a valid index into the bit vector.
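For completeness, a sketch of how this hash could drive the 256-bit filter (assuming the modified_bernstein function above is in scope; remember the single-hash-function caveats raised in the other answer):

#include <stdint.h>
#include <string.h>

struct Bloom256 {
    uint8_t bits[32];  /* 256 bits */

    void clear() { memset(bits, 0, sizeof bits); }
    void add(short key) {
        unsigned i = modified_bernstein(key);    /* index in [0, 256) */
        bits[i / 8] |= (uint8_t)(1u << (i % 8));
    }
    bool maybe_contains(short key) const {
        unsigned i = modified_bernstein(key);
        return (bits[i / 8] >> (i % 8)) & 1u;    /* 0 means definitely absent */
    }
};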

How can I set all bits to '1' in a binary number of an unknown size?

I'm trying to write a function in assembly (but let's assume the question is language-agnostic).
How can I use bitwise operators to set all bits of a passed in number to 1?
I know that I can use the bitwise "or" with a mask of the bits I wish to set, but I don't know how to construct a mask based on a binary number of size N.
~(x & 0)
x & 0 will always result in 0, and ~ will flip all the bits to 1s.
Set it to 0, then flip all the bits to 1 with a bitwise-NOT.
You're going to find that in assembly language you have to know the size of a "passed in number". And in assembly language it really matters which machine the assembly language is for.
Given that information, you might be asking either
How do I set an integer register to all 1 bits?
or
How do I fill a region in memory with all 1 bits?
To fill a register with all 1 bits, on most machines the efficient way takes two instructions:
Clear the register, using either a special-purpose clear instruction, or load immediate 0, or xor the register with itself.
Take the bitwise complement of the register.
Filling memory with 1 bits then requires 1 or more store instructions...
You'll find a lot more bit-twiddling tips and tricks in Hank Warren's wonderful book Hacker's Delight.
Set it to -1. This is usually represented by all bits being 1.
In C (assuming your input is called number and is unsigned):

unsigned all_bits_up_to_msb(unsigned number)
{
    unsigned x = 1;
    while (x != 0 && x < number)
        x *= 2;              /* first power of 2 >= number (0 if that overflows) */
    return number | (x - 1); /* x - 1 has every bit below that power of 2 set */
}

It should work fine for positive values. Note that for negative values, which are two's complement, the operation makes no sense, as the high bit will always be one.
Use T(~T(0)).
Where T is the typename (if we are talking about C++).
This prevents the unwanted promotion to int if the type is smaller than int.
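For example (a small C++ illustration; without the outer cast, ~ would promote the operand to int first):

#include <stdint.h>
#include <iostream>

int main() {
    uint8_t mask = uint8_t(~uint8_t(0));  /* 0xFF, with the int promotion cast away */
    std::cout << std::hex << unsigned(mask) << '\n';  /* prints ff */
}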