Looking for a Hash Function /Ordered Int/ to /Shuffled Int/ - c++

I am looking for a constant-time algorithm that can change an ordered integer index value into a random hash index. It would be nice if it were reversible. I need the hash key to be unique for each index. I know that this could be done with a table lookup in a large file, i.e. create an ordered set of all ints, shuffle them randomly, and write them to a file in that random sequence. You could then read them back as you need them. But this would require a seek into a large file. I wonder if there is a simple way to use, say, a pseudo-random generator to create the sequence as needed?
The answer by erikkallen to Generating shuffled range using a PRNG rather than shuffling, which uses Linear Feedback Shift Registers, looks like the right sort of thing. I just tried it, but it produces repeats and holes.
Regards
David Allan Finch

The question is now whether you need a really random mapping, or just a "weak" permutation. Assuming the latter, if you operate on unsigned 32-bit integers (say) with 2's complement arithmetic, multiplication by any odd number is a bijective and reversible mapping. Of course the same goes for XOR, so a simple pattern which you might try to use is e.g.
unsigned int hash(int x) {
    return (((x ^ 0xf7f7f7f7) * 0x8364abf7) ^ 0xf00bf00b) * 0xf81bc437;
}
There is nothing magical about the numbers, so you can change them, and they can even be randomized. The only requirements are that the multiplicands be odd and that you calculate with wraparound (ignoring overflows). This can be inverted. To do the inversion, you need to be able to calculate the correct complementary multiplicands A and B, after which the inversion is
unsigned int rhash(int h) {
    return (((h * B) ^ 0xf00bf00b) * A) ^ 0xf7f7f7f7;
}
You can calculate A and B mathematically, but it may be easier for you just to run a loop and search for them (once, offline). A is the multiplicative inverse of 0x8364abf7 modulo 2^32, and B is the inverse of 0xf81bc437.
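If you'd rather compute them than search, here is a minimal sketch (my addition, not part of the answer) using Newton's iteration, which finds the inverse of any odd multiplier modulo 2^32; the constants are the ones from the hash above.

#include <cstdint>
#include <cstdio>

// Sketch: multiplicative inverse of an odd 32-bit multiplier modulo 2^32.
// For odd a, x = a is already correct to 3 bits (a*a == 1 mod 8), and each
// Newton step x <- x*(2 - a*x) doubles the correct bits: 3, 6, 12, 24, 48.
uint32_t inverse_mod_2_32(uint32_t a) {
    uint32_t x = a;
    for (int i = 0; i < 4; ++i)
        x *= 2u - a * x;   // wraparound arithmetic is intentional
    return x;
}

int main() {
    uint32_t A = inverse_mod_2_32(0x8364abf7u);
    uint32_t B = inverse_mod_2_32(0xf81bc437u);
    printf("A = 0x%08x, B = 0x%08x\n", A, B);
    printf("check: %u\n", A * 0x8364abf7u);  // prints 1
}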
The equation uses XORs mixed with multiplications to make the mapping nonlinear.

You could try building a suitable Feistel network. These are normally used for cryptography (e.g. DES), but with at least 64 bits, so you may need to build one yourself that suits your needs. They are invertible by construction.
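A minimal sketch of the idea, assuming 32-bit values split into 16-bit halves; the round function and keys below are arbitrary illustrative choices, and with only four rounds this is nowhere near cryptographic, but it is a bijection by construction, which is all the question needs.

#include <cstdint>

// Round function: any mixing of a 16-bit half with a round key will do;
// the network is invertible regardless of what F computes.
static uint16_t F(uint16_t half, uint32_t key) {
    uint32_t v = (half ^ key) * 0x9e3779b1u;  // arbitrary cheap mixer
    return static_cast<uint16_t>(v >> 16);
}

static const uint32_t KEYS[4] = {0xdeadbeef, 0xcafef00d, 0x12345678, 0x0badf00d};

uint32_t feistel_forward(uint32_t x) {
    uint16_t L = x >> 16, R = x & 0xffff;
    for (int i = 0; i < 4; ++i) {   // (L, R) -> (R, L ^ F(R, k))
        uint16_t t = L ^ F(R, KEYS[i]);
        L = R;
        R = t;
    }
    return (uint32_t(L) << 16) | R;
}

uint32_t feistel_inverse(uint32_t x) {
    uint16_t L = x >> 16, R = x & 0xffff;
    for (int i = 3; i >= 0; --i) {  // same rounds, reversed key order
        uint16_t t = R ^ F(L, KEYS[i]);
        R = L;
        L = t;
    }
    return (uint32_t(L) << 16) | R;
}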

Assuming your goal is to spread out grouped values across the whole range,
it seems like shuffling the bits in some pre-defined order might do the trick.
i.e. given 8 bits ABCDEFGH, arrange them like EGDBHCFA, or some such pattern.
The code would just be a simple sequence of masks, shifts and adds.
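A minimal sketch of that pattern (my illustration, written with a source-index table and ORs rather than hand-unrolled masks; the EGDBHCFA order is the example from above). Since it is a permutation of the bits, it is trivially reversible with the inverse table.

#include <cstdint>

// Permute the 8 bits ABCDEFGH (A = bit 7 ... H = bit 0) into EGDBHCFA.
// src[i] is the source bit index for output bit (7 - i).
uint8_t shuffle_bits(uint8_t x) {
    static const int src[8] = {3, 1, 4, 6, 0, 5, 2, 7};  // E G D B H C F A
    uint8_t out = 0;
    for (int i = 0; i < 8; ++i)
        out |= ((x >> src[i]) & 1u) << (7 - i);
    return out;
}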

Mmm... depending on how many numbers you have, you could use a normal STL list and order it by a "random" criterion:
bool nonsort(int i, int j)
{
    return (random() & 31) > 16;  // parentheses matter: & binds looser than >
}

std::list<int> li;
// insert elements
li.sort(nonsort);
Then you can get all the integers with a normal iterator. Remember to seed the generator with srandom(), using the time or some other pseudo-random value.

For that set of constraints there really is no solution. An attempt to hash 32-bit unsigned values into 32-bit unsigned values will give you collisions, unless you do something simple like a 1-to-1 mapping. Every number is its own hash.

Related

Checking certain positions in pattern matching on bit vectors

I am sorry if the title was misleading, but I really did not know how to address this.
Basically, the problem is the following: we have a bit vector T and a bit vector P.
Let's say P[a1], P[a2], ..., P[ak] are the 1 bits in P. For each position i in T, I am interested in knowing how many of T[i+a1], T[i+a2], ..., T[i+ak] are 1. It's like putting P as a mask over the substring of T starting at i and counting the 1 bits that remain. If that count is impossible to get in good time, checking whether all the bits are 1 would suffice.
Is there a better algorithm that can solve this (better than the O(|T|*|P|) "naive" one of "sliding" the pattern from position to position and counting the occurrences)?
Even a better constant of multiplication would be great in this case. I have heard you can use bitsets in C++ and get something like O(1/32 * |T| * (|T| - |P|)), but I am not very familiar with bitsets and how they operate. Are fast AND (&) operations on bitsets available?
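For reference, std::bitset does support word-at-a-time &, >> and count(); a minimal sketch of the sliding-mask step, with illustrative compile-time sizes since std::bitset requires them:

#include <bitset>
#include <cstddef>

constexpr std::size_t NT = 1024;  // illustrative size of T (and of P, zero-padded)

// Number of 1 bits of T under the mask P placed at offset i:
// (T >> i) aligns T[i+j] with P[j]; & keeps the overlap; count() popcounts.
std::size_t ones_under_mask(const std::bitset<NT>& T,
                            const std::bitset<NT>& P,
                            std::size_t i) {
    return ((T >> i) & P).count();
}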

Fast, unbiased, integer pseudo random generator with arbitrary bounds

For a Monte Carlo integration process, I need to pull a lot of random samples from a histogram that has N buckets, where N is arbitrary (i.e. not a power of two) but doesn't change at all during the course of the computation.
By a lot, I mean something on the order of 10^10 (10 billion), so pretty much any kind of lengthy precomputation is likely to be worth it in the face of the sheer number of samples.
I have at my disposal a very fast uniform pseudo-random number generator that typically produces unsigned 64-bit integers (all the ints in the discussion below are unsigned).
The naive way to pull a sample: histogram[prng() % histogram.size()]
The naive way is very slow: the modulo operation uses an integer division (IDIV), which is terribly expensive, and the compiler, not knowing the value of histogram.size() at compile time, can't be up to its usual magic (i.e. http://www.azillionmonkeys.com/qed/adiv.html).
As a matter of fact, the bulk of my computation time is spent extracting that darn modulo.
The slightly less naive way: I use libdivide (http://libdivide.com/), which is capable of pulling off a very fast "divide by a constant not known at compile time". That gives me a very nice win (25% or so), but I have a nagging feeling that I can do better. Here's why:
First intuition: libdivide computes a division. What I need is a modulo, and to get there I have to do an additional multiply and subtract: mod = dividend - divisor * (uint64_t)(dividend / divisor). I suspect there might be a small win there, using libdivide-type techniques that produce the modulo directly.
Second intuition: I am actually not interested in the modulo itself. What I truly want is to efficiently produce a uniformly distributed integer value that is guaranteed to be strictly smaller than N.
The modulo is a fairly standard way of getting there, because of two of its properties:
A) mod(prng(), N) is guaranteed to be uniformly distributed if prng() is
B) mod(prng(), N) is guaranteed to belong to [0, N[
But the modulo does much more than just satisfy the two constraints above, and in fact it probably does too much work.
All I need is a function, any function, that obeys constraints A) and B) and is fast.
So, long intro, but here come my two questions:
Is there something out there equivalent to libdivide that computes integer modulos directly?
Is there some function F(X, N) of integers X and N which obeys the following two constraints:
If X is a random variable uniformly distributed, then F(X, N) is also uniformly distributed
F(X, N) is guaranteed to be in [0, N[
(PS: I know that if N is small, I do not need to consume all 64 bits coming out of the PRNG. As a matter of fact, I already do that. But like I said, even that optimization is a minor win compared to the big fat loss of having to compute a modulo.)
Edit: prng() % N is indeed not exactly uniformly distributed. But for N large enough, I don't think it's much of a problem (or is it?)
Edit 2 : prng() % N is indeed potentially very badly distributed. I had never realized how bad it could get. Ouch. I found a good article on this : http://ericlippert.com/2013/12/16/how-much-bias-is-introduced-by-the-remainder-technique
Under the circumstances, the simplest approach may work the best. One extremely simple approach that might work out, if your PRNG is fast enough, is to pre-compute a mask equal to one less than the next power of 2 above your N. I.e., given some number that looks like 0001xxxxxxxx in binary (where x means we don't care whether it's a 1 or a 0), we want a mask like 000111111111.
From there, we generate numbers as follows:
1. Generate a number.
2. AND it with your mask.
3. If the result > N, go to step 1.
The exact effectiveness of this will depend on how close N is to a power of 2. Each successive power of 2 is (obviously enough) double its predecessor. So, in the best case N is exactly one less than a power of 2, and our test in step 3 always passes. We've added only a mask and a comparison to the time taken for the PRNG itself.
In the worst case, N is exactly equal to a power of 2. In this case, we expect to throw away roughly half the numbers we generated.
On average, N ends up roughly halfway between powers of 2. That means, on average, we throw away about one out of four inputs. We can nearly ignore the mask and comparison themselves, so our speed loss compared to the "raw" generator is basically equal to the fraction of its outputs that we discard, or 25% on average.
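A minimal sketch of that loop, assuming the asker's 64-bit prng() is passed in as a function pointer; the rejection test here uses >= N so the result is always a valid index in [0, N[:

#include <cstdint>

// Draw a uniform value in [0, N) by masking to the next power of two
// and redrawing out-of-range results. Expected draws per sample < 2.
uint64_t bounded(uint64_t (*prng)(), uint64_t N) {
    uint64_t mask = N - 1;      // smear the highest set bit downwards...
    mask |= mask >> 1;  mask |= mask >> 2;  mask |= mask >> 4;
    mask |= mask >> 8;  mask |= mask >> 16; mask |= mask >> 32;
    // ...so mask is now one less than the smallest power of two >= N
    uint64_t r;
    do {
        r = prng() & mask;
    } while (r >= N);           // reject and redraw
    return r;
}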
If you have fast access to the needed instruction, you could 64-bit multiply prng() by N and return the high 64 bits of the 128-bit result. This is sort of like multiplying a uniform real in [0, 1) by N and truncating, with bias on the order of the modulo version (i.e., practically negligible; a 32-bit version of this answer would have small but perhaps noticeable bias).
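A minimal sketch, assuming the __int128 extension available in GCC and Clang (other compilers need an intrinsic for the high half of a 64x64 multiply):

#include <cstdint>

// Map a uniform 64-bit x into [0, N) by taking the high 64 bits of x * N.
// Equivalent to floor((x / 2^64) * N), with bias comparable to modulo.
uint64_t bounded_mul(uint64_t x, uint64_t N) {
    return (uint64_t)(((__uint128_t)x * N) >> 64);
}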
Another possibility to explore would be to use word parallelism on a branchless modulo algorithm operating on single bits, to get random numbers in batches.
Libdivide, or any other complex way to optimize that modulo, is simply overkill. In a situation like yours, the only sensible approach is to:
ensure that your table size is a power of two (add padding if you must!), and
replace the modulo operation with a bitmask operation. Like this:
size_t tableSize = 1 << 16;
size_t tableMask = tableSize - 1;
...
histogram[prng() & tableMask]
A bitmask operation is a single cycle on any CPU that is worth its money; you can't beat its speed.
--
Note:
I don't know about the quality of your random number generator, but it may not be a good idea to use its lowest bits. Some RNGs produce poor randomness in the low bits and better randomness in the high bits. If that is the case with your RNG, use a bitshift to get the most significant bits:
size_t bitCount = 16;
...
histogram[prng() >> (64 - bitCount)]
This is just as fast as the bitmask, but it uses different bits.
You could extend your histogram to a "large" power of two by cycling it, filling in the trailing spaces with some dummy value (guaranteed to never occur in the real data). E.g. given a histogram
[10, 5, 6]
extend it to length 16 like so (assuming -1 is an appropriate sentinel):
[10, 5, 6, 10, 5, 6, 10, 5, 6, 10, 5, 6, 10, 5, 6, -1]
Then sampling can be done via a bitmask, histogram[prng() & mask], where mask = new_length - 1 (new_length being a power of two), with a check for the sentinel value to retry, that is,
int value;
do {
    value = histogram[prng() & mask];
} while (value == SENTINEL);
// use `value` here
The extension is longer than necessary to make retries unlikely by ensuring that the vast majority of the elements are valid (e.g. in the example above only 1 in 16 lookups will "fail", and this rate can be reduced further by extending the length to e.g. 64). You could even use a "branch prediction" hint (e.g. __builtin_expect in GCC) on the check so that the compiler orders the code to be optimal for the case when value != SENTINEL, which is hopefully the common case.
This is very much a memory vs. speed trade-off.
Just a few ideas to complement the other good answers:
What percent of the time is spent in the modulo operation, and how do you know what that percent is? I only ask because sometimes people say something is terribly slow when in fact it takes less than 10% of the time, and they only think it's big because they're using a silly self-time-only profiler. (I have a hard time envisioning a modulo operation taking a lot of time compared to a random number generator.)
When does the number of buckets become known? If it doesn't change too frequently, you can write a program-generator. When the number of buckets changes, automatically print out a new program, compile, link, and use it for your massive execution.
That way, the compiler will know the number of buckets.
Have you considered using a quasi-random number generator, as opposed to a pseudo-random generator? It can give you higher precision of integration in much fewer samples.
Could the number of buckets be reduced without hurting the accuracy of the integration too much?
The non-uniformity dbaupp cautions about can be side-stepped by rejecting and redrawing values no less than M*floor(2^64/M) (before taking the modulus).
If M can be represented in no more than 32 bits, you can get more than one value less than M per draw by repeated multiplication (see David Eisenstat's answer) or divmod; alternatively, you can use bit operations to single out bit patterns long enough for M, again rejecting values no less than M.
(I'd be surprised if the modulus were not dwarfed in time/cycle/energy consumption by the random number generation.)
To fill the buckets, you may use std::binomial_distribution to feed each bucket directly, instead of feeding the buckets one sample at a time.
The following may help:
#include <cstddef>
#include <ctime>
#include <random>

int nrolls = 60;  // number of experiments
const std::size_t N = 6;
unsigned int bucket[N] = {};

std::mt19937 generator(time(nullptr));
for (std::size_t i = 0; i != N; ++i) {
    // each remaining bucket is equally likely to receive a given roll
    double proba = 1. / static_cast<double>(N - i);
    std::binomial_distribution<int> distribution(nrolls, proba);
    bucket[i] = distribution(generator);
    nrolls -= bucket[i];
}
Instead of integer division you can use fixed-point math, i.e. integer multiplication and bitshift. Say your prng() returns values in the range 0-65535 and you want this quantized to the range 0-99: then you do (prng() * 100) >> 16. Just make sure that the multiplication doesn't overflow your integer type, so you may have to shift the result of prng() right first. Note that this mapping carries the same small bias as modulo, but it spreads the over-represented values evenly across the range instead of concentrating them at the low end.
Thanks everyone for your suggestions.
First, I am now thoroughly convinced that modulo is really evil.
It is both very slow and yields biased results in most cases.
After implementing and testing quite a few of the suggestions, what
seems to be the best speed/quality compromise is the solution proposed
by @Gene:
pre-compute the normalizer as:
auto normalizer = histogram.size() / (1.0 + urng.max());
draw samples with:
return histogram[(uint32_t)floor(urng() * normalizer)];
It is the fastest of all the methods I've tried so far, and as far as I can tell it yields a distribution that's much better, even if it may not be as perfect as the rejection method.
Edit: I implemented David Eisenstat's method, which is more or less the same as Jarkkol's suggestion: index = (rng() * N) >> 32. It works as well as the floating-point normalization and is a little faster (9% faster, in fact). So it is my preferred way now.

Hash 16-bit integer to a 256-bit space efficiently

It sounds weird to be going bigger, but that's what I'm trying to do. I want to take the entire sequence of 16-bit integers and hash each one in such a way that it maps into a 256-bit space uniformly.
The reason for this is that I'm trying to put a subset of the 16-bit number space into a 256-bit Bloom filter, for fast membership testing.
I could use some well-known hashing function on each integer, but I'm looking for an extremely efficient implementation (just a few instructions) so that this runs well in a GPU shader program. I feel like the fact that the hash input is known to be only 16 bits can inform how the hash function is designed, but I am failing to see the solution.
Any ideas?
EDITS
Based on the responses, my original question is confusing. Sorry about that. I will try to restate it with a more concrete example:
I have a subset S1 of n numbers from the set S, which is in the range (0, 2^16-1). I need to represent this subset S1 with a 256-bit bloom filter constructed with a single hashing function. The reason for the bloom filter is a space consideration. I've chosen a 256-bit bloom filter because it fits my space requirements, and has a low enough probability of false positives. I'm looking to find a very simple hashing function that can take a number from set S and represent it in 256 bits such that each bit has roughly equal probability of being 1 or 0.
The reason for the requirement of simplicity in the hashing function is that this hashing function is going to have to run thousands of times per pixel, so anywhere where I can trim instructions is a win.
If you multiply (using uint32_t) a 16-bit value by a prime (or for that matter any odd number) p between 2^31 and 2^32, then you "probably" smear the result fairly evenly across the 32-bit space. Then you might want to add another prime value, to prevent 0 mapping to 0 (you want each bit to have an equal probability of being 0 or 1, so only about one input value in 2^256 should produce an all-zero output, and since there are only 2^16 inputs that means you want none of them to).
So that's how to expand 16 bits to 32 with one operation (plus whatever instructions are needed to load the constant). Use eight different values p1 ... p8 to get 256 bits, and run some tests with different p values to find good ones (i.e. those that produce not many more false positives than you'd expect for your Bloom filter, given the size of the set you're encoding and assuming an ideal hash function). For example, I'm pretty sure -1 is a bad p-value.
No matter how good the values, you'll see some correlations, though: for example, as described above, the lowest bit of all eight separate words will be equal, which is a pretty serious dependency. So you probably want a couple more "mixing" operations. For example, you might say that each byte of the final output shall be the XOR of two of the bytes of what I've described (and not two least-significant bytes!), just to get rid of the simple arithmetic relations.
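A minimal sketch of the whole construction (my illustration, with placeholder constants that are odd but not vetted primes, and a crude rotate-and-XOR pass standing in for the byte-mixing step):

#include <cstdint>

// Expand a 16-bit key into 8 x 32 = 256 bits: one odd multiplier plus an
// additive constant per word, then a mixing pass so the words are no
// longer simple multiples of one another.
void hash16_to_256(uint16_t key, uint32_t out[8]) {
    static const uint32_t p[8] = {
        0x83a4c68bu, 0x9b5f72c1u, 0xa7e3d919u, 0xb1946d4du,
        0xc5379a63u, 0xd8b1e527u, 0xe94c30b5u, 0xfd7046efu,
    };
    for (int i = 0; i < 8; ++i)
        out[i] = key * p[i] + p[(i + 1) % 8];    // odd multiplier + offset
    for (int i = 0; i < 8; ++i) {
        uint32_t v = out[(i + 3) % 8];
        out[i] ^= (v >> 8) | (v << 24);          // XOR with a rotated neighbour
    }
}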
Unless I've misunderstood the question, though, this is not how a Bloom filter usually works. Usually you want your hash to produce an exact fixed number of set bits for each input, and all the arithmetic to compute the false positive rate relies on this. That's why for a Bloom filter 256 bits in size you'd normally have k 8-bit hashes, not one 256-bit hash. k is normally rather less than half the size of the filter in bits (the optimal value is the number of bits per value in the filter, times ln(2) which is about 0.7). So normally you don't want the probability of each bit being 1 to be anything like as high as 0.5.
The reason is that once you've ORed as few as 4 such 256-bit values together, almost all the bits in your filter are set (15 in 16 of them). So you're looking at a lot of false positives already.
But if you've done the math and you're happy with a single hash function producing a variable number of set bits averaging half of them, then fair enough. Or is the double-occurrence of the number 256 just a coincidence, because k happens to be 32 for the set size you have chosen and you're actually using the 256-bit hash as 32 8-bit hashes?
[Edit: your comment clarifies this, but anyway k should not get so high that you need 256 bits of hash in total. Clearly there's no point in this case in using a Bloom filter with more than 16 bits per value (i.e. fewer than 16 values), since with the same amount of space you could just list the values and have a false positive rate of 0. A filter with 16 bits per value gives a false positive rate of something like 1 in 2200. Even there, optimal k is only 11; that is, you should set 11 bits in the filter for each value in the set. If you expect the sets to be bigger than 16 values, then you want to set fewer bits for each element, and you'll get a higher false positive rate.]
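For reference, these are the standard Bloom filter formulas behind the numbers above, for a filter of m bits holding n values with k hash functions (in LaTeX notation):

p \approx \left(1 - e^{-kn/m}\right)^{k}, \qquad k_{\mathrm{opt}} = \frac{m}{n}\ln 2 \approx 0.7\,\frac{m}{n}

With m/n = 16 bits per value, k_opt = 16 ln 2 ≈ 11, and p ≈ 0.6185^16 ≈ 1/2200, matching the figures in the edit.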
I believe there is some confusion in the question as posed. I will first try to clear up any inconsistencies I've noticed above.
The OP originally states that he is trying to map a smaller space into a larger one. If this is truly the case, then the use of the Bloom filter algorithm is unnecessary. Instead, as has been suggested in the comments above, the identity function is the only "hash" function necessary to set and test each bit. However, I assert that this is not really what the OP is looking for. If it were, the OP would have to be storing 2^256 bits in memory (based on how the question is stated) in order for the space of 16-bit integers (i.e. 2^16) to be smaller than his set size; this is an unreasonable amount of memory and is highly unlikely to be the case.
Therefore, I make the assumption that the problem constraints are as follows: we have a 256-bit bit vector into which we want to map the space of 16-bit integers. That is, we have 256 bits available to map 2^16 possible different integers. Thus, we are not actually mapping into a larger space but, instead, a much smaller space. Similarly, it does appear (again, as previously pointed out in the comments above) that the OP is requesting a single hash function. If this is the case, there is a clear misunderstanding about how Bloom filters work.
Bloom filters typically use a set of independent hash functions to reduce false positives. Without going into too much detail, every input to the Bloom filter runs through all n hash functions, and then the resulting index in the bit vector is tested for each function. If all indices tested are set to 1, then the value may be in the set (false positives occur when collisions or overlap happen across all n hash functions). Moreover, if any of the indices is set to 0, then the value is absolutely not in the set. With this in mind, it is important to notice that an entirely saturated Bloom filter has no benefit: every query to the Bloom filter will report that the item is in the set.
Hash Function Concerns
Now, back to the OP's original question. It is likely going to be best to use known hashing algorithms (since these are mathematically difficult to design and "rolling your own" typically doesn't end well). If you are worried about efficiency down to clock cycles, implement the algorithm yourself in the appropriate assembly language for your architecture to reduce the running time of each hash function. Remember, algorithmically, hash functions should run in O(1) time, so they should not contribute too much overhead if implemented properly. To start you off, I would recommend considering the modified Bernstein hash. I have written a version for your specific case below (mostly for example purposes):
unsigned char modified_bernstein(short key)
{
    unsigned ret = key & 0xff;             // low byte
    ret = 33 * ret ^ ((key >> 8) & 0xff);  // mix in the high byte; the mask guards against sign extension
    return ret % 256; // modulo math keeps the result a valid index
}
The Bernstein method I have adapted normally runs as a function of the number of bytes of input. Since a short is 2 bytes (16 bits), I have removed the variables and loops from the algorithm and simply performed some bit twiddling to get at each byte. Finally, an unsigned char returns a value in the range [0, 256), which forces the hash function to return a valid index into the bit vector.

Using bitwise & instead of modulus operator to randomly sample integers from a range

I need to randomly sample from a uniform distribution of integers over the interval [LB,UB] in C++. To do so, I start with a "good" RN generator (from Numerical Recipes 3rd ed.) that uniformly randomly samples 64-bit integers; let's call it int64().
Using the mod operator, I can sample from the integers in [LB,UB] by:
LB+int64()%(UB-LB+1);
The only issue with using the mod operator is the slowness of the integer division. So, I then tried the method suggested here, which is:
LB + (int64()&(UB-LB))
The bitwise & method is about 3 times as fast. This is huge for me, because one of my simulations in C++ needs to randomly sample about 20 million integers.
But there's 1 big problem. When I analyze the integers sampled using the bitwise & method, they don't appear uniformly distributed over the interval [LB,UB]. The integers are indeed sampled from [LB,UB], but only from the even integers in that range. For example, here is a histogram of 5000 integers sampled from [20,50] using the bitwise & method:
By comparison, here is what a similar histogram looks like when using the mod operator method, which of course works fine:
What's wrong with my bitwise & method? Is there any way to modify it so that both even and odd numbers are sampled over the defined interval?
The bitwise & operator looks at each pair of corresponding bits of its operands, performs an AND on just those two bits, and puts the result in the corresponding bit of the output.
So, if the last bit of UB-LB is 0, then the last bit of the result is 0. That is to say, if UB-LB is even then every output will be even.
The & is inappropriate to the purpose, unless UB-LB+1 is a power of 2. If you want to find a modulus, then there's no general shortcut: the compiler will already implement % the fastest way it knows.
Note that I said no general shortcut. For particular values of UB-LB, known at compile time, there can be faster ways. And if you can somehow arrange for UB and LB to have values that the compiler can compute at compile time then it will use them when you write %.
By the way, using % does not in fact produce uniformly-distributed integers over the range, unless the size of the range is a power of 2. Otherwise there must be a slight bias in favour of certain values, because the range of your int64() function cannot be assigned equally across the desired range. It may be that the bias is too small to affect your simulation in particular, but bad random number generators have broken random simulations in the past, and will do so again.
If you want a uniform random number distribution over an arbitrary range, then use std::uniform_int_distribution from C++11, or the class of the same name in Boost.
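A minimal sketch (the generator type and function name are illustrative); the distribution object owns the rejection logic, so it stays unbiased for arbitrary bounds:

#include <random>

// Uniform sampling over an arbitrary inclusive range [LB, UB] without
// modulo bias; the distribution internally rejects and redraws as needed.
int sample_uniform(std::mt19937_64& gen, int LB, int UB) {
    std::uniform_int_distribution<int> dist(LB, UB);
    return dist(gen);
}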
This works well if the range difference (UB-LB) is 2^n - 1, but won't work at all well if it is, for example, 2^n.
The two are equivalent only when the size of the interval is a power of two. In general y%x and y&(x-1) are not the same.
For example, x%5 produces numbers from 0 to 4 (or to -4, for negative x), but x&4 produces either 0 or 4, never 1, 2, or 3, because of how bitwise operators work...

bit shifting - replacing a section of a bitset with a new number

I have a list of numbers encoded in a boost dynamic bitset. I dynamically choose the size of this bitset depending on the maximum value any number in the list can take. So let's say I have numbers from just 0 to 7; I only need three bits per number, and my list 0, 2, 7 will be encoded as
000010111.
I now need to change, say, the 2nd number in this list (2) to another number, say 4.
I thought the most efficient way to do this would be to represent 4 as a dynamic bitset of the same length as the list but with all other values set to 1, so 111111011. I would then bitshift this by the required amount, with 1s used to fill in the vacated positions, to get 111011111, and then just bitwise AND this with the original bitset to get my desired result.
However, I cannot find a way to do these two things, as it seems that with both initialisation of a bitset from an integer and with bit shifting, the fill values are always 0, not 1. How can I get around this problem, or achieve my goal in a different and efficient way?
Thanks
If that is really the representation, the most general and efficient method I can think of would be to first mask off all the bits for the part you are replacing:
value &= 0b111000111;
Then OR in the actual bits for that position (4 = 100 in the middle field):
value |= 0b000100000;
Hopefully someone here has a better trick for me to learn, but that's what I do.
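Here is a minimal sketch of that clear-then-set update on a plain integer (my addition; boost::dynamic_bitset offers the same &=, |=, ~ and << operators, so the identical sequence works there, and the helper name is made up):

#include <cstdint>

// Replace the index-th width-bit field of `packed` with `new_value`.
// Fields are counted from the least-significant end, so for the question's
// example: replace_field(0b000010111, 1, 3, 4) == 0b000100111,
// i.e. the list 0,2,7 with its middle entry changed from 2 to 4.
uint64_t replace_field(uint64_t packed, unsigned index, unsigned width,
                       uint64_t new_value) {
    unsigned shift = index * width;
    uint64_t field_mask = ((uint64_t{1} << width) - 1) << shift;
    packed &= ~field_mask;                        // clear the old value
    packed |= (new_value << shift) & field_mask;  // OR in the new one
    return packed;
}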
XOR the old value and the new value:
int valuetoset = oldvalue ^ newvalue; // 4 XOR 2 in your example
Just shift the value you need to set:
int bitstoset = valuetoset << position; // (4 XOR 2) << 3 in your example
Then XOR bitstoset with your bitset, and that's it!
int result = bitstoset ^ bitset;
Would you be able to use a vector of dynamic bitsets? Depending on your needs, that might be sufficient and allow for easy updates.
Alternatively, fill your new bitset similarly to how you proposed, but exactly inverted. Then, right before you do the AND at the end, flip all the bits.
I guess your understanding of a bitset is fundamentally wrong:
"set" means it is NOT ordered, and the idea of a bitset is that only one bit is necessary to show whether an element is inside or outside the set.
So your original set 0, 2, 7 would take 8 bits, because 0..7 are 8 elements, and NOT 3 * 3 (3 bits required to represent each of 0..7); the bitmap would look like 10000101.
What you describe is just a "packed" coding of the values. In your coding scheme, 0,2,7 and 2,0,7 would be encoded completely differently, but in a bitset they are the same.
In a (real) bitset (if that is what you want) you can very easily "replace" elements by removing the old one and adding the new one. This happens as T.E.D. describes it.
To get the right mask you can easily use shift operations. So, if you start counting at 0, you get the mask for value x by doing 1 << x.
So you remove element x from the set with
value &= ~(1 << x);
and add another element x (which might be the same one) with
value |= 1 << x;
From your comment, you are using the bitset as packed storage, so the masks must be built differently (and you already had an almost-right idea of how to build them).
The command with a bitmask for removal of the element at bit position p:
value &= ~(0b111 << p);
The 111 is for the above example, where you need 3 bits per position. If you don't want to hardcode it, you can just take the next power of 2 and subtract 1, and you've got your all-1s string.
And to add, you would just take your suggested bit list that contains only the new element and OR it into your bit list:
value |= new_element_bitlist;