Return non-duplicate random values from a very large range - c++

I would like a function that will produce k pseudo-random values from a set of n integers, zero to n-1, without repeating any previous result. k is less than or equal to n. O(n) memory is unacceptable because of the large size of n and the frequency with which I'll need to re-shuffle.
These are the methods I've considered so far:
Array:
Normally if I wanted duplicate-free random values I'd shuffle an array, but that's O(n) memory. n is likely to be too large for that to work.
long nextvalue(void) {
    static long array[4000000000];
    static size_t s = 0;
    if (s == 0) {
        for (size_t i = 0; i < 4000000000; i++) array[i] = i;
        shuffle(array, 4000000000);
    }
    return array[s++];
}
n-state PRNG:
There are a variety of random number generators that can be designed so as to have a period of n and to visit n unique states over that period. The simplest example would be:
long nextvalue(void) {
    static long s = 0;
    static const long i = 1009; // assumed co-prime to n
    s = (s + i) % n;
    return s;
}
The problem with this is that it's not necessarily easy to design a good PRNG on the fly for a given n, and it's unlikely that that PRNG will approximate a fair shuffle if it doesn't have a lot of variable parameters (even harder to design). But maybe there's a good one I don't know about.
m-bit hash:
If the size of the set is a power of two, then it's possible to devise a perfect hash function f() which performs a 1:1 mapping from any value in the range to some other value in the range, where every input produces a unique output. Using this function I could simply maintain a static counter s, and implement a generator as:
long nextvalue(void) {
    static long s = 0;
    return f(s++);
}
This isn't ideal because the order of the results is determined by f(), rather than random values, so it's subject to all the same problems as above.
NPOT hash:
In principle I can use the same design principles as above to define a version of f() which works in an arbitrary base, or even a composite, that is compatible with the range needed; but that's potentially difficult, and I'm likely to get it wrong. Instead a function can be defined for the next power of two greater than or equal to n, and used in this construction:
long nextvalue(void) {
    static long s = 0;
    long x = s++;
    do { x = f(x); } while (x >= n);
    return x;
}
But this still has the same problem (unlikely to give a good approximation of a fair shuffle).
Is there a better way to handle this situation? Or perhaps I just need a good function for f() that is highly parameterisable and easy to design to visit exactly n discrete states.
One thing I'm thinking of is a hash-like operation where I contrive to have the first j results perfectly random through carefully designed mapping, and then any results between j and k would simply extrapolate on that pattern (albeit in a predictable way). The value j could then be chosen to find a compromise between a fair shuffle and a tolerable memory footprint.

First of all, it seems unreasonable to discount anything that uses O(n) memory and then discuss a solution that refers to an underlying array. You have an array. Shuffle it. If that doesn't work or isn't fast enough, come back to us with a question about it.
You only need to perform a complete shuffle once. After that, draw from index n, swap that element with an element located randomly before it and increase n, modulo element count. For example, with such a large dataset I'd use something like this.
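For instance, a minimal sketch of that draw-and-swap scheme (it still keeps one O(n) index array, which is the point; the names here are illustrative, not from the original post):

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

struct ReshufflingDrawer {
    std::vector<int64_t> a;       // the values 0..n-1, shuffled once up front
    std::size_t next_idx = 0;
    std::mt19937_64 rng{std::random_device{}()};

    explicit ReshufflingDrawer(std::size_t n) : a(n) {
        for (std::size_t i = 0; i < n; ++i) a[i] = (int64_t)i;
        std::shuffle(a.begin(), a.end(), rng);               // the one complete shuffle
    }

    int64_t next() {
        int64_t value = a[next_idx];                          // draw from index n
        if (next_idx > 0) {                                   // swap it with a random earlier element
            std::uniform_int_distribution<std::size_t> d(0, next_idx - 1);
            std::swap(a[next_idx], a[d(rng)]);
        }
        next_idx = (next_idx + 1) % a.size();                 // increase n, modulo element count
        return value;
    }
};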
Prime numbers are an option for hashes, but probably not the same way you think. Using two Mersenne primes (low and high, perhaps 0x1fff and 0x7fffffff) you should be able to come up with a much more general-purpose hashing algorithm.
size_t hash(unsigned char *value, size_t value_size, size_t low, size_t high) {
    size_t x = 0;
    while (value_size--) {
        x += *value++;
        x *= low;
    }
    return x % high;
}
#define hash(value, value_size, low, high) (hash((unsigned char *) value, value_size, low, high))
This should produce something fairly well distributed for all inputs larger than about two octets, with the minor troublesome exception of zero-byte prefixes. You might want to treat those differently.

So... what I've ended up doing is digging deeper into pre-existing methods to
try to confirm their ability to approximate a fair shuffle.
I take a simple counter, which itself is guaranteed to visit every in-range value exactly once, and then 'encrypt' it with an n-bit block cypher. Rather than design a cypher for exactly n states, I round the range up to a power of two and apply some 1:1 function; then if the result is out of range I repeat the permutation until the result is in range.
This can be guaranteed to complete eventually because there are only a finite
number of out-of-range values within the power-of-two range, and they cannot
enter an always-out-of-range cycle because that would imply that something
in the cycle was mapped from two different previous states (one from the
in-range set, and another from the out-of-range set), which would make the
function not bijective.
So all I need to do is devise a parameterisable function which I can tune to an
arbitrary number of bits. Like this one:
uint64_t mix(uint64_t x, uint64_t k) {
    const int s0 = BITS * 4 / 5;
    const int s1 = BITS / 5 + (k & 1);
    const int s2 = BITS * 2 / 5;
    k |= 1;
    x *= k;
    x ^= (x & BITMASK) >> s0;
    x ^= (x << s1) & BITMASK;
    x ^= (x & BITMASK) >> s2;
    x += 0x9e3779b97f4a7c15;
    return x & BITMASK;
}
I know it's bijective because I happen to have its inverse function handy:
uint64_t unmix(uint64_t x, uint64_t k) {
    const int s0 = BITS * 4 / 5;
    const int s1 = BITS / 5 + (k & 1);
    const int s2 = BITS * 2 / 5;
    k |= 1;
    uint64_t kp = k * k;
    while ((kp & BITMASK) > 1) {
        k *= kp;
        kp *= kp;
    }
    x -= 0x9e3779b97f4a7c15;
    x ^= ((x & BITMASK) >> s2) ^ ((x & BITMASK) >> s2 * 2);
    x ^= (x << s1) ^ (x << s1 * 2) ^ (x << s1 * 3) ^ (x << s1 * 4) ^ (x << s1 * 5);
    x ^= (x & BITMASK) >> s0;
    x *= k;
    return x & BITMASK;
}
This allows me to define a simple parameterisable PRNG like this:
uint64_t key[ROUNDS];
uint64_t seed = 0;

uint64_t rand_no_rep(void) {
    uint64_t x = seed++;
    do {
        for (int i = 0; i < ROUNDS; i++) x = mix(x, key[i]);
    } while (x >= RANGE);
    return x;
}
Initialise seed and key to random values and you're good to go.
Using the inverse function lets me determine what the seed must be to force rand_no_rep() to produce a given output, which makes it much easier to test.
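For example, a small helper along those lines (a sketch built on the mix/unmix, key[], ROUNDS and RANGE definitions above, not code from the original answer): walking the rounds backwards from a target value gives a seed from which rand_no_rep() emits that value on its very first call, because the single forward pass lands directly on an in-range result.

uint64_t seed_for(uint64_t target) {       // target must already be < RANGE
    uint64_t x = target;
    for (int i = ROUNDS - 1; i >= 0; i--)  // undo the rounds in reverse order
        x = unmix(x, key[i]);
    return x;                              // set seed to this and the first output is target
}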
So far I've checked the cases where a constant a is followed by a constant b. For ROUNDS==1, pairs collide on exactly 50% of the keys (and each pair of collisions involves a different pair of a and b; they don't all converge on 0, 1 or whatever). That is, for various k, a specific a-followed-by-b case occurs for more than one k (this must happen at least once). Subsequent values do not collide in that case, so different keys aren't falling into the same cycle at different positions. Every k gives a unique cycle.
The 50% collision rate comes from 25% of entries not being unique when they're added to the list (counting both the new entry and the entry it ran into). That might sound bad, but it's actually lower than birthday-paradox logic would suggest: selecting truly at random, the percentage of new entries that fail to be unique looks to converge to between 36% and 37%. Being "better than random" is obviously worse than random, as far as randomness goes, but that's why they're called pseudo-random numbers.
Extending that to ROUNDS==2, we want to make sure that a second round doesn't
cancel out or simply repeat the effects of the first.
This is important because it would mean that multiple rounds are a waste of
time and memory, and that the function cannot be parameterised to any
substantial degree. It could happen trivially if mix() contained all linear
operations (say, multiply and add, mod RANGE). In that case all of the
parameters could be multiplied/added together to produce a single parameter for
a single round that would have the same effect. That would be disappointing,
as it would reduce the number of attainable permutations to the size of just
that one parameter, and if the set is as small as that then more work would be
needed to ensure that it's a good, representative set.
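To make the "linear rounds collapse" point concrete, here is the kind of algebra meant above (purely illustrative, working mod 2^64 rather than mod RANGE):

#include <cstdint>

// one affine "round": x -> a*x + c (mod 2^64)
uint64_t affine(uint64_t x, uint64_t a, uint64_t c) { return a * x + c; }

// Two rounds (a1, c1) then (a2, c2) are identical to one round with
// a = a2*a1 and c = a2*c1 + c2, for every x:
//   affine(affine(x, a1, c1), a2, c2) == affine(x, a2 * a1, a2 * c1 + c2)
// so stacking affine rounds never enlarges the set of reachable permutations.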
So what we want to see from two rounds is a large set of outcomes that could
never be achieved by one round. One way to demonstrate this is to look for the
original b-follows-a cases with an additional parameter c, where we want
to see every possible c following a and b.
We know from the one-round testing that in 50% of cases there is only one c
that can follow a and b because there is only one k that places b
immediately after a. We also know that 25% of the pairs of a and b were
unreachable (being the gap left behind by half the pairs that went into
collisions rather than new unique values), and the last 25% appear for two
different k.
The result that I get is that given a free choice of both keys, it's possible to find about five eighths of the values of c following a given a and b. About a quarter of the a/b pairs are unreachable (it's less predictable now, because of the potential intermediate mappings into or out of the duplicate or unreachable cases) and a quarter have a, b, and c appear together in two sequences (which diverge afterwards).
I think there's a lot to be inferred from the difference between one round and
two, but I could be wrong about that and I need to double-check. Further
testing gets harder; or at least slower unless I think more carefully about how
I'm going to do it.
I haven't yet demonstrated that, amongst the set of permutations it can produce, they're all equally likely; but this is normally not guaranteed for any other PRNG either.
It's fairly slow for a PRNG, but it would fit SIMD trivially.

Related

Counting numbers a AND s = a

I am writing a program to meet the following specifications:
You have a list of integers, initially the list is empty.
You have to process Q operations of three kinds:
add s: add integer s to your list; note that an integer can exist more than once in the list.
del s: delete one copy of integer s from the list; it is guaranteed that at least one copy of s will exist in the list.
cnt s: count how many integers a there are in the list such that a AND s = a, where AND is the bitwise AND operator.
Additional constraints:
1 ≤ Q ≤ 200000
0 ≤ s < 2 ^ 16
I have two approaches but both time out, as the constraints are quite large.
I used the fact that a AND s = a if and only if s has all the set bits of a, and the other bits can be arbitrarily assigned. So we can iterate over all these numbers and increase their count by one.
For example, if we have the number 10: 1010
Then the numbers 1011,1111,1110 will be such that when anded with 1010, they will give 1010. So we increase the count of 10,11,14 and 15 by 1. And for delete we delete one from their respective counts.
Is there a faster method? Should I use a different data structure?
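For reference, the add step described above can be written with the standard "enumerate all subsets of the free bits" loop (a sketch; cnt_table and the fixed 16-bit width are illustrative assumptions, not part of the question):

#include <cstdint>

int cnt_table[1 << 16];                  // cnt_table[s] answers "cnt s" directly

void add_slow(uint16_t a) {
    uint32_t freebits = (~(uint32_t)a) & 0xFFFFu;            // bits not set in a may be set freely
    for (uint32_t sub = freebits; ; sub = (sub - 1) & freebits) {
        ++cnt_table[a | sub];                                 // bump every 16-bit superset of a
        if (sub == 0) break;
    }
}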
Let's consider two ways to solve it that are too slow, and then merge them into one solution that will be guaranteed to finish in milliseconds.
Approach 1 (slow)
Allocate an array v of size 2^16. Every time you add an element, do the following:
void add(int s) {
    for (int i = 0; i < (1 << 16); ++i)
        if ((s & i) == 0) {
            v[s | i]++;
        }
}
(to delete do the same, but decrement instead of incrementing)
Then to answer cnt s you just need to return the value of v[s]. To see why, note that v[s] is incremented exactly once for every number a that is added such that a & s == a (I will leave it as an exercise to figure out why this is the case).
Approach 2 (slow)
Allocate an array v of size 2^16. When you add an element s, just increment v[s]. To query the count, do the following:
int cnt(int s) {
    int ret = 0;
    for (int i = 0; i < (1 << 16); ++i)
        if ((s | i) == s) {
            ret += v[s & ~i];
        }
    return ret;
}
(x & ~y is a number that has all the bits that are set in x that are not set in y)
This is a more straightforward approach, and is very similar to what you do, but is written in a slightly different fashion. You will see why I wrote it this way when we combine the two approaches.
Both these approaches are too slow, because in each of them one operation is constant and the other is O(s), so in the worst case, when the entire input consists of the slow operations, we spend O(Q * s), which is prohibitively slow. Now let's merge the two approaches using meet-in-the-middle to get a faster solution.
Fast approach
We will merge the two approaches in the following way: add will work similarly to the first approach, but instead of considering every number a such that a & s == a, we will only consider numbers that differ from s only in the lowest 8 bits:
void add(int s) {
    for (int i = 0; i < (1 << 8); ++i)
        if ((i & s) == 0) {
            v[s | i]++;
        }
}
For delete do the same, but instead of incrementing elements, decrement them.
For counts we will do something similar to the second approach, but we will account for the fact that each v[a] is already accumulated for all combinations of the lowest 8 bits, so we only need to iterate over all the combinations of the higher 8 bits:
int cnt(int s) {
    int ret = 0;
    for (int i = 0; i < (1 << 8); ++i)
        if ((s | (i << 8)) == s) {
            ret += v[s & ~(i << 8)];
        }
    return ret;
}
Now both add and cnt work in O(sqrt(s)), so the entire approach is O(Q * sqrt(s)), which for your constraints should be milliseconds.
Pay extra attention to overflows -- you didn't provide the upper bound on s, if it is too high, you might want to replace ints with long longs.
One of the ways to solve it is to break the list of queries into blocks of about sqrt(S) queries each. This is a standard approach, usually called sqrt-decomposition.
You have to store separately:
Array A[v]: how many times v is present in the list.
Array R[v]: sum of A[i] for all i supersets of v (i.e. result of cnt(v)).
List W of all changes (add, del operations) within current block of queries.
Note: arrays A and R are valid only for all the changes from the fully processed block of queries. All the changes that happened within the currently processed block of queries are stored in W and are not yet applied to A and R.
Now we process queries block by block, for each block of queries we do:
For each query within block:
add(v): store increment for v into W list.
del(v): store decrement for v into W list.
cnt(v): return R[v] + X(W), where X(W) is the total change calculated by trivially processing all the changes in the list W.
Apply all the changes from W to array A, clear list W.
Recalculate completely array R from array A.
Note that add and del take O(1) time, and cnt takes O(|W|) = O(sqrt(S)) time. So step 1 takes O(Q sqrt(S)) time in total.
Step 2 takes O(|W|) time, which totals in O(Q) time overall.
The most important part is step 3. We need to implement it in O(S). Given that there are Q / sqrt(S) blocks, this would total in O(Q sqrt(S)) time as wanted.
Unfortunately, recalculating array R can only be done in O(S log S) time. That would mean O(Q sqrt(S) log(S)) time overall. If we choose block size O(sqrt(S log S)), then the overall time is O(Q sqrt(S log S)). Not perfect, but interesting nonetheless =)
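For step 3, that O(S log S) recomputation of R from A is the standard superset-sum DP over the 16 bit positions; a sketch (array sizes assume S = 2^16 as in the question):

#include <cstring>

const int BITS = 16;
const int SZ = 1 << BITS;
int A[SZ];        // A[v]: how many times v is present
int R[SZ];        // R[v]: sum of A[i] over all supersets i of v

void recalc_R() {
    std::memcpy(R, A, sizeof R);               // start from R[v] = A[v]
    for (int b = 0; b < BITS; ++b)             // fold in one bit position at a time
        for (int v = 0; v < SZ; ++v)
            if (!(v & (1 << b)))
                R[v] += R[v | (1 << b)];       // add the half with bit b set
}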
Given the data structure that you described in one of the comments, you could try the following algorithm (I am giving it in pseudo-code):
count-how-many-integers(integer s) {
    sum = 0
    for i starting from s and increasing by 1 until s*2 {
        if (i AND s) == i {
            sum = sum + a[i]
        }
    }
    return sum
}
More sophisticated optimizations should be possible in the inner loop to reduce the number of times the test is performed.

Simulate random iteration of array

I have an array of given size. I want to traverse it in pseudorandom order, keeping array intact and visiting each element once. It will be best if current state can be stored in a few integers.
I know you can't have full randomness without storing full array, but I don't need the order to be really random. I need it to be perceived as random by user. The solution should use sub-linear space.
One possible suggestion - using large prime number - is given here. The problem with this solution is that there is an obvious fixed step (taken module array size). I would prefer a solution which is not so obviously non-random. Is there a better solution?
How about this algorithm to pseudo-pseudo-randomly traverse an array of size n?
1. Create a small array of size k.
2. Use the large prime number method to fill the small array; i = 0.
3. Randomly remove a position from the small array using a RNG; i += 1.
4. If i < n - k then add a new position using the large prime number method.
5. If i < n, go to step 3.
The higher k is, the more randomness you get. This approach will allow you to delay generating numbers from the prime number method.
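A rough sketch of that buffering idea (assuming n, k, and a step that is co-prime to n for the "large prime number method"; the names are mine, not the answer's):

#include <cstddef>
#include <random>
#include <vector>

class BufferedTraversal {
    std::size_t n, pos = 0, emitted = 0;
    std::size_t step;                        // a large prime, co-prime to n
    std::vector<std::size_t> buf;            // the small array of size k
    std::mt19937 rng{std::random_device{}()};

    std::size_t from_prime_walk() { pos = (pos + step) % n; return pos; }

public:
    BufferedTraversal(std::size_t n, std::size_t k, std::size_t prime_step)
        : n(n), step(prime_step)
    {
        for (std::size_t i = 0; i < k && i < n; ++i) buf.push_back(from_prime_walk());
    }

    bool done() const { return emitted == n; }

    std::size_t next() {                     // call only while !done()
        std::uniform_int_distribution<std::size_t> d(0, buf.size() - 1);
        std::size_t j = d(rng);
        std::size_t out = buf[j];            // randomly remove one buffered position
        buf[j] = buf.back();
        buf.pop_back();
        ++emitted;
        if (emitted + buf.size() < n)        // top the buffer up while positions remain
            buf.push_back(from_prime_walk());
        return out;
    }
};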
A similar approach can be done to generate a number earlier than expected in the sequence by creating another array, "skip-list". Randomly pick items later in the sequence, use them to traverse the next position, and then add them to the skip-list. When they naturally arrive they are searched for in the skip-list and suppressed and then removed from the skip-list at which point you can randomly add another item to the skip-list.
The idea of a random generator that simulates a shuffle is good if you can get one whose maximum period you can control.
A Linear Congruential Generator calculates a random number with the formula:
x[i + 1] = (a * x[i] + c) % m;
The maximum period is m and it is achieved when the following properties hold:
The parameters c and m are relatively prime.
For every prime number r dividing m, a - 1 is a multiple of r.
If m is a multiple of 4 then also a - 1 is multiple of 4.
My first draft involved making m the next multiple of 4 after the array length and then finding suitable a and c values. This was (a) a lot of work and (b) yielded very obvious results sometimes.
I've rethought this approach. We can make m the smallest power of two that the array length will fit in. The only prime factor of m is then 2, which will make every odd number relatively prime to it. With the exception of 1 and 2, m will be divisible by 4, which means that we must make a - 1 a multiple of 4.
Having a greater m than the array length means that we must discard all values that are illegal array indices. This will happen at most every other turn and should be negligible.
The following code yields pseudo-random numbers with a period of exactly m. I've avoided trivial values for a and c and, on my (not too numerous) spot checks, the results looked okay. At least there was no obvious cycling pattern.
So:
#include <cstdlib>

class RandomIndexer
{
public:
    RandomIndexer(size_t length) : len(length)
    {
        m = 8;
        while (m < length) m <<= 1;
        c = m / 6 + uniform(5 * m / 6);
        c |= 1;                        // c must be odd, i.e. relatively prime to m
        a = m / 12 * uniform(m / 6);
        a = 4 * a + 1;                 // a - 1 must be a multiple of 4
        x = uniform(m);
    }
    size_t next()
    {
        do { x = (a * x + c) % m; } while (x >= len);
        return x;
    }
private:
    static size_t uniform(size_t m)
    {
        double p = std::rand() / (1.0 + RAND_MAX);
        return static_cast<size_t>(m * p);
    }
    size_t len;
    size_t x;
    size_t a;
    size_t c;
    size_t m;
};
You can then use the generator like this:
std::vector<int> list;
for (size_t i = 0; i < 3; i++) list.push_back(i);
RandomIndexer ix(list.size());
for (size_t i = 0; i < list.size(); i++) {
    std::cout << list[ix.next()] << std::endl;
}
I am aware that this still isn't a great random number generator, but it is reasonably fast, doesn't require a copy of the array and seems to work okay.
If the approach of picking a and c randomly yields bad results, it might be a good idea to restrict the generator to some powers of two and to hard-code literature values that have proven to be good.
As pointed out by others, you can create a sort of "flight plan" upfront by shuffling an array of array indices and then follow it. This violates the "it will be best if current state can be stored in a few integers" constraint, but does it really matter? Are there tight performance constraints? After all, I believe that if you don't accept repetitions, then you need to store the items you already visited somewhere or somehow.
Alternatively, you can opt for an intrusive solution and store a bool inside each element of the array, telling you whether the element was already selected or not. This can be done in an almost clean way by employing inheritance (multiple as needed).
Many problems come with this solution, e.g. thread safety, and of course it violates the "keep the array intact" constraint.
Quadratic residues which you have mentioned ("using a large prime") are well-known, will work, and guarantee iterating each and every element exactly once (if that is required, but it seems that's not strictly the case?). Unluckily they are not "very random looking", and there are a few other requirements to the modulo in addition to being prime for it to work.
There is a page on Jeff Preshing's site which describes the technique in detail and suggests to feed the output of the residue generator into the generator again with a fixed offset.
However, since you said that you merely need "perceived as random by user", it seems that you might be able to do with feeding a hash function (say, cityhash or siphash) with consecutive integers. The output will be a "random" integer, and at least so far there will be a strict 1:1 mapping (since there are a lot more possible hash values than there are inputs).
Now the problem is that your array is most likely not that large, so you need to somehow reduce the range of these generated indices without generating duplicates (which is tough).
The obvious solution (taking the modulo) will not work, as it pretty much guarantees that you get a lot of duplicates.
Using a bitmask to limit the range to the next greater power of two should work without introducing bias, and discarding indices that are out of bounds (generating a new index) should work as well. Note that this needs non-deterministic time -- but the combination of these two should work reasonably well (a couple of tries at most) on the average.
Otherwise, the only solution that "really works" is shuffling an array of indices as pointed out by Kamil Kilolajczyk (though you don't want that).
Here is a java solution, which can be easily converted to C++ and similar to M Oehm's solution above, albeit with a different way of choosing LCG parameters.
import java.util.Enumeration;
import java.util.Random;
public class RandomPermuteIterator implements Enumeration<Long> {
int c = 1013904223, a = 1664525;
long seed, N, m, next;
boolean hasNext = true;
public RandomPermuteIterator(long N) throws Exception {
if (N <= 0 || N > Math.pow(2, 62)) throw new Exception("Unsupported size: " + N);
this.N = N;
m = (long) Math.pow(2, Math.ceil(Math.log(N) / Math.log(2)));
next = seed = new Random().nextInt((int) Math.min(N, Integer.MAX_VALUE));
}
public static void main(String[] args) throws Exception {
RandomPermuteIterator r = new RandomPermuteIterator(100);
while (r.hasMoreElements()) System.out.print(r.nextElement() + " ");
//output:50 52 3 6 45 40 26 49 92 11 80 2 4 19 86 61 65 44 27 62 5 32 82 9 84 35 38 77 72 7 ...
}
#Override
public boolean hasMoreElements() {
return hasNext;
}
#Override
public Long nextElement() {
next = (a * next + c) % m;
while (next >= N) next = (a * next + c) % m;
if (next == seed) hasNext = false;
return next;
}
}
maybe you could use this one: http://www.cplusplus.com/reference/algorithm/random_shuffle/ ?

C++ numerics lib: std::uniform_int_distribution<>, change bounds of distribution between calls

I have code similar to the following:
vector<int> vec;
// stuff vector here
random_device rd;
minstd_rand generator(rd());
uniform_int_distribution<unsigned> dist(0, vec.size() - 1);
while (vec.size() > 0)
{
    auto it = vec.begin() + dist(generator);
    // use *it for something
    swap(*it, *(vec.end() - 1));
    vec.pop_back();
}
I know I can construct/destruct a local distribution inside the loop. But I'd rather just adjust the bounds of dist inside the loop. Can I do this?
What about param?
dist.param( decltype(dist)::param_type(otherMin, otherMax) );
C++11 standard (and following ones), [rand.req.dist]/9: "For each of the constructors of D taking arguments corresponding to parameters of the distribution, P shall have a corresponding constructor subject to the same requirements and taking arguments identical in number, type, and default values."
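Adapted to the loop in the question, a minimal sketch (not from the original answer) would be:

#include <random>
#include <utility>
#include <vector>

int main() {
    std::vector<int> vec = {1, 2, 3, 4, 5};          // stand-in contents
    std::random_device rd;
    std::minstd_rand generator(rd());
    std::uniform_int_distribution<unsigned> dist;

    while (!vec.empty()) {
        // shrink the bounds in place instead of constructing a new distribution
        dist.param(decltype(dist)::param_type(0, (unsigned)vec.size() - 1));
        auto it = vec.begin() + dist(generator);
        // use *it for something
        std::swap(*it, vec.back());
        vec.pop_back();
    }
}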
<random> has some decent parts, and the generators it contains are at least serviceable for many purposes. However, the library and its interfaces are very far from mature. Hence you need to build your own header/library to supply the missing parts, or roll out big guns like boost or the code from Numerical Recipes.
One quick and easy way of obtaining uniform integer deviates is to multiply uniform floats in the range [0,1) by the modulus and truncate. That spreads the bias all over the range, and it is good enough for many off-the-cuff uses.
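For instance (a sketch, assuming a 64-bit generator; 2^53 is just the mantissa resolution used to build the [0,1) float):

#include <cstdint>
#include <random>

uint64_t scaled_uniform(uint64_t modulus, std::mt19937_64 &gen)
{
    double u = (gen() >> 11) * (1.0 / 9007199254740992.0);   // 53 random bits -> [0, 1)
    return (uint64_t)(u * modulus);                          // bias is smeared across the range
}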
By contrast, the standard method of taking the remainder of an integer deviate modulo the range collects the bias at the beginning of the range, e.g. the famous rand() % modulus.
Case in point: if your modulus happens to be 2/3 of the deviate's natural modulus (e.g. 0xAAAAAAAAu for 2^32) then all results in the first half of the result range are exactly twice as likely as those in the upper half of the result range. Not recommended for quality code.
To get an unbiased integer deviate, use the rejection method. Here is one example that uses full-size random integers as a basis. You can template it on word size and generator, stuff it in your 'fix-the-std' header and be done for all time:
uint64_t random_uint64();

uint64_t random_uint64(uint64_t modulus)
{
    if (modulus)
    {
        for (;;)
        {
            uint64_t raw_bits = random_uint64();
            uint64_t result = raw_bits % modulus;
            uint64_t check = uint64_t(raw_bits - result + modulus);
            // accept unless raw_bits fell into the final, incomplete bucket
            // (check wraps around in that case); check == 0 means exactly 2^64
            if (check >= raw_bits || check == 0)
            {
                return result;
            }
        }
    }
    return 0;
}
std::uniform_int_distribution<> does something very similar internally... but there the logic is well protected against industrial espionage by the usual hundreds of lines of fluff, and the awkward interface ensures that people cannot simply use that functionality just because they feel like it.
Just for completeness, here's a simple and fast generator of excellent, proven quality (Sebastiano Vigna's xorshift64*) that makes a nice all-round generator when the extremely long period of a big gun like xorshift1024* is not needed:
uint64_t random_seed64 = 42;   // state must never be zero

uint64_t random_uint64()
{
    uint64_t x = random_seed64;
    x ^= x >> 12; x ^= x << 25; x ^= x >> 27;
    random_seed64 = x;
    return x * 2685821657736338717ull;
}
The generators included in the standard all have their peculiarities and problems, you have to know their strengths and weaknesses in order to make a good choice. If you're not aiming for a PhD in random number generation and computational statistics then you might be better off using tried and trusted code that is of proven quality.

Long array performance issue

I have an array of char pointers of length 175,000. Each pointer points to a c-string array of length 100, each character is either 1 or 0. I need to compare the difference between the strings.
char* arr[175000];
So far, I have two for loops where I compare every string with every other string. The comparison function basically takes two C-strings and returns an integer which is the number of differences between the arrays.
This is taking really long on my 4-core machine. Last time I left it to run for 45min and it never finished executing. Please advise of a faster solution or some optimizations.
Example:
000010
000001
have a difference of 2 since the last two bits do not match.
After I calculate the difference I store the value in another array.
int holder;
for (int x = 0; x < UsedTableSpace; x++) {
    int min = 10000000;
    for (int y = 0; y < UsedTableSpace; y++) {
        if (x != y) {
            // compr calculates difference between two c-string arrays
            int tempDiff = compr(similarity[x]->matrix, similarity[y]->matrix);
            if (tempDiff < min) {
                min = tempDiff;
                holder = y;
            }
        }
    }
    similarity[holder]->inbound++;
}
With more information, we could probably give you better advice, but based on what I understand of the question, here are some ideas:
Since you're using each character to represent a 1 or a 0, you're using several times more memory than you need to use, which creates a big performance impact when it comes to caching and such. Instead, represent your data using numeric values that you can think of in terms of a series of bits.
Once you've implemented #1, you can grab an entire integer or long at a time and do a bitwise XOR operation to end up with a number that has a 1 in every place where the two numbers didn't have the same values. Then you can use some of the tricks mentioned here to count these bits speedily (there's a short sketch of #1 and #2 together after this list).
Work on "unrolling" your loops somewhat to avoid the number of jumps necessary. For example, the following code:
total = total + array[i];
total = total + array[i + 1];
total = total + array[i + 2];
... will work faster than just looping over total = total + array[i] three times. Jumps are expensive, and interfere with the processor's pipelining. Update: I should mention that your compiler may be doing some of this for you already--you can check the compiled code to see.
Break your overall data set into chunks that will allow you to take full advantage of caching. Think of your problem as a "square" with the i index on one axis and the j axis on the other. If you start with one i and iterate across all 175000 j values, the first j values you visit will be gone from the cache by the time you get to the end of the line. On the other hand, if you take the top left corner and go from j=0 to 256, most of the values on the j axis will still be in a low-level cache as you loop around to compare them with i=0, 1, 2, etc.
Lastly, although this should go without saying, I guess it's worth mentioning: Make sure your compiler is set to optimize!
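As mentioned in point #2, here is a sketch of suggestions #1 and #2 combined: pack each 100-character 0/1 string into two 64-bit words and count differing bits with XOR plus popcount (names are illustrative; __builtin_popcountll is GCC/Clang, and C++20 offers std::popcount):

#include <cstdint>

struct Bits100 { uint64_t lo, hi; };          // 100 bits packed into 128

Bits100 pack(const char *s) {                 // s: 100 chars, each '0' or '1'
    Bits100 b{0, 0};
    for (int i = 0; i < 64; ++i)  b.lo |= (uint64_t)(s[i] - '0') << i;
    for (int i = 64; i < 100; ++i) b.hi |= (uint64_t)(s[i] - '0') << (i - 64);
    return b;
}

int hamming(const Bits100 &x, const Bits100 &y) {
    return __builtin_popcountll(x.lo ^ y.lo) + __builtin_popcountll(x.hi ^ y.hi);
}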
One simple optimization is to compare each pair of strings only once. If the difference between A and B is 12, the difference between B and A is also 12. Your running time is going to drop by almost half.
In code:
int compr(const char* a, const char* b) {
    int d = 0, i;
    for (i = 0; i < 100; ++i)
        if (a[i] != b[i]) ++d;
    return d;
}

void main_function(...) {
    for (int x = 0; x < UsedTableSpace; x++) {
        int min = 10000000;
        int holder = 0;
        for (int y = x + 1; y < UsedTableSpace; y++) {
            // compr calculates difference between two c-string arrays
            int tempDiff = compr(similarity[x]->matrix, similarity[y]->matrix);
            if (tempDiff < min) {
                min = tempDiff;
                holder = y;
            }
        }
        similarity[holder]->inbound++;
    }
}
Notice the inner for loop: I've changed the start index.
Another optimization is to run the comparisons on separate threads to take advantage of your 4 cores.
What is your goal, i.e. what do you want to do with the Hamming Distances (which is what they are) after you've got them? For example, if you are looking for the closest pair, or most distant pair, you probably can get an O(n ln n) algorithm instead of the O(n^2) methods suggested so far. (At n=175000, n^2 is 15000 times larger than n ln n.)
For example, you could characterize each 100-bit number m by 8 4-bit numbers, being the number of bits set in 8 segments of m, and sort the resulting 32-bit signatures into ascending order. Signatures of the closest pair are likely to be nearby in the sorted list. It is easy to lower-bound the distance between two numbers if their signatures differ, giving an effective branch-and-bound process as less-distant numbers are found.

How to get 2 random (different) elements from a c++ vector

I would like to get 2 random different elements from an std::vector. How can I do this so that:
It is fast (it is done thousands of times in my algorithm)
It is elegant
The elements selection is really uniformly distributed
For elegance and simplicity:
void Choose(const int size, int &first, int &second)
{
    // pick a random element (64-bit arithmetic avoids overflow, and dividing
    // by RAND_MAX + 1 keeps the result strictly below size)
    first = (int)((long long)rand() * size / ((long long)RAND_MAX + 1));
    // pick a random element from what's left (there is one fewer to choose from)...
    second = (int)((long long)rand() * (size - 1) / ((long long)RAND_MAX + 1));
    // ...and adjust second choice to take into account the first choice
    if (second >= first)
    {
        ++second;
    }
}
using first and second to index the vector.
For uniformness, this is very tricky since as size approaches RAND_MAX there will be a bias towards the lower values and if size exceeds RAND_MAX then there will be elements that are never chosen. One solution to overcome this is to use a binary search:
int GetRand(int size)
{
    int lower = 0, upper = size;
    do
    {
        int mid = (lower + upper) / 2;
        if (rand() > RAND_MAX / 2) // not a great test, perhaps use parity of rand()?
        {
            lower = mid;
        }
        else
        {
            upper = mid;
        }
    } while (upper != lower); // this is just to show the idea;
                              // need to cope with lower == mid and lower != upper
                              // and all the other edge conditions
    return lower;
}
What you need is to generate M uniformly distributed random numbers from [0, N) range, but there is one caveat here.
One needs to note that your statement of the problem is ambiguous. What is meant by the uniformly distributed selection? One thing is to say that each index has to be selected with equal probability (of M/N, of course). Another thing is to say that each two-index combination has to be selected with equal probability. These two are not the same. Which one did you have in mind?
If M is considerably smaller than N, the classic algorithm for selecting M numbers out of the [0, N) range is Bob Floyd's algorithm, which can be found in Bentley's "Programming Pearls" book. It looks as follows (a sketch):
for (int j = N - M; j < N; ++j) {
    int rand = random(0, j); // generate a random integer in range [0, j]
    if (rand has not been generated before)
        output rand;
    else
        output j;
}
In order to implement the check of whether rand has already been generated or not for relatively high M some kind of implementation of a set is necessary, but in your case M=2 it is straightforward and easy.
Note that this algorithm distributes the sets of M numbers uniformly. Also, this algorithm requires exactly M iterations (attempts) to generate M random numbers, i.e. it doesn't follow that flawed "trial-and-error" approach often used in various ad-hoc algorithms intended to solve the same problem.
Adapting the above to your specific situation, the correct algorithm will look as follows
first = random(0, N - 2);
second = random(0, N - 1);
if (second == first)
    second = N - 1;
(I leave out the internal details of random(a, b) as an implementation detail).
It might not be immediately obvious why the above works correctly and produces a truly uniform distribution, but it really does :)
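If it helps, one possible random(a, b) (inclusive bounds, as used above), sketched with <random>:

#include <random>

int random(int a, int b)
{
    static std::mt19937 gen{std::random_device{}()};
    return std::uniform_int_distribution<int>(a, b)(gen);
}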
How about using a std::queue and doing std::random_shuffle on them. Then just pop to your heart's content?
Not elegant, but extremely simple: just draw a random number in [0, vector.size()[ and check you don't draw the same one twice.
Simplicity is also in some way elegance ;)
What do you call fast ? I guess this can be done thousands of times within a millisecond.
Whenever need something random, you are going to have various questions about the random number properties regarding uniformity, distribution and so on.
Assuming you've found a suitable source of randomness for your application, then the simplest way to generate pairs of uncorrelated entries is just to pick two random indexes and test them to ensure they aren't equal.
Given a vector of N+1 entries, another option is to generate an index i in the range 0..N. element[i] is choice one. Swap elements i and N. Generate an index j in the range 0..(N-1). element[j] is your second choice. This slowly shuffles your vector which may be problematical, but it can be avoided by using a second vector which holds indexes into the first, and shuffling that. This method trades a swap for the index comparison and tends to be more efficient for small vectors (a dozen or fewer elements, typically) as it avoids having to do multiple comparisons as the number of collisions increase.
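A sketch of that index-vector variant (not the answer's code; idx is built once to hold 0..N and persists between calls, so the data vector itself is never modified):

#include <random>
#include <utility>
#include <vector>

void choose_two(std::vector<int> &idx, std::mt19937 &gen, int &first, int &second)
{
    std::uniform_int_distribution<int> pick_any(0, (int)idx.size() - 1);
    int i = pick_any(gen);                   // first choice from all N+1 entries
    first = idx[i];
    std::swap(idx[i], idx.back());           // park the chosen index at the end
    std::uniform_int_distribution<int> pick_rest(0, (int)idx.size() - 2);
    second = idx[pick_rest(gen)];            // second choice from the remaining N entries
}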
You might wanna look into the gnu scientific library. There are some pretty nice random number generators in there that are guaranteed to be random down to the bit level.