Quickest way to generate random bits - C++

What would be the fastest way to generate a large number of (pseudo-)random bits? Each bit must be independent and be zero or one with equal probability. I could obviously do some variation on
randbit=rand()%2;
but I feel like there should be a faster way, generating several random bits from each call to the random number generator. Ideally I'd like to get an int or a char where each bit is random and independent, but other solutions are also possible.
The application is not cryptographic in nature so strong randomness isn't a major factor, whereas speed and getting the correct distribution is important.

convert a random number into binary
Why not get just one number (of an appropriate size for however many bits you need) and then convert it to binary? You are actually taking bits from a random number, which means they are random as well.
Zeros and ones also each have a probability of 50%: over all numbers between 0 and some 2^n limit, the counts of zero bits and one bits are equal, so the probability of a zero equals the probability of a one.
regarding speed
this would probably be very fast, since you need just one call to the random number generator per word-full of bits rather than one call per bit. It purely depends on your binary conversion now.

Take a look at Boost.Random, especially boost::uniform_int<>.
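For instance, a minimal sketch with the classic (pre-C++11) Boost.Random interface; the engine choice and the variate_generator plumbing are illustrative assumptions, not something the answer prescribes:

#include <boost/random/mersenne_twister.hpp>
#include <boost/random/uniform_int.hpp>
#include <boost/random/variate_generator.hpp>

int main()
{
    boost::mt19937 rng;                 // any Boost engine works here
    boost::uniform_int<> dist(0, 1);    // fair coin: 0 or 1
    boost::variate_generator<boost::mt19937&, boost::uniform_int<> > coin(rng, dist);
    int bit = coin();                   // one random bit per call
    (void)bit;
    return 0;
}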

As you say, just generate random integers.
Then you have 32 random bits with ones and zeroes all equally probable.
Get the bits in a loop:
for (int i = 0; i < 32; i++)
{
    randomBit = (randomNum >> i) & 1;
    ...
    // Do your thing
}
Repeat this as many times as you need to get the right number of bits.
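Putting it together, a minimal self-contained sketch; using std::mt19937 as the word generator is my choice here, not part of the answer:

#include <cstdint>
#include <random>

int main()
{
    std::mt19937 gen(12345);            // seed however you like
    std::uint32_t randomNum = gen();    // 32 fresh, independent bits per call
    for (int i = 0; i < 32; i++)
    {
        int randomBit = (randomNum >> i) & 1;
        (void)randomBit;                // do your thing with the bit
    }
    return 0;
}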

Here's a very fast one I coded in Java based on George Marsaglia's XORShift algorithm: gets you 64 bits at a time!
/**
 * State for random number generation
 */
private static volatile long state = xorShift64(System.nanoTime() | 0xCAFEBABE);

/**
 * Gets a long random value
 * @return Random long value based on static state
 */
public static final long nextLong() {
    long a = state;
    state = xorShift64(a);
    return a;
}

/**
 * XORShift algorithm - credit to George Marsaglia!
 * @param a Initial state
 * @return new state
 */
public static final long xorShift64(long a) {
    a ^= (a << 21);
    a ^= (a >>> 35);
    a ^= (a << 4);
    return a;
}

SMP-safe (i.e. the fastest way possible these days) and good bits
Note the use of the [ThreadStatic] attribute: the object automatically handles new threads, with no locking. That's the only way you're going to ensure high-performance randomness, SMP lock-free.
http://blogs.msdn.com/pfxteam/archive/2009/02/19/9434171.aspx

If I remember correctly, the least significant bits normally have a "less random" distribution with most pseudorandom number generators, so using modulo and/or each bit in the generated number would be bad if you are worried about the distribution.
(Maybe you should at least google what Knuth says...)
If that holds (and it's hard to tell without knowing exactly which algorithm you are using), just use the highest bit in each generated number.
http://en.wikipedia.org/wiki/Pseudo-random
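For example, a minimal sketch that takes only the top bit; it assumes nothing beyond the C guarantee that RAND_MAX is at least 32767, i.e. 15 random bits:

#include <cstdlib>

// Use the most significant of rand()'s 15 guaranteed bits instead of bit 0.
int randbitHigh()
{
    return (rand() >> 14) & 1;
}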

#include <iostream>
#include <bitset>
#include <random>

int main()
{
    const size_t nrOfBits = 256;
    std::bitset<nrOfBits> randomBits;
    std::default_random_engine generator;
    std::uniform_real_distribution<float> distribution(0.0, 1.0);
    float randNum;
    for (size_t i = 0; i < nrOfBits; i++)
    {
        randNum = distribution(generator);
        randomBits.set(i, randNum >= 0.5);
    }
    std::cout << "randomBits =\n" << randomBits << std::endl;
    return 0;
}
This took 4.5886e-05s in my test with 256 bits.
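For comparison, a sketch of the same bitset filled via std::independent_bits_engine, which exists for exactly this job; whether it beats the float-comparison loop above on your machine is an assumption to benchmark, not a given:

#include <bitset>
#include <cstdint>
#include <iostream>
#include <random>

int main()
{
    const size_t nrOfBits = 256;
    std::bitset<nrOfBits> randomBits;
    // 64 independent, uniformly distributed bits per call:
    std::independent_bits_engine<std::mt19937_64, 64, std::uint64_t> gen;
    for (size_t i = 0; i < nrOfBits; i += 64)
    {
        std::uint64_t word = gen();
        for (size_t j = 0; j < 64; ++j)
            randomBits.set(i + j, (word >> j) & 1);
    }
    std::cout << "randomBits =\n" << randomBits << std::endl;
    return 0;
}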

You can generate a random number and keep right-shifting and testing the least significant bit to get the random bits, instead of doing a mod operation.
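A minimal sketch of that idea, assuming only the 15 bits that the C standard guarantees from rand():

#include <cstdlib>

// Consume all 15 guaranteed bits of one rand() call by shifting,
// instead of calling rand() % 2 fifteen times.
void useBits()
{
    int r = rand();
    for (int i = 0; i < 15; ++i)
    {
        int bit = r & 1;    // test the least significant bit
        r >>= 1;            // bring the next bit into place
        (void)bit;          // consume the bit here
    }
}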

How large do you need the number of generated bits to be? If it is not larger than a few million, and keeping in mind that you are not using the generator for cryptography, then I think the fastest possible way would be to precompute a large set of integers with the correct distribution and write them out as a source file like this:
unsigned int numbers[] =
{
0xABCDEF34, ...
};
and then compile the array into your program and go through it one number at a time.
That way you get 32 bits with every call (on a 32-bit processor), the generation time is the shortest possible because all the numbers are generated in advance, and the distribution is controlled by you. The downside is, of course, that these numbers are not random at all, which depending on what you are using the PRNG for may or may not matter.
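Consumption could then look like the following sketch; the cursor and the wrap-around policy are my additions for illustration:

#include <cstddef>

// numbers[] stands in for the compiled-in, precomputed table.
static const unsigned int numbers[] = { 0xABCDEF34u /* , ... */ };
static std::size_t cursor = 0;

unsigned int nextPrecomputed()
{
    unsigned int v = numbers[cursor++];
    if (cursor == sizeof(numbers) / sizeof(numbers[0]))
        cursor = 0;     // wrap: the sequence repeats from the start
    return v;
}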

If you only need a bit at a time, try
bool coinToss()
{
    return rand() & 1;
}
It is technically a faster way to generate bits, because it replaces the % 2 with a & 1; the two are equivalent for non-negative values.

Just read some memory: take an n-bit section of raw memory. It will be pretty random.
Alternatively, generate a large random int x and just use the bit values:
for (int i = bitcount - 1; i >= 0; i--) bin += (x >> i) & 1;

Related

Obtain values from multiple distributions with a single generator roll

I am trying to implement the Alias method, also described here. This is an algorithm which allows sampling from a weighted N-sided die in O(1).
The algorithm calls for the generation of two values:
A uniformly distributed integer i in [0, N)
A uniformly distributed real y in [0, 1)
The paper specifies that these two numbers can be obtained from a single real number x in [0, N). From x one can then derive the two values as:
i = floor(x)
y = x - i
Now, the other implementations that I have seen call the random number generator twice, once to generate i and once to generate y. Given that I am using a fairly expensive generator (std::mt19937) and that I need to sample many times, I was wondering if there was a better approach in terms of performance, while preserving the quality of the result.
I'm not sure whether using a uniform_real_distribution to generate x makes sense: if N is large, then y's distribution is going to get sparser, since doubles are not uniformly spaced. Is there maybe a way to call the engine, get the random bits out, and then generate i and y from them directly?
You are correct: with their method, the distribution of y will become less and less uniform with increasing N.
In fact, for N above 2^52, y will be exactly 0, as all representable double-precision numbers above that value are integers. 2^52 is 4,503,599,627,370,496 (about 4.5 quadrillion).
It will not matter at all for reasonable values of N, though. You should be fine if your N is less than 2^26 (about 67 million), intuitively. Your die does not have an astronomical number of sides, does it?
I had a similar problem, and I will tell you how I solved it in my case. It may or may not be applicable to you, but here is the story.
I didn't use any kind of 32-bit RNG; basically, there was no 32-bit platform or software to care about. So I used std::mt19937_64 as the baseline generator: one 64-bit unsigned integer per call. Later I tried one of the 64-bit PCG RNGs, which was overall faster with equally good output.
The top N bits were used directly for selection from the table (the die in your case). You could suffer from modulo bias, so I managed to extend the table to an exact power of 2 (2^10 in my case, i.e. 10 bits for index sampling).
The remaining 54 bits were used to get a uniform double random number, following S. Vigna's suggestion.
If you need more than 11 bits for the index, you could either live with reduced randomness in the mantissa, or replace the double y with a carefully crafted integer comparison.
Along those lines, some pseudocode (not tested!):
uint64_t mask = (1ULL << 53ULL) - 1ULL;
auto seed{ 98765432101ULL };
auto rng = std::mt19937_64{ seed };
for (int k = 0; k != 1000; ++k) {
    auto rv = rng();
    auto idx = rv >> uint64_t(64 - 10);              // only 10 bits needed for the index
    double y = (rv & mask) * (1. / (1ULL << 53ULL)); // 53 bits used for the mantissa
    std::cout << idx << "," << y << '\n';
}
Reference for S. Vigna's integer-to-double conversion for RNGs: http://xoshiro.di.unimi.it/, at the very end of the page.

Fast implementation of a large integer counter (in C/C++)

My goal is as follows:
Generate successive values such that each new one was never generated before, until all possible values have been generated. At that point, the counter starts the same sequence again. The main point here is that all possible values are generated without repetition (until the period is exhausted). It does not matter whether the sequence is simply 0, 1, 2, 3, ..., or some other order.
For example, if the range can be represented simply by an unsigned, then
void increment (unsigned &n) {++n;}
is enough. However, the integer range is larger than 64 bits. For example, in one place I need to generate a 256-bit sequence. A simple implementation is like the following, just to illustrate what I am trying to do:
typedef std::array<uint64_t, 4> ctr_type;
static constexpr uint64_t max = ~((uint64_t) 0);
void increment (ctr_type &ctr)
{
    if (ctr[0] < max) {++ctr[0]; return;}
    if (ctr[1] < max) {++ctr[1]; return;}
    if (ctr[2] < max) {++ctr[2]; return;}
    if (ctr[3] < max) {++ctr[3]; return;}
    ctr[0] = ctr[1] = ctr[2] = ctr[3] = 0;
}
So if ctr starts with all zeros, then first ctr[0] is increased one by one until it reaches max, then ctr[1], and so on. If all 256 bits are set, then we reset it to all zeros and start again.
The problem is that such an implementation is surprisingly slow. My current improved version is roughly equivalent to the following:
void increment (ctr_type &ctr)
{
    std::size_t k = (!(~ctr[0])) + (!(~ctr[1])) + (!(~ctr[2])) + (!(~ctr[3]));
    if (k < 4)
        ++ctr[k];
    else
        memset(ctr.data(), 0, 32);
}
If the counter is only manipulated with the above increment function, and always starts from zero, then ctr[k] == 0 if ctr[k - 1] == 0, and thus the value k will be the index of the first element that is less than the maximum.
I expected the first to be faster, since branch mis-prediction should happen only once in every 2^64 iterations. In the second, though mis-prediction happens only once every 2^256 iterations, that should not make a difference. And apart from the branching, it needs four bitwise negations, four boolean negations, and three additions, which might cost much more than the first.
However, clang, gcc, and Intel icpc all generate binaries for which the second is much faster.
My main question is: does anyone know a faster way to implement such a counter? It does not matter whether the counter starts by increasing the first integer, or whether it is implemented as an array of integers at all, as long as the algorithm generates all 2^256 combinations of 256 bits.
What makes things more complicated is that I also need non-uniform increments. For example, each time the counter is incremented by K where K > 1, but K almost always remains a constant.
To provide some more context: one place I am using the counters is as input to AES-NI aesenc instructions. So from a distinct 128-bit integer (loaded into an __m128i), after going through 10 (or 12 or 14, depending on the key size) rounds of the instruction, a distinct 128-bit integer is generated. If I generate one __m128i integer at a time, then the cost of the increment matters little. However, since aesenc has quite a bit of latency, I generate integers in blocks. For example, I might have 4 blocks, ctr_type block[4], initialized equivalent to the following,
block[0]; // initialized to zero
block[1] = block[0]; increment(block[1]);
block[2] = block[1]; increment(block[2]);
block[3] = block[2]; increment(block[3]);
And each time I need new output, I increment each block[i] by 4 and generate 4 __m128i outputs at once. By interleaving instructions, overall I was able to increase the throughput and reduce the cycles per byte of output (cpB) from 6 to 0.9 when using 2 64-bit integers as the counter and 8 blocks. However, if I instead use 4 32-bit integers as the counter, the throughput, measured as bytes per second, is reduced by half. I know for a fact that on x86-64, 64-bit integers can be faster than 32-bit ones in some situations, but I did not expect such a simple increment operation to make such a big difference. I have carefully benchmarked the application, and the increment is indeed what slows the program down. Since loading into __m128i and storing the __m128i output into usable 32-bit or 64-bit integers are done through aligned pointers, the only difference between the 32-bit and 64-bit versions is how the counter is incremented. I expected that the AES-NI rounds, after loading the integers into __m128i, would dominate the performance. But when using 4 or 8 blocks, that was clearly not the case.
So to summarize: my main question is whether anyone knows a way to improve the above counter implementation.
It's not only slow, it's impossible. The total energy of the universe is insufficient for 2^256 bit changes. And that would require a Gray-code counter, in which only one bit flips per step.
The next thing to do, before optimizing, is to fix the original implementation:
void increment (ctr_type &ctr)
{
    if (++ctr[0] != 0) return;
    if (++ctr[1] != 0) return;
    if (++ctr[2] != 0) return;
    ++ctr[3];
}
If each ctr[i] were not allowed to overflow to zero, the period would be just 4*(2^64), as in 0-9, 19, 29, 39, 49, ..., 99, 199, 299, ..., 1999, 2999, 3999, ..., 9999.
As a reply to the comment: it takes 2^64 iterations to see the first overflow. Being generous, up to 2^32 iterations could take place in a second, meaning that the program would have to run 2^32 seconds to see the first carry out. That's about 136 years.
EDIT
If the original implementation with 2^66 states is really what is wanted, then I'd suggest changing the interface and the functionality to something like:
(*counter) += 1;
while (*counter == 0)
{
    counter++;                    // carry: move to the next word
    if (counter > tail_of_array) {
        counter = head_of_array;  // wrapped past the last word:
        memset(counter, 0, 16);   // restart from the all-zero state
        break;
    }
    (*counter) += 1;              // propagate the carry into this word
}
The point being that the overflow is still very infrequent: almost always there is just one word to be incremented.
If you're using GCC, or a compiler with __int128 such as Clang or ICC:
unsigned __int128 H = 0, L = 0;
L++;
if (L == 0) H++;
On systems where __int128 isn't available
std::array<uint64_t, 4> c{};
c[0]++;
if (c[0] == 0)
{
    c[1]++;
    if (c[1] == 0)
    {
        c[2]++;
        if (c[2] == 0)
        {
            c[3]++;
        }
    }
}
In inline assembly it's much easier to do this using the carry flag. Unfortunately, most high-level languages don't have a means to access it directly. Some compilers do have intrinsics for adding with carry, such as __builtin_uaddll_overflow in GCC and __builtin_addcll in Clang.
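As an illustration, a sketch of the 256-bit increment using __builtin_uaddll_overflow; check your compiler's documentation for availability before relying on it:

#include <cstdint>

// Propagate a +1 carry through four 64-bit words.
void increment(unsigned long long (&ctr)[4])
{
    unsigned long long carry = 1;
    for (int i = 0; i < 4 && carry; ++i)
    {
        // returns true (1) exactly when the addition wrapped around
        carry = __builtin_uaddll_overflow(ctr[i], carry, &ctr[i]);
    }
}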
Anyway, this is rather a waste of time, since the total number of particles in the universe is only about 10^80 and you cannot even count through a 64-bit counter in your lifetime.
Neither of your counter versions increments correctly. Instead of counting up to UINT256_MAX, you are actually just counting up to UINT64_MAX 4 times and then starting back at 0 again. This is apparent from the fact that you do not bother to clear any of the indices that have reached the max value until all of them have reached it. If you are measuring performance based on how often the counter reaches all-bits-zero, then this is why. Thus your algorithms do not generate all combinations of 256 bits, which is a stated requirement.
You mention "Generate successive values, such that each new one was never generated before".
To generate a set of such values, look at linear congruential generators:
The sequence x = (x*1 + 1) % power_of_2: as you thought, these are simply the sequential numbers.
The sequence x = (x*13 + 137) % power_of_2 generates unique numbers with a predictable full period of power_of_2, and the unique numbers look more "random", kind of pseudo-random. You need to resort to arbitrary-precision arithmetic to get it working at 256 bits, and also all the trickery of multiplication by constants; a minimal sketch of the idea follows below. This will give you a nice way to start.
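A minimal 64-bit sketch of that idea; the constants are Knuth's MMIX LCG parameters, and the modulo comes free from unsigned wrap-around:

#include <cstdint>

// Full-period 64-bit LCG: c is odd and (a - 1) is divisible by 4, so by
// the Hull-Dobell theorem it visits every uint64_t value exactly once
// before repeating.
uint64_t nextUnique(uint64_t x)
{
    return x * 6364136223846793005ULL + 1442695040888963407ULL;
}

Extending this to 256 bits needs the arbitrary-precision arithmetic mentioned above, but the structure stays the same.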
You also complain that your simple code is "slow".
At a 4.2 GHz clock, running 4 instructions per cycle and using AVX-512 vectorization, on a 64-core computer with a multithreaded version of your program doing nothing but increments, you get only 64×8×4×2^32 = 8,796,093,022,208 increments per second; that is, 2^64 increments would be reached in 25 days. This post is old: you might have reached 841632698362998292480 by now, running such a program on such a machine, and you will gloriously reach 1683265396725996584960 in 2 years' time.
You also require "until all possible values are generated".
You can only generate a finite number of values, depending on how much you are willing to pay for the energy to power your computers. As mentioned in the other responses, with 128- or 256-bit numbers, even being the richest man in the world, you will never wrap around before the first of these conditions occurs:
running out of money
the end of humankind (nobody will get the outcome of your software)
burning the energy of the last particles of the universe
Multi-word addition can easily be accomplished in a portable fashion by using three macros that mimic three types of addition instructions found on many processors:
ADDcc adds two words, and sets the carry if there was unsigned overflow
ADDC adds two words plus the carry (from a previous addition)
ADDCcc adds two words plus the carry, and sets the carry if there was unsigned overflow
A multi-word addition with two words uses ADDcc on the least significant words followed by ADDC on the most significant words. A multi-word addition with more than two words forms the sequence ADDcc, ADDCcc, ..., ADDC. The MIPS architecture is a processor architecture without condition codes and therefore without a carry flag; the macro implementations shown below basically follow the techniques used on MIPS processors for multi-word additions.
The ISO-C99 code below shows the operation of a 32-bit counter and a 64-bit counter based on 16-bit "words". I chose arrays as the underlying data structure, but one might also use a struct, for example. Use of a struct will be significantly faster if each operand comprises only a few words, as the overhead of array indexing is eliminated. One would want to use the widest available integer type for each "word" for best performance. In the example from the question that would likely be a 256-bit counter comprising four uint64_t components.
#include <stdlib.h>
#include <stdio.h>
#include <stdint.h>

#define ADDCcc(a,b,cy,t0,t1) \
  (t0=(b)+cy, t1=(a), cy=t0<cy, t0=t0+t1, t1=t0<t1, cy=cy+t1, t0=t0)
#define ADDcc(a,b,cy,t0,t1) \
  (t0=(b), t1=(a), t0=t0+t1, cy=t0<t1, t0=t0)
#define ADDC(a,b,cy,t0,t1) \
  (t0=(b)+cy, t1=(a), t0+t1)

typedef uint16_t T;

/* increment a multi-word counter comprising n words */
void inc_array (T *counter, const T *increment, int n)
{
    T cy, t0, t1;
    counter [0] = ADDcc (counter [0], increment [0], cy, t0, t1);
    for (int i = 1; i < (n - 1); i++) {
        counter [i] = ADDCcc (counter [i], increment [i], cy, t0, t1);
    }
    counter [n-1] = ADDC (counter [n-1], increment [n-1], cy, t0, t1);
}

#define INCREMENT (10)
#define UINT32_ARRAY_LEN (2)
#define UINT64_ARRAY_LEN (4)

int main (void)
{
    uint32_t count32 = 0, incr32 = INCREMENT;
    T count_arr2 [UINT32_ARRAY_LEN] = {0};
    T incr_arr2 [UINT32_ARRAY_LEN] = {INCREMENT};
    do {
        count32 = count32 + incr32;
        inc_array (count_arr2, incr_arr2, UINT32_ARRAY_LEN);
    } while (count32 < (0U - INCREMENT - 1));
    printf ("count32 = %08x arr_count = %08x\n",
            count32, (((uint32_t)count_arr2 [1] << 16) +
                      ((uint32_t)count_arr2 [0] <<  0)));

    uint64_t count64 = 0, incr64 = INCREMENT;
    T count_arr4 [UINT64_ARRAY_LEN] = {0};
    T incr_arr4 [UINT64_ARRAY_LEN] = {INCREMENT};
    do {
        count64 = count64 + incr64;
        inc_array (count_arr4, incr_arr4, UINT64_ARRAY_LEN);
    } while (count64 < 0xa987654321ULL);
    printf ("count64 = %016llx arr_count = %016llx\n",
            count64, (((uint64_t)count_arr4 [3] << 48) +
                      ((uint64_t)count_arr4 [2] << 32) +
                      ((uint64_t)count_arr4 [1] << 16) +
                      ((uint64_t)count_arr4 [0] <<  0)));
    return EXIT_SUCCESS;
}
Compiled with full optimization, the 32-bit example executes in about a second, while the 64-bit example runs for about a minute on a modern PC. The output of the program should look like so:
count32 = fffffffa arr_count = fffffffa
count64 = 000000a987654326 arr_count = 000000a987654326
Non-portable code that is based on inline assembly or proprietary extensions for wide integer types may execute about two to three times as fast as the portable solution presented here.

64-bit seeds for random number generators

I am currently running a multithreaded simulation application with 8+ pipes (threads). These pipes run very complex code that depends on a random sequence generated from a seed. The sequence is then boiled down to a single 0/1.
I want this "random processing" to be 100% deterministic after passing a seed to the processing pipe from the main thread, so I can replicate the results in a second run.
So, for example: (I have this coded and it works)
Pipe 1 -> Seed: 123 -> Result: 0
Pipe 2 -> Seed: 123 -> Result: 0
Pipe 3 -> Seed: 589 -> Result: 1
The problem arises when I need to run 100M or more of these processes and then average the results. It may be the case that only 1 of the 100M is a 1, and the rest are 0.
Obviously, I cannot sample 100M random values with only 32-bit seeds fed to srand().
Is it possible to seed srand() with a 64-bit seed in VS2010, or to use an equivalent approach?
Does rand() repeat itself after 2^32 calls, or does it not (i.e. does it have some hidden inner state)?
Thanks
You can use C++11's random facilities to generate random numbers of a given size and seed size, though the process is a bit too involved to summarize here.
For example, you can construct a std::mersenne_twister_engine with 64-bit output (the predefined std::mt19937_64) and seed it with a 64-bit integer, then acquire random numbers within a specified distribution, which seems to be what you're looking for.
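A minimal sketch of that approach; the seed value is arbitrary:

#include <cstdint>
#include <iostream>
#include <random>

int main()
{
    std::uint64_t seed = 0x123456789ABCDEF0ULL;     // full 64-bit seed
    std::mt19937_64 rng(seed);
    std::uniform_int_distribution<int> coin(0, 1);
    std::cout << coin(rng) << '\n';                 // deterministic per seed
    return 0;
}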
A simple 64-bit LCG should meet your needs. Bit n (counting from the least significant as bit 1) of an LCG has period at most (and, if the parameters are chosen correctly, exactly) 2^n, so avoid using the lower bits if you don't need them, and/or use a tempering function on the output. A sample implementation can be found in my answer to another question here:
https://stackoverflow.com/a/19083740/379897
And reposted:
static uint32_t temper(uint32_t x)
{
    x ^= x>>11;
    x ^= x<<7 & 0x9D2C5680;
    x ^= x<<15 & 0xEFC60000;
    x ^= x>>18;
    return x;
}

uint32_t lcg64_temper(uint64_t *seed)
{
    *seed = 6364136223846793005ULL * *seed + 1;
    return temper(*seed >> 32);
}
You could use an xorshift pseudorandom number generator.
It is fast and works a treat. This is the actual generation part from my implementation class of it; I found the information on this algorithm in a Wikipedia search on pseudorandom number generators.
uint64_t XRS_64::generate(void)
{
    seed ^= seed >> 12; // a
    seed ^= seed << 25; // b
    seed ^= seed >> 27; // c
    return seed * UINT64_C(2685821657736338717);
}
It is fast, and the initialisation is done inside the constructor:
XRS_64::XRS_64()
{
    seed = 6394358446697381921;
}
seed is an unsigned 64-bit integer variable, and it is declared inside the class.
class XRS_64
{
public:
    XRS_64();
    ~XRS_64();
    void init(uint64_t newseed);
    uint64_t generate();

private:
    uint64_t seed; /* The state must be seeded with a nonzero value. */
};
I can't answer your questions, but if you find out you can't do what you want, you can implement your own pseudorandom number generator which takes a uint64_t as a seed.
There are better algorithms for this purpose if you want a more serious generator (for cryptographic purposes, for instance), but an LCG is the easiest I've seen to implement.
EDIT
Actually, you cannot use a 64-bit seed for the rand() function; you will have to roll your own. In this Wikipedia table there are some parameters Donald Knuth used in MMIX to implement it. Be aware that, depending on the parameters you use, your random number generator's period can be much smaller than 2^64, and because of the multiplications you may need a big-number library to handle the math operations.
My recommendation is that you take direct control over the process and set up your own high-quality random number generator. None of the answers here has been properly tested or validated, and that is an important criterion that needs to be taken into account.
High-quality random number generators with large periods can be made even for 16-bit and 32-bit machines by just running several of them in parallel, subject to certain preconditions. This is described in further depth in
P.L'Ecuyer, ‟Efficient and portable combined random number generators”, CACM 31(6), June 1988, 742-751.
with testing & validation results also provided. Accessible versions of the article can be found on the net.
For a 32-bit implementation the recommendation issued there was to take M₀ = 1 + 2×3×7×631×81031 (= 2³¹ - 85) and M₁ = 1 + 2×19×31×1019×1789 (= 2³¹ - 249) to produce a random number generator of period (M₀ - 1)(M₁ - 1)/2 ≡ 2×3×7×19×31×631×1019×1789×81031 ≡ 2⁶¹ - 360777242114. They also posted a recommendation for 16-bit CPU's.
The seeds are updated as (S₀, S₁) ← ((A₀×S₀) mod M₀, (A₁×S₁) mod M₁), and a 32-bit value may be produced from this as S₀ - S₁ with the result adjusted upward by M₀ - 1 if S₀ ≤ S₁. If (S₀, S₁) is initialized to integer values in the interval [0,M₀)×[0,M₁), then it remains in that interval with each update. You'll have to modify the output value to suit your needs, since their version is specifically geared toward producing strictly positive results; and no 0's.
The preconditions are that (M₀ - 1)/2 and (M₁ - 1)/2 be relatively prime and that A₀² < M₀, A₁² < M₁; and the values (A₀, A₁) = (40014, 40692) were recommended, based on their analysis. Also listed were optimized routines that allow all the computations to be done with 16-bit or 32-bit arithmetic.
For 32-bits the updates were done as (S₀, S₁) ← (A₀×(S₀ - K₀×Q₀) - K₀×R₀, A₁×(S₁ - K₁×Q₁) - K₁×R₁) with any S₀ < 0 or S₁ < 0 results adjusted upward, respectively, to S₀ + M₀ or S₁ + M₁; where (K₀, K₁) = (S₀ div Q₀, S₁ div Q₁), (Q₀, Q₁) = (M₀ div A₀, M₁ div A₁) and (R₀, R₁) = (M₀ mod A₀, M₁ mod A₁).
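For concreteness, here is a sketch of the 32-bit combined generator with the constants quoted above; treat it as an illustration of the paper's recipe rather than a validated drop-in (the paper lists the tested reference routine):

#include <cstdint>

// L'Ecuyer (1988) combined MLCG, 32-bit variant, using Schrage's
// decomposition so all intermediate values stay within 32-bit signed range.
class CombinedLcg
{
public:
    explicit CombinedLcg(int32_t seed0 = 1, int32_t seed1 = 1)
        : s0(seed0), s1(seed1) {}

    int32_t next()
    {
        s0 = step(s0, 40014, 53668, 12211, 2147483563);     // A0, Q0, R0, M0
        s1 = step(s1, 40692, 52774, 3791, 2147483399);      // A1, Q1, R1, M1
        int32_t z = s0 - s1;
        if (z < 1) z += 2147483562;                         // adjust by M0 - 1
        return z;                                           // in [1, M0 - 1]
    }

private:
    static int32_t step(int32_t s, int32_t a, int32_t q, int32_t r, int32_t m)
    {
        int32_t k = s / q;
        s = a * (s - k * q) - k * r;    // (a * s) mod m without overflow
        if (s < 0) s += m;
        return s;
    }

    int32_t s0, s1;
};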

CMAC: Very long integer as seed for random() function

I've checked the documentation of stdlib, and it says that we can use an unsigned long int as the seed for srand(). The problem is: I need to use a number of up to 40 digits as the seed. This seed is retrieved from the association matrix used for a multivariate CMAC problem modulation.
How can I overcome this problem?
As an example, see the code below:
#include <stdlib.h>

int main(int argc, char ** argv)
{
    int inputVariable = getStateOfAdressedSpace();
    int generatedNumber;
    unsigned long long mySeed = getSeedFromMatrix( inputVariable );
    srandom( mySeed );
    generatedNumber = random( );
}
This is a very weak example, but that is because the whole code would be too big to demonstrate; just imagine that the mySeed variable holds a very long integer. That's where my problem lies. I would be very grateful if anyone could show me how to work this out, maybe even using a pseudorandom number generator (PRNG) method or something. Keep in mind that the generated number must be unique.
A simple way to achieve something that will be "indistinguishable from random", and that uses all digits of an arbitrary length seed, would be the following (untested - this is just to show the principle):
const char* mySeed = "123454321543212345678908765434234576897654267349587623459872039487102367529364520";
char bitOfString[6];
int ii;
long int randomNumber = 0;
for (ii = 0; ii < strlen(mySeed) - 5; ii += 5) {
    strncpy(bitOfString, mySeed + ii, 5);
    bitOfString[5] = '\0';
    srandom(atoi(bitOfString));
    randomNumber += random();
}
randomNumber = randomNumber % RAND_MAX;
randomNumber = randomNumber % RAND_MAX;
This generates random numbers based on "something that is small enough to be a seed" (I used the number 5 for string length but you could pick another number; depends on the size of int on your machine). You could make it "more random" by not picking just the first random number generated in each loop, but the Nth (so that swapping around blocks of digits would not produce the same result).
Bottom line is - you generate a different random sequence. It is a mathematical impossibility that each of 1040 seeds will give a different random number - this method should map a seed of "arbitrary size" to a uniformly distributed number in the range of the random number generator.
Note that I used long int for randomNumber although random() produces an int random. This allows the summation of multiple random numbers without fear of overflow - and the final modulo division ensures that the number you end up with will be (approximately) uniformly distributed (especially if you ended up making a large number of calls to random() ).
Looking forward to your thoughts on this.
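A modern alternative worth noting (my addition, not part of the answer above): C++11's std::seed_seq exists precisely to condense an arbitrary-length sequence of integers, such as the digits of a 40-digit seed, into engine state:

#include <random>
#include <string>

// Fold every digit of an arbitrarily long decimal seed into a
// std::seed_seq, then seed a standard engine with it.
std::mt19937 makeSeededEngine(const std::string& bigSeed)
{
    std::seed_seq seq(bigSeed.begin(), bigSeed.end());  // each char contributes
    return std::mt19937(seq);
}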

Extend rand() max range

I created a test application that generates 10k random numbers in a range from 0 to 250 000. Then I calculated MAX and min values and noticed that the MAX value is always around 32k...
Do you have any idea how to extend the possible range? I need a range with MAX value around 250 000!
This is according to the definition of rand(), see:
http://cplusplus.com/reference/clibrary/cstdlib/rand/
http://cplusplus.com/reference/clibrary/cstdlib/RAND_MAX/
If you need larger random numbers, you can use an external library (for example http://www.boost.org/doc/libs/1_49_0/doc/html/boost_random.html) or calculate large random numbers out of multiple small random numbers yourself.
But pay attention to the distribution you want to get:
If you just sum up the small random numbers, the result will not be equally distributed.
If you just scale one small random number by a constant factor, there will be gaps between the possible values.
Taking the product of random numbers doesn't work either.
A possible solution is the following:
1) Take two random numbers a,b
2) Calculate a*(RAND_MAX+1)+b
So you get equally distributed random values up to (RAND_MAX+1)^2-1
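A sketch of that recipe, with casts added so the multiplication cannot overflow; the rejection step for narrowing the range is my addition:

#include <cstdlib>

// Combine two rand() draws into one value in [0, (RAND_MAX+1)^2 - 1].
unsigned long long randWide()
{
    const unsigned long long base = (unsigned long long)RAND_MAX + 1;
    unsigned long long a = rand();
    unsigned long long b = rand();
    return a * base + b;
}

// Narrow to [0, n) without bias by rejecting the overhang.
unsigned long long randBelow(unsigned long long n)
{
    const unsigned long long base = (unsigned long long)RAND_MAX + 1;
    const unsigned long long span = base * base;
    const unsigned long long limit = span - span % n;
    unsigned long long r;
    do { r = randWide(); } while (r >= limit);
    return r % n;
}

randBelow(250001) then yields the 0 to 250 000 range the question asks about.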
Presumably, you also want an equal distribution over this extended range. About the only way you can effectively do this is to generate a sequence of smaller numbers and scale them as if you were working in a different base. For example, for 250000, you might generate 4 random numbers in the range [0,10) and one in the range [0,25), along the lines of:
int random250000()
{
    return randomInt(10) + 10 * randomInt(10)
         + 100 * randomInt(10) + 1000 * randomInt(10)
         + 10000 * randomInt(25);
}
For this to work, your random number generator must be good; many implementations of rand() aren't (or at least weren't; I've not verified the situation recently). You'll also want to eliminate the bias you get when you map RAND_MAX + 1 different values into 10 or 25 different values. Unless RAND_MAX + 1 is an exact multiple of 10 and 25 (e.g. an exact multiple of 50), you'll need something like:
int randomInt( int upperLimit )
{
    int const limit = (RAND_MAX + 1) - (RAND_MAX + 1) % upperLimit;
    int result = rand();
    while ( result >= limit ) {
        result = rand();
    }
    return result % upperLimit;
}
(Be careful when doing this: there are some machines where RAND_MAX + 1 will overflow; if portability is an issue, you'll need to take additional precautions.)
All of this, of course, supposes a good quality generator, which is far from a given.
You can just build your number bitwise by generating smaller random numbers.
For instance, if you need a 32-bit random number:
uint32_t x = 0;
for (int i = 0; i < 4; ++i) { // 4 == 32/8
    uint8_t tmp = rand8();    // rand8() is a placeholder 8-bit generator
    x = (x << 8) | tmp;       // shift previous bytes up, append the new one
}
If you don't need good randomness in your numbers, you can just use rand() & 0xff as the 8-bit random number generator (rand8() above). Otherwise, something better will be necessary.
Are you using short ints? If so, you will see 32,767 as your max number because anything larger will overflow the short int.
Scale your numbers up by N / RAND_MAX, where N is your desired maximum. If the numbers fit, you can do something like this:
unsigned long long int r = (unsigned long long)rand() * N / RAND_MAX;
The cast keeps the intermediate product from overflowing int; with N = 250000 you should be fine. RAND_MAX is 32K on many popular platforms.
More generally, to get a random number uniformly in the interval [A, B], use:
A + rand() * (B - A) / RAND_MAX;
Of course you should probably use the proper C++-style <random> library; search this site for many similar questions explaining how to use it.
Edit: In the hope of preventing an escalation of comments, here's yet another copy/paste of the proper C++ solution for a truly uniform distribution on an interval [A, B]:
#include <random>

typedef std::mt19937 rng_type;
typedef unsigned long int int_type;  // anything you like

const int_type A = 0, B = 250000;    // the interval you want
std::uniform_int_distribution<int_type> udist(A, B);
rng_type rng;

int main()
{
    // seed rng first (get_seed() stands for your seed source):
    rng_type::result_type const seedval = get_seed();
    rng.seed(seedval);

    int_type random_number = udist(rng);
    // use random_number
}
Don't forget to seed the RNG! If you store the seed value, you can replay the same random sequence later on.