I've checked the documentation of stdlib, and it says that we can use an unsigned long int as the seed for srand(). The problem is: I need to use a number of up to 40 digits as the seed. This seed is retrieved from the association matrix used for a multivariate CMAC problem modulation.
How can I overcome this problem?
As an example, see the code below:
#include <stdlib.h>

int main(int argc, char **argv)
{
    int inputVariable = getStateOfAdressedSpace();
    long generatedNumber;  /* random() returns a long */
    unsigned long long mySeed = getSeedFromMatrix(inputVariable);

    srandom(mySeed);
    generatedNumber = random();

    return 0;
}
This is a very stripped-down example, because the whole code would be too big to post; just imagine that the mySeed variable holds a very long integer. That's where my problem lies. I would be very grateful if anyone could show me how to work this out, maybe even using a different pseudorandom number generator (PRNG) method or something. Keep in mind that the generated number must be unique.
A simple way to achieve something that will be "indistinguishable from random", and that uses all digits of an arbitrary length seed, would be the following (untested - this is just to show the principle):
#include <stdlib.h>   /* srandom(), random(), atoi() */
#include <string.h>   /* strlen(), strncpy() */

const char *mySeed = "123454321543212345678908765434234576897654267349587623459872039487102367529364520";
char bitOfString[6];
size_t ii;
long int randomNumber = 0;

for (ii = 0; ii + 5 <= strlen(mySeed); ii += 5) {
    strncpy(bitOfString, mySeed + ii, 5);   /* take the next 5-digit block */
    bitOfString[5] = '\0';
    srandom(atoi(bitOfString));             /* re-seed from this block */
    randomNumber += random();               /* accumulate one draw per block */
}
randomNumber = randomNumber % RAND_MAX;
This generates random numbers based on "something that is small enough to be a seed" (I used the number 5 for string length but you could pick another number; depends on the size of int on your machine). You could make it "more random" by not picking just the first random number generated in each loop, but the Nth (so that swapping around blocks of digits would not produce the same result).
Bottom line is: you generate a different random sequence for each seed. It is a mathematical impossibility that each of 10^40 seeds will give a different random number - this method simply maps a seed of arbitrary size to an approximately uniformly distributed number in the range of the random number generator.
Note that I used long int for randomNumber even though each call to random() returns a value no larger than RAND_MAX. This allows the summation of multiple random numbers without fear of overflow - and the final modulo division ensures that the number you end up with will be (approximately) uniformly distributed (especially if you ended up making a large number of calls to random()).
Looking forward to your thoughts on this.
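Another option, if C++11 is available: std::seed_seq will mix every digit of an arbitrary-length seed into the engine state, so nothing is thrown away. A rough sketch of the idea (the function name and the one-digit-per-word packing are my own choices, not from the question):

```cpp
#include <cstdint>
#include <random>
#include <string>
#include <vector>

// Seed a standard engine from a decimal string of arbitrary length by
// feeding every digit into std::seed_seq, so all 40 digits influence
// the generator state. Returns the first value of the resulting sequence.
std::uint32_t firstFromBigSeed(const std::string& digits) {
    std::vector<std::uint32_t> words;
    for (char c : digits)
        words.push_back(static_cast<std::uint32_t>(c - '0'));
    std::seed_seq seq(words.begin(), words.end());
    std::mt19937 gen(seq);
    return gen();  // deterministic for a given digit string
}
```

The same seed string always reproduces the same sequence, and changing any digit changes the state, which is exactly what the block-splitting loop above is trying to achieve.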
I have this bit of code when trying to create a random number between n and n^2, and somehow it sometimes produces a negative number. I've checked rand() and time(NULL) and both of them produce a positive number, so how can it be possible for it to produce a negative number as the result?
I'm supposed to generate many numbers to store in an array, but somehow only the first few numbers are negative.
int randomNum = (((rand()*time(NULL))%(n*n-n))+n);
Integer overflow. time(NULL) currently returns a value around 1.49 billion. Multiplying that by rand() will overflow on almost any value of rand(), and will result in a negative value about half of the time.
Don't multiply by time(NULL). It serves no purpose here. Just use rand().
What you want to do is seed the random number generator once with the time
srand(time(NULL));
After this, call the function without involving the time
int randomNum = ((rand()%(n*n-n))+n);
Seeding the random number generator repeatedly is a bad idea, especially with the time. In C++11 there are better ways to do this with the <random> library.
I would like to add to the answer given by Raziman T V. If you write in C++11, then I strongly recommend using the new "generators" and "distributions". Generators replace rand() and produce pseudo-random numbers that are evenly distributed (like rand()). Distributions transform the sequence of generated numbers according to a given distribution (uniform, normal, Weibull, etc.). More information can be found here. Here is an example of how you can generate pseudo-random numbers in the range [10, 20]:
#include <chrono>
#include <random>
....
const size_t Min = 10, Max = 20;

// Similar to srand(time(NULL))
size_t seed = std::chrono::system_clock::now().time_since_epoch().count();
std::mt19937_64 generator(seed);

// Uniform distribution from 10 to 20
std::uniform_int_distribution<size_t> distribution(Min, Max);

// Similar to rand()
size_t random = distribution(generator);
The main advantage is that you can use an arbitrary number of generators (initialized with different seeds) and distributions to generate different pseudo-random sequences. It would be hard to generate two different sequences simultaneously with rand().
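As a small sketch of that advantage (the helper name is mine): two independently seeded engines draw from the same distribution without disturbing each other, which a single global rand() cannot do.

```cpp
#include <random>
#include <vector>

// Draw `count` values in [lo, hi] from the given engine. Each engine
// carries its own state, so separate engines produce separate,
// reproducible sequences.
std::vector<int> drawFrom(std::mt19937_64& gen, int count, int lo, int hi) {
    std::uniform_int_distribution<int> dist(lo, hi);
    std::vector<int> out;
    for (int i = 0; i < count; ++i)
        out.push_back(dist(gen));
    return out;
}
```

Two engines constructed with the same seed replay the same sequence; engines with different seeds run independently side by side.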
I am new to C++ programming, and new to Stack Overflow.
I have a simple situation and a problem that is taking more time than reasonable to solve, so I thought I'd ask it here.
I want to take one digit from a rand() at a time. I have managed to strip off the digit, but I can't convert it to an int, which I need because it's used as an array index.
Can anyone help? I'd be appreciative.
Also, if anyone has a good solution for getting evenly-distributed-in-base-10 random numbers, I'd like that too... of course, with a RAND_MAX that isn't all 9s, we don't get that automatically.
KTM
Well, you can use the modulus operator to get the last digit of the number rand() returns.
int digit = rand()%10;
As for your first question, if you have a digit as a character, you can subtract the character '0' from it to get the digit as an int.
int digit = char_digit - '0';
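Putting the two together, here is a tiny sketch of the asker's use case (the helper name is mine): the converted digit can index an array directly.

```cpp
// Tally how often each base-10 digit appears. The converted digit is
// used directly as an index into counts[0..9].
void tallyDigit(int counts[10], char digitChar) {
    int digit = digitChar - '0';   // '7' -> 7
    ++counts[digit];
}
```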
If one wants a pedantically even distribution of 0 to 9, one could:
Assume rand() itself is evenly distributed. (Not always a good assumption.)
Call rand() again as needed.
int ran10(void) {
    static const int r10max = RAND_MAX - (RAND_MAX % 10);
    int r;
    /* toss values in the biased tail and draw again */
    while ((r = rand()) >= r10max);
    return r % 10;
}
Example:
If RAND_MAX were 32767, r10max would have the value 32760. Any rand() value in the range 32760 to 32767 would get tossed and a new random value would be fetched.
While not as fast as modulo-arithmetic implementations like rand() % 10, this will return evenly distributed integers and avoid the periodicity in the least significant bits that occurs in some pseudo-random number generators (see http://www.gnu.org/software/gsl/manual/html_node/Other-random-number-generators.html).
int rand_integer(int exclusive_upperbound)
{
    /* divide by RAND_MAX + 1.0 so the result is always strictly less
       than exclusive_upperbound, even when rand() == RAND_MAX */
    return (int)((double)exclusive_upperbound * rand() / (RAND_MAX + 1.0));
}
I want to use a deterministic random bit generator for my application. I'm using OpenSSL's random number generator APIs. Currently I'm using the RAND_pseudo_bytes() API to generate pseudo-random numbers, and I'm supplying the seed through RAND_add(). But if I call the random generator function two times, I get two different random values for the two calls. If the seed is the same, it should give me the same values; where have I gone wrong?
The code I have written is
unsigned char cSeed_64[8];      /* buffers and loop index, declared for completeness */
unsigned char cRandBytes_64[8];
int i;
int nSize = 8;    /* 64 bit random number is required */
int nEntropy = 5; /* 40 bit entropy required */

/* generate random nonce for making seed */
RAND_bytes(cSeed_64, nSize);
RAND_add(cSeed_64, nSize, nEntropy); /* random nonce is cSeed_64, seeding 64 bits
                                      * with 40 bits of entropy */

/* call the random byte function 10 times with the same seed */
int j = 10;
while (j--)
{
    RAND_pseudo_bytes(cRandBytes_64, 8);
    printf("generated 64 bit random number\n");
    for (i = 0; i < nSize; i++)
        printf("%x ", cRandBytes_64[i]);
    printf("\n");
}
But you're NOT calling RAND_pseudo_bytes() with the same seed; you're making successive calls to it, which should produce different outputs. That's the whole point of a "generator" function: it produces a different value on each call based on internal state.
When you "seed" a random number generator, you fix its internal state, after which it will generate random numbers by evolving that state. For each seed, it will generate a unique and reproducible sequence of numbers from repeated calls, but it certainly won't generate the same number on each call; that would be pointless.
The line:
RAND_bytes(cSeed_64, nSize);
creates a random seed value based on system entropy. You really should check for errors here, as it may fail if not enough entropy is available.
The line
RAND_add(cSeed_64, nSize, nEntropy);
DOES NOT seed the PRNG, it adds the seed to the existing PRNG state. If you want to set the PRNG state to a fixed value, you have to use RAND_seed(). If you call RAND_seed() with a given value, RAND_pseudo_bytes() will thereafter generate a given sequence of random numbers. If you call RAND_seed() again with the same value, it will then repeat the same sequence.
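The same fix-the-seed-and-replay behaviour can be sketched with the C library's srand()/rand(), which works on the same principle as RAND_seed() (the helper name is mine):

```cpp
#include <cstdlib>
#include <vector>

// Re-seeding with the same value replays the same sequence: the seed
// fixes the generator's internal state, and the sequence evolves
// deterministically from there.
std::vector<int> sequenceFromSeed(unsigned seed, int count) {
    std::srand(seed);
    std::vector<int> out;
    for (int i = 0; i < count; ++i)
        out.push_back(std::rand());
    return out;
}
```

Calling this twice with the same seed yields identical sequences, which is exactly the behaviour the questioner expected from RAND_seed() (but not from RAND_add()).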
You would want to re-seed it with the same fixed value before every call; otherwise it is expected to return a different random number every time you call it.
In your code
RAND_bytes(cSeed_64, nSize);
RAND_add(cSeed_64, nSize, nEntropy);
You are generating a random number using RAND_bytes and adding that random number into the seed. Since you are adding randomness to something that is already random, it will generate a different number every time. So the seed is not the same every time, because it is itself random.
To keep the seed the same, try RAND_seed() with a fixed seed to get the expected behaviour.
I created a test application that generates 10k random numbers in a range from 0 to 250 000. Then I calculated MAX and min values and noticed that the MAX value is always around 32k...
Do you have any idea how to extend the possible range? I need a range with MAX value around 250 000!
This is according to the definition of rand(), see:
http://cplusplus.com/reference/clibrary/cstdlib/rand/
http://cplusplus.com/reference/clibrary/cstdlib/RAND_MAX/
If you need larger random numbers, you can use an external library (for example http://www.boost.org/doc/libs/1_49_0/doc/html/boost_random.html) or calculate large random numbers out of multiple small random numbers by yourself.
But pay attention to the distribution you want to get. If you just sum up the small random numbers, the result will not be equally distributed.
If you just scale one small random number by a constant factor, there will be gaps between the possible values.
Taking the product of random numbers also doesn't work.
A possible solution is the following:
1) Take two random numbers a,b
2) Calculate a*(RAND_MAX+1)+b
So you get equally distributed random values up to (RAND_MAX+1)^2-1
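That recipe might look like this in C-style code (the function name is mine; long long keeps RAND_MAX + 1 and the product from overflowing):

```cpp
#include <cstdlib>

// Combine two draws into one value in [0, (RAND_MAX+1)^2 - 1].
// Treating the first draw as the "high digit" in base RAND_MAX+1
// keeps the combined value equally distributed.
long long bigRand(void) {
    long long base = (long long)RAND_MAX + 1;
    long long a = rand();
    long long b = rand();
    return a * base + b;
}
```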
Presumably, you also want an equal distribution over this extended range. About the only way you can effectively do this is to generate a sequence of smaller numbers and scale them as if you were working in a different base. For example, for 250000, you might generate 4 random numbers in the range [0,10) and one in the range [0,25), along the lines of:
int random250000()
{
    return randomInt(10) + 10 * randomInt(10)
         + 100 * randomInt(10) + 1000 * randomInt(10)
         + 10000 * randomInt(25);
}
For this to work, your random number generator must be good; many implementations of rand() aren't (or at least weren't—I've not verified the situation recently). You'll also want to eliminate the bias you get when you map RAND_MAX + 1 different values into 10 or 25 different values. Unless RAND_MAX + 1 is an exact multiple of 10 and 25 (e.g. is an exact multiple of 50), you'll need something like:
int randomInt( int upperLimit )
{
    int const limit = (RAND_MAX + 1) - (RAND_MAX + 1) % upperLimit;
    int result = rand();
    while ( result >= limit ) {
        result = rand();
    }
    return result % upperLimit;
}
(Attention when doing this: there are some machines where RAND_MAX + 1 will overflow; if portability is an issue, you'll need to take additional precautions.)
All of this, of course, supposes a good quality generator, which is far from a given.
You can just manipulate your number bitwise by generating smaller random numbers.
For instance, if you need a 32-bit random number:
#include <stdint.h>

int32_t x = 0;
for (int i = 0; i < 4; ++i) {    // 4 == 32/8
    uint8_t tmp = random_byte(); // any 8-bit random number generator
    x = (x << 8) | tmp;          // shift in 8 fresh bits each pass
}
If you don't need good randomness in your numbers, you can just use rand() & 0xff for the 8-bit random number generator. Otherwise, something better will be necessary.
Are you using short ints? If so, you will see 32,767 as your max number because anything larger will overflow the short int.
Scale your numbers up by N / RAND_MAX, where N is your desired maximum. To keep the intermediate product from overflowing, do the multiplication in a wider type:
unsigned long long int r = (unsigned long long)rand() * N / RAND_MAX;
With N = 250000 this comfortably fits in 64 bits. RAND_MAX is 32K on many popular platforms.
More generally, to get a random number uniformly in the interval [A, B], use:
A + rand() * (B - A) / RAND_MAX;
Of course you should probably use the proper C++-style <random> library; search this site for many similar questions explaining how to use it.
Edit: In the hope of preventing an escalation of comments, here's yet another copy/paste of the Proper C++ solution for truly uniform distribution on an interval [A, B]:
#include <random>

typedef std::mt19937 rng_type;
typedef unsigned long int int_type; // anything you like

int_type const A = 0, B = 250000;   // pick your interval bounds
std::uniform_int_distribution<int_type> udist(A, B);
rng_type rng;

int main()
{
    // seed rng first (get_seed() is a placeholder for your seed source):
    rng_type::result_type const seedval = get_seed();
    rng.seed(seedval);

    int_type random_number = udist(rng);
    // use random_number
}
Don't forget to seed the RNG! If you store the seed value, you can replay the same random sequence later on.
What would be the fastest way to generate a large number of (pseudo-)random bits? Each bit must be independent and be zero or one with equal probability. I could obviously do some variation on
randbit = rand() % 2;
but I feel like there should be a faster way, generating several random bits from each call to the random number generator. Ideally I'd like to get an int or a char where each bit is random and independent, but other solutions are also possible.
The application is not cryptographic in nature so strong randomness isn't a major factor, whereas speed and getting the correct distribution is important.
convert a random number into binary
Why not get just one number (of appropriate size to get enough bits you need) and then convert it to binary. You'll actually get bits from a random number which means they are random as well.
Zeros and ones also each have a probability of 50%: across all numbers between 0 and some 2^n limit, each bit position holds a zero exactly as often as a one, so the probability of a zero equals the probability of a one.
regarding speed
This would probably be very fast, since you make just one call to the generator for many bits; it now purely depends on the speed of your binary conversion.
Take a look at Boost.Random, especially boost::uniform_int<>.
As you say, just generate random integers.
Then you have 32 random bits per integer, with ones and zeroes all equally probable.
Get the bits in a loop:
for (int i = 0; i < 32; i++)
{
    randomBit = (randomNum >> i) & 1;
    ...
    // Do your thing
}
Repeat this as many times as you need to get the correct number of bits.
Here's a very fast one I coded in Java based on George Marsaglia's XORShift algorithm: gets you 64 bits at a time!
/**
 * State for random number generation
 */
private static volatile long state = xorShift64(System.nanoTime() | 0xCAFEBABE);

/**
 * Gets a long random value
 * @return Random long value based on static state
 */
public static final long nextLong() {
    long a = state;
    state = xorShift64(a);
    return a;
}

/**
 * XORShift algorithm - credit to George Marsaglia!
 * @param a Initial state
 * @return new state
 */
public static final long xorShift64(long a) {
    a ^= (a << 21);
    a ^= (a >>> 35);
    a ^= (a << 4);
    return a;
}
SMP-safe (i.e. the fastest way possible these days) and good bits
Note the use of the [ThreadStatic] attribute; this object automatically handles new threads, with no locking. That's the only way you're going to ensure high-performance, SMP lock-free random numbers.
http://blogs.msdn.com/pfxteam/archive/2009/02/19/9434171.aspx
If I remember correctly, the least significant bits normally have a "less random" distribution in most pseudo-random number generators, so using modulo and/or each bit of the generated number would be bad if you are worried about the distribution.
(Maybe you should at least google what Knuth says...)
If that holds (and it's hard to tell without knowing exactly which algorithm you are using), just use the highest bit of each generated number.
http://en.wikipedia.org/wiki/Pseudo-random
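That advice might be sketched like this (the function name is mine; it relies only on the 15 bits that the C standard guarantees rand() provides, i.e. RAND_MAX >= 32767):

```cpp
#include <cstdlib>

// Take the top of the 15 guaranteed bits instead of the least
// significant bit, sidestepping any low-bit periodicity.
int highBit(void) {
    return (rand() >> 14) & 1;
}
```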
#include <iostream>
#include <bitset>
#include <random>

int main()
{
    const size_t nrOfBits = 256;
    std::bitset<nrOfBits> randomBits;
    std::default_random_engine generator;
    std::uniform_real_distribution<float> distribution(0.0, 1.0);

    for (size_t i = 0; i < nrOfBits; i++)
    {
        float randNum = distribution(generator);
        randomBits.set(i, randNum >= 0.5); // each draw yields one bit
    }
    std::cout << "randomBits =\n" << randomBits << std::endl;
    return 0;
}
This took 4.5886e-05s in my test with 256 bits.
You can generate a random number and keep right-shifting it, testing the least significant bit each time, to get the random bits instead of doing a mod operation.
How large do you need the number of generated bits to be? If it is not larger than a few million, and keeping in mind that you are not using the generator for cryptography, then I think the fastest possible way would be to precompute a large set of integers with the correct distribution, convert it to a source file like this:
unsigned int numbers[] =
{
0xABCDEF34, ...
};
and then compile the array into your program and go through it one number at a time.
That way you get 32 bits with every call (on a 32-bit processor), the generation time is the shortest possible because all the numbers are generated in advance, and the distribution is controlled by you. The downside is, of course, that these numbers are not random at all, which depending on what you are using the PRNG for may or may not matter.
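A sketch of the lookup side (the tiny table here stands in for the large precomputed array described above; all names are mine):

```cpp
#include <cstdint>
#include <cstddef>

// Cycle through a precomputed table of 32-bit values, wrapping around
// at the end. Generation cost per call is just an array read.
static const std::uint32_t numbers[] = { 0xABCDEF34u, 0x12345678u, 0xDEADBEEFu };
static std::size_t cursor = 0;

std::uint32_t nextPrecomputed(void) {
    std::uint32_t v = numbers[cursor];
    cursor = (cursor + 1) % (sizeof numbers / sizeof numbers[0]);
    return v;
}
```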
if you only need a bit at a time, try

bool coinToss()
{
    return rand() & 1;
}

It would technically be a faster way to generate bits, because it replaces the %2 with a &1, which are equivalent.
Just read some memory: take an n-bit section of raw memory. It will be pretty random.
Alternatively, generate a large random int x and just use the bit values:

for (int i = bitcount - 1; i >= 0; i--)
    bit = (x >> i) & 1;   /* each pass yields one bit of x */