C++ Get random number from 0 to max long long integer

I have the following function:
typedef unsigned long long int UINT64;
UINT64 getRandom(const UINT64 &begin = 0, const UINT64 &end = 100) {
return begin >= end ? 0 : begin + (UINT64) ((end - begin)*rand()/(double)RAND_MAX);
};
Whenever I call
getRandom(0, ULLONG_MAX);
or
getRandom(0, LLONG_MAX);
I always get the same value 562967133814800. How can I fix this problem?

What is rand()?
According to this the rand() function returns a value in the range [0,RAND_MAX].
What is RAND_MAX?
According to this, RAND_MAX is "an integral constant expression whose value is the maximum value returned by the rand function. This value is library-dependent, but is guaranteed to be at least 32767 on any standard library implementation."
Precision Is An Issue
You take rand()/(double)RAND_MAX, but you have perhaps only 32767 discrete values to work with. Thus, although you have big numbers, you don't really have more numbers. That could be an issue.
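To get a feel for how coarse that is, here is a minimal sketch (mine, not part of the original question) that prints how far apart the reachable results would be even if the arithmetic were otherwise correct:
#include <cstdlib>
#include <iostream>
int main() {
    // rand()/(double)RAND_MAX can take on only RAND_MAX + 1 distinct values,
    // so scaling it to a 64-bit range leaves huge gaps between the results
    // that can actually be produced.
    const unsigned long long range = 18446744073709551615ULL; // ULLONG_MAX - 0
    std::cout << "distinct rand() outputs: " << RAND_MAX + 1ULL << '\n';
    std::cout << "spacing between reachable results: "
              << static_cast<double>(range) / RAND_MAX << '\n'; // ~5.6e14 when RAND_MAX is 32767
    return 0;
}
With RAND_MAX at 32767 the reachable results are roughly 5.6e14 apart; even with a 31-bit RAND_MAX they are still billions apart.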
Seeding May Be An Issue
Also, you don't talk about how you are calling the function. Do you run the program once with LLONG_MAX and another time with ULLONG_MAX? In that case, the behaviour you are seeing is because you are implicitly using the same random seed each time. Put another way, each time you run the program it will generate the exact same sequence of random numbers.
How can I seed?
You can use the srand() function like so:
#include <stdlib.h> /* srand, rand */
#include <time.h> /* time */
int main (){
srand (time(NULL));
//The rest of your program goes here
}
Now you will get a new sequence of random numbers each time you run your program.
Overflow Is An Issue
Consider this part ((end - begin)*rand()/(double)RAND_MAX).
What is (end-begin)? It is LLONG_MAX or ULLONG_MAX. These are, by definition, the largest possible values those data types can hold. Therefore, it would be bad to multiply them by anything. Yet you do! You multiply them by rand(), which is non-zero. This will cause an overflow. But we can fix that...
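A tiny sketch of the wrap-around (my illustration; the 12345 stands in for whatever rand() returns):
#include <climits>
#include <iostream>
int main() {
    unsigned long long range = ULLONG_MAX; // end - begin in the question
    unsigned long long r = 12345;          // stand-in for a non-zero rand() result
    // Unsigned overflow wraps modulo 2^64, so the product is 2^64 - 12345,
    // not the mathematically intended (end - begin) * rand().
    std::cout << range * r << '\n';
    // For a signed type such as long long (the LLONG_MAX case) the same
    // overflow would be undefined behaviour rather than a wrap-around.
    return 0;
}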
Order of Operations Is An Issue
You then divide them by RAND_MAX. I think you've got your order of operations wrong here. You really meant to say:
((end - begin) * (rand()/(double)RAND_MAX) )
Note the new parentheses! (rand()/(double)RAND_MAX)
Now you are multiplying an integer by a fraction, so you are guaranteed not to overflow. But that introduces a new problem...
Promotion Is An Issue
But there's an even deeper problem. You divide an int by a double. When you do that the int is promoted to a double. A double is a floating-point number which basically means that it sacrifices precision in order to have a big range. That's probably what's biting you. As you get to bigger and bigger numbers both your ullong and your llong end up getting cast to the same value. This could be especially true if you overflowed your data type first (see above).
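A quick sketch of that precision loss (mine): two 64-bit values that differ by 1000 become the very same double.
#include <climits>
#include <iostream>
int main() {
    // double has a 53-bit mantissa; near 2^64 the representable doubles are
    // 2048 apart, so on the usual round-to-nearest platforms both of these
    // integers convert to exactly the same floating-point value.
    double a = static_cast<double>(ULLONG_MAX);
    double b = static_cast<double>(ULLONG_MAX - 1000ULL);
    std::cout << std::boolalpha << (a == b) << '\n'; // prints true
    return 0;
}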
Uh oh
So, basically, everything about the PRNG you have presented is wrong.
Perhaps this is why John von Neumann said
Anyone who attempts to generate random numbers by deterministic means
is, of course, living in a state of sin.
And, sometimes, we pay for those sins.
How can I absolve myself?
C++11 provides some nice functionality. You can use it as follows
#include <iostream>
#include <random>
#include <limits>
int main(){
std::random_device rd; //Get a random seed from the OS entropy device, or whatever
std::mt19937_64 eng(rd()); //Use the 64-bit Mersenne Twister 19937 generator
//and seed it with entropy.
//Define the distribution, by default it goes from 0 to MAX(unsigned long long)
//or what have you.
std::uniform_int_distribution<unsigned long long> distr;
//Generate random numbers
for(int n=0; n<40; n++)
std::cout << distr(eng) << ' ';
std::cout << std::endl;
}
(Note that appropriately seeding the generator is difficult. This question addresses that.)
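If you want to keep the original call signature, here is a sketch of a drop-in replacement built on the same engine (the name getRandom and its defaults come from the question; the static engine and the seeding via std::random_device are my choices):
#include <random>
typedef unsigned long long int UINT64;
UINT64 getRandom(UINT64 begin = 0, UINT64 end = 100) {
    // Engine is created and seeded once; see the seeding caveat above.
    static std::mt19937_64 eng{std::random_device{}()};
    // uniform_int_distribution bounds are inclusive, so end may be ULLONG_MAX.
    std::uniform_int_distribution<UINT64> distr(begin, end);
    return distr(eng);
}
Called as getRandom(0, ULLONG_MAX), this covers the whole 64-bit range without overflow or precision loss.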

typedef unsigned long long int UINT64;
UINT64 getRandom(UINT64 const& min = 0, UINT64 const& max = 0)
{
return (((UINT64)(unsigned int)rand() << 32) + (UINT64)(unsigned int)rand()) % (max - min) + min;
}
Using the shift operation assumes unsigned long long is exactly 64 bits wide; the standard only guarantees at least 64 bits, and on older, pre-C++11 compilers the type may not be available at all. You can use unsigned __int64 instead, but keep in mind it's compiler-dependent and therefore available only in certain compilers (a fixed-width alternative using <cstdint> is sketched after the code below).
unsigned __int64 getRandom(unsigned __int64 const& min, unsigned __int64 const& max)
{
return (((unsigned __int64)(unsigned int)rand() << 32) + (unsigned __int64)(unsigned int)rand()) % (max - min) + min;
}
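If the worry is the exact width, a sketch (my suggestion, not part of the answer above) using the fixed-width type from <cstdint> avoids the compiler-specific __int64; it deliberately keeps the same structure, so it also inherits rand()'s weaknesses and the modulo bias of the original:
#include <cstdint>
#include <cstdlib>
// uint64_t is exactly 64 bits wherever it exists, so the shift by 32 is well
// defined. Like the answer above, the result is in [min, max) and assumes
// max > min.
uint64_t getRandom64(uint64_t min, uint64_t max)
{
    uint64_t hi = static_cast<uint64_t>(static_cast<unsigned int>(rand()));
    uint64_t lo = static_cast<uint64_t>(static_cast<unsigned int>(rand()));
    return ((hi << 32) | lo) % (max - min) + min;
}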

Use your own PRNG that meets your requirements, rather than rand(), which does not seem to meet them and was never guaranteed to.

Given that ULLONG_MAX and LLONG_MAX are both way bigger than the RAND_MAX value, you will certainly get "less precision than you want".
Other than that, there's a 50% chance that your value is below LLONG_MAX, as it is halfway through the range of 64-bit values.
I would suggest using the Mersenne Twister from C++11, which has a 64-bit variant:
http://www.cplusplus.com/reference/random/mt19937_64/
That should give you a value that fits in a 64-bit number.
If you "always get the same value", then it's because you haven't seeded the random number generator, using for example srand(time(0)) - you should normally only seed once, because this sets the "sequence". If the seed is very similar, e.g. you run the same program twice in short succession, you will still get the same sequence, because "time" only ticks once a second, and even then, doesn't change that much. There are various other ways to seed a random number, but for most purposes, time(0) is reasonably good.

You are overflowing the computation. In the expression
((end - begin)*rand()/(double)RAND_MAX)
you are telling the compiler to multiply (ULLONG_MAX - 0) * rand() and then divide by RAND_MAX; you should divide by RAND_MAX first and then multiply by rand().
// http://stackoverflow.com/questions/22883840/c-get-random-number-from-0-to-max-long-long-integer
#include <iostream>
#include <stdlib.h> /* srand, rand */
#include <limits.h>
using std::cout;
using std::endl;
typedef unsigned long long int UINT64;
UINT64 getRandom(const UINT64 &begin = 0, const UINT64 &end = 100) {
//return begin >= end ? 0 : begin + (UINT64) ((end - begin)*rand()/(double)RAND_MAX);
return begin >= end ? 0 : begin + (UINT64) rand()*((end - begin)/RAND_MAX);
};
int main( int argc, char *argv[] )
{
cout << getRandom(0, ULLONG_MAX) << endl;
cout << getRandom(0, ULLONG_MAX) << endl;
cout << getRandom(0, ULLONG_MAX) << endl;
return 0;
}
See it live in Coliru

union bigRand {
uint64_t ll;
uint32_t ii[2];
};
uint64_t rand64() {
bigRand b;
b.ii[0] = rand();
b.ii[1] = rand();
return b.ll;
}
I am not sure how portable it is. But you could easily modify it depending on how wide RAND_MAX is on the particular platform. As an upside, it is brutally simple. I mean the compiler will likely optimize it to be quite efficient, without extra arithmetic whatsoever. Just the cost of calling rand twice.
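For a version that does not assume rand() fills 32 bits, here is a sketch (mine) that takes only the 15 bits every implementation guarantees and concatenates enough calls to cover 64 bits:
#include <cstdint>
#include <cstdlib>
uint64_t rand64_portable() {
    // RAND_MAX is at least 32767, i.e. at least 15 random bits per call.
    // Five calls give 75 bits, more than enough to fill a 64-bit value.
    uint64_t r = 0;
    for (int i = 0; i < 5; ++i)
        r = (r << 15) | (static_cast<uint64_t>(rand()) & 0x7FFF);
    return r;
}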

The most reasonable solution would be to use C++11's <random>, mt19937_64 would do.
Alternatively you might try:
return ((double)rand() / ((double)RAND_MAX + 1.0)) * (end - begin + 1) + begin;
to produce numbers in a more reasonable way. However, note that just like your first attempt, this will still not produce uniformly distributed numbers (although it might be good enough).

The term (end - begin)*rand() seems to produce an overflow. You can alleviate that problem by using (end - begin) * (rand()/(double)RAND_MAX). Using the second form, I get the following results:
15498727792227194880
7275080918072332288
14445630964995612672
14728618955737210880
with the following calls:
std::cout << getRandom(0, ULLONG_MAX) << std::endl;
std::cout << getRandom(0, ULLONG_MAX) << std::endl;
std::cout << getRandom(0, ULLONG_MAX) << std::endl;
std::cout << getRandom(0, ULLONG_MAX) << std::endl;

Related

C++ random generator with provided (at least estimated) entropy

Using the C++ standard random generators I can more or less efficiently create sequences with pre-defined distributions using language-provided tools. What about Shannon entropy? Is there some way to define the output Shannon entropy for the produced sequence?
I tried a small experiment: I generated a long enough sequence with linear distribution and implemented a Shannon entropy calculator. The resulting value ranges from 0.0 (absolute order) to 8.0 (absolute chaos):
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <functional>
#include <iomanip>
#include <iostream>
#include <random>
#include <vector>
template <typename T>
double shannon_entropy(T first, T last)
{
size_t frequencies_count{};
double entropy = 0.0;
std::for_each(first, last, [&entropy, &frequencies_count] (auto item) mutable {
if (0. == item) return;
double fp_item = static_cast<double>(item);
entropy += fp_item * log2(fp_item);
++frequencies_count;
});
if (frequencies_count > 256) {
return -1.0;
}
return -entropy;
}
std::vector<uint8_t> generate_random_sequence(size_t sequence_size)
{
std::vector<uint8_t> random_sequence;
std::random_device rnd_device;
std::cout << "Random device entropy: " << rnd_device.entropy() << '\n';
std::mt19937 mersenne_engine(rnd_device());
std::uniform_int_distribution<unsigned> dist(0, 255);
auto gen = std::bind(dist, mersenne_engine);
random_sequence.resize(sequence_size);
std::generate(random_sequence.begin(), random_sequence.end(), gen);
return std::move(random_sequence);
}
std::vector<double> read_random_probabilities(size_t sequence_size)
{
std::vector<size_t> bytes_distribution(256);
std::vector<double> bytes_frequencies(256);
std::vector<uint8_t> random_sequence = generate_random_sequence(sequence_size);
size_t rnd_seq_size = random_sequence.size();
std::for_each(random_sequence.begin(), random_sequence.end(), [&](uint8_t b) mutable {
++bytes_distribution[b];
});
std::transform(bytes_distribution.begin(), bytes_distribution.end(), bytes_frequencies.begin(),
[&rnd_seq_size](size_t item) {
return static_cast<double>(item) / rnd_seq_size;
});
return std::move(bytes_frequencies);
}
int main(int argc, char* argv[]) {
size_t sequence_size = 1024 * 1024;
std::vector<double> bytes_frequencies = read_random_probabilities(sequence_size);
double entropy = shannon_entropy(bytes_frequencies.begin(), bytes_frequencies.end());
std::cout << "Sequence entropy: " << std::setprecision(16) << entropy << std::endl;
std::cout << "Min possible file size assuming max theoretical compression efficiency:\n";
std::cout << (entropy * sequence_size) << " in bits\n";
std::cout << ((entropy * sequence_size) / 8) << " in bytes\n";
return EXIT_SUCCESS;
}
First, it appears that std::random_device::entropy() is hardcoded to return 32 in MSVC 2015 (which probably corresponds to 8.0 in the Shannon sense used here). As you can verify, that's not far from the truth: in this example the result is always close to 7.9998..., i.e. absolute chaos.
The working example is on IDEONE (by the way, their compiler hardcodes the entropy to 0).
One more thing, the main question: is it possible to create a generator that generates a linearly-distributed sequence with a defined entropy, let's say 6.0 to 7.0? Could it be implemented at all, and if so, are there existing implementations?
First, you're viewing Shannon's theory entirely wrong. His argument (as you're using it) is simply: given the probability of x (Pr(x)), the bits required to store x is -log2 Pr(x). It has nothing to do with the probability of x itself. In this regard, you're viewing Pr(x) wrong. -log2 Pr(x), given a Pr(x) that should be uniformly 1/256, results in a required bit width of 8 bits to store. However, that's not how statistics work. Go back to thinking about Pr(x), because the bits required mean nothing.
Your question is about statistics. Given an infinite sample, if and only if the distribution matches the ideal histogram, then as the sample size approaches infinity the observed frequency of each sample will approach the expected frequency. I want to make it clear that you're not looking for "-log2 Pr(x) is absolute chaos when it's 8 given Pr(x) = 1/256." A uniform distribution is not chaos. In fact, it's... well, uniform. Its properties are well known, simple, and easy to predict. You're looking for, "Does the finite sample set S meet the criteria of an independently-distributed uniform distribution (commonly known as "independently and identically distributed data" or "i.i.d.") with Pr(x) = 1/256?" This has nothing to do with Shannon's theory and goes much further back in time to the basic probability theories involving flips of a coin (in this case binomial, given assumed uniformity).
Assuming for a moment that any C++11 <random> generator meets the criteria for "statistically indistinguishable from i.i.d." (which, by the way, those generators don't), you can use them to emulate i.i.d. results. If you would like a range of data that is storable within 6..7 bits (it wasn't clear, did you mean 6 or 7 bits, because hypothetically, everything in between is doable as well), simply scale the range. For example...
#include <iostream>
#include <random>
int main() {
unsigned long low = 1 << 6; // 2^6 == 64
unsigned long limit = 1 << 7; // 2^7 == 128
// Therefore, the range is 6-bits to 7-bits (or 64 + [128 - 64])
unsigned long range = limit - low;
std::random_device rd;
std::mt19937 rng(rd()); //<< Doesn't actually meet criteria for i.d.d.
std::uniform_int_distribution<unsigned long> dist(low, limit - 1); //<< Given an engine that actually produces i.i.d. data, this would produce exactly what you're looking for
for (int i = 0; i != 10; ++i) {
unsigned long y = dist(rng);
//y is known to be in set {2^6..2^7-1} and assumed to be uniform (coin flip) over {low..low + (range-1)}.
std::cout << y << std::endl;
}
return 0;
}
The problem with this is that, while the <random> distribution classes are accurate, the random number generators (presumably aside from std::random_device, but that's system-specific) are not designed to stand up to statistical tests of fitness as i.i.d. generators.
If you would like one that does, implement a CSPRNG (my go-to is Bob Jenkins' ISAAC) that has an interface meeting the requirements of the <random> class of generators (probably just covering the basic interface of std::random_device is good enough).
To test for a statistically sound "no" or "we can't say no" on whether a set follows a specific model (and therefore whether Pr(x) is accurate and Shannon's entropy function is an accurate prediction), that's another matter entirely. Like I said, no generator in <random> meets these criteria (except maybe std::random_device). My advice is to do research into things like the central limit theorem, goodness-of-fit, birthday-spacing, et cetera.
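As a concrete starting point for the goodness-of-fit part, here is a minimal chi-squared sketch over byte frequencies (my illustration; the ~293 cut-off is an approximate 95th percentile of chi-squared with 255 degrees of freedom, and passing it is only a rough screen, not proof of i.i.d. behaviour):
#include <cstddef>
#include <iostream>
#include <random>
#include <vector>
int main() {
    const std::size_t n = 1 << 20;                 // sample size
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<unsigned> dist(0, 255);
    std::vector<std::size_t> counts(256, 0);
    for (std::size_t i = 0; i < n; ++i)
        ++counts[dist(rng)];                       // histogram of byte values
    const double expected = static_cast<double>(n) / 256.0;
    double chi2 = 0.0;
    for (std::size_t c : counts) {
        const double d = static_cast<double>(c) - expected;
        chi2 += d * d / expected;
    }
    std::cout << "chi-squared = " << chi2
              << (chi2 < 293.0 ? " (cannot reject uniformity)"
                               : " (reject uniformity at ~5%)") << '\n';
    return 0;
}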
To drive my point a bit more, under the assumptions of your question...
struct uniform_rng {
unsigned long x;
constexpr uniform_rng(unsigned long seed = 0) noexcept:
x{ seed }
{ };
unsigned long operator ()() noexcept {
unsigned long y = this->x++;
return y;
}
};
... would absolutely meet your criteria of being uniform (or, as you say, "absolute chaos"). Pr(x) is most certainly 1/N, and the bits required to store any number of the set is -log2(1/N), where N is 2 to the power of the bit width of unsigned long. However, it's not independently distributed. Because we know its properties, you can "store" its entire sequence by simply storing seed. Surprise, all PRNGs work this way. Therefore the bits required to store the entire sequence of a PRNG is -log2(1/2^bitsForSeed). As your sample grows, the ratio of the bits required to store it (just the seed) to the bits you are able to generate (the sample itself), a.k.a. the compression ratio, approaches a limit of 0.
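To illustrate the "the whole sequence is just the seed" point with the standard engines, a small sketch (mine):
#include <cassert>
#include <random>
int main() {
    // Two engines built from the same seed produce identical streams, so
    // storing the 32-bit seed is enough to "store" the entire sequence.
    std::mt19937 a(42), b(42);
    for (int i = 0; i < 1000; ++i)
        assert(a() == b());
    return 0;
}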
I cannot comment yet, but I would like to start the discussion:
From communication/information theory, it would seem that you would require probabilistic shaping methods to achieve what you want. You should be able to feed the output of any distribution function through a shaping coder, which then should re-distribute the input to a specific target Shannon entropy.
Probabilistic constellation shaping has been successfully applied in fiber-optic communication: Wikipedia with some other links
You are not clear what you want to achieve, and there are several ways of lowering the Shannon entropy for your sequence:
- Correlation between the bits, e.g. putting random_sequence through a simple filter.
- Individual bits that are not fully random.
As an example below you could make the bytes less random:
std::vector<uint8_t> generate_random_sequence(size_t sequence_size,
                                              uint8_t cutoff = 10)
{
std::vector<uint8_t> random_sequence;
std::vector<uint8_t> other_sequence;
std::random_device rnd_device;
std::cout << "Random device entropy: " << rnd_device.entropy() << '\n';
std::mt19937 mersenne_engine(rnd_device());
std::uniform_int_distribution<unsigned> dist(0, 255);
auto gen = std::bind(dist, mersenne_engine);
random_sequence.resize(sequence_size);
std::generate(random_sequence.begin(), random_sequence.end(), gen);
other_sequence.resize(sequence_size);
std::generate(other_sequence.begin(), other_sequence.end(), gen);
for(size_t j=0;j<sequence_size;++j) {
if (other_sequence[j]<=cutoff) random_sequence[j]=0; // Or j or ...
}
return std::move(random_sequence);
}
I don't think this was the answer you were looking for - so you likely need to clarify the question more.

First random number is always smaller than rest

I happened to notice that in C++ the first random number returned by the std rand() method is most of the time significantly smaller than the second one. With the Qt implementation the first one is nearly always several orders of magnitude smaller.
qsrand(QTime::currentTime().msec());
qDebug() << "qt1: " << qrand();
qDebug() << "qt2: " << qrand();
srand((unsigned int) time(0));
std::cout << "std1: " << rand() << std::endl;
std::cout << "std2: " << rand() << std::endl;
output:
qt1: 7109361
qt2: 1375429742
std1: 871649082
std2: 1820164987
Is this intended, due to an error in seeding, or a bug?
Also, while the qrand() output varies strongly, the first rand() output seems to change linearly with time. I just wonder why.
I'm not sure that could be classified as a bug, but it has an explanation. Let's examine the situation:
Look at rand's implementation. You'll see it's just a calculation using the last generated value.
You're seeding using QTime::currentTime().msec(), which is by nature bounded by the small range of values 0..999, but qsrand accepts a uint variable, in the range 0..4294967295.
By combining those two factors, you have a pattern.
Just out of curiosity: try seeding with QTime::currentTime().msec() + 100000000
Now the first value will probably be bigger than the second most of the time.
I wouldn't worry too much. This "pattern" seems to happen only on the first two generated values. After that, everything seems to go back to normal.
EDIT:
To make things more clear, try running the code below. It'll compare the first two generated values to see which one is smaller, using all possible millisecond values (range: 0..999) as the seed:
int totalCalls, leftIsSmaller = 0;
for (totalCalls = 0; totalCalls < 1000; totalCalls++)
{
qsrand(totalCalls);
if (qrand() < qrand())
leftIsSmaller++;
}
qDebug() << (100.0 * leftIsSmaller) / totalCalls;
It will print 94.8, which means 94.8% of the time the first value will be smaller than the second.
Conclusion: when using the current millisecond to seed, you'll see that pattern for the first two values. I did some tests here and the pattern seems to disappear after the second value is generated. My advice: find a "good" value to call qsrand with (it should obviously be called only once, at the beginning of your program). A good value should span the whole range of the uint class. Take a look at this other question for some ideas:
Recommended way to initialize srand?
Also, take a look at this:
PCG: A Family of Better Random Number Generators
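As one possible sketch of that advice in Qt code (my example; it simply feeds qsrand a value that spans the whole uint range instead of the 0..999 millisecond value):
#include <QtGlobal> // qsrand / qrand (the Qt 4-style API used in the question)
#include <random>
int main() {
    // std::random_device is one convenient source of a full-range 32-bit seed.
    qsrand(static_cast<uint>(std::random_device{}()));
    // ... qrand() is then used as before elsewhere in the program ...
    return 0;
}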
Neither current Qt nor the C standard run-time has a quality randomizer, and your test shows it. Qt seems to use the C run-time for this (this is easy to check). If C++11 is available in your project, use a much better and more reliable method:
#include <random>
#include <chrono>
auto seed = std::chrono::system_clock::now().time_since_epoch().count();
std::default_random_engine generator(seed);
std::uniform_int_distribution<uint> distribution;
uint randomUint = distribution(generator);
There is a good video that covers the topic. As noted by commenter user2357112, we can apply different random engines and then different distributions, but for my specific use the above worked really well.
Keeping in mind that making judgments about a statistical phenomenon based on a small number of samples might be misleading, I decided to run a small experiment. I ran the following code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main()
{
int i = 0;
int j = 0;
while (i < RAND_MAX)
{
srand(time(NULL));
int r1 = rand();
int r2 = rand();
if (r1 < r2)
++j;
++i;
if (i%10000 == 0) {
printf("%g\n", (float)j / (float)i);
}
}
}
which basically printed the percentage of times the first generated number was smaller than the second. Plotting that ratio shows that it approaches 0.5 after fewer than 50 actual new seeds.
As suggested in the comment, we could modify the code to use consecutive seeds every iteration and speed up the convergence:
int main()
{
int i = 0;
int j = 0;
int t = time(NULL);
while (i < RAND_MAX)
{
srand(t);
int r1 = rand();
int r2 = rand();
if (r1 < r2)
++j;
++i;
if (i%10000 == 0) {
printf("%g\n", (float)j / (float)i);
}
++t;
}
}
This gives a ratio which stays pretty close to 0.5 as well.
While rand is certainly not the best pseudo random number generator, the claim that it often generates a smaller number during the first run does not seem to be warranted.

C/C++ algorithm to produce same pseudo-random number sequences from same seed on different platforms? [duplicate]

This question already has answers here:
Consistent pseudo-random numbers across platforms
The title says it all: I am looking for something preferably stand-alone, because I don't want to add more libraries.
Performance should be good since I need it in a tight high-performance loop. I guess that will come at a cost of the degree of randomness.
Any particular pseudo-random number generation algorithm will behave like this. The problem with rand is that it's not specified how it is implemented. Different implementations will behave in different ways and even have varying qualities.
However, C++11 provides the new <random> standard library header that contains lots of great random number generation facilities. The random number engines defined within are well-defined and, given the same seed, will always produce the same set of numbers.
For example, a popular high-quality random number engine is std::mt19937, which is the Mersenne Twister algorithm configured in a specific way. No matter which machine you're on, the following will always produce the same set of real numbers between 0 and 1:
std::mt19937 engine(0); // Fixed seed of 0
std::uniform_real_distribution<> dist;
for (int i = 0; i < 100; i++) {
std::cout << dist(engine) << std::endl;
}
Here's a Mersenne Twister
Here is another PRNG implementation in C.
You may find a collection of PRNGs here.
Here's the simple classic PRNG:
#include <iostream>
using namespace std;
unsigned int PRNG()
{
// our initial starting seed is 5323
static unsigned int nSeed = 5323;
// Take the current seed and generate a new value from it
// Due to our use of large constants and overflow, it would be
// very hard for someone to predict what the next number is
// going to be from the previous one.
nSeed = (8253729 * nSeed + 2396403);
// Take the seed and return a value between 0 and 32766
return nSeed % 32767;
}
int main()
{
// Print 100 random numbers
for (int nCount=0; nCount < 100; ++nCount)
{
cout << PRNG() << "\t";
// If we've printed 5 numbers, start a new column
if ((nCount+1) % 5 == 0)
cout << endl;
}
}

Generate random long number

I know that to generate random long number, I do following steps in Java:
Random r = new Random();
return r.nextLong();
What will be the equivalent of this code in C++? Like this?
return (long)rand();
<cstdlib> provides int rand(). You might want to check out the man page. If long is bigger than int on your system, you can call rand() twice and put the first value in the high word.
#include <cstdlib>
long lrand()
{
if (sizeof(int) < sizeof(long))
return (static_cast<long>(rand()) << (sizeof(int) * 8)) |
rand();
return rand();
}
(it's very unlikely that long is neither the same size as nor double the size of int, so this is practical if not theoretically perfect)
Check your docs for rand() though. It's not a great generator, but good enough for most things. You'll want to call srand() to initialise the random-number generation system. Others have commented that Windows doesn't return sizeof(int) randomised bits, so you may need to tweak the above.
Using boost random library can save you of quite nasty surprises with (pseudo)random numbers
First, you have to know that in the current standard C++ there is no random library. In fact there is one, but it's available in a separate namespace called TR1, because it's the result of a Technical Report done in 2003. It will be available in the standard library for the next standard (coming next year if all goes well).
So if you have a recent compiler (VS2008 or recent versions of GCC) you have access to the std::tr1::random library; if you have a compiler implementing that part of the next standard, then you have it in std::random.
If you don't have access to that library, there is an implementation available in the boost libraries : http://www.boost.org/doc/libs/1_44_0/doc/html/boost_random.html
Now in all cases, the way to get a random number is the same as it's all the same library (from the boost doc):
boost::mt19937 rng; // produces randomness out of thin air
// see pseudo-random number generators
boost::uniform_int<> six(1,6); // distribution that maps to 1..6
// see random number distributions
boost::variate_generator<boost::mt19937&, boost::uniform_int<> >
die(rng, six); // glues randomness with mapping
int x = die(); // simulate rolling a die
C++11 provides the <random> library. To generate a long, you would use code like:
#include <random>
#include <climits>
...
std::default_random_engine generator;
std::uniform_int_distribution<long> distribution(LONG_MIN,LONG_MAX);
long result = distribution(generator);
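One caveat: constructed like that, default_random_engine uses a fixed default seed, so every run of the program produces the same value. A seeded variant (sketch):
#include <random>
#include <climits>
int main() {
    std::random_device rd;                       // non-deterministic seed source
    std::default_random_engine generator(rd());  // seed the engine once
    std::uniform_int_distribution<long> distribution(LONG_MIN, LONG_MAX);
    long result = distribution(generator);
    (void)result; // use the value as needed
    return 0;
}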
Portable hack:
long r = 0;
for (int i = 0; i < sizeof(long)/sizeof(int); i++)
{
r = r << (sizeof(int) * CHAR_BIT); // CHAR_BIT comes from <climits>
r |= rand();
}
return r;
Why do you need a random long anyway?
This is the method I use. It aims to return numbers in the range [0, 2^64-1], though note that rand() typically provides fewer than 32 random bits per call, so not every bit of the result will actually be random.
unsigned long long unsignedLongLongRand()
{
unsigned long long rand1 = abs(rand());
unsigned long long rand2 = abs(rand());
rand1 = rand1 << (sizeof(int)*8);
unsigned long long randULL = (rand1 | rand2);
return randULL;
}
This function works like rand() but returns an unsigned long:
unsigned long _LongRand ()
{
unsigned char MyBytes[4];
unsigned long MyNumber = 0;
unsigned char * ptr = (unsigned char *) &MyNumber;
MyBytes[0] = rand() % 256; // random byte for offset 0 of MyNumber
MyBytes[1] = rand() % 256; // random byte for offset 1
MyBytes[2] = rand() % 256; // random byte for offset 2
MyBytes[3] = rand() % 256; // random byte for offset 3
memcpy (ptr+0, &MyBytes[0], 1);
memcpy (ptr+1, &MyBytes[1], 1);
memcpy (ptr+2, &MyBytes[2], 1);
memcpy (ptr+3, &MyBytes[3], 1);
return(MyNumber);
}

boost::random generate the same number every time

main.cpp
#include "stdafx.h"
#include "random_generator.h"
int
main ( int argc, char *argv[] )
{
cout.setf(ios::fixed);
base_generator_type base_generator;
int max = pow(10, 2);
distribution_type dist(1, max);
boost::variate_generator<base_generator_type&,
distribution_type > uni(base_generator, dist);
for ( int i=0; i<10; i++ ) {
//cout << random_number(2) << endl;
cout << uni() << endl;
}
return EXIT_SUCCESS;
} /* ---------- end of function main ---------- */
random_generator.h
#include "stdafx.h"
#include <boost/random.hpp>
#include <boost/generator_iterator.hpp>
typedef boost::mt19937 base_generator_type;
typedef boost::lagged_fibonacci19937 fibo_generator_type;
typedef boost::uniform_int<> distribution_type;
typedef boost::variate_generator<fibo_generator_type&,
distribution_type> gen_type;
int
random_number ( int bits )
{
fibo_generator_type fibo_generator;
int max = pow(10, bits);
distribution_type dist(1, max);
gen_type uni(fibo_generator, dist);
return uni();
} /* ----- end of function random_number ----- */
stdafx.h
#include <iostream>
#include <cstdlib>
#include <cmath>
using namespace std;
Every time I run it, it generates the same number sequence, like 77, 33, 5, 22, ...
How do I use boost::random correctly?
That is it. But maybe there is still a little problem, like the following:
This seems sound:
get_seed(); for (;;) { cout << generate_random() << endl; } // is OK
But this still generates the same random number:
int get_random() { get_seed(); return generate_random(); } for (;;) { cout << get_random() << endl; } // still outputs the same random number
If you want the sequence of random numbers to change every time you run your program, you need to change the random seed, for instance by initializing it with the current time.
You will find an example there; excerpt:
/*
* Change seed to something else.
*
* Caveat: std::time(0) is not a very good truly-random seed. When
* called in rapid succession, it could return the same values, and
* thus the same random number sequences could ensue. If not the same
* values are returned, the values differ only slightly in the
* lowest bits. A linear congruential generator with a small factor
* wrapped in a uniform_smallint (see experiment) will produce the same
* values for the first few iterations. This is because uniform_smallint
* takes only the highest bits of the generator, and the generator itself
* needs a few iterations to spread the initial entropy from the lowest bits
* to the whole state.
*/
generator.seed(static_cast<unsigned int>(std::time(0)));
You need to seed your random number generator so it doesn't start from the same place each time.
Depending on what you are doing with the numbers, you may need to put some thought into how you choose your seed value. If you need high quality randomness (if you are generating cryptographic keys and want them fairly secure), you will need a good seed value. If this were Posix, I would suggest /dev/random - but you look to be using Windows so I'm not sure what a good seed source would be.
But if you don't mind a predictable seed (for games, simulations, etc.), a quick and dirty seed is the current timestamp returned by time().
If you are running on a 'nix system, you could always try something like this;
#include <fstream>
int getSeed()
{
std::ifstream rand("/dev/urandom");
char tmp[sizeof(int)];
rand.read(tmp,sizeof(int));
rand.close();
int* number = reinterpret_cast<int*>(tmp);
return (*number);
}
I'm guessing seeding the random number generator this way is faster than simply reading the /dev/urandom (or /dev/random) for all your random number needs.
You can use the boost::random::random_device class either as-is, or to seed your other generator.
You can get a one-off random number out of it with a simple:
boost::random::random_device()()
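And a minimal sketch of the second option, using random_device once to seed a faster generator (assuming a reasonably recent Boost; note that boost::random::random_device lives in the compiled boost_random library, so it may need to be linked):
#include <boost/random/random_device.hpp>
#include <boost/random/mersenne_twister.hpp>
#include <boost/random/uniform_int_distribution.hpp>
#include <iostream>
int main() {
    boost::random::random_device rd;                   // one-off entropy source
    boost::random::mt19937 rng(rd());                  // fast PRNG seeded from it
    boost::random::uniform_int_distribution<int> six(1, 6);
    std::cout << six(rng) << std::endl;                // simulate rolling a die
    return 0;
}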