Why is this random number generator generating the same numbers? - c++

The first one works, but the second one always returns the same value. Why would this happen and how am I supposed to fix this?
#include <iostream>
#include <random>

int main() {
    std::random_device rd;
    std::mt19937 gen(rd());
    std::uniform_real_distribution<> dis(0, 1);
    for (int i = 0; i < 10; i++) {
        std::cout << dis(gen) << std::endl;
    }
    return 0;
}
The one that doesn't work:
#include <iostream>
#include <random>

double generateRandomNumber() {
    std::random_device rd;
    std::mt19937 gen(rd());
    std::uniform_real_distribution<> dis(0, 1);
    return dis(gen);
}

int main() {
    for (int i = 0; i < 10; i++) {
        std::cout << generateRandomNumber() << std::endl;
    }
    return 0;
}

What platform are you working on? std::random_device is allowed to be a pseudo-RNG if hardware or OS functionality to generate random numbers doesn't exist. It might initialize using the current time, in which case the intervals at which you're calling it might be too close together for the 'current time' to take on another value.
Nevertheless, as mentioned in the comments, it is not meant to be used this way. A simple fix would be to declare rd and gen as static, as sketched below. A proper fix would be to move the initialization of the RNG out of the function that requires the random numbers, so it can also be used by other functions that require random numbers.
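For illustration, a minimal sketch of the "static" fix (the device and engine are constructed and seeded only on the first call and reused on every later call):

#include <random>

double generateRandomNumber() {
    // static: rd and gen are created and seeded only once,
    // on the first call, and reused for every subsequent call.
    static std::random_device rd;
    static std::mt19937 gen(rd());
    std::uniform_real_distribution<> dis(0, 1);
    return dis(gen);
}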

The first one uses the same generator for all the numbers, the second creates a new generator for each number.

Let's compare the differences between your two cases and see why this is happening.
Case 1:
int main() {
    std::random_device rd;
    std::mt19937 gen(rd());
    std::uniform_real_distribution<> dis(0, 1);
    for (int i = 0; i < 10; i++) {
        std::cout << dis(gen) << std::endl;
    }
    return 0;
}
In your first case the program executes the main function, and the first thing that happens is that you create an instance of a std::random_device, a std::mt19937 and a std::uniform_real_distribution<> on the stack, all belonging to main()'s scope. Your Mersenne Twister gen is initialized once with the result from your random device rd. You have also initialized your distribution dis to cover the range of values from 0 to 1. These exist only once per run of your application.
Next comes a for loop that runs from index 0 to 9; on each iteration you display the resulting value on cout by calling the distribution dis's operator(), passing it your already-seeded generator gen. On each pass through this loop dis(gen) produces a different value, because gen was seeded only once.
Case 2:
double generateRandomNumber() {
    std::random_device rd;
    std::mt19937 gen(rd());
    std::uniform_real_distribution<> dis(0, 1);
    return dis(gen);
}

int main() {
    for (int i = 0; i < 10; i++) {
        std::cout << generateRandomNumber() << std::endl;
    }
    return 0;
}
In this version of the code let's see what's similar and what's different. Again the program enters the main() function, but this time the first thing it encounters is the for loop from 0 to 9, similar to the one in the main() above. Inside the loop there is a call to cout to display the result of a user-defined function named generateRandomNumber(). This function is called a total of 10 times, and on each iteration of the for loop it gets its own stack frame that is wound and unwound, or created and destroyed.
Now let's jump execution into this user defined function named generateRandomNumber().
The code looks almost exactly the same as it did before when it was in main() directly, but these variables live on generateRandomNumber()'s stack and have the lifetime of its scope instead. These variables are created and destroyed each time this function goes in and out of scope. The other difference is that this function also returns dis(gen).
Note: I'm not 100% sure whether this returns a copy or whether the compiler will end up doing some kind of optimization, but returning by value usually results in a copy.
Finally, just before generateRandomNumber() goes completely out of scope, std::uniform_real_distribution<>'s operator() is called; it runs in its own stack frame and scope, hands its value back to generateRandomNumber() ever so briefly, and then control returns to main().
-Visualizing The Differences-
As you can see, these two programs are quite different. If you want more visual proof of how different they are, you can paste each program into any available online compiler that shows the generated assembly and compare the two versions.
Another way to visualize the difference is to step through each program line by line with a debugger, keeping an eye on the stack frames as they are wound and unwound and on all the values as they are initialized, returned and destroyed.
-Assessment and Reasoning-
The reason the first one works as expected is that your random device, your generator and your distribution all have the lifetime of main(), your generator is seeded only once with your random device, and you have only one distribution that you use on every pass of the for loop.
In your second version, main() doesn't know anything about any of that; all it knows is that it is running a for loop and sending the data returned from a user function to cout. Each time through the loop that function is called and, as I said, its stack frame is created and destroyed, so all of its variables are created and destroyed as well. In this instance you are creating and destroying 10 instances each of rd, gen(rd()) and dis(0, 1).
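As a sketch of how the second version could be restructured so that the engine is created only once (one possible fix, not the only one): create the engine in main() and pass it to the function by reference.

#include <iostream>
#include <random>

// Sketch: the engine is created and seeded once in main(),
// then borrowed by the function that needs random numbers.
double generateRandomNumber(std::mt19937& gen) {
    std::uniform_real_distribution<> dis(0, 1);
    return dis(gen);
}

int main() {
    std::random_device rd;
    std::mt19937 gen(rd());
    for (int i = 0; i < 10; i++) {
        std::cout << generateRandomNumber(gen) << std::endl;
    }
    return 0;
}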
-Conclusion-
There is more to this than what I have described above. The other part, which pertains to the behavior of your random number generators, is what user Kane mentioned in his comment on your question:
From en.cppreference.com/w/cpp/numeric/random/random_device:
"std::random_device may be implemented in terms of an
implementation-defined pseudo-random number engine [...].
In this case each std::random_device object may generate
the same number sequence."
Each time you create and destroy those locals you are seeding the generator over and over again with a new random_device. However, if your particular machine or OS doesn't support random_device, it can end up using either some arbitrary value as its seed or the system clock to generate a seed value.
So let's say it does end up using the system clock: main()'s for loop runs so fast that all of the work done by the 10 calls to generateRandomNumber() finishes within a few milliseconds. The time delta between calls is so small that the same seed value is generated on each pass, and therefore the same values come out of the distributions.

Note that std::mt19937 gen(rd()) is very problematic. See this question, which says:
rd() returns a single unsigned int. This has at least 16 bits and probably 32. That's not enough to seed [this generator's huge state].
Using std::mt19937 gen(rd());gen() (seeding with 32 bits and looking at the first output) doesn't give a good output distribution. 7 and 13 can never be the first output. Two seeds produce 0. Twelve seeds produce 1226181350. (Link)
std::random_device can be, and sometimes is, implemented as a simple PRNG with a fixed seed. It might therefore produce the same sequence on every run. (Link)
Furthermore, random_device's approach to generating "nondeterministic" random numbers is "implementation-defined", and random_device allows the implementation to "employ a random number engine" if it can't generate "nondeterministic" random numbers due to "implementation limitations" ([rand.device]). (For example, under the C++ standard, an implementation might implement random_device using timestamps from the system clock, or using fast-moving cycle counters, since both are nondeterministic.)
An application should not blindly call random_device's generator (rd()) without also, at a minimum, calling the entropy() method, which gives an estimate of the implementation's entropy in bits.
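As one commonly suggested mitigation for the single-word seeding issue above (a sketch only; it does not help if random_device itself is deterministic on your platform), you can feed several random_device words through a std::seed_seq. The helper name make_seeded_engine is made up for illustration.

#include <random>

// Sketch: seed std::mt19937 with more than a single 32-bit word by
// drawing several values from std::random_device into a seed_seq.
std::mt19937 make_seeded_engine() {
    std::random_device rd;
    std::seed_seq seq{rd(), rd(), rd(), rd(), rd(), rd(), rd(), rd()};
    return std::mt19937(seq);
}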

Related

Re-initializing random distribution

Is it reasonable to expect that a distribution from <random> that is re-initialized before each number is requested behaves the same way as one that was initialized once? In other words, does this:
std::default_random_engine generator;
int p[10] = {};
for (int i = 0; i < nrolls; ++i) {
    std::uniform_int_distribution<int> distribution(0, 9);
    int number = distribution(generator);
    ++p[number];
}
have the same distribution as this:
std::uniform_int_distribution<int> distribution(0, 9);
std::default_random_engine generator;
int p[10] = {};
for (int i = 0; i < nrolls; ++i) {
    int number = distribution(generator);
    ++p[number];
}
I've checked that for uniform and normal distribution it empirically holds true. Can I expect it from every distribution in <random>?
I essentially do what your first implementation does. Construct one every time I need a distribution.
That said, yes, the different distributions are guaranteed by the standard to behave in specific ways.
STL (Stephan T. Lavavej) recommends that, since distributions are relatively cheap, you shouldn't worry about constructing one every time you need one or need a new range. He also says that if you don't want to construct one every time, you can use the param member function to change the distribution's range.
Microsoft Channel9 link if the above direct Youtube link dies (seek to 30 minutes in): https://channel9.msdn.com/Events/GoingNative/2013/rand-Considered-Harmful
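For illustration, a minimal sketch of the param approach described above (the ranges used here are arbitrary examples):

#include <iostream>
#include <random>

int main() {
    std::mt19937 gen(std::random_device{}());
    std::uniform_int_distribution<int> dist(0, 9);
    using P = std::uniform_int_distribution<int>::param_type;

    // Draw from a different range for this call only...
    std::cout << dist(gen, P(0, 99)) << '\n';

    // ...or change the distribution's stored range for later calls.
    dist.param(P(1, 6));
    std::cout << dist(gen) << '\n';
}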
EDIT
I was re-watching a CppCon talk from a few years ago that discusses this exact question. The result? Constructing distributions as local variables even inside loops is 4.5 times faster.

Why do these 4 different random number generator functions produce the same series of numbers?

A neuroevolution program I am in the process of debugging does not produce random values every time it is called. In the program, a vector of Network objects is initialized with the following statement:
vector<Network> population(POPULATION_SIZE, Network(sizes, inputCount));
The reason I believe the program is not converging to an optimal solution is that the first 100 members of the population are always the same. When a network is initialized in this manner, the connection weights and neuron biases are each initialized with the following member function:
double Network::randDouble(double low, double high) {
    /*default_random_engine generator(std::chrono::system_clock::now().time_since_epoch().count());
    uniform_real_distribution<double> distribution(low, high);
    return distribution(generator);*/

    /*srand(time(NULL));
    double temp;
    if (low > high) {
        temp = low;
        low = high;
        high = temp;
    }
    temp = (rand() / (static_cast<double>(RAND_MAX) + 1.0)) * (high - low) + low;
    return temp;*/

    /*mt19937 rgn(std::chrono::system_clock::now().time_since_epoch().count());
    uniform_real_distribution<double> gen(low, high);
    return gen(rgn);*/

    default_random_engine rd;
    uniform_real_distribution<double> gen(low, high);
    auto val = std::bind(gen, rd);
    return val();
}
The 3 commented-out sections are previous attempts at the required functionality. In each case they produce the same numbers for each network (differing from one weight to another, but not from one network to another). The methods attempted are based on answers from here:
c++-default_random_engine creates all the time same series of numbers
http://en.cppreference.com/w/cpp/numeric/random/uniform_real_distribution
In addition, the second method produces the same results with or without the seed. I must be missing something.
Another, albeit potentially irrelevant, concern is that functions using this may be parallelized using OpenMP, and that when called in parallel the results could be the same.
Your problem is that you are initializing (seeding) the random generator every time you generate a number. In the simple srand() case, you should call srand() just once during program start, then call rand() every time you need one number. In the more complex cases, you should construct the generator just once (in the entire program run), and use it as many times as you need.
The C++11 standard random-number engines (and most other random generators) are in fact generators of pseudo-random sequences of numbers. Pseudo-random means that the sequences are repeatable. Every time a given pseudo-random generator is seeded with the same seed, it will always produce the same sequence. (But this is not exactly what is happening in your code. Read on.)
In C++11, the seeding happens at the time the random-number engine is instantiated. This means that you need to instantiate the engine once per pseudorandom sequence. The way your code seeds the engine in every call to the Network::randDouble() method, you cannot expect to get the pseudorandom sequence that the engine is designed to produce. Instead, you will get a series of the first numbers from sequences seeded by the call to the system_clock::... or the time() methods.
The call to system_clock::now().time_since_epoch().count() returns the time as an integer number of periods. The period refers to the specialization of the class template std::chrono::duration returned by time_since_epoch(). The period may be seconds by default, which could explain why all your Network objects were getting the same seed in every call to Network::randDouble().
If you want a different sequence for each of the Networks, you should instead instantiate the pseudorandom engine in the constructor of the Network class and seed it with a different seed for each Network object. This means that the engine, or a pointer to the engine object, should be a member of the class.
Example:
class Network {
    ...
protected:
    mt19937 rd;
    ...
};

Network::Network(int rndseed) :
    rd(rndseed)
{
    ...
}

double Network::randDouble(double low, double high) {
    uniform_real_distribution<double> gen(low, high);
    auto val = gen(rd);
    return val;
}
To make sure that each instance of the pseudorandom engine gets a different seed, you may use something as simple as consecutive integer numbers, as sketched below. If you want to use the system clock, it is far more tricky to guarantee that the seeds are different every time, even if you use std::chrono::high_resolution_clock. CPUs are simply very fast, and you need to take special care to make sure that the count of the clock you are using has actually changed between two calls.
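A sketch of the consecutive-seed idea, assuming the simplified Network(int rndseed) constructor above and the POPULATION_SIZE constant from the question (the real constructor would also take sizes and inputCount):

#include <vector>

// Inside the setup code that builds the population:
std::vector<Network> population;
population.reserve(POPULATION_SIZE);
for (int i = 0; i < POPULATION_SIZE; ++i) {
    population.emplace_back(i);   // rndseed = i, distinct for every Network
}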

Generating Gaussian Noise

I created a function that is supposed to generate a set of normally distributed random numbers with mean 0 and standard deviation 1. However, it seems that each time I run it the output is the same. I am not sure what is wrong.
Here is the code:
MatrixXd generateGaussianNoise(int n, int m) {
    MatrixXd M(n, m);
    normal_distribution<double> nd(0.0, 1.0);
    random_device rd;
    mt19937 gen(rd());
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < m; j++) {
            M(i, j) = nd(gen);
        }
    }
    return M;
}
The output when n = 4 and m = 1 is
0.414089
0.225568
0.413464
2.53933
I used the Eigen library for this; I am just wondering why each run produces the same numbers.
From:
http://en.cppreference.com/w/cpp/numeric/random/random_device
std::random_device may be implemented in terms of an implementation-defined pseudo-random number engine if a non-deterministic source (e.g. a hardware device) is not available to the implementation. In this case each std::random_device object may generate the same number sequence.
Thus, I think you should look into what library stack you are actually using here, and what's known about random_device in your specific implementation.
I realize that this then might in fact be a duplicate of "Why do I get the same sequence for every run with std::random_device with mingw gcc4.8.1?".
Furthermore, it at least used to be that initializing a new mt19937 instance was fairly expensive. Thus, you have performance reasons, in addition to quality of randomness, not to re-initialize both your random_device and your mt19937 instance on every function call. I would go for some kind of singleton here, unless you have very clear constraints (building in a library, unclear concurrency) that would make that an unsuitable choice.
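For illustration, a minimal sketch of such a singleton-style engine as a function-local static, constructed and seeded once per program run (the name globalEngine is made up; concurrent use of the engine is not addressed here):

#include <random>

// Sketch: one engine per program run, seeded once on first use.
std::mt19937& globalEngine() {
    static std::random_device rd;
    static std::mt19937 gen(rd());
    return gen;
}

// Inside generateGaussianNoise() one would then write, for example:
//   std::normal_distribution<double> nd(0.0, 1.0);
//   M(i, j) = nd(globalEngine());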

Best place to initialise random generator

In my program I use a random number generator quite a lot. I believe the general rule is that you should define things as close as possible to the place where they're "called", but does this also hold true for random number generators?
For example, in my code I have the choice between:
std::random_device rd;
std::mt19937 rng(rd());
std::uniform_int_distribution<int> uni(-2147483647, 2147483646);

// lots of code

for (i = 0; i < 10000; i++)
{
    variable x = uni(rng);
}
Or
// lots of code

for (i = 0; i < 10000; i++)
{
    std::random_device rd;
    std::mt19937 rng(rd());
    std::uniform_int_distribution<int> uni(-2147483647, 2147483646);
    variable x = uni(rng);
}
I would say the first method is faster, but I've gotten a bit confused from reading many threads which state that you should always place everything as close as possible to where it's called.
In this case, it's much better to create the RNG outside your loop:
std::random_device rd;
std::mt19937 rng(rd());
std::uniform_int_distribution<int> uni(-2147483647, 2147483646);

for (i = 0; i < 10000; i++)
{
    variable x = uni(rng);
}
The reason for this has little to do with performance (although it will likely perform better, too). The reason is to do with correctness:
You're initialising a new random sequence each time through the loop, and reading just one value. Instead, you should be initialising the sequence just once, and consuming many values from it. Initialise outside the loop, and consume within the loop.
On the performance side, reading from a std::random_device is much slower than taking the next value from a PRNG such as std::mt19937. Doing this just once, outside the loop, will save a lot of time. Further, the std::mt19937 PRNG has a large state (624 integers). It generates this initial state from the value passed to its constructor. Again, doing this just once will give you a performance boost.
Of course, initialising outside the loop has the advantage of also being the correct usage model for the standard RNGs.
The reasoning is that when you place your random-generator definitions at the top of your code, they become global and are initialized automatically when the program starts. If you are using those variables in more than one place, that is probably the best idea; but if you are not, you don't need it, because in some scenarios they might never even be called. Anyway, this suggestion applies to class or method usage.
However, from what I see, you are going to use that number in a for loop, which will cause your computer to run the code below 10,000 times.
std::random_device rd;
std::mt19937 rng(rd());
std::uniform_int_distribution<int> uni(-2147483647, 2147483646);
That is unnecessary and wasteful. I believe your first version will also perform better.

several random numbers c++

I am a physicist, writing a program that involves generating several (on the order of a few billion) random numbers, drawn from a Gaussian distribution. I am trying to use C++11. The generation of these random numbers is separated by an operation that should take very little time. My biggest worry is that generating so many random numbers, with so little time between them, could potentially lead to sub-optimal performance. I am testing certain statistical properties which rely heavily on the independence of the numbers, so my result is particularly sensitive to these issues. My question is, with the kinds of numbers I mention below in the code (a simplified version of my actual code), am I doing something obviously (or even subtly) wrong?
#include <random>
// Several other includes, etc.
int main () {
    int dim_vec(400), nStats(1e8);
    vector<double> vec1(dim_vec), vec2(dim_vec);
    // Initialize the above vectors, which are order 1 numbers.
    random_device rd;
    mt19937 generator(rd());
    double y(0.0);
    double l(0.0);
    for (int i(0); i < nStats; i++)
    {
        for (int j(0); j < dim_vec; j++)
        {
            normal_distribution<double> distribution(0.0, 1/sqrt(vec1[j]));
            l = distribution(generator);
            y += l*vec2[j];
        }
        cout << y << endl;
        y = 0.0;
    }
}
The normal_distribution is allowed to have state, and with this particular distribution it is common to generate numbers in pairs: every other call generates two numbers, and the alternate calls return the second, cached number. By constructing a new distribution on each call you are throwing away that cache.
Fortunately you can "shape" a single distribution by calling it with different normal_distribution::param_type values:
normal_distribution<double> distribution;
using P = normal_distribution<double>::param_type;
for (int i(0); i < nStats; i++)
{
    for (int j(0); j < dim_vec; j++)
    {
        l = distribution(generator, P(0.0, 1/sqrt(vec1[j])));
        y += l*vec2[j];
    }
    cout << y << endl;
    y = 0.0;
}
I'm not familiar with all implementations of std::normal_distribution. However I wrote the one for libc++. So I can tell you with some amount of certainty that my slight rewrite of your code will have a positive performance impact. I am unsure what impact it will have on the quality, except to say that I know it won't degrade it.
Update
Regarding Severin Pappadeux's comment below about the legality of generating pairs of numbers at a time within a distribution: See N1452 where this very technique is discussed and allowed for:
Distributions sometimes store values from their associated source of
random numbers across calls to their operator(). For example, a common
method for generating normally distributed random numbers is to
retrieve two uniformly distributed random numbers and compute two
normally distributed random numbers out of them. In order to reset the
distribution's random number cache to a defined state, each
distribution has a reset member function. It should be called on a
distribution whenever its associated engine is exchanged or restored.
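A minimal sketch of the reset member function mentioned in the quote (the engines and seeds here are arbitrary illustrations):

#include <iostream>
#include <random>

int main() {
    std::normal_distribution<double> dist(0.0, 1.0);
    std::mt19937 engineA(1), engineB(2);

    std::cout << dist(engineA) << '\n';  // may cache a second value internally
    dist.reset();                        // discard the cache before switching engines
    std::cout << dist(engineB) << '\n';
}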
Some thoughts on top of the excellent HH answer.
A normal distribution (mu, sigma) is generated from a normal (0, 1) by shift and scale:
N(mu, sigma) = mu + N(0,1)*sigma
If your mean (mu) is always zero, you could simplify and speed up your code (by not adding 0.0) by doing something like:
normal_distribution<double> distribution;
for (int i(0); i < nStats; i++)
{
    for (int j(0); j < dim_vec; j++)
    {
        l = distribution(generator);
        y += l*vec2[j]/sqrt(vec1[j]);
    }
    cout << y << endl;
    y = 0.0;
}
If speed is of the utmost importance, I would try to precompute everything I can outside the main 10^8 loop. Is it possible to precompute sqrt(vec1[j]) so you save on the sqrt() call? Is it possible to have vec2[j]/sqrt(vec1[j]) as a single vector?
If it is not possible to precompute those vectors, I would try to save on memory access. Keeping the pieces of vec2[j] and vec1[j] together might help by fetching one cache line instead of two. So declare vector<pair<double,double>> vec12(dim_vec); and use y += l*vec12[j].first/sqrt(vec12[j].second) in the sampling loop.
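A minimal sketch of the precomputation idea, assuming vec1 and vec2 do not change inside the loops (the helper name precomputeScale is made up for illustration):

#include <cmath>
#include <cstddef>
#include <vector>

// Precompute vec2[j] / sqrt(vec1[j]) once, outside the 10^8 loop.
std::vector<double> precomputeScale(const std::vector<double>& vec1,
                                    const std::vector<double>& vec2) {
    std::vector<double> scale(vec1.size());
    for (std::size_t j = 0; j < vec1.size(); ++j)
        scale[j] = vec2[j] / std::sqrt(vec1[j]);
    return scale;
}

// The inner sampling loop then becomes:
//   l = distribution(generator);
//   y += l * scale[j];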