The copy constructor for std::random_device is deleted, and I have no idea why.
The only note I found from the docs is:
2) The copy constructor is deleted: std::random_device is not copyable.
There doesn't seem to be a clear reason for why it is deleted. What is the reason behind this?
The reason std::random_device is not copyable is that, if it were, the copied instance might return exactly the same numbers as the original (this is implementation-defined, though)!
This is because (docs):
std::random_device may be implemented in terms of an implementation-defined pseudo-random number engine if a non-deterministic source (e.g. a hardware device) is not available to the implementation. In this case each std::random_device object may generate the same number sequence.
Some implementations will thus implement a PRNG. PRNGs are generally implemented with a seed value (and some other state), from which the "random" numbers are generated. Copying a std::random_device would copy the seed value, along with any other internal state the generator uses to generate random numbers (which is implementation-defined).
You would have 2 random devices, which are deterministic because they generate the same number sequence:
std::random_device device1;
std::random_device device2 = device1; // for demonstration only: this does not actually compile, because the copy constructor is deleted

std::uniform_int_distribution<int> dis{ 0, 10 };

int randomNumber1 = dis(device1);
int randomNumber2 = dis(device2);
// Note that "randomNumber1 == randomNumber2"! Both copies would use exactly the same
// random number generator with the same seed value, etc. -> the same numbers would be generated!
The implementation might always use the same seed values for every std::random_device, which would mean that the same number sequence is generated every time for different random devices. Or it might use some non-deterministic source (taken from above):
In this case each std::random_device object may generate the same number sequence.
[...] if a non-deterministic source (e.g. a hardware device) [...]
In those cases it doesn't really matter that the random device is copyable (one would not even realize that the copy is in fact a copy), but it does matter when the implementation generates different values for different random devices using a PRNG.
Implementing the copy constructor (and the copy assignment operator) would break that assumption for the implementations that use a PRNG, so it is deleted in order to preserve the "randomness" of the numbers generated under every implementation (as far as the implementation allows it).
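You can see the effect this answer describes with an engine that actually is copyable, such as std::mt19937. A minimal sketch (the seed value 42 is arbitrary): copying the engine duplicates its entire internal state, so both copies produce identical output forever after.
#include <cassert>
#include <random>

int main() {
    std::mt19937 engine1{ 42 };         // any seed
    std::mt19937 engine2 = engine1;     // copying duplicates the whole internal state
    assert(engine1 == engine2);         // the states compare equal

    for (int i = 0; i < 5; ++i)
        assert(engine1() == engine2()); // identical raw outputs, in lockstep
}
If std::random_device were copyable and happened to be implemented as such a PRNG, a copy would behave exactly like this.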
Related
According to http://www.cplusplus.com/reference/cstdlib/rand/ :
In C, the generation algorithm used by rand is guaranteed to only be advanced by calls to this function. In C++, this constraint is relaxed, and a library implementation is allowed to advance the generator on other circumstances (such as calls to elements of <random>).
But then over here it says:
The function accesses and modifies internal state objects, which may cause data races with concurrent calls to rand or srand.
Some libraries provide an alternative function that explicitly avoids this kind of data race: rand_r (non-portable).
C++ library implementations are allowed to guarantee no data races for calling this function.
Ideally I would like to have some kind of "instance" of rand, so that for that instance, and a given seed, I always generate the same sequence of numbers for calls to THAT instance. With the current versions it seems that on some platforms, calls by other functions to rand() (perhaps even on different threads) could affect the sequence of numbers generated in my thread by my code.
Is there an alternative, where I can hold on to some kind of "instance", where I am guaranteed to generate a particular sequence, given a seed, and where other calls to different "instances" do not affect it?
EDIT: For clarity - my code is going to run on multiple different platforms (iOS, Android, Windows 8.1, Windows 10, Linux etc), and it isn't possible for me to test every implementation at present. I would just like to implement things based on what is guaranteed by the standard...
You can make use of std::uniform_int_distribution and std::mt19937 to keep a generator with your common seed (all from the <random> library).
std::mt19937 gen(SEED);
std::uniform_int_distribution<> dis(MIN, MAX);
auto random_number = dis(gen);
Here, SEED is the seed number you want to specify. You can set another seed later with the .seed method too:
std::mt19937 gen{};
gen.seed(SEED);
If you need to generate a seed, you can use std::random_device for that:
std::random_device rd{};
std::mt19937 gen(rd());
The dis(MIN, MAX) part sets the range of values the distribution can come up with, which means it will never generate a value bigger than MAX or smaller than MIN.
Finally, you can use your generator with this distribution to generate your wanted random values like so: dis(gen). The distribution can take any generator, so if you want other distributions with the same sequence of random numbers, you may make a copy of gen, or use the same seed and construct two or more generators.
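For instance, here is a small sketch (the seed 12345 and the dice range are arbitrary) showing that two generators constructed from the same seed produce matching sequences:
#include <iostream>
#include <random>

int main() {
    const unsigned int SEED = 12345;            // arbitrary example seed
    std::mt19937 gen1(SEED);
    std::mt19937 gen2(SEED);                    // same seed -> same sequence
    std::uniform_int_distribution<> dis1(1, 6);
    std::uniform_int_distribution<> dis2(1, 6);

    for (int i = 0; i < 3; ++i)
        std::cout << dis1(gen1) << ' ' << dis2(gen2) << '\n'; // each pair matches
}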
Use random() instead of rand().
https://www.securecoding.cert.org/confluence/display/c/MSC30-C.+Do+not+use+the+rand%28%29+function+for+generating+pseudorandom+numbers
https://www.securecoding.cert.org/confluence/display/c/CON33-C.+Avoid+race+conditions+when+using+library+functions
Besides being a rubbish programmer, my jargon is not up to scratch. I am going to try my best to explain myself.
I have implemented a Mersenne Twister random number generator using RandomLib.
Admittedly I am not too familiar with how Visual C++ 8's random number generator works, but I find I can seed it once with srand(time(NULL)) in main() and then safely use rand() in my other classes.
The Mersenne Twister that I have needs me to create an object, and then seed that object.
#include <RandomLib/Random.hpp>
RandomLib::Random r; // create random number object
r.Reseed(); // seed with a "unique" seed
float d = r.FloatN(); // a random in [0,1] rounded to the nearest double
If I want to generate a random number in a class how do I do this without having to define an object each time. I am just worried that if I use the computer clock I will use the same seed each run (only changes every second).
Am I explaining myself right?
Thanks in advance
The Random object is essentially state information that you need to preserve. You can use all the normal techniques: You could have it as a global variable or pass it around as a parameter. If a particular class needs random numbers you can keep a Random object as a class member to provide randomness for that class.
The C++ <random> library is similar in that it requires the construction of an object as the source of randomness/RNG state. This is a good design because it allows the program to control access to the state and, for example, guarantee good behavior with multiple threads. The C++ <random> library even includes the Mersenne Twister algorithm.
Here's an example showing saving a RNG state as a class member (using std::mt19937 instead of Random)
#include <random>    // for std::mt19937
#include <algorithm> // for std::shuffle
#include <vector>

struct Cards { /* whatever the card type looks like */ };

struct Deck {
    std::vector<Cards> m_cards;
    std::mt19937 eng; // save RNG state as a class member so we don't have to keep creating one

    void shuffle() {
        std::shuffle(std::begin(m_cards), std::end(m_cards), eng);
    }
};

int main() {
    Deck d;
    d.shuffle();
    d.shuffle(); // this reuses the RNG state as it was at the end of the first shuffle, no reseeding
}
The accepted answer does not actually seed its mt19937; see this Q&A for a more thorough and complete answer on how that might be achieved and why there is no single solution:
How to succinctly, portably, and thoroughly seed the mt19937 PRNG?
TL;DR:
The question relates to RandomLib, but I will answer by referring to the standard library implementation, since <random> is more accessible 10 years on. The principles should apply to all mt19937 implementations, however.
std::mt19937 and std::mt19937_64 have an internal default seed which provides some state for the engine to work off. The default seeds will cause the engine to produce the same values every time unless re-seeded.
std::mt19937 provides two methods to seed it, both via the seed() function.
The first overload accepts a param of result_type (uint32_t for std::mt19937 and uint64_t for std::mt19937_64). Internally (at least in the MSVC implementation) this function uses the provided seed value to fill the engine's internal state through a series of bit-fiddling operations. Most quick-and-dirty examples use a std::random_device to provide this seed value, but because the standard allows random_device to be just another PRNG, it cannot be relied on in all circumstances; apparently this is (or was) the case with the MinGW compiler on Windows.
The second overload accepts a more generic generator/range param which can be used with std::seed_seq. The linked question has an example of how to create one of these.
Creating a seed_seq or a sufficiently random initial seed is a challenge and why the linked question is provided.
It is not recommended that you create a new Mersenne Twister PRNG every time you need one due to the seeding process being non-trivial. Instead, it is better to declare one once and hold onto it, either as a static, thread_local, global, or member of a class with a long lifetime.
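A rough sketch of the seed_seq route, assuming std::random_device is a usable entropy source on your platform (the helper name and the amount of entropy gathered here are illustrative; the linked question covers a more thorough approach):
#include <random>

// Fill a seed_seq from several words of std::random_device output and use it
// to initialise the mt19937 state, rather than passing a single 32-bit value.
std::mt19937 make_seeded_mt19937() {
    std::random_device rd;
    std::seed_seq seq{ rd(), rd(), rd(), rd(), rd(), rd(), rd(), rd() };
    return std::mt19937(seq);
}

// Typical usage: construct once and keep it around, e.g.
// static thread_local std::mt19937 gen = make_seeded_mt19937();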
I use random numbers in several places and usually construct a random number generator whenever I need it. Currently I use the Marsaglia Xorshift algorithm seeding it with the current system time.
Now I have some doubts about this strategy:
If I use several generators, the independence (randomness) of the numbers between the generators depends on the seeds (same seed, same numbers). Since I use the time (in ns) as the seed, and since this time changes, this works, but I am wondering whether it would not be better to use only a single generator and, e.g., make it available as a singleton. Would this increase the random number quality?
Edit: Unfortunately C++11 is not an option yet.
Edit: To be more specific: I am not suggesting that the singleton could increase the random number quality, but rather the fact that only one generator is used and seeded. Otherwise I have to be sure that the seeds of the different generators are independent (random) from one another.
Extreme example: I seed two generators with exactly the same number -> no randomness between them
Suppose you have several variables, each of which needs to be random, independent from the others, and will be regularly reassigned with a new random value from some random generator. This happens quite often with Monte Carlo analysis, and games (although the rigor for games is much less than it is for Monte Carlo). If a perfect random number generator existed, it would be fine to use a single instantiation of it. Assign the nth pseudo-random number from the generator to variable x1, the next random number to variable x2, the next to x3, and so on, eventually coming back to variable x1 on the next cycle around. There's a problem here: far too many PRNGs fail the independence test when used this way, and some even fail randomness tests on individual sequences.
My approach is to use a single PRNG generator as a seed generator for a set of N instances of self-contained PRNGs. Each instance of these latter PRNGs feeds a single variable. By self-contained, I mean that the PRNG is an object, with state maintained in instance members rather than in static members or global variables. The seed generator doesn't even need to be from the same family as those other N PRNGs. It just needs to be reentrant in the case that multiple threads are simultaneously trying to use the seed generator. However, in my uses I find that it is best to set up the PRNGs before threading starts so as to guarantee repeatability. That's one run, one execution. Monte Carlo techniques typically need thousands of executions, maybe more, maybe a lot more. With Monte Carlo, repeatability is essential. So yet another random seed generator is needed. This one seeds the seed generator used to generate the N generators for the variables.
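A sketch of that layout (the names are illustrative, and std::mt19937 stands in for whichever self-contained PRNG family is actually used): one master seed drives the seed generator, which in turn seeds one independent engine per variable.
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// One engine per Monte Carlo variable, all seeded from a single seed generator,
// which is itself seeded from a master seed so that the whole run is repeatable.
std::vector<std::mt19937> make_variable_engines(std::uint32_t master_seed, std::size_t n) {
    std::mt19937 seed_gen(master_seed);    // the seed generator
    std::vector<std::mt19937> engines;
    engines.reserve(n);
    for (std::size_t i = 0; i < n; ++i)
        engines.emplace_back(seed_gen());  // each variable gets its own self-contained PRNG
    return engines;
}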
Repeatability is important, at least in the Monte Carlo world. Suppose run number 10234 of a long Monte Carlo simulation results in some massive failure. It would be nice to see what in the world happened. It might have been a statistical fluke, it might have been a problem. The problem is that in a typical MC setup, only the bare minimum of data are recorded, just enough for computing statistics. To see what happened in run number 10234, one needs to repeat that particular case but now record everything.
You should use the same instance of your random generator class whenever the clients are interrelated and the code needs "independent" random numbers.
You can use different objects of your random generator class when the clients do not depend on each other and it does not matter whether they receive the same numbers or not.
Note that for testing and debugging it is very useful to be able to create the same sequence of random numbers again. Therefore you should not "randomly seed" too much.
I don't think that it increases the randomness, but it does reduce the memory needed if you would otherwise create an object every time you want to use the random generator. If this generator doesn't have any instance-specific settings, you can make it a singleton.
Since I use the time (in ns) as the seed, and since this time changes, this works, but I am wondering whether it would not be better to use only a single generator and, e.g., make it available as a singleton.
This is a good example of when the singleton is not an anti-pattern. You could also use some kind of inversion of control.
Would this increase the random number quality?
No. The quality depends on the algorithm that generates the random numbers. How you use it is irrelevant (assuming it is used correctly).
To your edit: you could create some kind of container that holds objects of your RNG class (or use existing containers). Something like this:
#include <vector>

struct Rng
{
    void SetSeed( const int seed );
    int GenerateNumber();
    //...
};

std::vector< Rng > & RngSingleton()
{
    static std::vector< Rng > allRngs( 2 );
    return allRngs;
}

// ...
RngSingleton().at(0).SetSeed( 55 );
RngSingleton().at(1).SetSeed( 55 );
//...
const auto value1 = RngSingleton().at(0).GenerateNumber();
const auto value2 = RngSingleton().at(1).GenerateNumber();
Factory pattern to the rescue.
A client should never have to worry about the instantiation rules of its dependencies.
It allows for swapping creation methods. And the other way around, if you decide to use a different algorithm you can swap the generator class and the clients need no refactoring.
http://www.oodesign.com/factory-pattern.html
--EDIT
Added pseudocode (sorry, it's not c++, it's waaaaaay too long ago since I last worked in it)
interface PRNG{
    function generateRandomNumber() : Number;
}

interface Seeder{
    function getSeed() : Number;
}

interface PRNGFactory{
    function createPRNG() : PRNG;
}

class MarsagliaPRNG implements PRNG{
    constructor( seed : Number ){
        //store seed
    }
    function generateRandomNumber() : Number{
        //do your magic
    }
}

class SingletonMarsagliaPRNGFactory implements PRNGFactory{
    var seeder : Seeder;
    static var prng : PRNG;
    function createPRNG() : PRNG{
        return prng ||= new MarsagliaPRNG( seeder.getSeed() );
    }
}

class TimeSeeder implements Seeder{
    function getSeed() : Number{
        return now();
    }
}

//usage:
seeder : Seeder = new TimeSeeder();
prngFactory : PRNGFactory = new SingletonMarsagliaPRNGFactory();

clientA.prng = prngFactory.createPRNG();
clientB.prng = prngFactory.createPRNG();
//both clients got the same instance.
The big advantage is now that if you want/need to change any of the implementation details, nothing has to change in the clients. You can change seeding method, RNG algorithm and the instantiation rule w/o having to touch any client anywhere.
If a random generator function is not supplied to the random_shuffle algorithm in the standard library, will successive runs of the program produce the same random sequence if supplied with the same data?
For example, if
std::random_shuffle(filenames.begin(), filenames.end());
is performed on the same list of filenames from a directory in successive runs of the program, is the random sequence produced the same as that in the prior run?
If you use the same random generator, with the same seed, and the same starting sequence, the results will be the same. A computer is, after all, deterministic in its behavior (modulo threading issues and a few other odds and ends).
If you do not specify a generator, the default generator is implementation defined. Most implementations, I think, use std::rand() (which can cause problems, particularly when the number of elements in the sequence is larger than RAND_MAX). I would recommend getting a generator of known quality, and using it.
If you don't correctly seed the generator which is being used (another reason not to use the default, since how you seed it will depend on the implementation), then you'll get what you get. In the case of std::rand(), the default always uses the same seed. How you seed depends on the generator used. What you use to seed it should vary from one run to the other; for many applications, time(NULL) is sufficient; on a Unix platform, I'd recommend reading however many bytes it takes from /dev/random. Otherwise, hashing other information (IP address of the machine, process id, etc.) can also improve things---it means that two users starting the program at exactly the same second will still get different sequences. (But this is really only relevant if you're working in a networked environment.)
Section 25.2.11 of the standard just says that the elements are shuffled with uniform distribution. It makes no guarantees as to which RNG is used behind the scenes (unless you pass one in), so you can't rely on any such behavior.
In order to guarantee the same shuffle outcome you'll need to provide your own RNG with those guarantees, but I suspect that even then, if you update your standard library, the random_shuffle algorithm itself could change its behavior.
You may well produce an identical result on every run of the program. If this is a problem, you can add a custom random number generator (which can be seeded from an external source) as an additional argument to std::random_shuffle; the generator would be the third argument. Some people recommend calling srand(unsigned(time(NULL))); before random_shuffle, but the results are often implementation defined (and unreliable).
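As a sketch of the reproducible route (the seed value is arbitrary), the modern equivalent is std::shuffle with an explicitly seeded engine; std::random_shuffle was deprecated in C++14 and removed in C++17:
#include <algorithm>
#include <iostream>
#include <random>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> filenames{ "a.txt", "b.txt", "c.txt", "d.txt" };

    const unsigned int seed = 42;  // a fixed seed reproduces the same order on every run
    std::mt19937 gen(seed);
    std::shuffle(filenames.begin(), filenames.end(), gen);

    for (const auto& f : filenames)
        std::cout << f << '\n';
}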
I'm using the Mersenne twister algorithm to shuffle playing cards. Each time the deck needs to be shuffled I seed it with time(NULL) + deckCutCardNumber which is where the user chose to cut the deck. Would I get better results from only seeding it the first hand and continuing to generate them with the same seed or is this method more random?
Thanks
Only seed the PRNG once. The statistical properties of the generated sequence are only guaranteed after the seed. If you reseed every time, the resulting sequence may not have any predictable statistical properties.
For instance, consider a PRNG which always returns the seed value itself as the first number in the sequence, but which is perfectly uniform over its range. This constitutes a great PRNG, as long as you don't use the first number. However, if you reseed it before every use, say to an incrementing counter value, you have no randomness at all!
Assuming the user doesn't mess with the clock (or carefully reduce their cut number by exactly the time that has passed), they'll never see a repeated state of the PRNG anyway, so it doesn't make much difference what you do. You'll get a reasonable distribution out of the Mersenne Twister from any seed value[*], and at any feasible number of steps after re-seeding.
If you're keen to reseed, though, you could combine both approaches by seeding with the time, plus the user-chosen number, plus an output taken from the generator just before reseeding. That combines (part of, not all) the current state of the PRNG with the new seed data, so to some degree all of the past times and cut values (and number of uses of the PRNG) can affect the state, not just the most recent. Pouring more information into the seed value in this way could be considered "more random" than a seed involving less information and hence fewer plausible values.
The only thing about Mersenne Twister in particular is that if you can observe 600-odd outputs of it, then you can deduce its internal state and predict the rest of the output until it's reseeded. Then again, you probably wouldn't use MT for an application where that sort of thing matters: if you're relying on the reseed in any way then you should probably use a more secure PRNG to begin with. Clearly it doesn't matter for your application if the user can predict the values out of the PRNG, since the user knows the time just as well as you do. All of this tells you that it shouldn't matter how it's seeded, just so long as it isn't seeded with exactly the same value so that two games are identical. Hence it doesn't matter whether it's reseeded either.
[*] That's not strictly true, there are classes of weak seeds for MT. But as long as you take that into account when seeding (for instance, hash the seed before use so that bad values are unlikely to crop up by chance), you work around that.
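For instance, one way to "hash the seed before use" is to push it through a mixing function before handing it to the twister. This is a splitmix64-style finaliser, shown purely as an illustration of the idea; the time-plus-cut-number seed is the one from the question:
#include <cstdint>
#include <ctime>
#include <random>

// splitmix64-style finaliser: spreads the bits of a raw seed so that weak
// values (0, small integers, slowly changing timestamps) are unlikely to be
// handed to the Mersenne Twister unchanged.
std::uint64_t mix_seed(std::uint64_t x) {
    x += 0x9E3779B97F4A7C15ULL;
    x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ULL;
    x = (x ^ (x >> 27)) * 0x94D049BB133111EBULL;
    return x ^ (x >> 31);
}

// e.g. std::mt19937_64 rng(mix_seed(std::time(nullptr) + deckCutCardNumber));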
It will be less random if you seed off of the user choice every time than if you only seed once. The reason being that the choice of cut will probably have a skewed distribution (maybe cutting at the 10th card is the most likely etc). If you want to continuously seed you should use something like the system time as the seed.
Yes, you would get better results when not seeding every time. That's the purpose of a (good) random number generator.
In this special case the first value would just increase by the time you waited between the shuffles, while a continuously applied RNG would give you numbers across its whole range.
It's neither more nor less random. It's not really random at all anyway, but you won't notice any difference if you reseed it every time or not.
However, I'd recommend against it because time has a resolution of one second, so if you call it twice in the same second, you'll get the same seed, and hence the same numbers from the RNG. Then there's distribution and all that.
I would suggest initializing the PRNG for each shuffle for a completely different reason: It allows you to quantify the state of the deck using only the seed, which means you can provide the seed to the user, or log it, or whatever suits, and be able to easily recreate the hand as dealt at a later stage.
You really should avoid seeding based on time, though - it's generally a better idea to use a source of randomness such as /dev/urandom instead.
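A sketch of what that might look like (POSIX-only, since /dev/urandom does not exist on Windows), with a fallback assumption that std::random_device is acceptable when the device cannot be opened:
#include <cstdint>
#include <fstream>
#include <random>

// Read a seed from /dev/urandom; fall back to std::random_device if the
// device cannot be opened (e.g. on non-POSIX platforms).
std::uint32_t seed_from_urandom() {
    std::uint32_t seed = 0;
    std::ifstream urandom("/dev/urandom", std::ios::binary);
    if (urandom.read(reinterpret_cast<char*>(&seed), sizeof seed))
        return seed;
    return std::random_device{}();
}

// Usage: std::mt19937 rng(seed_from_urandom());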
Edit: Another argument for re-seeding occurs if you're worried about players guessing the internal state and therefore knowing what cards will be dealt in future. This is possible after observing 624 outputs from the Mersenne Twister (at least according to Wikipedia); this is only possible if you reuse the same PRNG. If this does matter, though, you certainly shouldn't be seeding based on time, and you should probably be using a cryptographically secure PRNG anyway.
Re-seeding the random number generator will not give you any higher quality random numbers than seeding it once (quite the contrary in many cases, depending on your seed values).