I'm using the Mersenne Twister algorithm to shuffle playing cards. Each time the deck needs to be shuffled, I seed it with time(NULL) + deckCutCardNumber, which is where the user chose to cut the deck. Would I get better results from seeding it only for the first hand and continuing to generate numbers from the same seed, or is this method more random?
Only seed the PRNG once. The statistical properties of the generated sequence are only guaranteed after the seed. If you reseed every time, the resulting sequence may not have any predictable statistical properties.
For instance, consider a PRNG which always returns the seed value itself as the first number in the sequence, but which is perfectly uniform over its range. This constitutes a great PRNG, as long as you don't use the first number. However, if you reseed it before every use, say to an incrementing counter value, you have no randomness at all!
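In code, the seed-once approach looks something like this (a minimal sketch using std::mt19937 and std::shuffle; the 52-card deck is just for illustration):

#include <algorithm>
#include <array>
#include <numeric>
#include <random>

int main() {
    // Seed once, up front; std::random_device is one convenient one-off source.
    std::mt19937 rng( std::random_device{}() );

    std::array<int, 52> deck;
    std::iota(deck.begin(), deck.end(), 0); // cards 0..51

    // Reuse the same engine for every shuffle; never reseed per hand.
    for (int hand = 0; hand < 3; ++hand) {
        std::shuffle(deck.begin(), deck.end(), rng);
    }
}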
Assuming the user doesn't mess with the clock (or carefully reduce their cut number by exactly the time that has passed), they'll never see a repeated state of the PRNG anyway, so it doesn't make much difference what you do. You'll get a reasonable distribution out of the Mersenne Twister from any seed value[*], and at any feasible number of steps after re-seeding.
If you're keen to reseed, though, you could combine both approaches by seeding with the time, plus the user-chosen number, plus an output taken from the generator just before reseeding. That combines (part of, not all) the current state of the PRNG with the new seed data, so to some degree all of the past times and cut values (and number of uses of the PRNG) can affect the state, not just the most recent. Pouring more information into the seed value in this way could be considered "more random" than a seed involving less information and hence fewer plausible values.
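That combined reseed might look like this (a sketch; the reseed helper and cutCard parameter are made-up names, with std::seed_seq doing the mixing):

#include <ctime>
#include <random>

// Mix the current time, the user's cut, and one output drawn from the
// generator's existing state into the new seed.
void reseed(std::mt19937& rng, unsigned cutCard) {
    std::seed_seq seq{
        static_cast<unsigned>( std::time(nullptr) ), // current time
        cutCard,                                     // user-chosen number
        static_cast<unsigned>( rng() )               // carries over part of the old state
    };
    rng.seed(seq);
}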
The only thing about the Mersenne Twister in particular is that if you can observe 624 consecutive outputs, you can deduce its internal state and predict the rest of the output until it's reseeded. Then again, you probably wouldn't use MT for an application where that sort of thing matters: if you're relying on the reseed in any way, then you should probably use a more secure PRNG to begin with. Clearly it doesn't matter for your application if the user can predict the values out of the PRNG, since the user knows the time just as well as you do. All of this tells you that it shouldn't matter how it's seeded, just so long as two games never get exactly the same seed and hence play out identically. Hence it doesn't matter whether it's reseeded either.
[*] That's not strictly true, there are classes of weak seeds for MT. But as long as you take that into account when seeding (for instance, hash the seed before use so that bad values are unlikely to crop up by chance), you work around that.
It will be less random if you seed off the user's choice every time than if you seed only once, because the choice of cut will probably have a skewed distribution (maybe cutting at the 10th card is the most likely, etc.). If you want to reseed continuously, you should use something like the system time as the seed.
Yes, you would get better results when not seeding every time. That's the purpose of a (good) random number generator.
In this special case the first value would just increase by the time you waited between the shuffles, while a continuously running RNG would give you numbers across its whole range.
It's neither more nor less random. It's not really random at all anyway, but you won't notice any difference if you reseed it every time or not.
However, I'd recommend against it because time() only has one-second resolution, so if you shuffle twice in the same second you'll get the same seed, and hence the same numbers from the RNG. Then there's distribution and all that.
I would suggest initializing the PRNG for each shuffle for a completely different reason: It allows you to quantify the state of the deck using only the seed, which means you can provide the seed to the user, or log it, or whatever suits, and be able to easily recreate the hand as dealt at a later stage.
You really should avoid seeding based on time, though - it's generally a better idea to use a source of randomness such as /dev/urandom instead.
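Putting both points together might look like this (a sketch: shuffleDeck is a made-up helper; std::random_device typically draws from the OS entropy source, such as /dev/urandom):

#include <algorithm>
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

// Seed each shuffle explicitly and log the seed, so the exact deal can be
// recreated later from the log.
void shuffleDeck(std::vector<int>& deck, std::uint32_t seed) {
    std::mt19937 rng(seed);
    std::shuffle(deck.begin(), deck.end(), rng);
}

int main() {
    std::vector<int> deck(52);
    for (int i = 0; i < 52; ++i) deck[i] = i;

    std::uint32_t seed = std::random_device{}();
    std::cout << "hand seed: " << seed << '\n'; // log for later replay
    shuffleDeck(deck, seed);
}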
Edit: Another argument for re-seeding occurs if you're worried about players guessing the internal state and therefore knowing what cards will be dealt in future. This is possible after observing 624 outputs from the Mersenne Twister (at least according to Wikipedia); this is only possible if you reuse the same PRNG. If this does matter, though, you certainly shouldn't be seeding based on time, and you should probably be using a cryptographically secure PRNG anyway.
Re-seeding the random number generator will not give you any higher quality random numbers than seeding it once (quite the contrary in many cases, depending on your seed values).
My overall goal is to create a round-based game. So I need some seemingly random numbers, for example in fights, but also on several other occasions. Anyway, I want those numbers to be reliably the same if certain parameters are the same.
Those parameters will be an integer seed - it will take the value of a generalRandomSeed, changing every round; but I need more parameters, like the IDs of the attacker and defender. It would be very convenient to call this function with the parameters (maybe all combined in a vector), like getRandom(generalRandomSeed,id1,id2).
So, in the end I am hoping for a function that takes one or more ints as parameters (ideally, a vector), returning one single integer: int getRandom(std::vector<int> parameters);
I cannot quite figure out how I could solve that problem; if it were only about one parameter, I might just create a new mt19937 every time with my seed generalRandomSeed.
To explain Maarten's issues with std::seed_seq (and to make sure I understand things correctly!) I'll attempt an answer.
The main issue I'd have with potentially suggesting using it (and it sounds like Maarten has the same one) is that the algorithm used by std::seed_seq isn't defined by the standard. Hence it's possible for other standard libraries (e.g. if you use a different compiler or even a different version of the same compiler) to change the implementation and you'd get different values back from the same inputs. Depending on your use case this lack of stability may, or may not, matter. That's a domain specific issue you'd have to decide on as you haven't specified it.
The cryptographic approach suggested would be to use something like an HKDF, where you use a cryptographic hash (like SHA256) to extract the entropy (i.e. "randomness") from your input values in a deterministic manner (e.g. taking care of endianness), and then use another cryptographic primitive to "stretch" this entropy out to produce your random output. If you're not familiar with the cryptographic world these things can be awkward, as there's a lot of terminology.
As a minor point, I'd suggest against using the MT19937 PRNG as it's relatively expensive to seed, and it sounds like you'd be doing this a lot. There are other algorithms that are much cheaper to seed; I personally like Sebastiano Vigna's xoshiro256 family, or you could use Melissa O'Neill's PCG family. O'Neill's site is a very useful resource if you're new to PRNGs.
That said, if you're going the HKDF route you may as well just use the "expansion" step as a PRNG, as it will directly produce uniform values. These can be transformed into bounded/uniform values easily: Melissa O'Neill has a good review of techniques for ints, and Vigna describes a conventional transform for binary IEEE-754 floats (so it's valid for float and double on most common CPUs).
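That float transform is short enough to sketch (a minimal version of the conventional 64-bit-to-double conversion; to_double is a made-up name):

#include <cstdint>

// Keep the top 53 bits (a double's significand width) and scale into [0, 1).
double to_double(std::uint64_t x) {
    return (x >> 11) / 9007199254740992.0; // divide by 2^53
}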
update: The MT19937 might seem difficult to predict given its enormous state space, but in fact prediction has been shown to be almost trivial. For example, searching for "predicting mt19937" leads to https://github.com/kmyk/mersenne-twister-predictor which will recover the state of the RNG from 624 consecutive 32-bit integer draws. Using a CSPRNG would protect you from this, and that is what using the output of an HKDF gives you. PCG makes this more difficult than the Mersenne Twister does, but since it's optimised for speed it can't expend too much work on resisting prediction either.
Apparently, it was indeed a seed_seq that I was looking for.
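For concreteness, a minimal sketch of that approach (getRandom matches the signature from the question; everything else is illustrative):

#include <random>
#include <vector>

// Mix all parameters through std::seed_seq, then draw one value from a
// freshly seeded engine. The same parameters always yield the same result.
int getRandom(const std::vector<int>& parameters) {
    std::seed_seq seq(parameters.begin(), parameters.end());
    std::mt19937 rng(seq);
    return static_cast<int>(rng() & 0x7fffffff); // keep it non-negative
}

// usage: int r = getRandom({generalRandomSeed, id1, id2});

Note that constructing an mt19937 on every call is exactly the seeding cost the answer above warns about; a cheaper engine would reduce that overhead.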
Why is std::uniform_real_distribution better than rand() as the random number generator? Can someone give an example please?
First, it should be made clear that the proposed comparison is nonsensical.
uniform_real_distribution is not a random number generator. You cannot produce random numbers from a uniform_real_distribution without having a random number generator that you pass to its operator(). uniform_real_distribution "shapes" the output of that random number generator into a uniform real distribution. You can plug various kinds of random number generators into a distribution.
I don't think this makes for a decent comparison, so I will be comparing the use of uniform_real_distribution with a C++11 random number generator against rand() instead.
Another obvious difference that makes the comparison even less useful is the fact that uniform_real_distribution is used to produce floating point numbers, while rand() produces integers.
That said, there are several reasons to prefer the new facilities.
rand() relies on global state, while the facilities from <random> involve no global state at all: you can have as many generators and distributions as you want, and they are all independent of each other.
rand() has no specification about the quality of the sequence generated. The random number generators from C++11 are all well-specified, and so are the distributions. rand() implementations can be, and in practice have been, of very poor quality, and not very uniform.
rand() provides a random number within a predefined range. It is up to the programmer to adjust that range to the desired range. This is not a simple task. No, it is not enough to use % something. Doing this kind of adjustment in such a naive manner will most likely destroy whatever uniformity was there in the original sequence. uniform_real_distribution does this range adjustment for you, correctly.
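A minimal example of the two pieces working together (the engine supplies raw randomness; the distribution shapes it into the requested range, here an arbitrary [0, 1)):

#include <iostream>
#include <random>

int main() {
    std::mt19937 gen( std::random_device{}() );            // the generator
    std::uniform_real_distribution<double> dist(0.0, 1.0); // the "shaper"

    for (int i = 0; i < 5; ++i) {
        std::cout << dist(gen) << '\n'; // the distribution draws from the engine
    }
}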
The real comparison is between rand and one of the random number engines provided by the C++11 standard library. std::uniform_real_distribution just distributes the output of an engine according to some parameters (for example, real values between 10 and 20). You could just as well make an engine that uses rand behind the scenes.
Now the difference between the standard library random number engines and using plain old rand is in guarantee and flexibility. rand provides no guarantee for the quality of the random numbers - in fact, many implementations have shortcomings in their distribution and period. If you want some high quality random numbers, rand just won't do. However, the quality of the random number engines is defined by their algorithms. When you use std::mt19937, you know exactly what you're getting from this thoroughly tested and analysed algorithm. Different engines have different qualities that you may prefer (space efficiency, time efficiency, etc.) and are all configurable.
This is not to say you should fall back on rand for the cases where you don't care too much. You might as well just start using the random number generation facilities from C++11 right away. There's no downside.
The reason is in the name of the class: the uniformity of the distribution of the numbers is better with std::uniform_real_distribution than with what rand() provides.
The distribution for std::uniform_real_distribution is over a given interval [a,b).
Essentially, that means that when you ask std::uniform_real_distribution for a random number between 1 and 10, the probability density of getting 5 is the same as that of getting 9 or any other possible value, whereas if you did it with rand(), calling it several times and mapping the results into that range, the probability of getting 5 instead of 9 may differ.
I use random numbers in several places and usually construct a random number generator whenever I need it. Currently I use the Marsaglia Xorshift algorithm seeding it with the current system time.
Now I have some doubts about this strategy:
If I use several generators, the independence (randomness) of the numbers between the generators depends on the seeds (same seed, same numbers). Since I use the time in nanoseconds as the seed, and since this time changes, this works, but I am wondering whether it would not be better to use only one generator and, e.g., make it available as a singleton. Would this increase the random number quality?
Edit: Unfortunately c++11 is not an option yet
Edit: To be more specific: I am not suggesting that the singleton itself could increase the random number quality, but rather the fact that only one generator is used and seeded. Otherwise I have to be sure that the seeds of the different generators are independent (random) from one another.
Extreme example: I seed two generators with exactly the same number -> no randomness between them
Suppose you have several variables, each of which needs to be random, independent from the others, and will be regularly reassigned with a new random value from some random generator. This happens quite often with Monte Carlo analysis, and games (although the rigor for games is much less than it is for Monte Carlo). If a perfect random number generator existed, it would be fine to use a single instantiation of it: assign the nth pseudorandom number from the generator to variable x1, the next random number to variable x2, the next to x3, and so on, eventually coming back to variable x1 on the next cycle around. There's a problem here: far too many PRNGs fail the independence test when used this way, and some even fail randomness tests on the individual sequences.
My approach is to use a single PRNG as a seed generator for a set of N instances of self-contained PRNGs. Each instance of these latter PRNGs feeds a single variable. By self-contained, I mean that the PRNG is an object, with state maintained in instance members rather than in static members or global variables. The seed generator doesn't even need to be from the same family as those other N PRNGs. It just needs to be reentrant, in case multiple threads are simultaneously trying to use it. In my uses, though, I find it is best to set up the PRNGs before threading starts, so as to guarantee repeatability. That's one run, one execution. Monte Carlo techniques typically need thousands of executions, maybe more, maybe a lot more. With Monte Carlo, repeatability is essential. So yet another random seed generator is needed: this one seeds the seed generator used to create the N generators for the variables.
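In code, that layout might look like this (a sketch using C++11 <random> for concreteness, although the same structure works with any self-contained PRNG class; makeGenerators is a made-up name):

#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// One seed generator hands out seeds to N self-contained PRNG instances,
// one per variable.
std::vector<std::mt19937> makeGenerators(std::uint32_t masterSeed, std::size_t n) {
    std::mt19937 seedGen(masterSeed); // the seed generator
    std::vector<std::mt19937> generators;
    generators.reserve(n);
    for (std::size_t i = 0; i < n; ++i) {
        generators.emplace_back(seedGen()); // one PRNG per variable
    }
    return generators;
}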
Repeatability is important, at least in the Monte Carlo world. Suppose run number 10234 of a long Monte Carlo simulation results in some massive failure. It would be nice to see what in the world happened. It might have been a statistical fluke, it might have been a problem. The problem is that in a typical MC setup, only the bare minimum of data are recorded, just enough for computing statistics. To see what happened in run number 10234, one needs to repeat that particular case but now record everything.
You should use the same instance of your random generator class whenever the clients are interrelated and the code needs "independent" random numbers.
You can use different objects of your random generator class when the clients do not depend on each other and it does not matter whether they receive the same numbers or not.
Note that for testing and debugging it is very useful to be able to create the same sequence of random numbers again. Therefore you should not "randomly seed" too much.
I don't think it increases the randomness, but it saves you from having to create an object every time you want to use the random generator. If the generator doesn't have any instance-specific settings, you can make it a singleton.
Since I use the time (ns) as seed and since this time changes this works but I am wondering whether it would not be better to use only one singular generator and e.g. to make it available as a singleton.
This is a good example of a case where the singleton is not an anti-pattern. You could also use some kind of inversion of control.
Would this increase the random number quality ?
No. The quality depends on the algorithm that generates the random numbers. How you use it is irrelevant (assuming it is used correctly).
To your edit: you could create some kind of container that holds objects of your RNG classes (or use existing containers). Something like this:

struct Rng
{
    void SetSeed( const int seed );
    int GenerateNumber() const;
    //...
};

std::vector< Rng > & RngSingleton()
{
    static std::vector< Rng > allRngs( 2 );
    return allRngs;
}

// ...
RngSingleton().at(0).SetSeed( 55 );
RngSingleton().at(1).SetSeed( 55 );
//...
const auto value1 = RngSingleton().at(0).GenerateNumber();
const auto value2 = RngSingleton().at(1).GenerateNumber();
Factory pattern to the rescue.
A client should never have to worry about the instantiation rules of its dependencies.
It allows for swapping creation methods. And the other way around, if you decide to use a different algorithm you can swap the generator class and the clients need no refactoring.
http://www.oodesign.com/factory-pattern.html
--EDIT
Added a sketch of the idea, translated into C++ (it's waaaaaay too long ago since I last worked in it, so treat this as illustrative):

#include <ctime>
#include <memory>
#include <utility>

struct PRNG{
    virtual ~PRNG() = default;
    virtual double generateRandomNumber() = 0;
};

struct Seeder{
    virtual ~Seeder() = default;
    virtual unsigned getSeed() = 0;
};

struct PRNGFactory{
    virtual ~PRNGFactory() = default;
    virtual std::shared_ptr<PRNG> createPRNG() = 0;
};

class MarsagliaPRNG : public PRNG{
public:
    explicit MarsagliaPRNG( unsigned seed ) : seed_( seed ){} // store seed
    double generateRandomNumber() override{
        return 0.0; // do your magic
    }
private:
    unsigned seed_;
};

class TimeSeeder : public Seeder{
public:
    unsigned getSeed() override{
        return static_cast<unsigned>( std::time( nullptr ) );
    }
};

class SingletonMarsagliaPRNGFactory : public PRNGFactory{
public:
    explicit SingletonMarsagliaPRNGFactory( std::shared_ptr<Seeder> seeder )
        : seeder_( std::move( seeder ) ){}
    std::shared_ptr<PRNG> createPRNG() override{
        if( !prng_ ) prng_ = std::make_shared<MarsagliaPRNG>( seeder_->getSeed() );
        return prng_;
    }
private:
    std::shared_ptr<Seeder> seeder_;
    static std::shared_ptr<PRNG> prng_;
};
std::shared_ptr<PRNG> SingletonMarsagliaPRNGFactory::prng_;

// usage:
// auto seeder = std::make_shared<TimeSeeder>();
// SingletonMarsagliaPRNGFactory prngFactory( seeder );
// clientA.prng = prngFactory.createPRNG();
// clientB.prng = prngFactory.createPRNG();
// both clients got the same instance.
The big advantage now is that if you want or need to change any of the implementation details, nothing has to change in the clients. You can change the seeding method, the RNG algorithm, and the instantiation rule without having to touch any client anywhere.
I am using the new C++11 <random> header in my application, and in one class I need random numbers with different distributions in different methods. I just put a random engine std::default_random_engine as a class member, seed it in the class constructor with std::random_device, and use it with different distributions in my methods. Is it OK to use the random engine this way, or should I declare a different engine for every distribution I use?
It's ok.
Reasons to not share the generator:
threading (standard RNG implementations are not thread safe)
determinism of random sequences:
If you wish to be able (for testing/bug hunting) to control the exact sequences generated, you will likely have fewer problems by isolating the RNGs used, especially when not all RNG consumption is deterministic.
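For reference, a minimal sketch of the shared-engine pattern the question describes (the class and method names are made up):

#include <random>

// One engine as a class member, seeded once in the constructor and
// reused by every distribution.
class Simulation {
public:
    Simulation() : engine( std::random_device{}() ) {}

    double uniform01() {
        std::uniform_real_distribution<double> dist(0.0, 1.0);
        return dist(engine);
    }
    int dieRoll() {
        std::uniform_int_distribution<int> dist(1, 6);
        return dist(engine);
    }

private:
    std::default_random_engine engine;
};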
You should be careful when using one pseudo random number generator for different random variables, because in doing so they become correlated.
Here is an example: If you want to simulate Brownian motion in two dimensions (e.g. x and y) you need randomness in both dimensions. If you take the random numbers from one generator (noise()) and assign them successively
while (simulating) {
    x = x + noise();
    y = y + noise();
}
then the variables x and y become correlated, because a pseudorandom number generator's quality guarantees apply to the sequence as a whole, i.e. when you take every single number generated, not only every second one as in this example. Here, the Brownian particles could move in the positive x and y directions with a higher probability than in the negative directions, and thus an artificial drift would be introduced.
For two further reasons to use different generators look at sehe's answer.
MosteM's answer isn't correct. It's correct to do this so long as you want the draws from the distributions to be independent. If for some reason you need exactly the same random input into draws of different distributions, then you may want different RNGs. If you want correlation between two random variables, it's better to build them starting from a common random variable using mathematical principles: e.g., if A and B are independent normal(0,1), then A and a*A + sqrt(1 - a^2)*B are both normal(0,1) with correlation a.
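That construction in code (a sketch; correlatedNormals is a made-up name):

#include <cmath>
#include <random>
#include <utility>

// Two normal(0,1) values with correlation a, built from independent
// draws A and B of a single shared generator.
std::pair<double, double> correlatedNormals(std::mt19937& rng, double a) {
    std::normal_distribution<double> n01(0.0, 1.0);
    double A = n01(rng);
    double B = n01(rng);
    return std::make_pair(A, a * A + std::sqrt(1.0 - a * a) * B);
}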
EDIT: I found a great resource on the C++11 random library which may be useful to you.
There is no reason not to do it like this. Depending on which random generator you use, the period is quite huge (2^19937 - 1 in the case of the Mersenne Twister), so in most cases you won't even reach the end of one period during the execution of your program. And even then, it's not clear that exhausting the period with all distributions sharing one generator is any worse than having three generators each go through a third of their period.
In my programs, I use one generator per thread, and it works fine. I think that's the main reason the generator and the distributions were split up in C++11: if you weren't allowed to share a generator, there would be no benefit to keeping them separate, since you would need one generator for each distribution anyway.
If a random generator function is not supplied to the random_shuffle algorithm in the standard library, will successive runs of the program produce the same random sequence if supplied with the same data?
For example, if
std::random_shuffle(filenames.begin(), filenames.end());
is performed on the same list of filenames from a directory in successive runs of the program, is the random sequence produced the same as that in the prior run?
If you use the same random generator, with the same seed, and the same starting sequence, the results will be the same. A computer is, after all, deterministic in its behavior (modulo threading issues and a few other odds and ends).
If you do not specify a generator, the default generator is implementation defined. Most implementations, I think, use std::rand() (which can cause problems, particularly when the number of elements in the sequence is larger than RAND_MAX). I would recommend getting a generator with known quality, and using it.
If you don't correctly seed the generator which is being used (another reason not to use the default, since how you seed it will depend on the implementation), then you'll get what you get. In the case of std::rand(), the default always uses the same seed. How you seed depends on the generator used. What you use to seed it should vary from one run to the other; for many applications, time(NULL) is sufficient. On a Unix platform, I'd recommend reading however many bytes it takes from /dev/random. Otherwise, hashing other information (IP address of the machine, process id, etc.) can also improve things: it means that two users starting the program at exactly the same second will still get different sequences. (But this is really only relevant if you're working in a networked environment.)
Section 25.2.11 of the standard just says that the elements are shuffled with uniform distribution. It makes no guarantees as to which RNG is used behind the scenes (unless you pass one in), so you can't rely on any such behavior.
In order to guarantee the same shuffle outcome, you'll need to provide your own RNG that makes those guarantees, but I suspect that even then, if you update your standard library, the random_shuffle algorithm itself could change and produce different results.
You may produce an identical result on every run of the program. If this is a problem, you can add a custom random number generator (which can be seeded from an external source) as an additional argument to std::random_shuffle; the generator would be the third argument. Some people recommend calling srand(unsigned(time(NULL))) before random_shuffle, but the results of that are oftentimes implementation defined (and unreliable).
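With C++11 you can instead pass an explicitly seeded engine to std::shuffle, which puts the behavior fully under your control (a minimal sketch; the filenames and seed value are illustrative):

#include <algorithm>
#include <random>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> filenames = { "a.txt", "b.txt", "c.txt" };

    // The same seed reproduces the same order on every run; draw the seed
    // from std::random_device instead for a different order each run.
    std::mt19937 rng(12345); // arbitrary fixed seed
    std::shuffle(filenames.begin(), filenames.end(), rng);
}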