I need to obtain data from different C++ random number generation algorithms, and for that purpose I created some programs. Some of them use pseudo-random number generators and others use random_device (nondeterministic random number generator). The following program belongs to the second group:
#include <iostream>
#include <vector>
#include <cmath>
#include <random>
using namespace std;

const int N = 5000;
const int M = 1000000;
const int VALS = 2;
const int ESP = M / VALS;

int main() {
    for (int i = 0; i < N; ++i) {
        random_device rd;
        if (rd.entropy() == 0) {
            cout << "No support for nondeterministic RNG." << endl;
            break;
        } else {
            mt19937 gen(rd());
            uniform_int_distribution<int> distrib(0, 1);
            vector<int> hist(VALS, 0);
            for (int j = 0; j < M; ++j) ++hist[distrib(gen)];
            int Y = 0;
            for (int j = 0; j < VALS; ++j) Y += abs(hist[j] - ESP);
            cout << Y << endl;
        }
    }
}
As you can see in the code, I check for the entropy to be greater than 0. I do this because:
Unlike the other standard generators, this [random_device] is not meant to be an engine that generates pseudo-random numbers, but a generator based on stochastic processes to generate a sequence of uniformly distributed random numbers. Although, certain library implementations may lack the ability to produce such numbers and employ a random number engine to generate pseudo-random values instead. In this case, entropy returns zero. (Source)
Checking the value of the entropy allows me to abort the data collection if the resulting data is going to be pseudo-random (not nondeterministic). Please note that I assume that if rd.entropy() == 0 is true, then we are in pseudo-random mode.
Unfortunately, all my trials result in a file with no data because the entropy is 0. My question is: what can I do to my computer, or where can I find a machine that allows me to obtain the data?
The source you cite is misleading you. The standard says that
double entropy() const noexcept;
Returns: If the implementation employs a random number engine, returns 0.0. Otherwise, returns an entropy estimate for the random numbers returned by operator(), in the range min() to log2(max()+1).
And a better reference has some empirical observations in its Notes section:
This function is not fully implemented in some standard libraries. For example, LLVM libc++ always returns zero even though the device is non-deterministic. In comparison, Microsoft Visual C++ implementation always returns 32, and boost.random returns 10.
In practice, nearly all the main implementations (targeting general purpose computers) have non-deterministic std::random_devices. Your test has a very high false negative rate.
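To see what your own implementation reports, a minimal probe like the following is enough (just a sketch); on some implementations it prints 0 even though operator() is backed by a real entropy source, which is exactly the false negative described above:

#include <iostream>
#include <random>

int main() {
    std::random_device rd;
    // May report 0 on some implementations even when the device is non-deterministic.
    std::cout << "entropy() reports: " << rd.entropy() << '\n';
    std::cout << "sample value:      " << rd() << '\n';
}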
Using the C++ standard random generators I can more or less efficiently create sequences with pre-defined distributions using language-provided tools. What about Shannon entropy? Is there some way to define the output Shannon entropy for a given sequence?
I tried a small experiment: I generated a long enough uniformly distributed sequence and implemented a Shannon entropy calculator. The resulting value ranges from 0.0 (absolute order) to 8.0 (absolute chaos):
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdlib>
#include <functional>
#include <iomanip>
#include <iostream>
#include <random>
#include <vector>

template <typename T>
double shannon_entropy(T first, T last)
{
    size_t frequencies_count{};
    double entropy = 0.0;

    std::for_each(first, last, [&entropy, &frequencies_count](auto item) {
        if (0. == item) return;
        double fp_item = static_cast<double>(item);
        entropy += fp_item * std::log2(fp_item);
        ++frequencies_count;
    });

    if (frequencies_count > 256) {
        return -1.0;
    }

    return -entropy;
}
std::vector<uint8_t> generate_random_sequence(size_t sequence_size)
{
    std::vector<uint8_t> random_sequence;
    std::random_device rnd_device;

    std::cout << "Random device entropy: " << rnd_device.entropy() << '\n';

    std::mt19937 mersenne_engine(rnd_device());
    std::uniform_int_distribution<unsigned> dist(0, 255);
    auto gen = std::bind(dist, mersenne_engine);

    random_sequence.resize(sequence_size);
    std::generate(random_sequence.begin(), random_sequence.end(), gen);
    return random_sequence;
}
std::vector<double> read_random_probabilities(size_t sequence_size)
{
    std::vector<size_t> bytes_distribution(256);
    std::vector<double> bytes_frequencies(256);
    std::vector<uint8_t> random_sequence = generate_random_sequence(sequence_size);

    size_t rnd_seq_size = random_sequence.size();
    std::for_each(random_sequence.begin(), random_sequence.end(), [&](uint8_t b) {
        ++bytes_distribution[b];
    });

    std::transform(bytes_distribution.begin(), bytes_distribution.end(), bytes_frequencies.begin(),
        [&rnd_seq_size](size_t item) {
            return static_cast<double>(item) / rnd_seq_size;
        });

    return bytes_frequencies;
}
int main(int argc, char* argv[]) {
    size_t sequence_size = 1024 * 1024;
    std::vector<double> bytes_frequencies = read_random_probabilities(sequence_size);

    double entropy = shannon_entropy(bytes_frequencies.begin(), bytes_frequencies.end());
    std::cout << "Sequence entropy: " << std::setprecision(16) << entropy << std::endl;

    std::cout << "Min possible file size assuming max theoretical compression efficiency:\n";
    std::cout << (entropy * sequence_size) << " in bits\n";
    std::cout << ((entropy * sequence_size) / 8) << " in bytes\n";

    return EXIT_SUCCESS;
}
First, it appears that std::random_device::entropy() is hardcoded to return 32 in MSVC 2015 (which presumably corresponds to 8.0 under Shannon's definition). As you can try, that is not far from the truth: in this example the result is always close to 7.9998..., i.e. absolute chaos.
The working example is on IDEONE (by the way, their compiler hardcodes entropy to 0).
One more thing, the main question: is it possible to create a generator that produces a uniformly distributed sequence with a defined entropy, let's say 6.0 to 7.0? Could it be implemented at all, and if so, are there existing implementations?
First, you're viewing Shannon's theory entirely wrong. His argument (as you're using it) is simply, "given the probability of x (Pr(x)), the bits required to store x is -log2 Pr(x)." It has nothing to do with the probability of x. In this regard, you're viewing Pr(x) wrong. -log2 Pr(x), given a Pr(x) that should be uniformly 1/256, results in a required bitwidth of 8 bits to store. However, that's not how statistics work. Go back to thinking about Pr(x), because the bits required means nothing.
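As a quick sanity check of that statement, here is the uniform-byte case written out as code (nothing beyond -log2 of the probability):

#include <cmath>
#include <iostream>

int main() {
    double p = 1.0 / 256.0;                    // Pr(x) for a uniform byte
    std::cout << -std::log2(p) << " bits\n";   // prints 8
}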
Your question is about statistics. Given an infinite sample, if-and-only-if the distribution matches the ideal histogram, as the sample size approaches infinity the probability of each sample will approach the expected frequency. I want to make it clear that you're not looking for "-log2 Pr(x) is absolute chaos when it's 8 given Pr(x) = 1/256." A uniform distribution is not chaos. In fact, it's... well, uniform. Its properties are well known, simple, and easy to predict. You're looking for, "Does the finite sample set S meet the criteria of an independently-distributed uniform distribution (commonly known as "independently and identically distributed data" or "i.i.d.") with Pr(x) = 1/256?" This has nothing to do with Shannon's theory and goes much further back in time to the basic probability theories involving flips of a coin (in this case binomial given assumed uniformity).
Assuming for a moment that any C++11 <random> generator meets the criteria for "statistically indistinguishable from i.i.d." (which, by the way, those generators don't), you can use them to emulate i.i.d. results. If you would like a range of data that is storable within 6..7 bits (it wasn't clear, did you mean 6 or 7 bits, because hypothetically, everything in between is doable as well), simply scale the range. For example...
#include <iostream>
#include <random>

int main() {
    unsigned long low = 1 << 6;    // 2^6 == 64
    unsigned long limit = 1 << 7;  // 2^7 == 128
    // Therefore, the range is 6-bits to 7-bits (or 64 + [128 - 64])
    unsigned long range = limit - low;

    std::random_device rd;
    std::mt19937 rng(rd()); //<< Doesn't actually meet criteria for i.i.d.
    std::uniform_int_distribution<unsigned long> dist(low, limit - 1); //<< Given an engine that actually produces i.i.d. data, this would produce exactly what you're looking for

    for (int i = 0; i != 10; ++i) {
        unsigned long y = dist(rng);
        // y is known to be in set {2^6..2^7-1} and assumed to be uniform (coin flip) over {low..low + (range-1)}.
        std::cout << y << std::endl;
    }
    return 0;
}
The problem with this is that, while the <random> distribution classes are accurate, the random number generators (presumably aside from std::random_device, but that's system-specific) are not designed to stand up to statistical tests of fitness as i.i.d. generators.
If you would like one that does, implement a CSPRNG (my go-to is Bob Jenkins' ISAAC) that has an interface meeting the requirements of the <random> class of generators (probably just covering the basic interface of std::random_device is good enough).
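For illustration only, here is a sketch of what "an interface meeting the requirements" amounts to: a type satisfying UniformRandomBitGenerator (result_type, min(), max(), operator()) that the <random> distributions will accept. The CSPRNG core itself is not implemented here; next_csprng_word() is a hypothetical placeholder you would back with ISAAC or whatever you choose (the stand-in body below just wraps std::random_device so the sketch compiles):

#include <cstdint>
#include <iostream>
#include <limits>
#include <random>

std::uint32_t next_csprng_word();   // hypothetical: one word from the real CSPRNG

// Minimal UniformRandomBitGenerator interface: this is all the <random>
// distributions need to see.
struct csprng_engine {
    using result_type = std::uint32_t;
    static constexpr result_type min() { return 0; }
    static constexpr result_type max() { return std::numeric_limits<result_type>::max(); }
    result_type operator()() { return next_csprng_word(); }
};

// Stand-in body so the sketch compiles; replace with ISAAC (or any CSPRNG).
std::uint32_t next_csprng_word() {
    static std::random_device rd;   // placeholder only, NOT a CSPRNG
    return rd();
}

int main() {
    csprng_engine eng;
    std::uniform_int_distribution<int> dist(0, 255);
    std::cout << dist(eng) << '\n';   // the distribution machinery works unchanged
}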
To test for a statistically sound "no" or "we can't say no" on whether a set follows a specific model (and therefore Pr(x) is accurate and therefore Shannon's entropy function is an accurate prediction), that's a whole other matter entirely. Like I said, no generator in <random> meets these criteria (except maybe std::random_device). My advice is to do research into things like the Central Limit Theorem, goodness-of-fit, birthday-spacing, et cetera.
To drive my point a bit more, under the assumptions of your question...
struct uniform_rng {
    unsigned long x;

    constexpr uniform_rng(unsigned long seed = 0) noexcept:
        x{ seed }
    { };

    unsigned long operator()() noexcept {
        unsigned long y = this->x++;
        return y;
    }
};
... would absolutely meet your criteria of being uniform (or as you say "absolute chaos"). Pr(x) is most certainly 1/N, and the bits required to store any number of the set is -log2(1/N), where N is 2 to the power of the bitwidth of unsigned long. However, it's not independently distributed. Because we know its properties, you can "store" its entire sequence by simply storing seed. Surprise, all PRNGs work this way. Therefore the bits required to store the entire sequence of a PRNG is -log2(1/2^bitsForSeed). As your sample grows, the bits required to store it versus the bits you're able to generate for that sample (a.k.a. the compression ratio) approaches a limit of 0.
I cannot comment yet, but I would like to start the discussion:
From communication/information theory, it would seem that you would require probabilistic shaping methods to achieve what you want. You should be able to feed the output of any distribution function through a shaping coder, which then should re-distribute the input to a specific target shannon entropy.
Probabilistic constellation shaping has been successfully applied in fiber-optic communication: Wikipedia with some other links
It is not clear what you want to achieve, and there are several ways of lowering the Shannon entropy of your sequence:
Correlation between the bits, e.g. putting random_sequence through a simple filter.
Making individual bits not fully random.
As an example below you could make the bytes less random:
std::vector<uint8_t> generate_random_sequence(size_t sequence_size,
                                              uint8_t cutoff = 10)
{
    std::vector<uint8_t> random_sequence;
    std::vector<uint8_t> other_sequence;
    std::random_device rnd_device;

    std::cout << "Random device entropy: " << rnd_device.entropy() << '\n';

    std::mt19937 mersenne_engine(rnd_device());
    std::uniform_int_distribution<unsigned> dist(0, 255);
    auto gen = std::bind(dist, mersenne_engine);

    random_sequence.resize(sequence_size);
    std::generate(random_sequence.begin(), random_sequence.end(), gen);

    other_sequence.resize(sequence_size);
    std::generate(other_sequence.begin(), other_sequence.end(), gen);

    for (size_t j = 0; j < sequence_size; ++j) {
        if (other_sequence[j] <= cutoff) random_sequence[j] = 0; // Or j or ...
    }

    return random_sequence;
}
I don't think this was the answer you were looking for - so you likely need to clarify the question more.
Does mt19937_64 have a higher throughput (bit/s) than the 32 bit version, mt19937, assuming a 64 bit architecture?
What about after vectorization?
As @byjoe points out, this obviously depends on the compiler.
In this case, it seems to be considerably more dependent on the compiler than is typical though. For example, the Boost test linked in the comments uses the compiler from VC++ 2010, and shows only a fairly slight increase in random bits per second from using mt19937_64.
To get more up-to-date information, I whipped up a simple test:
#include <random>
#include <chrono>
#include <iostream>
#include <iomanip>

template <class T, class U>
U test(char const *label, U count) {
    using namespace std::chrono;

    T gen(100);
    U result = 0;

    auto start = high_resolution_clock::now();
    for (U i = 0; i < count; i++)
        result ^= gen();
    auto stop = high_resolution_clock::now();

    std::cout << "Time for " << std::left << std::setw(12) << label
              << duration_cast<milliseconds>(stop - start).count() << "\n";
    return result;
}

int main(int argc, char **argv) {
    unsigned long long limit = 1000000000;

    auto result1 = test<std::mt19937>("mt19937: ", limit);
    auto result2 = test<std::mt19937_64>("mt19937_64: ", limit);

    std::cout << "Ignore: " << result1 << ", " << result2 << "\n";
}
With VC++ 2015 update 3 (with /o2b2 /GL, though it probably doesn't matter), I got results like these:
Time for mt19937: 4339
Time for mt19937_64: 4215
Ignore: 2598366015, 13977046647333287932
This shows mt19937_64 as being slightly faster per call, so over twice as fast per bit as mt19937. With MinGW (using -O3), the results were much more like those linked from the Boost site:
Time for mt19937: 2211
Time for mt19937_64: 4183
Ignore: 2598366015, 13977046647333287932
In this case, mt19937_64 takes just a little less than twice the time per call, so it's only slightly faster per bit. The highest overall speed seems to be from g++ with mt19937_64, but the difference between g++ and VC++ (on these runs) is less than 1%, so I'm not sure it's reproducible.
For what it's worth, the difference in speed (per call) between mt19937 and mt19937_64 with VC++ is also pretty small, but does seem to be reproducible--it happened quite consistently in my testing. I did wonder about whether that might be (at least partially) a matter of clock management--that when the code first started, the CPU was idle, and the clock had been slowed, so the first part of the first run was at a lower clock speed. To check, I reversed the order to test mt19937_64 first. I think my hypothesis was at least partially correct--when I reversed the order, mt19937_64 slowed down compared to mt19937, so they were nearly identical on a per-call basis with VC++.
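If you want to control for that effect rather than reorder the tests, one option is an untimed warm-up pass before either measurement. The following is only a sketch (the warm-up count is an arbitrary choice), kept self-contained rather than reusing the listing above:

#include <chrono>
#include <iostream>
#include <random>

// Time one engine; an initial untimed call lets the CPU clock ramp up before
// the measured runs, so ordering no longer favours whichever engine runs second.
template <class Engine>
long long time_engine(unsigned long long count) {
    using namespace std::chrono;
    Engine gen(100);
    unsigned long long sink = 0;

    auto start = high_resolution_clock::now();
    for (unsigned long long i = 0; i < count; ++i)
        sink ^= gen();
    auto stop = high_resolution_clock::now();

    volatile unsigned long long keep = sink;   // keep the loop from being optimised away
    (void)keep;
    return duration_cast<milliseconds>(stop - start).count();
}

int main() {
    time_engine<std::mt19937>(100000000);      // warm-up pass, result discarded
    std::cout << "mt19937:    " << time_engine<std::mt19937>(1000000000) << " ms\n";
    std::cout << "mt19937_64: " << time_engine<std::mt19937_64>(1000000000) << " ms\n";
}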
It clearly depends on your compiler and its implementation. I just tested, and the 64-bit version takes about 60% longer call-for-call, so that makes the 64-bit version about 25% faster bit-for-bit. I tested with an i7 CPU.
If you need max speed, you may want to consider using something else. Especially if the numbers don't need to be very high quality.
The C++11 standard specifies a number of different engines for random number generation: linear_congruential_engine, mersenne_twister_engine, subtract_with_carry_engine and so on. Obviously, this is a large change from the old usage of std::rand.
Obviously, one of the major benefits of (at least some) of these engines is the massively increased period length (it's built into the name for std::mt19937).
However, the differences between the engines are less clear. What are the strengths and weaknesses of the different engines? When should one be used over the other? Is there a sensible default that should generally be preferred?
From the explanations below, a linear congruential engine seems to be faster but less random, while the Mersenne Twister has higher complexity and randomness. The subtract-with-carry engine is an improvement over the linear congruential engine, and it is definitely more random. In the last reference, it is stated that the Mersenne Twister has higher complexity than the subtract-with-carry engine.
Linear congruential random number engine
A pseudo-random number generator engine that produces unsigned integer numbers.
This is the simplest generator engine in the standard library. Its state is a single integer value, with the following transition algorithm:
x = (ax+c) mod m
Where x is the current state value, a and c are their respective template parameters, and m is its respective template parameter if this is greater than 0, or numeric_limits<UIntType>::max() + 1 otherwise.
Its generation algorithm is a direct copy of the state value.
This makes it an extremely efficient generator in terms of processing and memory consumption, but producing numbers with varying degrees of serial correlation, depending on the specific parameters used.
The random numbers generated by linear_congruential_engine have a period of m.
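For illustration, here is the transition written out with minstd_rand's parameters (a = 48271, c = 0, m = 2^31 - 1); this is only a sketch of the recurrence, not a replacement for std::minstd_rand:

#include <cstdint>
#include <iostream>

int main() {
    // x = (a*x + c) mod m with minstd_rand's parameters: a = 48271, c = 0, m = 2^31 - 1.
    const std::uint64_t a = 48271, c = 0, m = 2147483647;
    std::uint64_t x = 1;            // the engine's entire state is this single value
    for (int i = 0; i < 5; ++i) {
        x = (a * x + c) % m;
        std::cout << x << '\n';     // 48271, 182605794, ...
    }
}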
Mersenne twister random number engine
A pseudo-random number generator engine that produces unsigned integer numbers in the closed interval [0,2^w-1].
The algorithm used by this engine is optimized to compute large series of numbers (such as in Monte Carlo experiments) with an almost uniform distribution in the range.
The engine has an internal state sequence of n integer elements, which is filled with a pseudo-random series generated on construction or by calling member function seed.
The internal state sequence becomes the source for n elements: When the state is advanced (for example, in order to produce a new random number), the engine alters the state sequence by twisting the current value using xor mask a on a mix of bits determined by parameter r that come from that value and from a value m elements away (see operator() for details).
The random numbers produced are tempered versions of these twisted values. The tempering is a sequence of shift and xor operations defined by parameters u, d, s, b, t, c and l applied on the selected state value (see operator()).
The random numbers generated by mersenne_twister_engine have a period equivalent to the Mersenne number 2^(n*w - r) - 1 (for mt19937, 2^19937 - 1).
Subtract-with-carry random number engine
A pseudo-random number generator engine that produces unsigned integer numbers.
The algorithm used by this engine is a lagged fibonacci generator, with a state sequence of r integer elements, plus one carry value.
Lagged Fibonacci generators have a maximum period of (2^k - 1) * 2^(M-1) if addition or subtraction is used. The initialization of LFGs is a very complex problem. The output of LFGs is very sensitive to initial conditions, and statistical defects may appear initially but also periodically in the output sequence unless extreme care is taken. Another potential problem with LFGs is that the mathematical theory behind them is incomplete, making it necessary to rely on statistical tests rather than theoretical performance.
And finally from the documentation of random:
The choice of which engine to use involves a number of tradeoffs: the linear congruential engine is moderately fast and has a very small storage requirement for state. The lagged Fibonacci generators are very fast even on processors without advanced arithmetic instruction sets, at the expense of greater state storage and sometimes less desirable spectral characteristics. The Mersenne Twister is slower and has greater state storage requirements but with the right parameters has the longest non-repeating sequence with the most desirable spectral characteristics (for a given definition of desirable).
I think that the point is that random generators have different properties, which can make them more suitable or not for a given problem.
The period length is one of the properties.
The quality of the random numbers can also be important.
The performance of the generator can also be an issue.
Depending on your need, you might take one generator or another one. E.g., if you need fast random numbers but do not really care for the quality, an LCG might be a good option. If you want better quality random numbers, the Mersenne Twister is probably a better option.
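Since all the standard engines share the same interface, one practical approach (shown here only as a sketch) is to hide the choice behind a type alias, so you can swap the engine later without touching the code that consumes the numbers:

#include <iostream>
#include <random>

// Keep the engine choice behind an alias; swap it (e.g. for std::minstd_rand
// or std::ranlux24) without changing any of the code below.
using Engine = std::mt19937;

int main() {
    std::random_device rd;
    Engine eng(rd());
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    for (int i = 0; i < 5; ++i)
        std::cout << dist(eng) << '\n';
}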
To help you make your choice, there are some standard tests and results (I definitely like the table on p. 29 of this paper).
EDIT: From the paper,
The LCG (LCG(***) in the paper) family are the fastest generators, but with the poorest quality.
The Mersenne Twister (MT19937) is a little bit slower, but yields better random numbers.
The subtract-with-carry generators (SWB(***), I think) are way slower, but can yield better random properties when well tuned.
As the other answers forget about ranlux, here is a small note by an AMD developer that recently ported it to OpenCL:
https://community.amd.com/thread/139236
RANLUX is also one of very few (the only one I know of, actually) PRNGs that has an underlying theory explaining why it generates "random" numbers, and why they are good. Indeed, if the theory is correct (and I don't know of anyone who has disputed it), RANLUX at the highest luxury level produces completely decorrelated numbers down to the last bit, with no long-range correlations as long as we stay well below the period (10^171). Most other generators can say very little about their quality (like Mersenne Twister, KISS, etc.); they must rely on passing statistical tests.
Physicists at CERN are fans of this PRNG. 'nuff said.
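For completeness, ranlux is already in the standard library as std::ranlux24 and std::ranlux48 (plus their _base variants); a minimal usage sketch:

#include <iostream>
#include <random>

int main() {
    std::random_device rd;
    std::ranlux48 eng(rd());                        // ranlux at a high luxury level
    std::uniform_int_distribution<int> dist(1, 6);
    for (int i = 0; i < 10; ++i)
        std::cout << dist(eng) << ' ';
    std::cout << '\n';
}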
Some of the information in these other answers conflicts with my findings. I've run tests on Windows 8.1 using Visual Studio 2013, and consistently I've found mersenne_twister_engine to be both higher quality and significantly faster than either linear_congruential_engine or subtract_with_carry_engine. This leads me to believe, when the information in the other answers is taken into account, that the specific implementation of an engine has a significant impact on performance.
This is of great surprise to nobody, I'm sure, but it's not mentioned in the other answers where mersenne_twister_engine is said to be slower. I have no test results for other platforms and compilers, but with my configuration, mersenne_twister_engine is clearly the superior choice when considering period, quality, and speed performance. I have not profiled memory usage, so I cannot speak to the space requirement property.
Here's the code I'm using to test with (to make portable, you should only have to replace the windows.h QueryPerformanceXxx() API calls with an appropriate timing mechanism):
// compile with: cl.exe /EHsc
#include <random>
#include <iostream>
#include <windows.h>

using namespace std;

void test_lc(const int a, const int b, const int s) {
    /*
    typedef linear_congruential_engine<unsigned int, 48271, 0, 2147483647> minstd_rand;
    */
    minstd_rand gen(1729);
    uniform_int_distribution<> distr(a, b);
    for (int i = 0; i < s; ++i) {
        distr(gen);
    }
}

void test_mt(const int a, const int b, const int s) {
    /*
    typedef mersenne_twister_engine<unsigned int, 32, 624, 397,
        31, 0x9908b0df,
        11, 0xffffffff,
        7, 0x9d2c5680,
        15, 0xefc60000,
        18, 1812433253> mt19937;
    */
    mt19937 gen(1729);
    uniform_int_distribution<> distr(a, b);
    for (int i = 0; i < s; ++i) {
        distr(gen);
    }
}

void test_swc(const int a, const int b, const int s) {
    /*
    typedef subtract_with_carry_engine<unsigned int, 24, 10, 24> ranlux24_base;
    */
    ranlux24_base gen(1729);
    uniform_int_distribution<> distr(a, b);
    for (int i = 0; i < s; ++i) {
        distr(gen);
    }
}

int main()
{
    int a_dist = 0;
    int b_dist = 1000;
    int samples = 100000000;

    cout << "Testing with " << samples << " samples." << endl;

    LARGE_INTEGER ElapsedTime;
    double ElapsedSeconds = 0;

    LARGE_INTEGER Frequency;
    QueryPerformanceFrequency(&Frequency);
    double TickInterval = 1.0 / ((double) Frequency.QuadPart);

    LARGE_INTEGER StartingTime;
    LARGE_INTEGER EndingTime;

    QueryPerformanceCounter(&StartingTime);
    test_lc(a_dist, b_dist, samples);
    QueryPerformanceCounter(&EndingTime);

    ElapsedTime.QuadPart = EndingTime.QuadPart - StartingTime.QuadPart;
    ElapsedSeconds = ElapsedTime.QuadPart * TickInterval;
    cout << "linear_congruential_engine time: " << ElapsedSeconds << endl;

    QueryPerformanceCounter(&StartingTime);
    test_mt(a_dist, b_dist, samples);
    QueryPerformanceCounter(&EndingTime);

    ElapsedTime.QuadPart = EndingTime.QuadPart - StartingTime.QuadPart;
    ElapsedSeconds = ElapsedTime.QuadPart * TickInterval;
    cout << " mersenne_twister_engine time: " << ElapsedSeconds << endl;

    QueryPerformanceCounter(&StartingTime);
    test_swc(a_dist, b_dist, samples);
    QueryPerformanceCounter(&EndingTime);

    ElapsedTime.QuadPart = EndingTime.QuadPart - StartingTime.QuadPart;
    ElapsedSeconds = ElapsedTime.QuadPart * TickInterval;
    cout << "subtract_with_carry_engine time: " << ElapsedSeconds << endl;
}
Output:
Testing with 100000000 samples.
linear_congruential_engine time: 10.0821
mersenne_twister_engine time: 6.11615
subtract_with_carry_engine time: 9.26676
I just saw this answer from Marnos and decided to test it myself. I used std::chrono::high_resolution_clock to time 100000 samples 100 times to produce an average. I measured everything in std::chrono::nanoseconds and ended up with different results:
std::minstd_rand had an average of 28991658 nanoseconds
std::mt19937 had an average of 29871710 nanoseconds
ranlux48_base had an average of 29281677 nanoseconds
This is on a Windows 7 machine. Compiler is Mingw-Builds 4.8.1 64bit. This is obviously using the C++11 flag and no optimisation flags.
When I turn on -O3 optimisations, std::minstd_rand and ranlux48_base actually run faster than what the implementation of high_resolution_clock can measure; however, std::mt19937 still takes 730045 nanoseconds, or about 3/4 of a millisecond.
So, as he said, it's implementation specific, but at least in GCC the average time seems to stick to what the descriptions in the accepted answer say. Mersenne Twister seems to benefit the least from optimizations, whereas the other two really just throw out the random numbers unbelievably fast once you factor in compiler optimizations.
As an aside, I'd been using Mersenne Twister engine in my noise generation library (it doesn't precompute gradients), so I think I'll switch to one of the others to really see some speed improvements. In my case, the "true" randomness doesn't matter.
Code:
#include <iostream>
#include <iomanip>
#include <chrono>
#include <random>

using namespace std;
using namespace std::chrono;

int main()
{
    minstd_rand linearCongruentialEngine;
    mt19937 mersenneTwister;
    ranlux48_base subtractWithCarry;
    uniform_real_distribution<float> distro;

    int numSamples = 100000;
    int repeats = 100;

    long long int avgL = 0;
    long long int avgM = 0;
    long long int avgS = 0;

    cout << "results:" << endl;

    for (int j = 0; j < repeats; ++j)
    {
        cout << "start of sequence: " << j << endl;

        auto start = high_resolution_clock::now();
        for (int i = 0; i < numSamples; ++i)
            distro(linearCongruentialEngine);
        auto stop = high_resolution_clock::now();
        auto L = duration_cast<nanoseconds>(stop - start).count();
        avgL += L;
        cout << "Linear Congruential:\t" << L << endl;

        start = high_resolution_clock::now();
        for (int i = 0; i < numSamples; ++i)
            distro(mersenneTwister);
        stop = high_resolution_clock::now();
        auto M = duration_cast<nanoseconds>(stop - start).count();
        avgM += M;
        cout << "Mersenne Twister:\t" << M << endl;

        start = high_resolution_clock::now();
        for (int i = 0; i < numSamples; ++i)
            distro(subtractWithCarry);
        stop = high_resolution_clock::now();
        auto S = duration_cast<nanoseconds>(stop - start).count();
        avgS += S;
        cout << "Subtract With Carry:\t" << S << endl;
    }

    cout << setprecision(10) << "\naverage:\nLinear Congruential: " << (long double)(avgL / repeats)
         << "\nMersenne Twister: " << (long double)(avgM / repeats)
         << "\nSubtract with Carry: " << (long double)(avgS / repeats) << endl;
}
It's a trade-off, really. A PRNG like the Mersenne Twister is better because it has an extremely large period and other good statistical properties.
But a large-period PRNG takes up more memory (for maintaining the internal state) and also takes more time to generate a random number (due to complex transitions and post-processing).
Choose a PRNG depending on the needs of your application. When in doubt, use the Mersenne Twister; it's the default in many tools.
In general, mersenne twister is the best (and fastest) RNG, but it requires some space (about 2.5 kilobytes). Which one suits your need depends on how many times you need to instantiate the generator object. (If you need to instantiate it only once, or a few times, then MT is the one to use. If you need to instantiate it millions of times, then perhaps something smaller.)
Some people report that MT is slower than some of the others. According to my experiments, this depends a lot on your compiler optimization settings. Most importantly the -march=native setting may make a huge difference, depending on your host architecture.
I ran a small program to test the speed of different generators, and their sizes, and got this:
std::mt19937 (2504 bytes): 1.4714 s
std::mt19937_64 (2504 bytes): 1.50923 s
std::ranlux24 (120 bytes): 16.4865 s
std::ranlux48 (120 bytes): 57.7741 s
std::minstd_rand (4 bytes): 1.04819 s
std::minstd_rand0 (4 bytes): 1.33398 s
std::knuth_b (1032 bytes): 1.42746 s
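For reference, a sketch of how numbers like these can be gathered: the sizes come straight from sizeof on the engine type and the timings from <chrono>. The absolute figures depend heavily on the compiler and on flags such as -O3 or -march=native, as noted above:

#include <chrono>
#include <iostream>
#include <random>

template <class Engine>
void measure(const char* name) {
    Engine eng(12345);
    unsigned long long sink = 0;

    auto start = std::chrono::steady_clock::now();
    for (long i = 0; i < 100000000L; ++i)
        sink ^= eng();
    auto stop = std::chrono::steady_clock::now();

    std::chrono::duration<double> s = stop - start;
    std::cout << name << " (" << sizeof(Engine) << " bytes): "
              << s.count() << " s  [ignore: " << sink << "]\n";
}

int main() {
    measure<std::mt19937>("std::mt19937");
    measure<std::mt19937_64>("std::mt19937_64");
    measure<std::minstd_rand>("std::minstd_rand");
    measure<std::knuth_b>("std::knuth_b");
}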
I need a 'good' way to initialize the pseudo-random number generator in C++. I've found an article that states:
In order to generate random-like numbers, srand is usually initialized to some distinctive value, like those related with the execution time. For example, the value returned by the function time (declared in header ctime) is different each second, which is distinctive enough for most randoming needs.
Unixtime isn't distinctive enough for my application. What's a better way to initialize this? Bonus points if it's portable, but the code will primarily be running on Linux hosts.
I was thinking of doing some pid/unixtime math to get an int, or possibly reading data from /dev/urandom.
Thanks!
EDIT
Yes, I am actually starting my application multiple times a second and I've run into collisions.
This is what I've used for small command line programs that can be run frequently (multiple times a second):
unsigned long seed = mix(clock(), time(NULL), getpid());
Where mix is:
// Robert Jenkins' 96 bit Mix Function
unsigned long mix(unsigned long a, unsigned long b, unsigned long c)
{
    a=a-b;  a=a-c;  a=a^(c >> 13);
    b=b-c;  b=b-a;  b=b^(a << 8);
    c=c-a;  c=c-b;  c=c^(b >> 13);
    a=a-b;  a=a-c;  a=a^(c >> 12);
    b=b-c;  b=b-a;  b=b^(a << 16);
    c=c-a;  c=c-b;  c=c^(b >> 5);
    a=a-b;  a=a-c;  a=a^(c >> 3);
    b=b-c;  b=b-a;  b=b^(a << 10);
    c=c-a;  c=c-b;  c=c^(b >> 15);
    return c;
}
The best answer is to use <random>. If you are using a pre C++11 version, you can look at the Boost random number stuff.
But if we are talking about rand() and srand()
The best simplest way is just to use time():
int main()
{
    srand(time(nullptr));
    ...
}
Be sure to do this at the beginning of your program, and not every time you call rand()!
Side Note:
NOTE: There is a discussion in the comments below about this being insecure (which is true, but ultimately not relevant (read on)). So an alternative is to seed from the random device /dev/random (or some other secure real(er) random number generator). BUT: Don't let this lull you into a false sense of security. This is rand() we are using. Even if you seed it with a brilliantly generated seed it is still predictable (if you have any value you can predict the full sequence of next values). This is only useful for generating "pseudo" random values.
If you want "secure" you should probably be using <random> (Though I would do some more reading on a security informed site). See the answer below as a starting point: https://stackoverflow.com/a/29190957/14065 for a better answer.
Secondary note: Using the random device actually solves the issues with starting multiple copies per second better than my original suggestion below (just not the security issue).
Back to the original story:
Every time you start up, time() will return a unique value (unless you start the application multiple times a second). In 32 bit systems, it will only repeat every 60 years or so.
I know you don't think time is unique enough but I find that hard to believe. But I have been known to be wrong.
If you are starting a lot of copies of your application simultaneously you could use a timer with a finer resolution. But then you run the risk of a shorter time period before the value repeats.
OK, so if you really think you are starting multiple applications a second, then use a finer grain on the timer.
#include <cstdlib>
#include <sys/time.h>

int main()
{
    struct timeval time;
    gettimeofday(&time, NULL);

    // microsecond has 1 000 000
    // Assuming you did not need quite that accuracy
    // Also do not assume the system clock has that accuracy.
    srand((time.tv_sec * 1000) + (time.tv_usec / 1000));

    // The trouble here is that the seed will repeat every
    // 24 days or so.
    // If you use 100 (rather than 1000) the seed repeats every 248 days.

    // Do not make the MISTAKE of using just the tv_usec
    // This will mean your seed repeats every second.
}
If you need a better random number generator, don't use the libc rand. Instead just use something like /dev/random or /dev/urandom directly (read in an int directly from it or something like that).
The only real benefit of the libc rand is that given a seed, it is predictable which helps with debugging.
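A sketch of the "read in an int directly from it" approach mentioned above (Linux/Unix only; error handling kept minimal):

#include <cstdio>
#include <cstdlib>

int main() {
    unsigned int seed = 0;
    std::FILE* f = std::fopen("/dev/urandom", "rb");
    if (f) {
        if (std::fread(&seed, sizeof(seed), 1, f) != 1)
            seed = 0;                              // fall back if the read fails
        std::fclose(f);
    }
    srand(seed);
    std::printf("seed = %u, first value = %d\n", seed, rand());
}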
On windows:
srand(GetTickCount());
provides a better seed than time() since it's in milliseconds.
C++11 random_device
If you need reasonable quality then you should not be using rand() in the first place; you should use the <random> library. It provides lots of great functionality like a variety of engines for different quality/size/performance trade-offs, re-entrancy, and pre-defined distributions so you don't end up getting them wrong. It may even provide easy access to non-deterministic random data, (e.g., /dev/random), depending on your implementation.
#include <random>
#include <iostream>

int main() {
    std::random_device r;
    std::seed_seq seed{r(), r(), r(), r(), r(), r(), r(), r()};
    std::mt19937 eng(seed);
    std::uniform_int_distribution<> dist{1, 100};

    for (int i = 0; i < 50; ++i)
        std::cout << dist(eng) << '\n';
}
eng is a source of randomness, here a built-in implementation of the Mersenne Twister. We seed it using random_device, which in any decent implementation will be a non-deterministic RNG, and seed_seq to combine more than 32 bits of random data. For example, in libc++ random_device accesses /dev/urandom by default (though you can give it another file to access instead).
Next we create a distribution such that, given a source of randomness, repeated calls to the distribution will produce a uniform distribution of ints from 1 to 100. Then we proceed to using the distribution repeatedly and printing the results.
The best way is to use another pseudorandom number generator.
The Mersenne Twister (and Wichmann-Hill) is my recommendation.
http://en.wikipedia.org/wiki/Mersenne_twister
I suggest you look at the unix_random.c file in the Mozilla code (I guess it is in mozilla/security/freebl/...); it should be in the freebl library.
There it uses system call info (like pwd, netstat, ...) to generate noise for the random numbers; it is written to support most of the platforms (which can gain me bonus points :D).
The real question you must ask yourself is what randomness quality you need.
libc random is an LCG
The quality of randomness will be low whatever input you provide srand with.
If you simply need to make sure that different instances will have different initializations, you can mix process id (getpid), thread id and a timer. Mix the results with xor. Entropy should be sufficient for most applications.
Example :
struct timeb tp;
ftime(&tp);

srand(static_cast<unsigned int>(getpid()) ^
      static_cast<unsigned int>(pthread_self()) ^
      static_cast<unsigned int>(tp.millitm));
For better random quality, use /dev/urandom. You can make the above code portable in using boost::thread and boost::date_time.
The C++11 version of the top-voted post by Jonathan Wright:
#include <ctime>
#include <random>
#include <thread>
...
const auto time_seed = static_cast<size_t>(std::time(0));
const auto clock_seed = static_cast<size_t>(std::clock());
const size_t pid_seed =
std::hash<std::thread::id>()(std::this_thread::get_id());
std::seed_seq seed_value { time_seed, clock_seed, pid_seed };
...
// E.g seeding an engine with the above seed.
std::mt19937 gen;
gen.seed(seed_value);
#include <stdio.h>
#include <sys/time.h>

int main()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    printf("%d\n", tv.tv_usec);
    return 0;
}
tv.tv_usec is in microseconds. This should be acceptable seed.
As long as your program is only running on Linux (and your program is an ELF executable), you are guaranteed that the kernel provides your process with a unique random seed in the ELF aux vector. The kernel gives you 16 random bytes, different for each process, which you can get with getauxval(AT_RANDOM). To use these for srand, use just an int of them, as such:
#include <stdlib.h>
#include <sys/auxv.h>

void initrand(void)
{
    unsigned int *seed;

    seed = (unsigned int *)getauxval(AT_RANDOM);
    srand(*seed);
}
It may be possible that this also translates to other ELF-based systems. I'm not sure what aux values are implemented on systems other than Linux.
Suppose you have a function with a signature like:
int foo(char *p);
An excellent source of entropy for a random seed is a hash of the following:
Full result of clock_gettime (seconds and nanoseconds) without throwing away the low bits - they're the most valuable.
The value of p, cast to uintptr_t.
The address of p, cast to uintptr_t.
At least the third, and possibly also the second, derive entropy from the system's ASLR, if available (the initial stack address, and thus current stack address, is somewhat random).
I would also avoid using rand/srand entirely, both for the sake of not touching global state, and so you can have more control over the PRNG that's used. But the above procedure is a good (and fairly portable) way to get some decent entropy without a lot of work, regardless of what PRNG you use.
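A sketch of combining those three sources into one seed (POSIX clock_gettime; the multiply/xor constants are arbitrary mixing constants chosen purely for illustration, not anything standardized):

#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <time.h>   // clock_gettime (POSIX)

// Combine clock_gettime, the value of p, and the address of p into one seed.
static unsigned int seed_from_context(char *p)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);                       // keep the low bits; they vary fastest

    std::uint64_t h = (std::uint64_t)ts.tv_sec * 1000000000ull + (std::uint64_t)ts.tv_nsec;
    h ^= (std::uintptr_t)p  * 0x9E3779B97F4A7C15ull;          // value of p: points into ASLR'd memory
    h ^= (std::uintptr_t)&p * 0xC2B2AE3D27D4EB4Full;          // address of p: current stack, also ASLR'd
    return (unsigned int)(h ^ (h >> 32));
}

int foo(char *p)
{
    srand(seed_from_context(p));
    return rand();
}

int main()
{
    char buf[] = "example";
    std::printf("%d\n", foo(buf));
}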
For those using Visual Studio here's yet another way:
#include "stdafx.h"
#include <time.h>
#include <windows.h>
const __int64 DELTA_EPOCH_IN_MICROSECS= 11644473600000000;
struct timezone2
{
__int32 tz_minuteswest; /* minutes W of Greenwich */
bool tz_dsttime; /* type of dst correction */
};
struct timeval2 {
__int32 tv_sec; /* seconds */
__int32 tv_usec; /* microseconds */
};
int gettimeofday(struct timeval2 *tv/*in*/, struct timezone2 *tz/*in*/)
{
FILETIME ft;
__int64 tmpres = 0;
TIME_ZONE_INFORMATION tz_winapi;
int rez = 0;
ZeroMemory(&ft, sizeof(ft));
ZeroMemory(&tz_winapi, sizeof(tz_winapi));
GetSystemTimeAsFileTime(&ft);
tmpres = ft.dwHighDateTime;
tmpres <<= 32;
tmpres |= ft.dwLowDateTime;
/*converting file time to unix epoch*/
tmpres /= 10; /*convert into microseconds*/
tmpres -= DELTA_EPOCH_IN_MICROSECS;
tv->tv_sec = (__int32)(tmpres * 0.000001);
tv->tv_usec = (tmpres % 1000000);
//_tzset(),don't work properly, so we use GetTimeZoneInformation
rez = GetTimeZoneInformation(&tz_winapi);
tz->tz_dsttime = (rez == 2) ? true : false;
tz->tz_minuteswest = tz_winapi.Bias + ((rez == 2) ? tz_winapi.DaylightBias : 0);
return 0;
}
int main(int argc, char** argv) {
struct timeval2 tv;
struct timezone2 tz;
ZeroMemory(&tv, sizeof(tv));
ZeroMemory(&tz, sizeof(tz));
gettimeofday(&tv, &tz);
unsigned long seed = tv.tv_sec ^ (tv.tv_usec << 12);
srand(seed);
}
Maybe a bit overkill but works well for quick intervals. gettimeofday function found here.
Edit: Upon further investigation, rand_s might be a good alternative for Visual Studio. It's not just a safe rand(); it's totally different and doesn't use the seed from srand. I had presumed it was almost identical to rand, just "safer".
To use rand_s just don't forget to #define _CRT_RAND_S before stdlib.h is included.
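A minimal MSVC-specific sketch of that:

// _CRT_RAND_S must be defined before <stdlib.h> is pulled in.
#define _CRT_RAND_S
#include <stdlib.h>
#include <iostream>

int main() {
    unsigned int value = 0;
    if (rand_s(&value) == 0)          // returns 0 on success
        std::cout << value << '\n';
}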
Assuming that the randomness of srand() + rand() is enough for your purposes, the trick is in selecting the best seed for srand. time(NULL) is a good starting point, but you'll run into problems if you start more than one instance of the program within the same second. Adding the pid (process id) is an improvement as different instances will get different pids. I would multiply the pid by a factor to spread them more.
But let's say you are using this for some embedded device and you have several in the same network. If they are all powered at once and you are launching the several instances of your program automatically at boot time, they may still get the same time and pid and all the devices will generate the same sequence of "random" numbers. In that case, you may want to add some unique identifier of each device (like the CPU serial number).
The proposed initialization would then be:
srand(time(NULL) + 1000 * getpid() + (uint) getCpuSerialNumber());
In a Linux machine (at least in the Raspberry Pi where I tested this), you can implement the following function to get the CPU Serial Number:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Gets the CPU Serial Number as a 64 bit unsigned int. Returns 0 if not found.
uint64_t getCpuSerialNumber() {

    FILE *f = fopen("/proc/cpuinfo", "r");
    if (!f) {
        return 0;
    }

    char line[256];
    uint64_t serial = 0;
    while (fgets(line, 256, f)) {
        if (strncmp(line, "Serial", 6) == 0) {
            serial = strtoull(strchr(line, ':') + 2, NULL, 16);
        }
    }

    fclose(f);
    return serial;
}
Include the <cstdlib> and <ctime> headers at the top of your program, and write:
srand(time(NULL));
in your program before you generate your random number. Here is an example of a program that prints a random number between one and ten:
#include <iostream>
#include <iomanip>
#include <cstdlib>   // srand, rand
#include <ctime>     // time

using namespace std;

int main()
{
    // Initialize srand
    srand(time(NULL));

    // Create random number
    int n = rand() % 10 + 1;

    // Print the number
    cout << n << endl; // End the line

    // The main function is an int, so it must return a value
    return 0;
}