I have been using random_device rd{} to generate seeds for my Mersenne Twister pseudo-random number generator mt19937 RNG{rd()}, as has been suggested here. However, the documentation (a comment in the documentation's example code) says that "the performance of many implementations of random_device degrades sharply once the entropy pool is exhausted. For practical use random_device is generally only used to seed a PRNG such as mt19937". I have tried testing how big this "entropy pool" is, and over 10^6 calls, random_device returned more than 10^2 repeated values (see my example code and output below). In other words, if I use random_device to seed my Mersenne Twister PRNG, it will generate a noticeable fraction of repeated seeds.
Question: do people still use random_device in C++ to generate seeds for PRNG or are there already better alternatives?
My code:
#include <algorithm> // sort
#include <chrono>
#include <cstdio>    // printf
#include <iostream>
#include <random>
#include <vector>
using namespace std;

int main(){
    auto begin = std::chrono::high_resolution_clock::now();
    random_device rd{};
    mt19937 RNG{ rd() };
    int total_n_of_calls = 1000000;
    vector<unsigned int> seeds; // random_device::operator() returns unsigned int
    seeds.reserve(total_n_of_calls);
    for(int call = 0; call < total_n_of_calls; call++){
        seeds.push_back(rd());
    }
    int count_repeats = 0;
    sort(seeds.begin(), seeds.end());
    for(size_t i = 0; i + 1 < seeds.size(); i++) {
        if (seeds[i] == seeds[i + 1]) {
            count_repeats++;
        }
    }
    printf("Number of times random_device has been called: %i\n", total_n_of_calls);
    printf("Number of repeats: %i\n", count_repeats);
    auto end = std::chrono::high_resolution_clock::now();
    auto elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin);
    printf("Duration: %.3f seconds.\n", elapsed.count() * 1e-9);
    return 0;
}
The output:
Number of times random_device has been called: 1000000
Number of repeats: 111
Duration: 0.594 seconds.
TL;DR: No, there's nothing better. You just need to stop abusing it.
The point of random_device is that it asks your platform for bits that are actually random, not just pseudorandom from some deterministic seed.
If the platform/OS thinks the entropy it had was expended, it cannot offer you this guarantee. In practice, though, it uses true sources of randomness, from dedicated randomness hardware in your CPU to the timing of disk accesses, to modify the internal state of a PRNG. That's all there is to it: to an external observer, the bits you get are still unpredictable.
So, the answer is this:
you use random_device because you need actually random seeds. There is no algorithmic shortcut to randomness: the word "algorithm" already says it's deterministic. And software, universally, is deterministic unless it gets random data from outside. So all you can do is ask the operating system, which deals with every source of randomness there is in your system. And that is exactly what random_device already does.
So, no, you cannot use anything other than actual external entropy, which is exactly what you get most efficiently from random_device (unless you buy an expensive dedicated random generator card and write a driver for it).
As the OS uses the random external sources to change the internal state of a PRNG, it can securely produce more random output than random events happen; but it needs to keep track of how many bits have been taken out of the PRNG, so that it never becomes possible for an attacker to reconstruct prior state with a high probability of success. Thus, it slows down your consumption of randomness when there is not enough external randomness to modify the internal state.
Thus, 10⁶ calls to generate seeds in a short time sounds like you're doing something wrong; doubly so if they are used to feed a Mersenne Twister, an algorithm that is fairly complex and slow, yet not cryptographically secure. You are never using that much actual randomness! Don't reseed; keep using your seeded PRNG, unless you need a cryptographic guarantee that the seeds are independent.
And that's exactly the thing: if you're in a situation where you need to generate 10⁶ independent cryptographically secure keys in less than a few seconds, you're a bit special. Are you working for someone who does CDNs, where a single operating system would serve millions of new TLS connections per second? If not, reduce your usage of random_device to what it's actually useful for.
If you want to understand more about the way true randomness ends up in your program, I recommend reading this answer. In short, if you actually need more random bytes per second than the default random_device offers, try constructing it with "/dev/urandom" as a ctor parameter. It's still going to be secure, for any reasonable definition of what that means in the context in which you're asking this (I assume you're not writing a cryptographic library for extremely high-throughput key generation).
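For illustration, a minimal sketch of that pattern. Note the constructor token is implementation-defined: libstdc++ and libc++ on Linux accept "/dev/urandom", while other platforms may ignore or reject it.

#include <iostream>
#include <random>
#include <string>

int main() {
    // Implementation-defined token: accepted by libstdc++/libc++ on Linux,
    // possibly ignored or rejected elsewhere.
    std::random_device rd{"/dev/urandom"};
    std::mt19937 rng{rd()};  // seed once, then keep using the PRNG
    std::cout << rng() << '\n';
}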
Related
I'm using std::mt19937 to produce deterministic random numbers. I'd like to pass it to functions so I can control their source of randomness. I could do int foo(std::mt19937& rng);, but I want to call foo and bar in parallel, so that won't work. Even if I put the generation function behind a mutex (so each call to operator() did std::lock_guard lock(mutex); return rng();), calling foo and bar in parallel wouldn't be deterministic due to the race on the mutex.
I feel like conceptually I should be able to do this:
auto fooRNG = std::mt19937(rng()); // Seed a RNG with the output of `rng`.
auto barRNG = std::mt19937(rng());
parallel_invoke([&] { fooResult = foo(fooRNG); },
                [&] { barResult = bar(barRNG); });
where I "fork" rng into two new ones with different seeds. Since fooRNG and barRNG are seeded deterministically, they should be random and independent.
Is this general gist viable?
Is this particular implementation sufficient (I doubt it)?
Extended question: Suppose I want to call baz(int n, std::mt19937&) massively in parallel over a range of indexed values, something like
auto seed = rng();
parallel_for(range(0, 1 << 20),
             [&](int i) {
                 auto thisRNG = std::mt19937(seed ^ i); // Deterministically set up RNGs in parallel?
                 baz(i, thisRNG);
             });
something like that should work, right? That is, provided we give it enough bits of state?
Update:
Looking into std::seed_seq, it looks(?) like it's designed to turn not-so-random seeds into high-quality seeds: How to properly initialize a C++11 std::seed_seq
So maybe what I want is something like
std::mt19937 fork(std::mt19937& rng) {
    std::seed_seq seq{rng()};  // seed_seq must be an lvalue: the engine ctor takes Sseq&
    return std::mt19937(seq);
}
or more generally:
//! Represents a state that can be used to generate multiple
//! distinct deterministic child std::mt19937 instances.
class rng_fork {
    std::mt19937::result_type m_seed;
public:
    rng_fork(std::mt19937& rng) : m_seed(rng()) {}

    // Copy is explicit b/c I think it's a correctness footgun:
    explicit rng_fork(const rng_fork&) = default;

    //! Make the ith fork: a deterministic but well-seeded
    //! RNG based off the internal seed and the given index:
    std::mt19937 ith_fork(std::mt19937::result_type index) const {
        std::seed_seq seq{m_seed, index};  // lvalue seed_seq, as above
        return std::mt19937(seq);
    }
};
then the initial examples would become
auto fooRNG = fork(rng);
auto barRNG = fork(rng);
parallel_invoke([&] { fooResult = foo(fooRNG); },
                [&] { barResult = bar(barRNG); });
and
auto fork_point = rng_fork{rng};
parallel_for(range(0, 1 << 20),
             [&](int i) {
                 auto thisRNG = fork_point.ith_fork(i); // Deterministically set up a RNG in parallel.
                 baz(i, thisRNG);
             });
Is that correct usage of std::seed_seq?
I am aware of 3 ways to seed multiple parallel pseudo random number generators (PRNGs):
First option
Given a seed, initialize the first instance of the PRNG with seed, the second with seed+1, etc. The thing to be aware of here is that the states of the PRNGs will initially be very close if the seed is not hashed. Some PRNGs will take a long time to diverge. See e.g. this blog post for more information.
For std::mt19937 specifically, however, this was never an issue in my tests because the initial seed is not taken as is but instead gets "mangled/hashed" (compare the documentation of the result_type constructor). So it seems to be a viable option in practice.
However, notice that there are some potential pitfalls when seeding a Mersenne Twister (which has an internal state of 624 32-bit integers) with a single 32 bit integer. For example, the first number can never be 7 or 13. See this blog post for more information. But if you do not rely on the randomness of only the first few drawn numbers but draw a more reasonable number of numbers from each PRNG, it is probably fine.
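A minimal sketch of this option, relying on the seed mangling mentioned above (make_engines is a hypothetical helper name):

#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// Seed the i-th engine with base_seed + i. std::mt19937 mangles a
// single-value seed internally, so even adjacent seeds produce
// quickly diverging streams.
std::vector<std::mt19937> make_engines(std::uint32_t base_seed, std::size_t n) {
    std::vector<std::mt19937> engines;
    engines.reserve(n);
    for (std::size_t i = 0; i < n; ++i)
        engines.emplace_back(base_seed + static_cast<std::uint32_t>(i));
    return engines;
}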
Second option
Without std::seed_seq:
Seed one "parent" PRNG. Then, to initialize N parallel PRNGs, draw N random numbers and use them as seeds. This is your initial idea where you draw 2 random numbers rng() and initialize the two std::mt19937:
std::mt19937 & rng = ...;
auto fooRNG = std::mt19937(rng()); // Seed a RNG with the output of `rng`.
auto barRNG = std::mt19937(rng());
The major issue to look out for here is the birthday problem. It essentially states that drawing the same number twice is far more likely than you'd intuitively expect. Given a type of PRNG that has a value range of b (i.e. b different values can appear in its output), the probability p(t) of drawing the same number twice when drawing t numbers can be estimated as:
p(t) ~ t^2 / (2b) for t^2 << b
(compare this post). If we stretch the estimate "a bit", just to show the basic issue:
For a PRNG producing a 16 bit integer, we have b=2^16. Drawing 256 numbers results in a 50% chance of drawing the same number twice according to that formula. For a 32 bit PRNG (such as std::mt19937) we need to draw 65536 numbers, and for a 64 bit integer PRNG we need to draw ~4e9 numbers to reach 50%. Of course, this is all an estimate, so you want to draw several orders of magnitude fewer numbers than that. Also see this blog post for more information.
In case of seeding the parallel std::mt19937 instances with this method (32 bit output and input!), that means you probably do not want to draw more than a hundred or so random numbers. Otherwise, you have a realistic chance of drawing the same seed twice. Of course, you could ensure that you never draw the same seed twice by keeping a list of already used seeds (see the sketch below). Or use std::mt19937_64.
Additionally, there are still the potential pitfalls mentioned above regarding the seeding of a Mersenne Twister with 32 bit numbers.
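A sketch of the deduplication idea mentioned above (fork_unique is a hypothetical helper name):

#include <cstddef>
#include <cstdint>
#include <random>
#include <unordered_set>
#include <vector>

// Draw child seeds from a parent engine but skip duplicates, so the
// birthday problem cannot yield two identical child generators.
std::vector<std::mt19937> fork_unique(std::mt19937& parent, std::size_t n) {
    std::unordered_set<std::uint32_t> used;
    std::vector<std::mt19937> children;
    children.reserve(n);
    while (children.size() < n) {
        auto seed = static_cast<std::uint32_t>(parent());
        if (used.insert(seed).second)  // .second is true only for a fresh seed
            children.emplace_back(seed);
    }
    return children;
}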
With seed sequence:
The idea of std::seed_seq is to take some numbers, "mix them" and then provide them as input to the PRNG so that it can initialize its state. Since the 32 bit Mersenne Twister has a state of 624 32-bit integers, you should provide that many numbers to the seed sequence for theoretically optimal results. That way you get b=2^(624*32), meaning that you avoid the birthday problem for all practical purposes.
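A sketch of such full-width seeding, assuming std::random_device is a usable entropy source on your platform (the helper name is mine):

#include <algorithm>
#include <array>
#include <cstdint>
#include <functional>
#include <random>

// Feed the seed_seq as many 32-bit words as mt19937 has state words
// (624), drawn here from std::random_device, so that b = 2^(624*32)
// and the birthday problem disappears for all practical purposes.
std::mt19937 make_fully_seeded_mt19937() {
    std::array<std::uint32_t, 624> entropy;
    std::random_device rd;
    std::generate(entropy.begin(), entropy.end(), std::ref(rd));
    std::seed_seq seq(entropy.begin(), entropy.end());
    return std::mt19937(seq);
}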
But in your example
std::mt19937 fork(std::mt19937& rng) {
    std::seed_seq seq{rng()};
    return std::mt19937(seq);
}
you provide only a single 32 bit integer. This effectively means that you hash that 32 bit number before putting it into std::mt19937. So you do not gain anything regarding the birthday problem. And the additional hashing is unnecessary because std::mt19937 already does something like this.
std::seed_seq itself is somewhat flawed, see this blog post. But I guess for practical purposes it does not really matter. A supposedly better alternative exists, but I have no experience with it.
Third option
Some PRNG algorithms such as PCG or xoshiro256++ allow jumping over a large number of random numbers quickly. For example, xoshiro256++ has a period of (2^256)-1 before it repeats itself, and it allows jumping ahead by 2^128 (or alternatively 2^192) numbers. So the idea would be that the first PRNG is seeded, then you create a copy of it and jump ahead by 2^128 numbers, then create a copy of that second one and jump ahead again by 2^128, etc. Each instance then works in a slice of length 2^128 from the total range of 2^256. The slices are stochastically independent. This elegantly bypasses the problems of the above methods.
The standard PRNGs do have a discard(z) method to jump z values ahead. However, it is not guaranteed that the jumping will be fast. I don't know whether std::mt19937 implements fast jumping in all standard library implementations. (As far as I know, the Mersenne Twister algorithm itself does allow this in principle.)
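A sketch of the slicing idea using the standard discard() interface; keep in mind the caveat above that discard(z) may cost time proportional to z for std::mt19937 (make_slices is a hypothetical helper name):

#include <cstdint>
#include <random>
#include <vector>

// Carve one engine's sequence into disjoint slices by copying its
// state and then jumping ahead with discard(). Fast O(1)-style jumps
// are a feature of generators like PCG or xoshiro256++, not mt19937.
std::vector<std::mt19937> make_slices(std::uint32_t seed, int n,
                                      unsigned long long slice_length) {
    std::mt19937 engine(seed);
    std::vector<std::mt19937> slices;
    slices.reserve(n);
    for (int i = 0; i < n; ++i) {
        slices.push_back(engine);      // copy the current state
        engine.discard(slice_length);  // advance past this slice
    }
    return slices;
}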
Additional note
I found PRNGs to be surprisingly difficult to use "right". How careful you need to be and which method to choose really depend on the use case. Think about the worst thing that could happen in your case if something goes wrong, and invest a corresponding amount of time in researching the topic.
For ordinary scientific simulations where you require only a few dozens or so parallel instances of std::mt19937, I'd guess that the first and second option (without seed sequence) are both viable. But if you need several hundreds or even more, you should think more carefully.
#include <algorithm> // shuffle
#include <random>
#include <vector>
using namespace std;

int main()
{
    vector<int> coll{1, 2, 3, 4};

    shuffle(coll.begin(), coll.end(), random_device{});

    default_random_engine dre{random_device{}()};
    shuffle(coll.begin(), coll.end(), dre);
}
Question 1: What's the difference between
shuffle(coll.begin(), coll.end(), random_device{});
and
shuffle(coll.begin(), coll.end(), dre);?
Question 2: Which is better?
Question 1: What's the difference between...
std::random_device conceptually produces true random numbers. Some implementations will stall if you exhaust the system's source of entropy, so this version may not perform as well.
std::default_random_engine is a pseudo-random engine. Once seeded with a random number, it would be extremely difficult (but not impossible) to predict the next number.
There is another subtle difference. std::random_device::operator() will throw an exception if it fails to come up with a random number.
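A defensive caller might therefore look like this (a minimal sketch):

#include <iostream>
#include <random>

// std::random_device::operator() may throw an implementation-defined
// exception derived from std::exception if it cannot produce a value.
int main() {
    try {
        std::random_device rd;
        std::cout << rd() << '\n';
    } catch (const std::exception& e) {
        std::cerr << "random_device failed: " << e.what() << '\n';
    }
}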
Question 2: Which is better?
It depends. For most cases, you probably want the performance and temporal-determinism of the pseudorandom engine seeded with a random number, so that would be the second option.
Both random_device and default_random_engine are implementation-defined. random_device should provide a nondeterministic source of randomness if available, but it may also be a PRNG in some implementations. Use random_device if you want unpredictable random numbers (most machines nowadays have hardware entropy sources). If you want pseudo-random numbers, you'd probably use one of the specific algorithms, like the mersenne_twister_engine.
I guess default_random_engine is what you'd use if you don't care about the particulars of how you get your random numbers. I'd suspect it'd just use rand under the hood or a linear_congruential_engine most of the time.
I don't think the question "which is better" can be answered objectively. It depends on what you're trying to do with your random numbers. If they're supposed to be random sources for some cryptographic process, I suspect default_random_engine is a terrible choice, although I'm not a security expert, so I'm not sure if even random_device is good enough.
I'm trying to find a random number generator that will give me a single random number each time I run it. I have spent a week trying dozens of different ones, both from this site and others. Every time I run it, it gives me the same number! The only time it changes is if I change the range, and then it just gives me the new number over and over.
I am running Code::Blocks ver. 16.01 on Windows 7. Can anyone help?? I'm at my wits' end!
This code gives me a decently random string of numbers, but still the same string each time!
#include <iostream>
#include <random>

int main()
{
    std::random_device rd;
    std::mt19937 eng(rd());
    std::uniform_int_distribution<> distr(0, 10);

    for(int n = 0; n < 100; ++n)
        std::cout << distr(eng) << '\t';
}
I have tried the code on my compiler app on my phone as well.
Every pseudo random number generator will return the same sequence of numbers for the same initial seed value.
What you want to do is to use a different seed every time you run the program. Otherwise you'll just be using the same default seed every time and get the same values.
Picking good seeds is not as easy as you might think. Using the output of time(nullptr), for example, still gives the same results if two copies of the program run within the same second. Using the value of getpid() is also bad, since pid values wrap and thus you will sometimes get the same value for different runs. Luckily there are other options. std::seed_seq lets you combine multiple bad sources and returns a good (or rather, pretty good) seed value you can use. There is also std::random_device, which (on all sane implementations) returns raw entropy: perfect for seeding a pseudo-random generator (or you can use it directly if it is fast enough for your purpose). If you are worried that it might be implemented as a PRNG on your implementation, you can combine it with std::seed_seq and the bad sources to seed a generator, as sketched below.
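A minimal sketch of that combination (make_seeded_engine is a hypothetical name; the clock reading is one arbitrary choice of extra "bad" source):

#include <chrono>
#include <cstdint>
#include <random>

// Combine random_device with a clock reading via std::seed_seq, as a
// hedge against implementations (e.g. old MinGW) where
// std::random_device is itself a deterministic PRNG.
std::mt19937 make_seeded_engine() {
    std::random_device rd;
    std::uint32_t entropy = rd();
    auto ticks = static_cast<std::uint32_t>(
        std::chrono::high_resolution_clock::now().time_since_epoch().count());
    std::seed_seq seq{entropy, ticks};
    return std::mt19937(seq);
}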
I would advise you to read this page for an overview of how to deal with random number generation in modern C++: http://en.cppreference.com/w/cpp/numeric/random
The standard allows std::random_device to be implemented in terms of a pseudo-random number generator if there is no real random source on the system.
You may need to find a different entropy source, such as the time, or user touch co-ordinates.
I'm working on a random walk simulation of particles moving in a lattice. For that reason I must create a massive number of random numbers, about 10^12 and above. Currently I'm using the facilities C++11 provides in <random>. When profiling my program, I see that a major amount of time is spent in <random>. The vast majority of those numbers are between 0 and 1, uniformly distributed. Here and there I also need a number from a binomial distribution, but the focus lies on the 0..1 numbers.
The question is: What can I do to reduce the CPU time needed to generate these numbers and what would the impact be on their quality?
As you can see, I tried different engines, but that had no big effect on CPU time. Further, what is the difference between my uniform01(gen) and generate_canonical<double,numeric_limits<double>::digits>(gen) anyhow?
Edit: Reading through the answers, I conclude that there is no single ideal solution for my problem. Thus I decided to first make my program multithreading-capable and run multiple RNGs in different threads (seeded with one random_device number plus a thread-individual increment). For the time being this seems to be the most unavoidable step (multithreading would be required anyhow). As a further step, depending on exact requirements, I'll consider switching to the suggested Intel RNG or to Thrust. That means my RNG implementation should not be too complex, which currently it is not. But for now I'd like to focus on the physical correctness of my model and not on programming details; those come as soon as the output of my program is physically correct.
Thrust
Concerning Intel RNG
Here is what I do currently:
#include <limits>
#include <random>

class Generator {
public:
    Generator();
    virtual ~Generator();
    double rand01();               //random number [0,1)
    int binomial(int n, double p); //binomial distribution with n samples with probability p
private:
    std::random_device randev; //seed
    /*Engines*/
    std::mt19937_64 gen;
    //std::mt19937 gen;
    //std::default_random_engine gen;
    /*Distributions*/
    std::uniform_real_distribution<double> uniform01;
    std::binomial_distribution<> binomialdist;
};

Generator::Generator() : randev(), gen(randev()), uniform01(0., 1.), binomialdist(1, 1.) {
}

Generator::~Generator() { }

double Generator::rand01() {
    //return uniform01(gen);
    return std::generate_canonical<double, std::numeric_limits<double>::digits>(gen);
}

int Generator::binomial(int n, double p) {
    binomialdist.param(std::binomial_distribution<>::param_type(n, p));
    return binomialdist(gen);
}
You can pre-generate random numbers and use them later as you need them.
If you need true random numbers, I suggest you use a service like http://www.random.org/ that derives random numbers from ambient environmental noise rather than from an algorithm.
If you need a massive amount of random numbers, and I mean MASSIVE, do a careful search on the internet for IBM's floating point random number generator, published maybe ten years ago. You'll have to buy either a PowerPC machine, or a newer Intel machine with fused multiply-add. They achieved random numbers at a rate of one per cycle per core. So if you bought a new Mac Pro, you could achieve probably 50 billion random numbers per second.
Perhaps instead of using a CPU you could use a GPU to generate many numbers concurrently?
Efficient Random Number Generation and Application Using CUDA
On my i3, the following program runs in about five seconds:
#include <cstdio>  // printf
#include <random>

std::mt19937_64 foo;

// Type-puns through a union (strictly UB in C++, but works on common
// compilers): OR 52 random mantissa bits into 1.0 to get a value in
// [1,2), then subtract 1 to map it to [0,1). The mask (1LL<<53)-1 also
// covers the lowest exponent bit, but that bit is already set in 1.0.
double drand() {
    union {
        double d;
        long long l;
    } x;
    x.d = 1.0;
    x.l |= foo() & ((1LL << 53) - 1);
    return x.d - 1;
}

int main() {
    double d = 0;
    for (int i = 0; i < 1e9; i++)
        d += drand();
    printf("%g\n", d);
}
whereas replacing the drand() call with the following results in a program that runs in about ten seconds:
double drand2() {
    return std::generate_canonical<double,
        std::numeric_limits<double>::digits>(foo);
}
Using the following instead of drand() also results in a program that runs in about ten seconds:
std::uniform_real_distribution<double> uni;

double drand3() {
    return uni(foo);
}
Perhaps the hacky drand() above suits your purposes better than the standard solutions.
Task Definition
The OP asks for an answer to both:
1. Speed of generation -- assuming a set of 10^12 random numbers to be "massive",
and
2. Quality of the generator -- with a weak assumption that numbers merely evenly distributed over some range of values are also random.
However, there are more cardinal aspects to be addressed and successfully solved for the real system:
A. Define whether your system simulation needs a guarantee of repeatability of the sequence of random numbers for future re-runs of an experiment.
If this is not the case, re-runs of the simulated experiment will yield principally different results; the randomizer process (or pre-randomizer and randomized-selector) then need not worry about re-entrant, state-full operation and gets a much simpler implementation.
B. Define to what level you need to prove the quality of randomness of the generated numbers (or whether the generated sets have to follow some specific law of statistical theory: some known synthetic distribution, or truly random sequences with an utmost Kolmogorov complexity of the resulting set). One need not be an NSA expert to state that numerical generation of truly random sequences is a very hard problem and carries computational costs associated with producing high-randomness products.
Hyper-chaotic and true-random sequences are computationally extremely expensive. Using low- or poor-randomness generators is not an option for randomness-quality-sensitive applications (whatever the marketing papers may say, no MIL-STD- or NSA-graded system will ever accept this compromised quality in environments where the results indeed matter, so why settle for less in scientific simulations? Perhaps not a problem if you do not mind missing so many "unvisited" states of the simulated phenomena).
C. Verify how many random numbers your simulation system needs to "consume per [usec]" and whether this design-requirement parameter is constant or may be scaled up by going to a multi-threaded, vectorised, Grid-/Cloud-based distributed computation framework.
D. Determine whether your simulation system requires global, per-thread, or per-Grid/CloudNode individual access management to the pool of randomized numbers in case of a vectorised or Grid/Cloud-based computational strategy.
Task Solution Approach
The fastest [1] and best [2] solution, with [A] and [B] solved and options for [D], is to pre-generate numbers of utmost randomness quality into an adequate access pool (and pay an acceptable cost for [C] and [D] in access-policy and access-management controls, re-reading from the pool rather than re-generating).
I'm currently working on a C/C++ project where I'm using a random number generator (gsl or boost). The whole idea can be simplified to a non-trivial stochastic process which receives a seed and returns results. I'm computing averages over different realisations of the process.
So the seed is important: the processes must have different seeds, or it will bias the averages.
So far, I'm using time(NULL) to give a seed. However, if two processes start within the same second, the seed is the same. That happens because I'm using parallelisation (OpenMP).
So, my question is: how to implement a "seed giver" on C/C++ which gives independent seeds?
For instance, I thought of using the thread number (thread_num): seed = time(NULL)*thread_num. However, this means that the seeds are correlated: they are multiples of each other. Does that pose any problem to the "pseudo-randomness", or is it as good as sequential seeds?
The requirements are that it must work on both Mac OS (my PC) and a Linux distribution similar to CentOS (the cluster), and naturally give independent realisations.
A commonly used scheme for this is to have a "master" RNG used to generate seeds for each process-specific RNG.
The advantage of such a scheme is that the whole computation is determined by only one seed, which you can record somewhere to be able to replay any simulation (this might be useful to debug nasty bugs).
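A minimal sketch of this scheme (make_process_rngs is a hypothetical helper; std::mt19937 is used for concreteness):

#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// One recorded master seed determines every per-process generator,
// so an entire simulation run can be replayed for debugging.
std::vector<std::mt19937> make_process_rngs(std::uint32_t master_seed,
                                            std::size_t n_processes) {
    std::mt19937 master(master_seed);
    std::vector<std::mt19937> rngs;
    rngs.reserve(n_processes);
    for (std::size_t i = 0; i < n_processes; ++i)
        rngs.emplace_back(master());  // each process-specific RNG gets its own seed
    return rngs;
}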
We ran into a similar problem on a Beowulf computing grid, the solution we used was to incorporate the pid of the process into the RNG seed, like so:
time(NULL)*thread_num*getpid()
Of course, you could just read from /dev/urandom or /dev/random into an integer.
When faced with this problem I often use seed_rng from Boost.Uuid. It uses time, clock and random data from /dev/urandom to calculate a seed. You can use it like
#include <boost/uuid/seed_rng.hpp>
#include <iostream>

int main() {
    int seed = boost::uuids::detail::seed_rng()();
    std::cout << seed << std::endl;
}
Note that seed_rng comes from a detail namespace, so it can go away without further notice. In that case writing your own implementation based on seed_rng shouldn't be too hard.
Mac OS is Unix too, so it probably has /dev/random. If so, that's the best solution for obtaining the seeds. Otherwise, if the generator is good, taking time(NULL) once, and then incrementing it for the seed of each generator, should give reasonably good results.
If you are on x86 and don't mind making the code non-portable then you could read the Time Stamp Counter (TSC) which is a 64-bit counter that increments at the CPU (max) clock rate (about 3 GHz) and use that as a seed.
#include <stdint.h>
static inline uint64_t rdtsc()
{
uint64_t tsc;
asm volatile
(
"rdtsc\n\t"
"shl\t$32,%%rdx\n\t" // rdx = TSC[ 63 : 32 ] : 0x00000000
"add\t%%rdx,%%rax\n\t" // rax = TSC[ 63 : 0 ]
: "=a" (tsc) : : "%rdx"
);
return tsc;
}
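Seeding with it could then be a one-liner (this assumes the rdtsc() helper above):

#include <random>

// Hypothetical use: seed a standard engine from the TSC value.
std::mt19937_64 rng(rdtsc());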
When comparing two infinite time sequences produced by the same pseudo-random number generator with different seeds, we can see that they are the same sequence, shifted by some offset tau. Usually this offset is much bigger than the scale of your problem, which ensures that the two random walks are uncorrelated.
If your stochastic process is in a high dimensional phase space, I think that one good suggestion could be:
seed = MAXIMUM_INTEGER/NUMBER_OF_PARALLEL_RW*thread_num + time(NULL)
Notice that using this scheme you are not guaranteed that the offset tau is big!
If you have some knowledge of your system's time scale, you can call your random number generator some number of times in order to generate seeds that are equidistant by some time interval.
Maybe you could try the std::chrono::high_resolution_clock from C++11:
Class std::chrono::high_resolution_clock represents the clock with the smallest tick period available on the system. It may be an alias of std::chrono::system_clock or std::chrono::steady_clock, or a third, independent clock.
http://en.cppreference.com/w/cpp/chrono/high_resolution_clock
But tbh I'm not sure that there is anything wrong with srand(0); srand(1); srand(2)...; my knowledge of rand is very, very basic, though. :/
For crazy safety consider this:
Note that all pseudo-random number generators described below are CopyConstructible and Assignable. Copying or assigning a generator will copy all its internal state, so the original and the copy will generate the identical sequence of random numbers.
http://www.boost.org/doc/libs/1_51_0/doc/html/boost_random/reference.html#boost_random.reference.generators
Since most of the generators have crazy long cycles, you could generate one, copy it as the first generator, generate X numbers with the original, copy it as the second, generate X numbers with the original, copy it as the third, and so on.
If your users call their own generators fewer than X times, the sequences will not overlap.
The way I understand your question, you have multiple processes using the same pseudo-random number generation algorithm, and you want each "stream" of random numbers (in each process) to be independent of the others. Am I correct?
In that case, you are right to suspect that giving different (correlated) seeds does not guarantee you anything unless the RNG algorithm says so. You basically have two solutions:
Simple version
Use a single source of random numbers, with a single seed. Then feed random numbers in a round-robin fashion to each process.
This solution is slow but provides some guarantee that the numbers you give to your processes are OK.
You can do the same thing by generating all the random numbers you need at once and then splitting this set into as many slices as you have processes, as in the sketch below.
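A minimal sketch of that pre-generate-and-split variant (pregenerate_slices is a hypothetical helper name):

#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// One seeded master produces all numbers up front; each process then
// gets its own contiguous slice.
std::vector<std::vector<std::uint32_t>> pregenerate_slices(
        std::uint32_t seed, std::size_t n_processes, std::size_t per_process) {
    std::mt19937 master(seed);
    std::vector<std::vector<std::uint32_t>> slices(n_processes);
    for (auto& slice : slices) {
        slice.resize(per_process);
        for (auto& value : slice)
            value = static_cast<std::uint32_t>(master());
    }
    return slices;
}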
Use a RNG designed for that
You can find in papers and on the web several algorithms specifically designed to provide independent streams of random numbers from a single initial state. They are complicated, but most come with source code. The idea is generally to "split" the RNG space (the values you can obtain from the initial state) into various chunks, as above. They are just faster because the algorithm used makes it easy to compute what the state of the RNG would be after skipping a given number of values.
These generators are generally called "parallel random number generators".
The most popular ones are probably these two:
RngStreams: http://statmath.wu.ac.at/software/RngStreams/
SPRNG: http://sprng.cs.fsu.edu/
Check their manuals to fully understand what they do, how they do it, and if it really is what you need.