Where to initialize random seed for use through multiple random modules? - c++

So, every time I am developing something big, with multiple modules coming together to build the final functionality, I keep running into the same question: where should the random seed be initialized when more than one module needs to use random numbers?
If I have a class that needs randomness (e.g. a class that initializes itself by sorting an input array with a self-implemented quicksort, so I need a random pivot choice), I usually keep a private static bool isRandOn; member: before the first random pivot choice I check that flag and call srand(time(NULL)); if the seed has not been set yet.
If I have a ton of utility functions in a namespace, I do something very similar: I put such a variable in an anonymous namespace inside my utils library and do more or less the same thing as with a class.
The problem arises when I combine those modules. Taken by itself, I know each module will not set the seed more than once. But I want to be able to use any number of my modules together, and I want other people to be able to use one or more of my modules independently of the others...
So, what is the best way to handle multiple modules that need a random seed? Set the seed in each module? Not set the seed at all, document the use of random numbers, and make the user initialize the seed before using the module? Something else entirely?

I would suggest using Boost.Random rather than relying on some global state shared at the program level.
Boost.Random has two concepts:
Engine: which generates random numbers
Distributions: which adapt the output of an engine so the results follow a given distribution (uniform, normal, Poisson, ...)
Each module may then have its own Engine, or indeed several of them: there is no specific reason for a given Engine to be shared between several different functions within the same module.
As a final word: whatever you do, make sure you have a way to set the seed(s) deterministically for bug reproduction purposes. Bug reproduction may also benefit from having multiple engines (isolating the parts helps).
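For illustration, here is a minimal sketch of that idea using Boost.Random (the class and member names are made up for this example): the module owns its own engine, pairs it with whatever distribution it needs, and takes an explicit seed so runs can be reproduced.
#include <boost/random/mersenne_twister.hpp>
#include <boost/random/uniform_int_distribution.hpp>

class Sorter {
public:
    explicit Sorter(unsigned int seed) : engine_(seed) {}   // fixed seed => reproducible runs

    // Pick a pivot index in [first, last] for the quicksort described in the question.
    int pick_pivot(int first, int last) {
        boost::random::uniform_int_distribution<int> dist(first, last);
        return dist(engine_);
    }

private:
    boost::random::mt19937 engine_;   // this module's private engine, never shared
};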

You can make a special "module" for random number generation, and use that from the other parts of your application. Then you only seed once when the random-number module is initialized.
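A minimal sketch of what such a module could look like (the rng namespace and helper names are invented for this example), using the standard <random> facilities; the engine is seeded exactly once, on first use:
// rng_module.h -- hypothetical shared random module
#pragma once
#include <random>

namespace rng {

// One process-wide engine, seeded exactly once on first use
// (function-local statics are initialized in a thread-safe way since C++11).
inline std::mt19937& engine() {
    static std::mt19937 gen{std::random_device{}()};
    return gen;
}

// Convenience helper: uniform integer in [lo, hi].
// Note: concurrent draws from multiple threads would still need synchronization.
inline int uniform_int(int lo, int hi) {
    std::uniform_int_distribution<int> dist{lo, hi};
    return dist(engine());
}

} // namespace rng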

@penelope gave a correct answer. There is a fairly involved algorithm behind rand() that generates the pseudo-random number sequence. You can think of it as a function rand_func(prev_rand) that generates the next pseudo-random number from the previous one. The first time, you call srand(time(NULL)), which in these terms sets prev_rand to time(NULL), relying on time(NULL) being hard to predict. So you can safely call srand() (which resets prev_rand) multiple times.
The special case is when you need predictable pseudo-random sequences: for example srand(0), etc. But that does not seem to be your case.
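As a quick illustration of that point, re-seeding with the same value restarts the same sequence, which is exactly what you want for predictable runs:
#include <cassert>
#include <cstdlib>

int main() {
    std::srand(0);
    int a = std::rand();
    std::srand(0);           // same seed again resets the internal state
    int b = std::rand();
    assert(a == b);          // the sequence starts over identically
}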

The best way I have come up with to avoid always repeating the same initial random sequence is to do the following in each module where you call the random() function:
#include <cstdlib>   /* srandom() (POSIX) */
#include <ctime>     /* time() */
#include <unistd.h>  /* getpid() */

/* Global variable to remember if we already initialized the PRNG */
static bool seed_initialized = false;

/* Helper function to avoid having always the same sequence again and again */
static void
prng_init (unsigned int seed)
{
  if (!seed_initialized)
    {
      srandom (seed);
      seed_initialized = true;
    }
}
And, each time you use random() in a function you start the function with something like:
/* Initializing PRNG with a 'reasonably strong' random seed for our purpose */
prng_init (time (NULL) - getpid());
This way, you ensure that:
You will initialize your PRNG at least the first time you go through;
You will never reinitialize the random sequence more than once within the module.
Hope this helps!

Related

Is the seed of the mersenne_twister_engine instance invariant? [duplicate]

Inspired by this and similar questions, I want to learn how the mt19937 pseudo-random number generator in C++11 behaves when, on two separate machines, it is seeded with the same input.
In other words, say we have the following code:
std::mt19937 gen{ourSeed};
std::uniform_int_distribution<int> dist{0, 10000};
int randNumber = dist(gen);
If we try this code on different machines at different times, will we get the same sequence of randNumber values or a different sequence each time?
And in either case, why is this the case?
A further question:
Regardless of the seed, will this code generate random numbers indefinitely? I mean, for example, if we use this block of code in a program that runs for months without stopping, will there be any problem in the generation of the numbers or in the uniformity of the numbers?
The generator will generate the same values.
The distributions may not, at least with different compilers or library versions. The standard does not specify their behaviour to that level of detail. If you want stability between compilers and library versions, you have to roll your own distribution.
Barring library/compiler changes, that will return the same values in the same sequence. But if you care, write your own distribution.
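For example, a hand-rolled uniform integer mapping could look like the sketch below; it only uses the engine's raw 32-bit output and simple rejection sampling, so (unlike std::uniform_int_distribution) it behaves identically on every compiler and library. This is just one possible approach, not the only way to do it.
#include <cstdint>
#include <random>

// Uniform value in [lo, hi], built only from raw mt19937 output.
// Assumes hi - lo < UINT32_MAX.
inline int portable_uniform_int(std::mt19937& gen, int lo, int hi) {
    const std::uint32_t range = static_cast<std::uint32_t>(hi - lo) + 1u;
    const std::uint32_t limit = UINT32_MAX - UINT32_MAX % range;  // rejection threshold
    std::uint32_t x;
    do {
        x = static_cast<std::uint32_t>(gen());   // mt19937 output is exactly 32 bits
    } while (x >= limit);                        // reject to avoid modulo bias
    return lo + static_cast<int>(x % range);
}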
...
All PRNGs have patterns and periods. mt19937 is named after its period of 2^19937-1, which is unlikely to be a problem. But other patterns can develop. MT PRNGs are robust against many statistical tests, but they are not cryptographically secure PRNGs.
So whether running for months is a problem depends on the specific details of what you would consider a problem. However, mt19937 is going to be a better PRNG than anything you are likely to write yourself. But assume attackers can predict its future behaviour from its past output.
Regardless of the seed, will this code generate random numbers indefinitely? I mean, for example, if we use this block of code in a program that runs for months without stopping, will there be any problem in the generation of the numbers or in the uniformity of the numbers?
The RNGs we deal with in standard C++ are called pseudo-random RNGs. By definition, such a generator is a purely computational device, with a multi-bit state (you can think of the state as a large bit vector) and three functions:
state seed2state(seed);
state next_state(state);
uint(32|64)_t state2output(state);
and that is it. Obviously, the state has a finite size, 19937 bits in the case of MT19937, so the total number of states is 2^19937, and thus the MT19937 next_state() function is periodic, with a maximum period of no more than 2^19937. This number is really HUGE, and most likely more than enough for a typical simulation.
But the output is at most 64 bits, so the output space is 2^64. It means that during a long run any particular output value appears quite a few times. What matters is when not only some 64-bit number appears again, but also the number after it, and the one after that, and so on - that is when you know the RNG period has been reached.
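To make the three functions above concrete, here is a toy generator (a plain 64-bit LCG with Knuth's MMIX constants, nothing to do with MT19937) written in exactly that shape:
#include <cstdint>

using state_t = std::uint64_t;

state_t seed2state(std::uint64_t seed) { return seed; }
state_t next_state(state_t s) {
    return s * 6364136223846793005ULL + 1442695040888963407ULL;  // finite state => periodic
}
std::uint32_t state2output(state_t s) {
    return static_cast<std::uint32_t>(s >> 32);  // high bits are the better bits of an LCG
}

int main() {
    state_t s = seed2state(42);
    for (int i = 0; i < 3; ++i) {
        s = next_state(s);
        std::uint32_t out = state2output(s);
        (void)out;   // use out as the i-th pseudo-random number
    }
}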
If we try this code on different machines at different times, will we get the same sequence of randNumber values or a different sequence each time?
Generators are defined rather strictly, and you'll get the same bit stream. For example, for MT19937 the C++ standard (https://timsong-cpp.github.io/cppwp/rand) says:
class mersenne_twister_engine {
...
static constexpr result_type default_seed = 5489u;
...
and the seed2state function is described as (https://timsong-cpp.github.io/cppwp/rand#eng.mers-6):
Effects: Constructs a mersenne_twister_engine object. Sets X_{-n} to value mod 2^w. Then, iteratively for i = -n, ..., -1, sets X_i to ...
The next_state function is described as well, together with a test value for the 10000th invocation. The standard says (https://timsong-cpp.github.io/cppwp/rand#predef-3):
using mt19937 = mersenne_twister_engine<uint_fast32_t,32,624,397,31,0x9908b0df,11,0xffffffff,7,0x9d2c5680,15,0xefc60000,18,1812433253>;
Required behavior: The 10000th consecutive invocation of a default-constructed object of type mt19937 shall produce the value 4123659995.
The big four compilers I used (GCC, Clang, VC++, Intel C++) produced the same MT19937 output.
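You can check that mandated value yourself with a few lines (this relies only on what the quoted standard wording requires):
#include <cassert>
#include <random>

int main() {
    std::mt19937 gen;              // default-constructed, i.e. seeded with default_seed = 5489u
    gen.discard(9999);             // skip the first 9999 outputs
    assert(gen() == 4123659995u);  // the 10000th consecutive invocation
}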
Distributions, on the other hand, are not specified that precisely, and therefore vary between compilers and libraries. If you need portable distributions, either roll your own or use something from Boost or similar libraries.
Any pseudo RNG which takes a seed will give you the same sequence for the same seed every time, on every machine. This happens because the generator is just a (complex) mathematical function with nothing actually random about it. Most of the time, when you want randomized behaviour, you take the seed from the system clock, which constantly changes, so each run is different.
Having the same sequence is useful in computer games, for example, when you have a randomly generated world and want to regenerate exactly the same one, or to prevent people from cheating by reloading save games in a game with random chances.

Deterministic random numbers from STL [duplicate]


If we seed c++11 mt19937 as the same on different machines, will we get the same sequence of random numbers


How to properly seed a 64 bit random generator with time [duplicate]

I'm working on a program that runs Monte Carlo simulation; specifically, I'm using a Metropolis algorithm. The program needs to generate possibly billions of "random" numbers. I know that the Mersenne twister is very popular for Monte Carlo simulation, but I would like to make sure that I am seeding the generator in the best way possible.
Currently I'm computing a 32-bit seed using the following method:
mt19937_64 prng;          //pseudo random number generator
unsigned long seed;       //store seed so that every run can follow the same sequence
unsigned char seed_count; //to help keep seeds from repeating because of temporal proximity

unsigned long genSeed() {
    return ( static_cast<unsigned long>(time(NULL)) << 16 )
         | ( (static_cast<unsigned long>(clock()) & 0xFF) << 8 )
         | ( (static_cast<unsigned long>(seed_count++) & 0xFF) );
}

//...
seed = genSeed();
prng.seed(seed);
I have a feeling there are much better ways to ensure non-repeating new seeds, and I'm quite sure mt19937_64 can be seeded with more than 32 bits. Does anyone have any suggestions?
Use std::random_device to generate the seed. It'll provide non-deterministic random numbers, provided your implementation supports it. Otherwise it's allowed to use some other random number engine.
std::mt19937_64 prng;
seed = std::random_device{}();
prng.seed(seed);
operator() of std::random_device returns an unsigned int, so if your platform has 32-bit ints, and you want a 64-bit seed, you'll need to call it twice.
std::mt19937_64 prng;
std::random_device device;
seed = (static_cast<uint64_t>(device()) << 32) | device();
prng.seed(seed);
Another available option is using std::seed_seq to seed the PRNG. This allows the PRNG to call seed_seq::generate, which produces an unbiased sequence of values in the range [0, 2^32), with an output range large enough to fill its entire state.
std::mt19937_64 prng;
std::random_device device;
std::seed_seq seq{device(), device(), device(), device()};
prng.seed(seq);
I'm calling the random_device 4 times to create a 4 element initial sequence for seed_seq. However, I'm not sure what the best practice for this is, as far as length or source of elements in the initial sequence is concerned.
Let's recap (taking the comments into account): we want to generate different seeds, so as to get independent sequences of random numbers, in each of the following cases:
The program is relaunched on the same machine later,
Two threads are launched on the same machine at the same time,
The program is launched on two different machines at the same time.
Case 1 is solved using the time since the epoch, case 2 with a global atomic counter, and case 3 with a platform-dependent id (see How to obtain (almost) unique system identifier in a cross platform way?).
Now the question is: what is the best way to combine them into a uint_fast64_t (the seed type of std::mt19937_64)? I assume here that we do not know the range of each parameter a priori, or that the ranges are too big, so we cannot just play with bit shifts to get a unique seed in a trivial way.
A std::seed_seq would be the easy way to go; however, it produces uint_least32_t values, which is not our best choice here.
A good 64-bit hash is a much better choice. The standard library offers std::hash in the <functional> header; one possibility is to concatenate the three numbers above into a string and then pass it to the hasher (see the sketch after this answer). The return type is size_t, which on 64-bit machines is very likely to match our requirements.
Collisions are unlikely but of course possible; if you want to be sure not to build up statistics that include a sequence more than once, your only option is to store the seeds and discard duplicated runs.
A std::random_device could also be used to generate the seeds (collisions may still happen; it is hard to say whether more or less often). However, since the implementation is library-dependent and may fall back to a pseudo-random generator, it is mandatory to check the entropy of the device and avoid using a zero-entropy device for this purpose, as that would probably break the points above (especially point 3). Unfortunately, you can only discover the entropy when you take the program to the specific machine and test it with the installed library.
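A minimal sketch of the std::hash approach described above (using gethostname as the machine-specific ingredient is just an assumption for illustration; any stable per-machine identifier works):
#include <atomic>
#include <chrono>
#include <cstdint>
#include <functional>
#include <string>
#include <unistd.h>   // gethostname (POSIX)

std::uint64_t make_seed() {
    static std::atomic<std::uint64_t> counter{0};        // covers case 2 (simultaneous threads)

    char host[256] = {0};
    gethostname(host, sizeof(host) - 1);                 // covers case 3 (different machines)

    const auto now = std::chrono::high_resolution_clock::now()
                         .time_since_epoch().count();    // covers case 1 (relaunch)

    // Concatenate the three ingredients into a string and hash it.
    const std::string key = std::string(host) + '|'
                          + std::to_string(now) + '|'
                          + std::to_string(counter.fetch_add(1));
    return std::hash<std::string>{}(key);                // size_t, 64 bits on 64-bit platforms
}

// usage: std::mt19937_64 prng{make_seed()};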
As far as I can tell from your comments, it seems that what you are interested in is ensuring that if a process starts several of your simulations at exactly the same time, they will get different seeds.
The only significant problem I can see with your current approach is a race condition: if you are going to start multiple simulations simultaneously, it must be done from separate threads. If it is done from separate threads, you need to update seed_count in a thread-safe manner, or multiple simulations could end up with the same seed_count. You could simply make it an std::atomic<int> to solve that.
Beyond that, it just seems more complicated than it has to be. What do you gain by using two separate timers? You could do something as simple as this:
at program startup, grab the current system time (using a high resolution timer) once, and store that.
assign each simulation a unique ID: this could just be an integer initialized to 0 and incremented each time a simulation starts (updated without any race conditions, as mentioned above), effectively like your seed_count.
when seeding a simulation, just use the initially generated timestamp + the unique ID (a short sketch follows this list). If you do this, every simulation in the process is assured a unique seed.
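A short sketch of that scheme, with invented names, might look like this:
#include <atomic>
#include <chrono>
#include <cstdint>

std::uint64_t next_simulation_seed() {
    // One timestamp grabbed on the first call, one atomic per-process counter.
    static const std::uint64_t start_time = static_cast<std::uint64_t>(
        std::chrono::high_resolution_clock::now().time_since_epoch().count());
    static std::atomic<std::uint64_t> next_id{0};
    return start_time + next_id.fetch_add(1);   // unique per simulation within this process
}

// usage: prng.seed(next_simulation_seed());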
How about...
There is some main code that starts the threads, and copies of a function run in those threads, each copy with its own Mersenne Twister. Am I correct? If so, why not use another random generator in the main code? It would be seeded with a timestamp and would hand out its consecutive pseudo-random numbers to the function instances as their seeds.
From the comments I understand you want to run several instances of the algorithm, one instance per thread. And given that the seed for each instance will be generated pretty much at the same time, you want to ensure that these seeds are different. If that is indeed what you are trying to solve, then your genSeed function will not necessarily guarantee that.
In my opinion, what you need is a parallelisable random number generator (RNG). What this means is that you only need one RNG, which you instantiate with only one seed (which you can generate with your genSeed), and then the sequence of random numbers that would normally be generated in a sequential environment is split into X non-overlapping sequences, where X is the number of threads. There is a very good library which provides this type of RNG in C++, follows the C++ standard for RNGs, and is called TRNG (http://numbercrunch.de/trng).
Here is a little more information. There are two ways you can achieve non-overlapping sequences per thread. Let's assume that the sequence of random numbers from a single RNG is r = {r(1), r(2), r(3), ...} and you have only two threads. If you know in advance how many random numbers you will need per thread, say M, you can give the first M of the r sequence to the first thread, i.e. {r(1), r(2), ..., r(M)}, and the second M to the second thread, i.e. {r(M+1), r(M+2), ..., r(2M)}. This technique is called block splitting, since you split the sequence into consecutive blocks.
The second way is to create the sequence for the first thread as {r(1), r(3), r(5), ...} and for the second thread as {r(2), r(4), r(6), ...}, which has the advantage that you do not need to know in advance how many random numbers you will need per thread. This is called leapfrogging.
Note that both methods guarantee that the sequences per thread are indeed non-overlapping. The link I posted above has many examples and the library itself is extremely easy to use. I hope my post helps.
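Block splitting can also be approximated with a standard engine, without the TRNG library, by letting each thread jump over the blocks owned by earlier threads. A rough sketch, assuming each thread consumes at most block_size numbers (note that discard is linear-time for mt19937_64, so this is only practical for moderate block sizes):
#include <cstdint>
#include <random>

std::mt19937_64 make_thread_engine(std::uint64_t seed,
                                   unsigned thread_index,
                                   std::uint64_t block_size) {
    std::mt19937_64 eng{seed};
    eng.discard(thread_index * block_size);   // skip r(1) .. r(thread_index * block_size)
    return eng;                               // this thread draws its own non-overlapping block
}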
The POSIX function gettimeofday(2) gives the time with microsecond precision.
The Linux-specific gettid(2) system call returns the ID of the current thread.
You should be able to combine the time in seconds since the epoch (which you are already using), the time in microseconds, and the thread ID to get a seed which is always unique on one machine.
If you also need it to be unique across multiple machines, you could consider also getting the hostname, the IP address, or the MAC address.
I would guess that 32 bits is probably enough, since there are over 4 billion unique seeds available. Unless you are running billions of processes, which doesn't seem likely, you should be alright without going to 64-bit seeds.
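A brief sketch of that combination; to keep it portable, this hashes std::this_thread::get_id() instead of calling the non-portable gettid(2):
#include <cstdint>
#include <functional>
#include <sys/time.h>   // gettimeofday (POSIX)
#include <thread>

std::uint64_t seed_from_time_and_thread() {
    timeval tv{};
    gettimeofday(&tv, nullptr);                             // seconds + microseconds
    const std::uint64_t tid =
        std::hash<std::thread::id>{}(std::this_thread::get_id());
    return (static_cast<std::uint64_t>(tv.tv_sec) << 32)    // seconds since the epoch
         ^ (static_cast<std::uint64_t>(tv.tv_usec) << 12)   // microsecond part
         ^ tid;                                             // per-thread component
}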

Best way to seed mt19937_64 for Monte Carlo simulations
