STL said in his Going Native lecture that multiple threads cannot simultaneously call a single object. (Last slide in the PPTX file online.)
If I need a uniform distribution of random numbers across multiple threads (not independent random generators per thread), how do I correctly use the same uniform_int_distribution from multiple threads? Or is it not possible at all?
Just create multiple copies. A distribution is a lightweight object, cheaper than the mutex you'd need to protect it.
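For instance, here is a minimal sketch of the multiple-copies approach (the worker function, seed choice, and loop count are all illustrative):

#include <random>
#include <thread>
#include <vector>

void worker(unsigned seed) {
    std::mt19937 eng{seed};                          // engine private to this thread
    std::uniform_int_distribution<int> dist{1, 6};   // cheap per-thread copy, no mutex needed
    for (int i = 0; i < 1000; ++i) {
        int v = dist(eng);                           // draw numbers locally
        (void)v;                                     // ... use v ...
    }
}

int main() {
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < 4; ++t)
        pool.emplace_back(worker, t + 1);            // distinct seed per thread
    for (auto& th : pool)
        th.join();
}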
You're never meant to run a PRNG for long enough to see a perfectly uniform distribution from it; you should only ever see a random approximation of one. The fact that giving every thread its own PRNG means it would take numThreads times longer for each thread to see that perfect distribution does not matter.
PRNGs are subject to mathematical analysis to confirm that they produce each possible output the same number of times, but this does not reflect how they're meant to be used.
If they were used this way, it would be a weakness: watch the output for long enough and, having seen output x n times and every other output n+1 times, you would know the next output must be x. That is uniform, but it is obviously not random.
A true RNG will never produce a perfectly uniform output, but neither should it ever show the same bias twice. A PRNG with known non-uniform output will show the same non-uniform output every time it's used. This means that if you run simulations for longer to average out the noise, the PRNG's own bias eventually becomes the most statistically significant factor. So an ideal PRNG should eventually emit a perfectly uniform distribution (in practice often not perfect, but within a known, very small margin) over a sufficiently long period.
Your random seed will pick a random point somewhere in that sequence and your random number requests will proceed some way along that sequence, but if you ever find yourself going all the way around and coming back to your own start point (or even within 1/10th that distance) then you need to get a bigger PRNG.
That said, there is another consideration. Often PRNGs are used specifically so that their results will be predictable.
If you're in a situation where you need a mutex to safely access a single PRNG, then you've probably already lost your predictable behaviour, because you can't tell which thread got the next [predictable] random number. Each thread then sees an unpredictable sequence, because you can't be sure which subset of the PRNG's results it received.
If every thread has its own PRNG, then (so long as other ordering constraints are met as necessary) you can still get predictable results.
You can also use thread_local storage when defining your random generator engine (assuming you want only one declared in your program). That way, each thread gets access to a "local" copy of it, so you won't have any data races. You cannot just use a single engine across all threads, since you'd have data races when its state changes during the generation sequence.
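A minimal sketch of that (the wrapper function is illustrative):

#include <random>

int thread_safe_uniform(int lo, int hi) {
    // One engine per thread, initialized on that thread's first call.
    thread_local std::mt19937 eng{std::random_device{}()};
    std::uniform_int_distribution<int> dist{lo, hi};
    return dist(eng);
}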
Edit: clarified the meaning of high dimension.
My problem is generating an array or vector of size N (corresponding to a mathematical N-dimensional vector), where N is huge, more than 10^8, and each component is i.i.d. uniform over 1~q (or 0~q-1), where q is very small, q <= 2^4. The array must also be statistically sound. So the basic solution is:
#include <algorithm>
#include <cstdint>
#include <random>
#include <vector>

constexpr auto N = 1000000000UZ;
constexpr auto q = 12;
// A std::array of 10^9 bytes would overflow the stack, so use a vector.
std::vector<std::uint8_t> Array(N);
std::random_device Device{};
std::mt19937_64 Eng{Device()};
// uniform_int_distribution over uint8_t is undefined behaviour; use int,
// with the closed range [0, q-1] for values 0~q-1.
std::uniform_int_distribution<int> Dis(0, q - 1);
std::ranges::generate(Array, [&]{ return static_cast<std::uint8_t>(Dis(Eng)); });
But the problem lies in performance. I have several plans to improve it:
1. Because s = q^8 <= 2^32, use std::uniform_int_distribution<std::uint32_t> Dis(0, s - 1); and decompose each result t < q^8 into 8 different digits t_i < q (a sketch follows this list). But the decomposition is not straightforward and may itself have a performance cost.
2. Use boost::random::uniform_smallint, but I don't know how much improvement that would give, and it can't be used together with method 1.
3. Use multi-threading such as OpenMP or <thread>, but C++ PRNGs are not thread-safe when shared, so to my knowledge this is hard to write.
4. Use another generator such as pcg32, but those are not thread-safe either.
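Here is what plan 1's decomposition might look like (a sketch, assuming q = 12 so that s = q^8 < 2^32; since t is uniform over exactly q^8 values, its base-q digits are i.i.d. uniform over 0~q-1; the function name is illustrative):

#include <cstdint>
#include <random>

constexpr std::uint64_t q = 12;
constexpr std::uint64_t s = q * q * q * q * q * q * q * q;  // q^8 = 429981696 < 2^32

std::mt19937_64 Eng{std::random_device{}()};
std::uniform_int_distribution<std::uint32_t> Dis(0, static_cast<std::uint32_t>(s - 1));

// One 32-bit draw yields eight digits, each uniform in [0, q).
void next8(std::uint8_t* out) {
    std::uint32_t t = Dis(Eng);
    for (int i = 0; i < 8; ++i) {
        out[i] = static_cast<std::uint8_t>(t % q);
        t /= q;
    }
}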
Does anyone have any suggestions?
If the running time of a sequential initialization is prohibitive (for instance because it happens multiple times), you can indeed consider doing it in parallel. First of all, C++ random number generation does not have global state, so you don't have to worry about thread-safety if you create a separate generator per thread.
However, you do have to worry about independence. Clearly you cannot start them all from the same seed, but picking random seeds is also dangerous: just imagine what would happen if successive generators started on successive values of the sequence.
Here are two approaches:
If you know how many threads there are, say p, start each generator from the same seed, and then let thread t take t steps; that's its starting point. From then on, every time a thread needs a new number, it takes p steps. This means the threads use interleaved values from the random sequence.
On the other hand, if you know how many numbers i each thread will consume in total, start each thread from the same seed and let thread t first take t * i steps. Now each thread gets a disjoint block of i values from the sequence.
There are other approaches to parallel random number generation, but these are two simple solutions based on the standard sequential generator; a sketch of both follows.
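A minimal sketch of both schemes, using std::mt19937_64::discard for the fast-forwarding (seed, p, and i are illustrative parameters):

#include <cstdint>
#include <random>

// Interlacing: thread t starts t steps in, then skips the other p-1 threads'
// values after each draw, i.e. auto x = eng(); eng.discard(p - 1);
std::mt19937_64 make_leapfrog_engine(std::uint64_t seed, unsigned t) {
    std::mt19937_64 eng{seed};
    eng.discard(t);
    return eng;
}

// Disjoint blocks: thread t owns values [t*i, (t+1)*i) of the sequence.
std::mt19937_64 make_block_engine(std::uint64_t seed, unsigned t, std::uint64_t i) {
    std::mt19937_64 eng{seed};
    eng.discard(static_cast<unsigned long long>(t) * i);
    return eng;
}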
I have been looking at a lot of questions regarding generating a random number in the fragment shader. However, none of them have the shape:
float rand() {
...
return random_number;
}
All of them require an input parameter. Is there any way to get a random number without any input?
Computers are deterministic devices that execute algorithms. Randomness cannot manifest itself out of thin air. While many CPUs do have some form of randomness-generating construct, even that takes some kind of input, even if you don't see it.
This is why random number algorithms are technically called "pseudo-random number generators" (PRNGs). They take an input and generate an output based on that input. The trick with PRNGs is that the output (ideally) looks statistically random, even though it is fully determined by the input. Outside of hardware-based random generators, all computer RNGs work this way.
Even C/C++'s rand function does, though it hides the input behind the seed state set by srand: rand uses internal state that srand initializes, and that state is the input to the PRNG function. Every call to rand changes this state, presumably in a way that is not easy to predict.
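As an illustration, here is a toy version of that pattern (a linear congruential sketch; not the actual implementation of any particular C library):

static unsigned long my_state = 1;                 // the hidden input, as set by my_srand

void my_srand(unsigned seed) { my_state = seed; }

int my_rand() {
    my_state = my_state * 1103515245u + 12345u;    // each call feeds the state back in
    return static_cast<int>((my_state >> 16) & 0x7fff);
}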
You could theoretically try the rand trick in a shader, by using Image Load/Store functionality to store the internal state. However, that would still qualify as input. The function might not take input, but the shader certainly would; you'd have to provide a texture or buffer to use as the internal state.
Also, it wouldn't work, due to the incoherent nature of image load/store reads and writes. Multiple invocations would be reading from and writing to the same state at the same time, and that can't work.
For example:
for (...)
{
    ... std::uniform_real_distribution<float>(min, max)(rng) ...
}
Intuitively it seems to me that the constructor shouldn't need to do much besides store the two values, and that there shouldn't be any state in a uniform_*_distribution instance. I haven't profiled it myself (I'm not at that kind of stage in the project yet), but I felt this question belonged out there :)
I am aware that this would be a bad idea for some distribution types - for example, std::normal_distribution might generate its numbers in pairs, and the second number would be wasted each time.
I feel what I have is more readable than just accessing rng() and doing the maths myself, but I'd be interested if there are any other ways to write this more straightforwardly.
std::uniform_real_distribution's objects are lightweight, so it's not a problem to construct them each time inside the loop.
Sometimes the hidden internal state of a distribution is important, but not in this case. The reset() function does nothing in all popular standard library implementations:
void
reset() { }
By contrast, it's not a no-op for std::normal_distribution:
void
reset()
{ _M_saved_available = false; }
Well, some RNGs have substantial state; e.g., the Mersenne twister has 624 words of state (though I don't know the details of the C++ implementation). So there's the potential for taking up some time in the loop just to repeatedly create the RNG.
However, it may well be that even when taking the state into account, it's still fast enough that taking the extra time to construct the rng doesn't matter. Only profiling can tell you for sure.
My personal preference is to move the rng out of the loop and compute the desired value via a simple formula if possible (e.g. min + (max - min)*x for a uniform distribution or mu + sigma*x for a Gaussian distribution).
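For the uniform case, that formula might be wrapped like this (a sketch; std::generate_canonical supplies the x in [0, 1)):

#include <random>

double next_uniform(std::mt19937& rng, double min, double max) {
    double x = std::generate_canonical<double, 53>(rng);  // x in [0, 1)
    return min + (max - min) * x;                         // scale to [min, max)
}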
void NetClass::Modulate(vector<synapse>& synapses)
{
    int size = synapses.size();
    int split = 200 * 0.5;  // == 100
    for (int w = 0; w < size; w++)
        if (synapses[w].active)
            synapses[w].rmod = ((rand_r(seedp) % 200 - split) / 1000.0);
}
The function rand_r(seedp) is seriously bottlenecking my program. Specifically, it's slowing me down by 3x when run serially, and 4.4x when run on 16 cores. rand() is not an option because it's even worse. Is there anything I can do to streamline this? If it makes a difference, I think I can tolerate a loss in statistical randomness. Would pre-generating (before execution) a list of random numbers and then loading them onto the thread stacks be an option?
The problem is that the seedp variable (and its memory location) is shared among several threads. The processor cores must synchronize their caches each time they access this ever-changing value, which hampers performance. The solution is for every thread to work with its own seedp, avoiding the cache synchronization.
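A minimal sketch of the fix (assumes OpenMP and POSIX rand_r; the output array stands in for the synapse loop in the question):

#include <omp.h>
#include <cstdlib>

void modulate(double* rmod, int size) {
    #pragma omp parallel
    {
        // Private per-thread seed: no shared writes, no cache-line ping-pong.
        unsigned int seed = 12345u + 17u * omp_get_thread_num();
        #pragma omp for
        for (int w = 0; w < size; w++)
            rmod[w] = (rand_r(&seed) % 200 - 100) / 1000.0;
    }
}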
It depends on how good the statistical randomness needs to be. For high quality, the Mersenne twister, or its SIMD variant, is a good choice. You can generate and buffer a large block of pseudo-random numbers at a time, and each thread can have its own state vector. The Park-Miller-Carta PRNG is extremely simple - these guys even implemented it as a CUDA kernel.
Marsaglia's xor-shift generator is probably the fastest "reasonable quality" generator you can use. It does not quite have the same "quality" as MT19937 or WELL, but honestly these differences are academic sophistries.
For all real, practical uses, there is no observable difference, except the 1-2 orders of magnitude difference in execution speed and the 3 orders of magnitude difference in memory consumption.
The xor-shift generator is also naturally thread-safe in the sense that it will produce non-deterministic, pseudorandom results and will not crash, without any special handling; and it can trivially be made thread-safe in another sense (generating per-thread independent, deterministic, pseudorandom sequences) by giving each thread its own instance (see the sketch after the list below).
It could also be made threadsafe in yet another sense (generate a deterministic, pseudorandom sequence handed out to threads as they come) using atomic compare-exchange, but I don't think that's very useful.
The only three notable issues with the xor-shift generator are:
It is not k-distributed in up to 623 dimensions the way MT19937 is, but honestly, who cares. I can't think in more than 4 dimensions (and even that's a lie!), and I can't imagine many applications where more than 10 or 20 dimensions could possibly matter. That would have to be some quite esoteric simulation.
It passes most, but not every, pedantic statistical test. Again, who cares. Most people use random generators that don't pass even a single one of those tests and never notice.
A zero seed will produce a zero sequence. This is trivially fixed by adding a non-zero constant to one of the temporaries (I wonder why Marsaglia never thought of that?). Having said that, MT19937 also behaves extremely badly given a seed state that is mostly zeros, and does not recover nearly as well.
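For reference, a minimal sketch of a 32-bit xor-shift generator (the 13/17/5 shift triple is from Marsaglia's paper; one instance per thread gives the deterministic per-thread streams described above):

#include <cstdint>

struct XorShift32 {
    std::uint32_t state;  // must never be zero, hence the fallback below
    explicit XorShift32(std::uint32_t seed) : state(seed ? seed : 0x9E3779B9u) {}
    std::uint32_t next() {
        state ^= state << 13;
        state ^= state >> 17;
        state ^= state << 5;
        return state;
    }
};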
Have a look at Boost: http://www.boost.org/doc/libs/1_47_0/doc/html/boost_random.html
It has a number of options which vary in complexity (= speed) and randomness (cycle length).
If you don't need maximum randomness, you might get away with a simple Mersenne Twister.
Do you absolutely need to have one shared random generator?
I had a similar contention problem a while ago; the solution that worked best for me was to create a new Random instance (I was working in C#) for each thread. They're dead cheap anyway.
If you seed them properly to make sure you don't create duplicate seeds, you should be fine. Then you won't have shared state, so you don't need the thread-safe function.
Maybe you don't have to call it in every iteration? You could initialize an array of pre-generated random numbers and use them successively...
I think you can use OpenMP to parallelize the loop like this (a combined parallel for, so the iterations are split across the threads):
#pragma omp parallel for
for (int w = 0; w < size; w++)
I have a loop where I am adding noise to some points; these are being later used as the basis for some statistical tests.
The datasets involved are quite large, so I would like to parallelise it using OpenMP to speed things up. The issue comes up when I want to have multiple PRNGs. I have my own PRNG class based upon NR's modulo method (rand4, I think), but I am unsure how to seed the PRNGs correctly to ensure appropriate entropy.
Normally I would do something like this:
prng.initTimer();
But if I have an array of PRNGs, one per worker thread, then I cannot simply call initTimer on each instance: the timer may not change between calls, and timers that are close together may introduce correlation.
I need to protect against natural correlations, not against malicious attackers (this is experimental data), so I need to have a safe way of seeding the rng array.
I thought of simply using
prng[0].initTimer();
for (int i = 1; i < numRNGs; i++)
    prng[i].init(prng[0].getRandNum());
Then calling my loop, but I am unsure whether this will introduce correlations in the modulo method.
Seeding PRNGs doesn't necessarily create independent streams. You should seed only the first instance (call it the reference) and initialise the remaining instances by fast-forwarding the reference instance. This only works if you know how many random numbers each thread will consume and a fast-forwarding algorithm is available.
I don't know much about your rand4 (googled it, but nothing specific came up), but you shouldn't assume that it's possible to create independent streams just by seeding. You probably want to use a different (better) PRNG. Take a look at WELL. It is fast, has good statistical properties, and was developed by well-known experts. WELL512 and WELL1024 are among the fastest PRNGs available, and both have huge periods. You can initialise several WELL instances with distinct seeds to create independent streams; thanks to the huge period there is almost zero chance that your PRNGs will generate overlapping streams of random numbers.
If your PRNGs are called frequently, beware of false sharing. This Herb Sutter article explains how false sharing can kill multi-core performance. Packing multiple PRNGs into a contiguous array is almost a perfect recipe for false sharing. To avoid it, either add padding between the PRNGs or allocate each PRNG on the heap/free store. In the latter case, each RNG should be allocated individually using some sort of aligned allocator; your compiler should provide a version of aligned malloc (check the docs; honestly, googling is faster than reading manuals). Visual C++ has _aligned_malloc; GCC has memalign and posix_memalign. The alignment value must be a multiple of the CPU's cache line size; common practice is to align on 128-byte boundaries. For a portable solution you can use TBB's cache-aligned allocator.
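A minimal sketch of the padding approach (assumes a 64-byte cache line; the state size is illustrative):

#include <cstdint>

struct alignas(64) PaddedPRNG {      // each element gets its own cache line
    std::uint32_t state[4];          // your PRNG's real state goes here
};
static_assert(sizeof(PaddedPRNG) % 64 == 0, "padded to cache-line multiples");

PaddedPRNG prngs[16];                // contiguous, yet no element shares a line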
I think it depends on the properties of your PRNG. Usual PRNG weaknesses are lower entropy in the lower bits and lower entropy in the first n values. So I think you should check your PRNG for such weaknesses and change your code accordingly.
Perhaps some of the diehard tests give useful information, but you can also check the first n values and their statistical properties, like sum and variance, yourself and compare them to the expected values.
For example, seed the PRNG, sum the first 100 values modulo 11, and repeat this R times. If the total sum is very different from the expected 5*100*R, your PRNG suffers from one or both of the weaknesses mentioned above.
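A minimal sketch of that check (written as a template, since it assumes a PRNG class like the one in the question with init and getRandNum; R and the seeds are illustrative):

#include <cstdio>

template <class Prng>
void check_first_values(Prng& prng) {
    long long total = 0;
    const int R = 1000;
    for (int r = 0; r < R; ++r) {
        prng.init(1000 + r);                 // fresh seed per repetition
        for (int i = 0; i < 100; ++i)
            total += prng.getRandNum() % 11; // values in 0..10, mean 5 if unbiased
    }
    // For an unbiased PRNG the expected total is 5 * 100 * R.
    std::printf("total = %lld, expected ~ %d\n", total, 5 * 100 * R);
}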
Knowing nothing about the PRNG, I'd feel safer using something like this:
prng[0].initTimer();
// Throw the first 100 values away
for (int i = 0; i < 100; i++)
    prng[0].getRandNum();
// Use only the higher bits for seed values (assuming 32-bit size)
for (int i = 1; i < numRNGs; i++)
    prng[i].init(((prng[0].getRandNum() >> 16) << 16)
                 + (prng[0].getRandNum() >> 16));
But of course, these are speculations about the PRNG. With an ideal PRNG, your approach should work fine as it is.
If you seed your PRNGs with a sequence of numbers from the same type of PRNG, they will all produce the same sequence of numbers, just offset by one from each other. If you want them to produce different numbers, you need to seed them with a sequence of pseudorandom numbers from a different kind of PRNG.
Alternatively, if you are on a unix-like system with a /dev/random, you can just read from that device to get a sequence of random numbers to use as your seeds.
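A minimal sketch of that (assumes a POSIX system; /dev/urandom is the non-blocking variant if /dev/random stalls waiting for entropy):

#include <cstdint>
#include <fstream>

std::uint32_t seed_from_dev_random() {
    std::ifstream dev("/dev/random", std::ios::binary);
    std::uint32_t seed = 0;
    dev.read(reinterpret_cast<char*>(&seed), sizeof seed);
    return seed;
}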