I have been looking at a lot of questions regarding generating a random number in the fragment shader. However, none of them are of shape:
float rand() {
...
return random_number;
}
All of them require a input parameter. Is there any way to get a random number without any input?
Computers are deterministic devices that execute algorithms. Randomness cannot manifest itself out of thin air. While many CPUs do have some form of randomness-generating construct, even that takes some kind of input, even if you don't see it.
This is why random number algorithms are technically called "pseudo-random number generators" (PRNGs). They take an input and generate an output based on that input. The trick with PRNGs is that the output merely looks statistically random; it is in fact fully determined by the input. Outside of hardware-based random generators, all computer RNGs work this way.
Even C/C++'s rand function works this way, though it hides the input behind the seed state set by srand. rand uses internal state which can be initialized by srand, and that state is the input to the PRNG function. Every call to rand changes this state, presumably in a way that is not easy to predict.
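For illustration, here is a minimal sketch of that structure; my_rand and my_srand are hypothetical names and the constants are just generic LCG multipliers, not any particular libc's implementation:

static unsigned long long rng_state = 1;   // the hidden "input"

void my_srand(unsigned long long seed) { rng_state = seed; }

int my_rand()
{
    // update the hidden state, then derive the output from it
    rng_state = rng_state * 6364136223846793005ULL + 1442695040888963407ULL;
    return static_cast<int>(rng_state >> 33) & 0x7fffffff;
}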
You could theoretically try the rand trick in a shader, by using Image Load/Store functionality to store the internal state. However, that would still qualify as input. The function might not take input, but the shader certainly would; you'd have to provide a texture or buffer to use as the internal state.
Also, it wouldn't work, due to the incoherent nature of image load/store reading and writing. Multiple invocations would be reading from and writing to the same state concurrently, and that cannot work reliably.
For example:
for (...)
{
... std::uniform_real_distribution<float>(min, max)(rng) ...
}
Intuitively it seems to me that the constructor shouldn't need to do much besides store the two values, and that there shouldn't be any state in the uniform_*_distribution instance. I haven't profiled it myself (I'm not at that kind of stage in the project yet), but I felt this question belonged out there :)
I am aware that this would be a bad idea for some distribution types - for example, std::normal_distribution might generate its numbers in pairs, and the second number would be wasted each time.
I feel what I have is more readable than just accessing rng() and doing the maths myself, but I'd be interested if there are any other ways to write this more straightforwardly.
std::uniform_real_distribution's objects are lightweight, so it's not a problem to construct them each time inside the loop.
Sometimes the hidden internal state of a distribution is important, but not in this case. The reset() function does nothing in all popular standard library implementations:
void
reset() { }
That is not true of std::normal_distribution, for example:
void
reset()
{ _M_saved_available = false; }
Well, some RNGs have substantial state, e.g. the Mersenne Twister has something like 600 words of state (although I don't know about the C++ implementation of that). So there's the potential for taking up some time in the loop just to repeatedly create the RNG.
However, it may well be that even when taking the state into account, it's still fast enough that taking the extra time to construct the rng doesn't matter. Only profiling can tell you for sure.
My personal preference is to move the rng out of the loop and compute the desired value via a simple formula if possible (e.g. min + (max - min)*x for a uniform distribution or mu + sigma*x for a Gaussian distribution).
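A minimal sketch of that formula approach, with the engine hoisted out of the loop; the bounds and the mt19937 seed are placeholders:

#include <random>

int main()
{
    std::mt19937 rng(12345);                     // engine constructed once, outside the loop
    const float min = 0.0f, max = 1.0f;          // illustrative bounds
    for (int i = 0; i < 10; ++i) {
        // x in [0, 1); no distribution object is constructed at all
        float x = std::generate_canonical<float, 24>(rng);
        float value = min + (max - min) * x;     // uniform in [min, max)
        // ... use value ...
    }
}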
STL said in his lecture at GoingNative that
Multiple threads cannot simultaneously call a single object.
(See the last slide of the .pptx file online.)
If I need a uniform distribution of random numbers across multiple threads (not independent random generators per thread), how do I correctly use the same uniform_int_distribution from multiple threads? Or is it not possible at all?
Just create multiple copies. A distribution is a lightweight object, cheaper than the mutex you'd need to protect it.
You're never meant to use a PRNG for long enough that you can see a perfectly uniform distribution from it. You should only see a random approximation of a uniform distribution. The fact that giving every thread its own PRNG means it would take numThreads times longer for every thread to see that perfect distribution does not matter.
PRNGs are subject to mathematical analysis to confirm that they produce each possible output the same number of times, but this does not reflect how they're meant to be used.
If they were used this way it would represent a weakness. If you watch one for long enough, then once you have seen output x n times and every other output n+1 times, you know the next output must be x. This is uniform but it is obviously not random.
A true RNG will never produce a perfectly uniform output, but it should also never show the same bias twice. A PRNG known to have non-uniform output will have the same non-uniform output every time it's used. This means that if you run simulations for longer to average out the noise, the PRNG's own bias will eventually become the most statistically significant factor. So an ideal PRNG should eventually emit a perfectly uniform distribution (actually often not perfect, but within a known, very small margin) over a sufficiently long period.
Your random seed will pick a random point somewhere in that sequence and your random number requests will proceed some way along that sequence, but if you ever find yourself going all the way around and coming back to your own start point (or even within 1/10th that distance) then you need to get a bigger PRNG.
That said, there is another consideration. Often PRNGs are used specifically so that their results will be predictable.
If you're in a situation where you need a mutex to safely access a single PRNG, then you've probably already lost your predictable behaviour because you can't tell which thread got the next [predictable] random number. Each thread still sees an unpredictable sequence because you can't be sure which subset of PRNG results it got.
If every thread has its own PRNG, then (so long as other ordering constraints are met as necessary) you can still get predictable results.
You can also use thread_local storage when defining your random generator engine (assuming you want to have only one declared in your program). In this way, each thread will have access to a "local" copy of it, so you won't have any data races. You cannot just use a single engine across all threads, since you would have data races when its state changes during the generation sequence.
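A minimal sketch of that, assuming an mt19937 engine; seeding from std::random_device is just one option:

#include <random>

// One engine (and distribution) per thread via thread_local, so no state is
// shared between threads and there is no data race.
double uniform01()
{
    thread_local std::mt19937 engine(std::random_device{}());
    thread_local std::uniform_real_distribution<double> dist(0.0, 1.0);
    return dist(engine);
}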
Assume I have to write a C or C++ computationally intensive function that has 2 arrays as input and one array as output. If the computation uses the 2 input arrays more often than it updates the output array, I'll end up in a situation where the output array seldom gets cached because it's evicted in order to fetch the 2 input arrays.
I want to reserve one fraction of the cache for the output array and enforce somehow that those lines don't get evicted once they are fetched, in order to always write partial results in the cache.
Update1(output[]); // Output gets cached
DoCompute1(input1[]); // Input 1 gets cached
DoCompute2(input2[]); // Input 2 gets cached
Update2(output[]); // Output is not in the cache anymore and has to get cached again
...
I know there are mechanisms that hint or force eviction: clflush, clevict, _mm_clevict, etc. Are there any mechanisms for the opposite?
I am thinking of 3 possible solutions:
Using _mm_prefetch from time to time to fetch the data back if it has been evicted. However, this might generate unnecessary traffic, and I need to be very careful about when to introduce it;
Trying to do processing on smaller chunks of data. However this would work only if the problem allows it;
Disabling hardware prefetchers where that's possible to reduce the rate of unwanted evictions.
Other than that, is there any elegant solution?
Intel CPUs have something called No Eviction Mode (NEM) but I doubt this is what you need.
While you are attempting to optimise away the second (unnecessary) fetch of output[], have you given thought to using SSE2/3/4 registers to store your intermediate output values, updating them when necessary, and writing them back only when all updates related to that part of output[] are done?
I have done something similar while computing FFTs (Fast Fourier Transforms) where part of the output is in registers and they are moved out (to memory) only when it is known they will not be accessed anymore. Until then, all updates happen to the registers. You'll need to introduce inline assembly to effectively use SSE* registers. Of course, such optimisations are highly dependent on the nature of the algorithm and data placement.
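As a rough illustration of that pattern (using intrinsics rather than inline assembly; the function name and data layout are made up, and n is assumed to be a multiple of 4):

#include <immintrin.h>

// A block of output lives in an SSE register while it is being updated and is
// written back to memory exactly once at the end.
void accumulate_block(const float* in1, const float* in2, float* out, int n)
{
    __m128 acc = _mm_setzero_ps();                   // partial output, kept in a register
    for (int i = 0; i < n; i += 4) {
        __m128 a = _mm_loadu_ps(in1 + i);
        __m128 b = _mm_loadu_ps(in2 + i);
        acc = _mm_add_ps(acc, _mm_mul_ps(a, b));     // updates stay in the register
    }
    _mm_storeu_ps(out, acc);                         // single write-back at the end
}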
I am trying to get a better understanding of the question:
If it is true that the 'output' array is strictly for output, and you never do something like
output[i] = Foo(newVal, output[i]);
then all elements of output[] are write-only. If so, all you would ever need to 'reserve' is one cache line. Isn't that correct?
In this scenario, all writes to 'output' generate cache fills and could compete with the cache lines needed for the 'input' arrays.
Wouldn't you want a cap on the cache lines 'output' can consume, as opposed to reserving a certain number of lines?
I see two options, which may or may not work depending on the CPU you are targeting, and on your precise program flow:
If output is only written to and not read, you can use streaming-stores, i.e., a write instruction with a no-read hint, so it will not be fetched into cache.
You can use prefetching with a non-temporal (NTA) hint for input. I don't know how this is implemented in general, but I know for sure that on some Intel CPUs (e.g., the Xeon Phi) each hardware thread uses a specific way of the cache for NTA data, i.e., with an 8-way cache, 1/8th per thread. Both options are sketched below.
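A rough sketch of both hints on an SSE-capable CPU; the function, the prefetch distance, and the assumption that output is 16-byte aligned and n a multiple of 4 are all illustrative:

#include <immintrin.h>

void scale(const float* input, float* output, int n, float factor)
{
    const __m128 f = _mm_set1_ps(factor);
    for (int i = 0; i < n; i += 4) {
        // pull input in with the non-temporal hint, some distance ahead
        _mm_prefetch(reinterpret_cast<const char*>(input + i + 64), _MM_HINT_NTA);
        __m128 v = _mm_mul_ps(_mm_loadu_ps(input + i), f);
        _mm_stream_ps(output + i, v);   // streaming store: write without a read-for-ownership fill
    }
    _mm_sfence();                       // order the non-temporal stores
}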
I guess the solution to this lies in the algorithm employed and in the L1 cache size and cache line size.
Though I am not sure how much performance improvement we will see with this.
We can probably introduce artificial reads that cleverly dodge the compiler and, during execution, do not hurt the computations either. A single artificial read should fill as many cache lines as are needed to accommodate one page. Therefore, the algorithm should be modified to compute the output array in blocks, something like the blocked matrix multiplication used for huge matrices on GPUs, which computes and writes the result block by block.
As pointed out earlier, the writes to the output array should happen as a stream.
To bring in the artificial read, we should initialize the output array, at compile time, at the right places, once in each block, probably with 0 or 1.
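A minimal sketch of that block-wise idea; BLOCK, the update formula, and the single pass over the inputs are purely illustrative:

#include <algorithm>

const int BLOCK = 1024;   // tune to a fraction of L1/L2

void compute_blocked(const float* in1, const float* in2, float* out, int n)
{
    for (int base = 0; base < n; base += BLOCK) {
        int end = std::min(base + BLOCK, n);
        for (int i = base; i < end; ++i)
            out[i] = 0.0f;                 // the "artificial" first touch of the block
        // ... repeated passes over in1/in2 that update only out[base..end) ...
        for (int i = base; i < end; ++i)
            out[i] += in1[i] * in2[i];     // illustrative update
    }
}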
void NetClass::Modulate(vector<synapse>& synapses)
{
    int size = synapses.size();
    int split = 200 * 0.5;
    for (int w = 0; w < size; w++)
        if (synapses[w].active)
            synapses[w].rmod = ((rand_r(seedp) % 200 - split) / 1000.0);
}
The function rand_r(seedp) is seriously bottlenecking my program. Specifically, it's slowing me down by 3X when run serially, and 4.4X when run on 16 cores. rand() is not an option because it's even worse. Is there anything I can do to streamline this? If it will make a difference, I think I can sustain a loss in terms of statistical randomness. Would pre-generating (before execution) a list of random numbers and then loading them onto the thread stacks be an option?
The problem is that the seedp variable (and its memory location) is shared among several threads. Processor cores must synchronize their caches each time they access this ever-changing value, which hampers performance. The solution is for every thread to work with its own seedp, thereby avoiding the cache synchronization.
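A sketch of that fix, keeping the asker's structure; synapse stands in for the asker's type and the per-thread seeding formula is just an example:

#include <omp.h>
#include <stdlib.h>     // rand_r (POSIX)
#include <vector>

struct synapse { bool active; double rmod; };   // stand-in for the asker's type

void Modulate(std::vector<synapse>& synapses)
{
    int size = (int)synapses.size();
    int split = 100;    // == 200 * 0.5 in the original
    #pragma omp parallel
    {
        unsigned int seed = 1234u + 97u * omp_get_thread_num();   // thread-private seed on the stack
        #pragma omp for
        for (int w = 0; w < size; w++)
            if (synapses[w].active)
                synapses[w].rmod = ((rand_r(&seed) % 200 - split) / 1000.0);
    }
}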
It depends on how good the statistical randomness needs to be. For high quality, the Mersenne twister, or its SIMD variant, is a good choice. You can generate and buffer a large block of pseudo-random numbers at a time, and each thread can have its own state vector. The Park-Miller-Carta PRNG is extremely simple - these guys even implemented it as a CUDA kernel.
Marsaglia's xor-shift generator is probably the fastest "reasonable quality" generator that you can use. It does not quite have the same "quality" as MT19937 or WELL, but honestly these differences are academic sophistries.
For all real, practical uses, there is no observable difference, except 1-2 orders of magnitude difference in execution speed, and 3 orders of magnitude of difference in memory consumption.
The xor-shift generator is also naturally thread-safe (in the sense that it will produce non-deterministic, pseudorandom results, and it will not crash) without anything special, and it can be trivially made thread-safe in another sense (in the sense that it will generate per-thread independent, deterministic, pseudorandom numbers) by having one instance per thread.
It could also be made threadsafe in yet another sense (generate a deterministic, pseudorandom sequence handed out to threads as they come) using atomic compare-exchange, but I don't think that's very useful.
The only three notable issues with the xor-shift generator are:
It is not k-distributed for up to 623 dimensions, but honestly who cares. I can't think in more than 4 dimensions (and even that's a lie!), and can't imagine many applications where more than 10 or 20 dimensions could possibly matter. That would have to be some quite esoteric simulation.
It passes most, but not every, pedantic statistical test. Again, who cares. Most people use a random generator that does not even pass a single test and never notice.
A zero seed will produce a zero sequence. This is trivially fixed by adding a non-zero constant to one of the temporaries (I wonder why Marsaglia never thought of that?). Having said that, MT19937 also behaves extremely badly given a zero seed, and does not recover nearly as well.
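For reference, a minimal sketch of one common 32-bit xor-shift variant (shift constants 13/17/5; the seed value is arbitrary but must be non-zero), made per-thread deterministic with thread_local:

#include <cstdint>

thread_local uint32_t xs_state = 2463534242u;   // any non-zero seed

uint32_t xorshift32()
{
    uint32_t x = xs_state;
    x ^= x << 13;
    x ^= x >> 17;
    x ^= x << 5;
    return xs_state = x;
}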
Have a look at Boost: http://www.boost.org/doc/libs/1_47_0/doc/html/boost_random.html
It has a number of options which vary in complexity (= speed) and randomness (cycle length).
If you don't need maximum randomness, you might get away with a simple Mersenne Twister.
Do you absolutely need to have one shared random generator?
I had a similar contention problem a while ago; the solution that worked best for me was to create a new Random instance (I was working in C#) for each thread. They're dead cheap anyway.
If you seed them properly to make sure you don't create duplicate seeds, you should be fine. Then you won't have shared state, so you don't need to use the thread-safe function.
Maybe you don't have to call it in every iteration? You could initialize an array of pre-generated random numbers and make successive use of it...
I think you can use OpenMP to parallelise it like this:
#pragma omp parallel for
for(int w = 0; w < size; w++)
I have a loop where I am adding noise to some points; these are being later used as the basis for some statistical tests.
The datasets involved are quite large, so I would like to parallelise it using OpenMP to speed things up. The issue comes up when I want to have multiple PRNGs. I have my own PRNG class based upon NR's modulo method (rand4, I think), but I am unsure how to seed the PRNGs correctly to ensure appropriate entropy.
Normally I would do something like this:
prng.initTimer();
But if I have an array of prngs, one per worker thread, then I cannot simply call initTimer on each instance -- the timer may not change, and the timers being close may introduce correlation.
I need to protect against natural correlations, not against malicious attackers (this is experimental data), so I need to have a safe way of seeding the rng array.
I thought of simply using
prng[0].initTimer();
for(int i = 1; i < numRNGs; i++)
    prng[i].init(prng[0].getRandNum());
Then calling my loop, but am unsure if this will introduce correlations in the modulo method.
Seeding PRNGs doesn't necessarily create independent streams. You should seed only the first instance (call it the reference) and initialise the remaining instances by fast-forwarding the reference instance. This only works if you know how many random numbers each thread will consume and a fast-forwarding algorithm is available.
I don't know much about your rand4 (googled it, but nothing specific came up), but you shouldn't assume that it is possible to create independent streams just by seeding. You probably want to use a different (better) PRNG. Take a look at WELL. It is fast, has good statistical properties and was developed by well-known experts. WELL512 and WELL1024 are among the fastest PRNGs available and both have huge periods. You can initialise several WELL instances with distinct seeds in order to create independent streams. Thanks to the huge period there is almost zero chance that your PRNGs will generate overlapping streams of random numbers.
If your PRNGs are called frequently, beware of false sharing. This article by Herb Sutter explains how false sharing can kill multi-core performance. Packing multiple PRNGs into a contiguous array is almost a perfect recipe for false sharing. In order to avoid false sharing, either add padding between PRNGs or allocate the PRNGs on the heap/free store. In the latter case each PRNG should be allocated individually using some sort of aligned allocator. Your compiler should provide a version of aligned malloc. Check the docs (well, googling is actually faster than reading manuals). Visual C++ has _aligned_malloc, GCC has memalign and posix_memalign. The alignment value must be a multiple of the CPU's cache line size. The common practice is to align along 128-byte boundaries. For a portable solution you can use TBB's cache aligned allocator.
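A small sketch of the padding option using alignas; Prng is a stand-in for the asker's rand4-based class, and 64 bytes is a common x86 line size (the paragraph above suggests 128 to be safe across platforms):

#include <cstdint>

struct Prng { uint64_t state[4]; };            // placeholder for the real PRNG class

struct alignas(64) PaddedPrng { Prng rng; };   // sizeof is rounded up to the alignment

PaddedPrng prngs[16];                          // one per worker thread, no shared cache lines
static_assert(sizeof(PaddedPrng) % 64 == 0, "padding did not take effect");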
I think it depends on the properties of your PRNG. Usual PRNG weaknesses are lower entropy in the lower bits and lower entropy for the first n values. So I think you should check your PRNG for such weaknesses and change your code accordingly.
Perhaps some of the diehard tests give useful information, but you can also check the first n values and their statistical properties like sum and variance yourself and compare them to the expected values.
For example, seed the PRNG and sum up the first 100 values modulo 11 of your PRNG, repeat this R times. If the total sum is very different from the expected (5*100*R), your PRNG suffers from one or both weaknesses mentioned above.
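A sketch of that check; prng_seed/prng_next stand in for the asker's generator (a throwaway LCG here so the sketch compiles), and for an unbiased generator the grand total should be close to 5 * 100 * R:

#include <cstdint>
#include <cstdio>

static uint32_t prng_state;
void prng_seed(uint32_t s) { prng_state = s; }
uint32_t prng_next() { return prng_state = prng_state * 1664525u + 1013904223u; }

void check_low_value_bias(int R)
{
    long long total = 0;
    for (int r = 0; r < R; ++r) {
        prng_seed(1000u + r);              // a fresh seed per repetition
        for (int i = 0; i < 100; ++i)
            total += prng_next() % 11;     // values in 0..10, mean roughly 5
    }
    std::printf("total = %lld, expected about %lld\n", total, 500LL * R);
}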
Knowing nothing about the PRNG, I'd feel safer using something like this:
prng[0].initTimer();
// Throw the first 100 values away
for(int i = 0; i < 100; i++)
    prng[0].getRandNum();
// Use only higher bits for seed values (assuming 32 bit size)
for(int i = 1; i < numRNGs; i++)
    prng[i].init(((prng[0].getRandNum() >> 16) << 16)
               + (prng[0].getRandNum() >> 16));
But of course, these are speculations about the PRNG. With an ideal PRNG, your approach should work fine as it is.
If you seed your PRNGs using a sequence of numbers from the same type of PRNG, they will all be producing the same sequence of numbers, offset by one from each other. If you want them to produce different numbers, you will need to seed them with a sequence of pseudorandom numbers from a different PRNG.
Alternatively, if you are on a unix-like system with a /dev/random, you can just read from that device to get a sequence of random numbers to use as your seeds.
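A minimal sketch of that, reading one 32-bit seed per generator from /dev/random (or /dev/urandom if blocking is a concern); the function name is made up:

#include <cstdint>
#include <fstream>
#include <vector>

std::vector<uint32_t> read_seeds(int numRNGs)
{
    std::vector<uint32_t> seeds(numRNGs);
    std::ifstream dev("/dev/random", std::ios::binary);
    // fill the seed array directly from the device
    dev.read(reinterpret_cast<char*>(seeds.data()),
             seeds.size() * sizeof(uint32_t));
    return seeds;
}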