Why are the results of RNG the same every time? - C++

I have a problem with the RNG class. I want to pick points at random from a given image, so I use the RNG class recommended in the OpenCV documentation. The code is:
struct SingleAnt
{
    int row;
    int col;
};

void initializeAnts( SingleAnt *ants, Mat *sourceImage )
{
    RNG rng( 0xFFFFFFFF );
    int imgWidth  = sourceImage->cols;
    int imgHeight = sourceImage->rows;
    for( int index = 0; index < ANTSNUMBER; index++ ) {
        ants[ index ].col = rng.uniform( 0, imgWidth );
        ants[ index ].row = rng.uniform( 0, imgHeight );
    }
}
However, when I run this code, I get the same result every time. Are there any mistakes in the code?

RNG rng( 0xFFFFFFFF );
Here you are (presumably) providing the PRNG with a seed value - specifically the same seed value (0xFFFFFFFF) every time the code runs. Because of this, the PRNG (being a completely deterministic algorithm) is going to provide the same sequence of output values every time.
Instead, you should provide it with a pseudo-random seed value. Typically, the system time() value is used to seed a PRNG. Many times, calling a parameter-less constructor for a PRNG actually does this for you.
As B... points out, the cv::RNG class does have a parameterless constructor, cv::RNG::RNG(), but it does not seed the generator with anything unpredictable. From the documentation, RNG::RNG() only
sets the state to some pre-defined value, equal to 2**32-1 in the current implementation
So as I previously suggested, you should seed it yourself.
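For example, a minimal sketch of the fix, reusing the SingleAnt struct and the ANTSNUMBER constant from the question, with the generator seeded from the clock rather than a fixed constant:

#include <ctime>

void initializeAnts( SingleAnt *ants, Mat *sourceImage )
{
    // Seed from the current time (cv::getTickCount() would also work), and make the
    // generator static so the same sequence continues across repeated calls.
    static RNG rng( (uint64)time(0) );
    for( int index = 0; index < ANTSNUMBER; index++ ) {
        ants[ index ].col = rng.uniform( 0, sourceImage->cols );
        ants[ index ].row = rng.uniform( 0, sourceImage->rows );
    }
}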

Related

When is it preferable to use rand() vs a generator + a distribution? (e.g. mt19937 + uniform_real_distribution)

After going down the rabbit hole of learning about rand() and how poor it is at generating uniform pseudorandom data (prompted by this post: Random float number generation), I am stuck trying to figure out which strategy gives the better balance of performance and accuracy when iterated a significant number of times, 128*10^6 in my use case.
This link is what led me to make this post; otherwise I would have just used rand(): rand() considered harmful
Anyway, my main goal is to understand whether rand() is ever preferable to use over the generator + distribution method. There doesn't seem to be very good info even on cppreference.com or cplusplus.com for performance or time complexity for either of the two strategies.
For example, between the following two random number generation strategies, is it always preferable to use the second approach?
1. rand()
2. std::mt19937 + std::uniform_real_distribution
Here is an example of what my code would be doing:
#include <cstdlib>
#include <vector>

int main(){
    const int numIterations = 128000000; // 128E6
    std::vector<float> randomData;
    randomData.resize(numIterations);
    for(int i = 0; i < numIterations; i++){
        randomData[i] = float(rand())/float(RAND_MAX);
    }
}
vs.
#include <random>
#include <vector>

int main(){
    std::mt19937 mt(1729);
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    const int numIterations = 128000000; // 128E6
    std::vector<float> randomData;
    randomData.resize(numIterations);
    for(int i = 0; i < numIterations; i++){
        randomData[i] = dist(mt);
    }
}
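One way to settle the performance half of the question for a particular compiler, standard library, and machine is simply to time both loops. A minimal benchmarking sketch using <chrono> follows; the reduced iteration count and the output format are illustrative choices, and a real measurement would also make sure the stores are not optimized away (for example by using randomData afterwards):

#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <random>
#include <vector>

int main(){
    const int numIterations = 10000000; // smaller than 128E6, just for illustration
    std::vector<float> randomData(numIterations);

    // Time the rand()-based loop.
    auto t0 = std::chrono::steady_clock::now();
    for(int i = 0; i < numIterations; i++){
        randomData[i] = float(rand())/float(RAND_MAX);
    }
    auto t1 = std::chrono::steady_clock::now();

    // Time the mt19937 + uniform_real_distribution loop.
    std::mt19937 mt(1729);
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    auto t2 = std::chrono::steady_clock::now();
    for(int i = 0; i < numIterations; i++){
        randomData[i] = dist(mt);
    }
    auto t3 = std::chrono::steady_clock::now();

    using ms = std::chrono::duration<double, std::milli>;
    std::printf("rand()/RAND_MAX: %.1f ms\n", ms(t1 - t0).count());
    std::printf("mt19937 + dist:  %.1f ms\n", ms(t3 - t2).count());
}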

Random number generator C++

I want to generate a random number in C++ based on a known distribution.
Here is the problem: I rolled a die (say) 6 times and recorded a four 3 times, a one 1 time, and a two 2 times.
So four = 3/6, one = 1/6, two = 2/6.
Is there a library function that I could use which generates a random number based on the above distribution?
If not, do you think it is valid for me to simply do
int i = ran() % 5;
if (i >= 0 && i <= 2)       // i is in the range of 0 to 2
{
    // PICK FOUR
}
else if (i >= 3 && i <= 4)  // i is in the range of 3 to 4
{
    // PICK ONE
}
else
{
    // PICK TWO
}
The simplest thing that matches your data is to put the observed outcomes in a small table and index it with a uniform pick:
int pick()
{
    static const int val[6] = { 4,4,4,1,2,2 };
    return val[ran()%6]; // <---- note %6 not %5
}
Edit: Note that ran() % 6 may or may not be uniformly distributed, even if ran() is. You probably want something that is guaranteed to be uniformly distributed, e.g.
std::random_device device;
std::default_random_engine engine(device());
std::uniform_int_distribution<int> dist(0, 5);
Now dist(engine) is a good replacement for ran()%6.
Edit 2: From a suggestion in the comments, here's a version based on std::discrete_distribution:
std::random_device device;
std::default_random_engine engine(device());
std::discrete_distribution<> dist ({1, 2, 0, 3, 0, 0});
int pick()
{
    return dist(engine) + 1;
}
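Putting the pieces together, a self-contained sketch of the discrete_distribution version; the sample count and the frequency printout at the end are only there to show that the weights come out as expected:

#include <iostream>
#include <map>
#include <random>

int main()
{
    std::random_device device;
    std::default_random_engine engine(device());

    // Weights for faces 1..6: a one was rolled 1 time, a two 2 times, a four 3 times.
    std::discrete_distribution<> dist({1, 2, 0, 3, 0, 0});

    std::map<int, int> counts;
    for (int i = 0; i < 60000; ++i)
        ++counts[dist(engine) + 1];   // +1 maps index 0..5 to face 1..6

    for (const auto& kv : counts)
        std::cout << kv.first << " was drawn " << kv.second << " times\n";
}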

Default_random_engine passed into a function gives repeatable results

I have a class Permutation that inherits from std::vector<int>. I created a constructor that fills the object with non-repeating numbers. Randomness is meant to be guaranteed by <random> stuff, so the declaration goes like this:
/* Creates a random permutation of a given length
* Input: n - length of permutation
* generator - engine that does the randomizing work */
Permutation(int n, default_random_engine generator);
The function itself looks like this (irrelevant details skipped):
Permutation::Permutation(int n, default_random_engine generator):
    vector<int>(n, 0)
{
    vector<int> someIntermediateStep(n, 0);
    iota(someIntermediateStep.begin(), someIntermediateStep.end(), 0); // 0, 1, 2...
    shuffle(someIntermediateStep.begin(), someIntermediateStep.end(), generator);
    // etc.
}
And it is called in the following context:
auto seed = std::chrono::system_clock::now().time_since_epoch().count();
static std::default_random_engine generator(seed);
for (int i = 0; i < n; i++)
    Permutation test(length, generator);
Code compiles perfectly fine, but all instances of Permutation are the same. How do I force regular generation of random numbers? I know that default_random_engine should be bound to a distribution object, but hey, I don't have any – I use the engine only in shuffle() (at least at the moment).
Is there any solution or a workaround that still uses the goodness of <random>?
Your Permutation constructor takes the engine in by value. So, in this loop:
for (int i = 0; i < n; i++)
    Permutation test(length, generator);
You are passing a copy of the same engine, in the same state, over and over. So you are of course getting the same results. Pass the engine by reference instead:
Permutation::Permutation(int n, default_random_engine& generator)
That way its state will be modified by the call to std::shuffle.
So a childish mistake, just as I supposed – I mixed various solutions to similar problems in a wrong way.
As Benjamin pointed out, I mustn't copy the same engine over and over again, because it remains, well, the same. But this alone doesn't solve the issue, since the engine is pointlessly declared static (thanks, Zereges).
For the sake of clarity, corrected code looks like this:
Permutation(int n, default_random_engine &generator);
// [...]
Permutation::Permutation(int n, default_random_engine generator):
vector<int>(n, 0)
{
vector<int> someIntermediateStep(n, 0);
iota(someIntermediateStep.begin(), someIntermediateStep.end(), 0); //0, 1, 2...
shuffle(someIntermediateStep.begin(), someIntermediateStep.end(),
generator);
// etc.
}
// [...]
// some function
auto seed = chrono::system_clock::now().time_since_epoch().count();
default_random_engine generator(seed);
for (int i = 0; i < n; i++)
Permutation test(length, generator);

Generating Random Numbers with CUDA via rejection method. Performance problems

I'm running a Monte Carlo code for particle simulation, written in CUDA. Basically, in each step I calculate the velocity of each particle and update its position. The velocity is directly proportional to the path length. For a given material, the path length has a certain distribution, and I know its probability density function. I am now trying to sample random numbers according to this function via the rejection method.

I would describe my CUDA knowledge as limited. I understood that it is preferable to create large chunks of random numbers at once instead of multiple small chunks. However, for the rejection method I generate only two random numbers, check a certain condition, and repeat the procedure if it fails. Therefore I generate my random numbers in the kernel.
Using the profiler / nvvp I noticed that basically 50% of my time is spent in the rejection method.
Here is my question: Are there any ways to optimize the rejection method?
I appreciate every answer.
CODE
Here is the rejection method.
__global__ void rejectSamplePathlength(float* P, curandState* globalState,
                                       int numParticles, float sigma, int timestep, curandState state) {
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < numParticles) {
        bool success = false;
        float p;
        float rho1, rho2;
        float a, b;
        a = 0.0;
        b = 10.0;
        curand_init(i, 0, 0, &state);
        while (!success) {
            rho1 = curand_uniform(&globalState[i]);
            rho2 = curand_uniform(&globalState[i]);
            if (rho2 < pathlength(a, b, rho1, sigma)) {
                p = a + rho1 * (b - a);
                success = true;
            }
        }
        P[i] = abs(p);
    }
}
The pathlength function in the if statement computes a value y = f(x) in the kernel.
I'm pretty sure that curand_init is problematic in terms of time, but without this statement, wouldn't each thread generate the same numbers?
Maybe you could create a pool of randomly generated uniform values in a previous kernel, and then pick your uniforms from that pool, cycling over it. But the pool should be large enough to avoid an infinite loop.
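A minimal sketch of that idea, assuming the pool is filled once with the cuRAND host API and then consumed by the rejection loop. The pool size, the per-thread starting offset, and the rejectFromPool kernel name are illustrative choices, not part of the original code; pathlength() is the same device function as in the question:

#include <curand.h>

// Fill d_pool with poolSize uniform values in (0, 1] in one batch on the host side,
// instead of calling curand_init / curand_uniform per thread.
void fillPool(float* d_pool, size_t poolSize, unsigned long long seed) {
    curandGenerator_t gen;
    curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_PHILOX4_32_10);
    curandSetPseudoRandomGeneratorSeed(gen, seed);
    curandGenerateUniform(gen, d_pool, poolSize);
    curandDestroyGenerator(gen);
}

__global__ void rejectFromPool(float* P, const float* pool, size_t poolSize,
                               int numParticles, float sigma) {
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < numParticles) {
        const float a = 0.0f, b = 10.0f;
        size_t idx = (size_t)i * 16;   // crude per-thread offset into the pool
        float p = 0.0f;
        bool success = false;
        while (!success) {
            float rho1 = pool[idx % poolSize];
            float rho2 = pool[(idx + 1) % poolSize];
            idx += 2;
            if (rho2 < pathlength(a, b, rho1, sigma)) {
                p = a + rho1 * (b - a);
                success = true;
            }
        }
        P[i] = fabsf(p);
    }
}

As the answer notes, the pool has to be large enough (and the per-thread offsets spread out enough) that reusing values neither biases the samples nor stalls the loop.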

Skiplist random function needs explanation

I read about a skip list implementation in C++ and I don't understand this random function:
float frand() {
    return (float) rand() / RAND_MAX;
}

int random_level() {
    static bool first = true;
    if (first) {
        srand( (unsigned)time(NULL) );
        first = false;
    }
    int lvl = (int)(log(frand())/log(1.-P));
    return lvl < MAX_LEVEL ? lvl : MAX_LEVEL;
}
Thanks for reading and I'm waiting for your answer :)
So, the way skip lists work is that a new node links into the list at several levels, randomly choosing whether to add another level or not. Normally this means flipping a coin once for each level the new node is intended to link to: if it comes up heads, you go up a level and flip again; if tails, you're done.
What this code does is simulate flipping that coin several times while calling the random number source only once, by applying a function with the same probability distribution as the run of consecutive coin flips. Concretely, frand() returns a uniform value U between 0 and 1, and lvl = floor(log(U) / log(1 - P)) satisfies Pr(lvl >= k) = (1 - P)^k, i.e. lvl is a geometrically distributed level obtained by inverse-transform sampling.
// this function generates a random number between 0 and 1
float frand() {
    return (float) rand() / RAND_MAX; // RAND_MAX is the biggest possible value returned by rand()
}

int random_level() {
    static bool first = true; // a static variable to track whether or not this is the first run of the function
    if (first) { // if this is the first time the function has been called...
        srand( (unsigned)time(NULL) ); // generate a seed from the current time
        first = false; // set first to false
    }
    int lvl = (int)(log(frand())/log(1.-P)); // generate lvl, a geometrically distributed random level (see above)
    return lvl < MAX_LEVEL ? lvl : MAX_LEVEL; // cap the value to MAX_LEVEL, and return
}