Upper bound for custom rand48 - c++

I'm using a custom random number function rand48 in CUDA. The function does not allow an upper bound to be set, but I require the output to be between 0 and 1.
I guess I'm missing something, but how would I convert the output to be between 0 and 1? The length of the number can change, e.g. 697135872 would need to be divided by 1000000000, while 29186668 would need to be divided by 100000000.
Thanks everyone

If your PRNG behaves like rand then it generates numbers between 0 and RAND_MAX with uniform probability. You just have to multiply by 1.f/RAND_MAX.
If you divide by different numbers in different cases, you will end up with non-uniform distribution.
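For example, with the standard rand() (a custom rand48 would take the place of rand() and RAND_MAX here; this is just a sketch):

#include <cstdlib>

float uniform01()
{
    // rand() is uniform on [0, RAND_MAX], so scaling gives a value in [0, 1].
    return std::rand() * (1.0f / RAND_MAX);
}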

Related

Generate random list of numbers that add up to 1 [duplicate]

This question already has answers here:
Getting N random numbers whose sum is M
Are there any STL functions that allow one to create a vector with random numbers that add up to 1? Ideally, this would be dependent on the size of the vector, so that I can make the vector size, say, 23 and this function will populate those 23 elements with random numbers between 0 and 1 that all add up to 1.
One option would be to use generate to fill the vector with random numbers, then using accumulate to sum up the values, and finally dividing all the values in the vector by the sum to normalize the sum to one. This is shown here:
std::vector<double> vec(23);
std::mt19937 gen{std::random_device{}()};                         // any random source works here
std::uniform_real_distribution<double> dist(0.0, 1.0);
std::generate(vec.begin(), vec.end(), [&] { return dist(gen); });
const double total = std::accumulate(vec.begin(), vec.end(), 0.0);
for (double& value : vec) value /= total;                         // normalize so the sum is 1
Hope this helps!
No, but you can do this easily with the following steps:
Fill the vector with random float values, say 0 to 100.
Calculate the sum.
Divide each value by the sum.
There are certainly lots of standard functions to generate random numbers. To get the normalization to happen, you'll want to do that after you've generated all the numbers. (For instance, you might generate the numbers, then divide them all by their sum.) Note that you probably won't have uniformly-distributed numbers at that point, if it matters.
This depends on the kind of distribution of random numbers that you want. One approach (which has been suggested in another answer) is to just generate some random numbers, then divide them each by their total sum.
Another approach is to make a list of random numbers from the interval [0, 1), then sort them. You can then take the differences between consecutive numbers (adding 0 and 1 to the beginning and end of your list respectively). These differences will naturally sum up to 1. So, for example, let's say you picked 3 random numbers and they were: {0.38, 0.05, 0.96}. Let's add 0 and 1 to this list and then sort it:
{0, 0.05, 0.38, 0.96, 1}
Now let's take the differences:
{0.05, 0.33, 0.58, 0.04}
If you add these up, they sum to 1. If you don't understand why this works, imagine you have a piece of rope of length 1 and you use a knife to cut it some random distance from the end (without moving the pieces apart as you cut it). Naturally all the pieces will add up to the original length. That's exactly what's happening here.
Now, like I said, this approach will give you a different distribution of random numbers than the divide by sum method, so don't consider them to be the same!
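A sketch of this "cut the rope" idea (random_partition is just a hypothetical name; it returns n pieces that sum to 1):

#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

std::vector<double> random_partition(std::size_t n)
{
    std::mt19937 gen{std::random_device{}()};
    std::uniform_real_distribution<double> dist(0.0, 1.0);

    // n - 1 random cut points in [0, 1), plus the fixed endpoints 0 and 1.
    std::vector<double> cuts{0.0, 1.0};
    for (std::size_t i = 0; i + 1 < n; ++i) cuts.push_back(dist(gen));
    std::sort(cuts.begin(), cuts.end());

    // The gaps between consecutive cut points are the pieces; they sum to exactly 1.
    std::vector<double> parts(n);
    for (std::size_t i = 0; i < n; ++i) parts[i] = cuts[i + 1] - cuts[i];
    return parts;
}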

Using bitwise & instead of modulus operator to randomly sample integers from a range

I need to randomly sample from a uniform distribution of integers over the interval [LB,UB] in C++. To do so, I start with a "good" RN generator (from Numerical Recipes 3rd ed.) that uniformly randomly samples 64-bit integers; let's call it int64().
Using the mod operator, I can sample from the integers in [LB,UB] by:
LB+int64()%(UB-LB+1);
The only issue with using the mod operator is the slowness of the integer division. So, I then tried the method suggested here, which is:
LB + (int64()&(UB-LB))
The bitwise & method is about 3 times as fast. This is huge for me, because one of my simulations in C++ needs to randomly sample about 20 million integers.
But there's 1 big problem. When I analyze the integers sampled using the bitwise & method, they don't appear uniformly distributed over the interval [LB,UB]. The integers are indeed sampled from [LB,UB], but only from the even integers in that range. For example, here is a histogram of 5000 integers sampled from [20,50] using the bitwise & method:
By comparison, here is what a similar histogram looks like when using the mod operator method, which of course works fine:
What's wrong with my bitwise & method? Is there any way to modify it so that both even and odd numbers are sampled over the defined interval?
The bitwise & operator looks at each pair of corresponding bits of its operands, performs an and using only those two bits, and puts that result in the corresponding bit of the result.
So, if the last bit of UB-LB is 0, then the last bit of int64()&(UB-LB) is 0, and every output LB + (int64()&(UB-LB)) has the same parity as LB. That is to say, with LB = 20 and UB-LB = 30 (both even), every output is even, which is exactly what you observed.
The & is inappropriate to the purpose, unless UB-LB+1 is a power of 2. If you want to find a modulus, then there's no general shortcut: the compiler will already implement % the fastest way it knows.
Note that I said no general shortcut. For particular values of UB-LB, known at compile time, there can be faster ways. And if you can somehow arrange for UB and LB to have values that the compiler can compute at compile time then it will use them when you write %.
By the way, using % does not in fact produce uniformly-distributed integers over the range, unless the size of the range is a power of 2. Otherwise there must be a slight bias in favour of certain values, because the range of your int64() function cannot be assigned equally across the desired range. It may be that the bias is too small to affect your simulation in particular, but bad random number generators have broken random simulations in the past, and will do so again.
If you want a uniform random number distribution over an arbitrary range, then use std::uniform_int_distribution from C++11, or the class of the same name in Boost.
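For example (a minimal sketch; the engine choice and the bounds 20 and 50 from your histogram are only for illustration):

#include <cstdint>
#include <random>

int main()
{
    std::mt19937_64 gen{std::random_device{}()};                // 64-bit engine
    std::uniform_int_distribution<std::int64_t> dist(20, 50);   // inclusive [LB, UB]
    std::int64_t sample = dist(gen);                            // unbiased, hits both parities
    (void)sample;
    return 0;
}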
This works well if the range difference (UB-LB) is 2^n - 1, but won't work at all well if it is, for example, 2^n.
The two are equivalent only when the size of the interval is a power of two. In general y%x and y&(x-1) are not the same.
For example, x%5 produces numbers from 0 to 4 (or to -4, for negative x), but x&4 produces either 0 or 4, never 1, 2, or 3, because of how bitwise operators work...
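A quick way to see this (8 is a power of two, 5 is not; y % 8 and y & 7 always agree, while y & 4 only ever produces 0 or 4):

#include <cstdio>

int main()
{
    for (int y = 0; y < 8; ++y)
        std::printf("y=%d  y%%8=%d  y&7=%d  |  y%%5=%d  y&4=%d\n",
                    y, y % 8, y & 7, y % 5, y & 4);
    return 0;
}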

map random numbers

for (int i = 0; i < 100; i++)
{
    for (int j = 0; j < 6; j++)
    {
        cout << rand() % 6 << ",";  // Store these numbers in a map
    }
    cout << endl;
}
Say I store the random numbers from the inner for loop in a map<int, myRandomNumbers>.
In some game, the game maker also makes a similar call to rand()%6 to get 6 numbers. Is there even the slightest chance that those 6 numbers will be fully or partially the same as one of myRandomNumbers?
Well, you can calculate it, assuming rand() gives a completely uniform distribution (it doesn't, but we'll assume it does anyway). If you generate 6 random numbers in the range [0, 5], the probability that another set of 6 random numbers generated from the same range are exactly the same is (1/6)^6 ~ 2.14e-5. You can use the binomial distribution to calculate the probability that they will be partially similar, that is, match in n places for n in [0, 6].
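A small sketch of that calculation (prob_exactly_k_matches is a hypothetical helper; it assumes an independent 1/6 chance of a match in each position):

#include <cmath>
#include <cstdio>

double prob_exactly_k_matches(int k)
{
    // C(6, k) * (1/6)^k * (5/6)^(6-k), using tgamma for the factorials.
    double c = std::tgamma(7.0) / (std::tgamma(k + 1.0) * std::tgamma(7.0 - k));
    return c * std::pow(1.0 / 6.0, k) * std::pow(5.0 / 6.0, 6 - k);
}

int main()
{
    for (int k = 0; k <= 6; ++k)
        std::printf("P(exactly %d positions match) = %g\n", k, prob_exactly_k_matches(k));
    return 0;
}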
Unless you seed the rand() function by calling srand(), there's a good chance that you will get exactly the same numbers (assuming the random numbers are created by different processes, that is).
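For example, seeding once at program start (time-based seeding, shown here, is the usual minimal approach):

#include <cstdlib>
#include <ctime>

int main()
{
    std::srand(static_cast<unsigned>(std::time(nullptr)));  // seed once per run
    int roll = std::rand() % 6;                             // now varies between runs
    (void)roll;
    return 0;
}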

Unbalanced random number generator

I have to pick an element from an ascending array. Smaller elements are considered better. So if I pick an element from the beginning of the array it's considered a better choice. But at the same time I don't want the choice to be deterministic and always the same element. So I'm looking for
a random number generator that produces numbers in the range [0, n], but
the smaller the number is, the greater the chance of it being produced.
This came to my mind:
num = n;
while (/* the more iterations, the greater the chance of smaller numbers */)
num = rand()%num;
I was wondering if anyone had a better solution.
I did look at some similar questions but they have details about random number generation generally. I'm looking for a solution to this specific type of random number generation, either an algorithm or a library that provides it.
Generate a random number, say x, in [0, n), and then generate another random floating point number, say y, in [0, 1]. Then raise x to the power of y, apply the floor function, and you'll get your number.
#include <cstdlib>  // rand, RAND_MAX
#include <cmath>    // pow, floor

int cust(int n)
{
    int x;
    double y, temp;
    x = rand() % n;                          // x: uniform integer in [0, n)
    y = (double)rand() / (double)RAND_MAX;   // y: uniform double in [0, 1]
    temp = pow((double)x, y);                // raise x to the power y
    temp = floor(temp);
    return (int)temp;
}
Update: Here are some sample results from calling the above function 10 times each, with n = 10, 20 and 30.
2 5 1 0 1 0 1 4 1 0
1 2 4 1 1 2 3 5 17 6
1 19 2 1 2 20 5 1 6 6
A simple ad-hoc approach that came to mind is to use a standard random generator, but duplicate the indices. So in the array:
0, 0, 0, 1, 1, 2, 3
the odds are good that a smaller element will be picked.
I don't know exactly what you need. You could also define your own distribution, or use a random number generation library. But the suggested approach is simple and easy to configure.
Update 2: You don't have to generate the array explicitly. For an array of size 1000, you can generate a random number from the interval [0, 1000000] and then configure your own mapping of values: say, intervals of length 1200 for the smaller indices (0-500) and intervals of length 800 for the larger ones (500-1000). The main point is that this way you can easily configure the probabilities, and you don't have to re-implement the random number generator.
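One way to express the same idea without building the array or the intervals by hand is C++11's std::discrete_distribution, where you supply one weight per index (the weights below are arbitrary examples):

#include <random>
#include <vector>

int main()
{
    std::mt19937 gen{std::random_device{}()};
    // Higher weights for smaller indices, so smaller indices are drawn more often.
    std::vector<double> weights{5.0, 4.0, 3.0, 2.0, 1.0};
    std::discrete_distribution<int> dist(weights.begin(), weights.end());
    int index = dist(gen);  // 0 is most likely, 4 least likely
    (void)index;
    return 0;
}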
Use an appropriate random distribution, e.g. the rounded results of an exponential distribution. Pick a distribution that fits your needs, document the distribution you used, and find a good implementation. If code under the GNU General Public License is an option, use the excellent GNU Scientific Library (GSL), or try Boost.Random.
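A minimal sketch with C++11's std::exponential_distribution (the rate parameter and the rejection of out-of-range draws are choices you would tune):

#include <random>

int biased_pick(int n)  // returns an index in [0, n], biased towards 0
{
    static std::mt19937 gen{std::random_device{}()};
    std::exponential_distribution<double> dist(0.5);  // rate 0.5 is an arbitrary choice
    int idx;
    do {
        idx = static_cast<int>(dist(gen));  // rounding down keeps small values most likely
    } while (idx > n);                      // re-draw anything outside the range
    return idx;
}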
Two tools will solve many random-distribution needs:
1) a uniform random number generator, which you have, and
2) a function that maps uniform values onto your target distribution (a sketch of this follows below).
I've gotta head to the city now, but I'll make a note to write up a couple of examples with a drawing when I get back.
There are some worthwhile methods and ideas discussed in this related question (more about generating a normal pseudo random number)
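As one concrete illustration of the "mapping" idea from the two-tools answer above, here is a sketch that maps a uniform value onto a linearly decreasing (triangular) distribution over [0, n]; the target distribution is just an example choice:

#include <cmath>
#include <random>

int triangular_pick(int n)
{
    static std::mt19937 gen{std::random_device{}()};
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    double u = u01(gen);
    // Inverse CDF of the density f(x) = 2(n - x) / n^2 on [0, n]:
    // F(x) = 1 - (1 - x/n)^2, so x = n * (1 - sqrt(1 - u)).
    return static_cast<int>(n * (1.0 - std::sqrt(1.0 - u)));
}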

random complex number

I need an algorithm to generate random complex numbers. Please help. I know how to generate a random number, but a random complex number confuses me.
I would simply generate two random numbers and use one for the real part and one for the imaginary part.
Generate 2 random numbers (x, y) (use the built-in rand/rnd/random class from your environment's libraries), where x is the real part and y is the imaginary part.
Create a complex number class (with a constructor that takes a real and imaginary parameter)
Use the 2 random numbers from step 1 to create a complex number, x + i y
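For example, with std::complex and C++11 <random> (the range [0, 1) for both parts is just an illustrative choice):

#include <complex>
#include <random>

std::complex<double> random_complex()
{
    static std::mt19937 gen{std::random_device{}()};
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    double re = dist(gen);
    double im = dist(gen);
    return {re, im};  // real part, imaginary part
}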
1. Generate two vectors of numbers, say real_vector and imaginary_vector, each of size MAX_SIZE, filled randomly using different seeds.
2. Randomly shuffle the numbers in both vectors (real_vector and imaginary_vector), e.g. with std::random_shuffle (uniform distribution).
3. Randomly generate an index, apply the modulo operator with MAX_SIZE, and select that index from the first vector; that gives the real part of your random number.
4. Repeat step 3 with the second vector to get the imaginary part.
5. Create a complex number from the values obtained in steps 3 and 4 and store it in a container.
6. Go back to step 3 if you want more complex numbers; otherwise stop.
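A rough sketch of these steps (MAX_SIZE, the seeds, and the use of std::shuffle in place of the now-deprecated std::random_shuffle are all my own choices):

#include <algorithm>
#include <complex>
#include <cstddef>
#include <cstdlib>
#include <random>
#include <vector>

std::vector<std::complex<double>> make_random_complex(std::size_t count)
{
    const std::size_t MAX_SIZE = 1000;  // placeholder pool size

    // Step 1: two pools filled using different seeds.
    std::vector<double> real_vector(MAX_SIZE), imaginary_vector(MAX_SIZE);
    std::srand(1);
    for (double& v : real_vector) v = std::rand();
    std::srand(2);
    for (double& v : imaginary_vector) v = std::rand();

    // Step 2: shuffle both pools.
    std::mt19937 gen{std::random_device{}()};
    std::shuffle(real_vector.begin(), real_vector.end(), gen);
    std::shuffle(imaginary_vector.begin(), imaginary_vector.end(), gen);

    // Steps 3-6: pick random indices and combine into complex numbers.
    std::vector<std::complex<double>> result;
    for (std::size_t i = 0; i < count; ++i) {
        std::size_t r  = std::rand() % MAX_SIZE;
        std::size_t im = std::rand() % MAX_SIZE;
        result.emplace_back(real_vector[r], imaginary_vector[im]);
    }
    return result;
}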