I am working on an OpenGL project and my rand() function is not giving me a big enough random range.
I am tasked with writing a diamond program in which one diamond is centered on the screen and 5 are randomly placed elsewhere on the screen. What is happening is my center diamond is where it is supposed to be, but the other five bunch together within a small random range left of center. I have included the function that draws the diamonds.
void myDisplay(void)
{
    srand(time(0));
    GLintPoint CenterPoint;
    int const size = 20;
    CenterPoint.x = screenWidth / 2;
    CenterPoint.y = screenHeight / 2;
    glClear(GL_COLOR_BUFFER_BIT);
    drawDiamond(CenterPoint, size);
    for (int i = 0; i < 5; i++)
    {
        GLfloat x = rand() % 50 + 10;
        GLfloat y = rand() % 200 + 100;
        GLfloat size = rand() % 100;
        GLintPoint diam = {x, y};
        drawDiamond(diam, size);
    }
If more code is needed, please let me know and I will edit. Does anyone have any ideas on how I can correct this? I have "toyed" with the numbers in the rand() calls and it really doesn't seem to do much; the diamonds still bunch, just at different points on the screen. I appreciate the help. Just in case anyone needs to know, I am creating this in VS2012.
This has no business being in this function:
srand(time(0));
This should be called once at the beginning of your program (a good place is just inside main()), and most certainly not in your display routine. Once the seed is set, you should never set it again for your process unless you want to repeat a prior sequence (which, by the looks of it, you don't).
That said, I would strongly advise using the functionality in <random> that comes with your C++11 standard library. With it you can establish distributions (ex: uniform_int_distribution<>) that will do much of your modulo work for you, and correctly account for the problems such things can encounter (Andon pointed out one regarding the likelihood of certain numbers based on the modulus).
Spend some time with <random>. It's worth it. An example that uses the three ranges you're using:
#include <cstdlib>   // EXIT_SUCCESS
#include <iostream>
#include <random>
using namespace std;

int main()
{
    std::random_device rd;
    std::default_random_engine rng(rd());

    // our distributions.
    std::uniform_int_distribution<> dist1(50, 60);
    std::uniform_int_distribution<> dist2(200, 300);
    std::uniform_int_distribution<> dist3(0, 100);

    for (int i = 0; i < 10; ++i)
        std::cout << dist1(rng) << ',' << dist2(rng) << ',' << dist3(rng) << std::endl;

    return EXIT_SUCCESS;
}
Output (obviously varies).
58,292,70
56,233,41
57,273,98
52,204,8
50,284,43
51,292,48
53,220,42
54,281,64
50,290,51
53,220,7
Yeah, it really is just that simple. Like I said, that library is the cat's pajamas. There are many more things it offers, including normal distributions, different engine backends, etc. I highly encourage you to check into it.
As WhozCraig mentioned, seeding your random number generator with the time every time you call myDisplay (...) is a bad idea. This is because time (NULL) has a granularity of 1 second, and in real-time graphics you usually draw your scene more than one time per-second. Thus, you are repeating the same sequence of random numbers every time you call myDisplay (...) when less than 1 second has elapsed.
Also, using modulo arithmetic on a call to rand (...) adversely affects the quality of the returned values, because it changes the probability distribution of the numbers that occur. The preferred technique is to cast rand (...) to float, divide by RAND_MAX, and then multiply the result by your desired range.
GLfloat x = rand() % 50 + 10; /* <-- Bad! */
/* Consider this instead */
GLfloat x = (GLfloat)rand () / RAND_MAX * 50.0f + 10.0f;
Although, come to think of it: why are you using GLfloat for x and y if you are going to store them in an integer data structure two lines later?
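For completeness, here is a hedged sketch of how the five random diamonds might be placed with <random> so they spread over the whole screen instead of bunching; it assumes the screenWidth, screenHeight, GLintPoint and drawDiamond from the question, and the size range is only illustrative:
#include <random>

// Sketch only: spread the five diamonds over the full window.
// Static locals keep the engine and distributions alive across redraws,
// so nothing is re-seeded inside the display callback.
void drawRandomDiamonds()
{
    static std::default_random_engine rng{std::random_device{}()};
    static std::uniform_int_distribution<int> distX(0, screenWidth - 1);
    static std::uniform_int_distribution<int> distY(0, screenHeight - 1);
    static std::uniform_int_distribution<int> distSize(10, 100);

    for (int i = 0; i < 5; ++i)
    {
        GLintPoint diam = { distX(rng), distY(rng) };
        drawDiamond(diam, distSize(rng));
    }
}
Note that because new positions are drawn on every call, the diamonds will still jump around on each redraw; if they should stay put, generate the positions once and only draw them in the display routine.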
I am implementing a Monte Carlo simulation, where I need to run multiple realisations of some dynamics and then take an average over the end state of all the simulations. Since the number of realisations is large, I run them in parallel using OpenMP. Every realisation starts from the same initial conditions, and at each time step a process happens with a given probability; to determine which process, I draw a random number from a uniform distribution.
I want to make sure that all simulations are statistically independent and that there is no overlap in the random numbers that are being drawn.
I use OpenMP to parallelise the for loops, so the skeleton code looks like this:
vector<int> data(number_of_sims);

#pragma omp parallel for
for (int i = 0; i < number_of_sims; i++) {
    // t and r live inside the loop body so each thread has its own copies
    double t = 0;
    double r;

    // run sim
    while (t < T) {
        r = draw_random_uniform();
        if (r < p) do_something();
        else do_something_else();
        t += 1.0; // increment time
    }

    // some calculation
    data[i] = calculate();
}
So every time I want a random number, I call a function that uses a Mersenne Twister seeded with std::random_device.
double draw_random_uniform(){
    static thread_local auto seed = std::random_device{}();
    static thread_local mt19937 mt(seed);
    std::uniform_real_distribution<double> distribution(0.0, 1.0);
    double r = distribution(mt);
    return r;
}
However, since I ultimately want to run this code on a high-performance computing cluster, I want to avoid using std::random_device, as it is risky on systems with little entropy.
So instead I want to create an initial random number generator and then jump it forward a large amount for each of the threads. I have been attempting to do this with the Xoshiro256+ PRNG (I found a good implementation here: https://github.com/Reputeless/Xoshiro-cpp). Something like this, for example:
XoshiroCpp::Xoshiro256Plus prng(42); // properly seeded prng
#pragma omp parallel
{
    static thread_local XoshiroCpp::Xoshiro256Plus lprng(prng); // thread-local copy
    lprng.longJump(); // jump ahead
    // code as before, except use lprng to generate random numbers
    #pragma omp for
    ....
}
However, I cannot get such an implementation to work, and I suspect it is because of the combination of the parallel region and the omp for inside it. I had the thought of pre-generating all of the PRNGs, storing them in a container, and then accessing the relevant one via omp_get_thread_num() inside the parallelised for loop.
I am unsure if this is the best way to go about all this. Any advice is appreciated.
Coordinating random number generators with long jumps can be tricky. Alternatively, there is a much simpler method.
Here is a quote from the author's website:
It is however important that the period is long enough. Moreover, if you run n independent computations starting at random seeds, the sequences used by each computation should not overlap.
Now, given a generator with period P, the probability that subsequences of length L starting at random points in the state space overlap is bounded by n² L / P. If your generator has period 2^256 and you run on 2^64 cores (you will never have them) a computation using 2^64 pseudorandom numbers (you will never have the time), the probability of overlap would be less than 2^-64.
So instead of trying to coordinate, you could just randomly seed a new generator from std::random_device{} in each thread. The period is so large that the sequences will practically never collide.
While this sounds like a very ad-hoc approach, this random-seeding method is actually a widely used and classic method.
You just need to make sure the seeds are different. Depending on the platform, different seeding strategies are commonly proposed:
Using a truly random source
Having an atomic int that is incremented and some hashing
Using another pseudo random number generator to generate a seed sequence
Using a combination of thread id and time to create a seed
If repeatability is not needed, seeding from a random source is the easiest and safest solution.
The 2017 paper by L'Ecuyer et al. gives a good overview of methods for generating parallel streams. It calls this approach "RNG with a 'random' seed for each stream" in chapter 4.
vector<int> data(number_of_sims);

#pragma omp parallel for
for (int i = 0; i < number_of_sims; i++) {
    // random 128 bit seed, drawn independently for every realisation
    std::random_device rd;
    std::seed_seq seed{rd(), rd(), rd(), rd()};
    std::mt19937 mt{seed};

    // run sim; t and r are local, so each thread has its own copies
    double t = 0;
    double r;
    while (t < T) {
        r = draw_random_uniform(mt);
        if (r < p) do_something();
        else do_something_else();
        t += 1.0; // increment time
    }

    // some calculation
    data[i] = calculate();
}
and
double draw_random_uniform(mt19937 &mt){
    std::uniform_real_distribution<double> distribution(0.0, 1.0);
    return distribution(mt);
}
If number_of_sims is not extremely large, there is no need for static or thread_local initialization.
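If it is extremely large, one possible variant keeps a thread-local engine that is seeded once per thread; a sketch (the omp_get_thread_num() term in the seed is only an extra precaution, not something the approach above requires):
#include <omp.h>
#include <random>

// One engine per OpenMP thread, seeded once from std::random_device.
double draw_random_uniform()
{
    thread_local std::mt19937 mt = [] {
        std::seed_seq seq{std::random_device{}(), std::random_device{}(),
                          std::random_device{}(), std::random_device{}(),
                          static_cast<unsigned>(omp_get_thread_num())};
        return std::mt19937{seq};
    }();
    std::uniform_real_distribution<double> distribution(0.0, 1.0);
    return distribution(mt);
}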
You should read "Parallel Random Numbers, as easy as one, two three"
http://www.thesalmons.org/john/random123/papers/random123sc11.pdf
This paper explicitly addresses your forward stepping issues.
You can now find implementations of this generator in maths libraries (such as Intel's MKL, which uses the specialized encryption instructions, so it will be hard to beat by hand!)
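This is not the Random123 API, just a minimal sketch of the counter-based idea it describes: the n-th number of stream sim is a pure function of (sim, n), so any thread can evaluate any position of any stream without shared state. The mixing function here is the well-known splitmix64 finalizer, chosen only for illustration, not for its statistical quality:
#include <cstdint>

// splitmix64 finalizer: a cheap 64-bit mixing function.
uint64_t mix64(uint64_t x)
{
    x += 0x9E3779B97F4A7C15ULL;
    x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ULL;
    x = (x ^ (x >> 27)) * 0x94D049BB133111EBULL;
    return x ^ (x >> 31);
}

// Uniform double in [0, 1) for stream `sim` at position `n`:
// purely a function of its arguments, no state to coordinate.
double counter_uniform(uint64_t sim, uint64_t n)
{
    uint64_t bits = mix64(mix64(sim) + n);
    return (bits >> 11) * (1.0 / 9007199254740992.0);   // top 53 bits / 2^53
}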
I am trying to fill a vector with a specific distribution of nonuniform screen points. These points represent some x and y position on the screen. At some point I am going to draw all of these points on the screen, and they should be unevenly distributed, clustering at the center. Basically, the frequency of points should increase as you get closer to the center, with one side of the screen being a reflection of the other (mirrored over the center of the screen).
I was thinking about using some sort of formula (like y = cos(x) between -pi/2 and pi/2) where the resulting y would equal the frequency of points in that area of the screen (with -pi/2 being the leftmost side of the screen, and so on), but I got stuck on how to apply something like this when creating the points to put into the vector. Note: there is a specific number of points that must be generated.
If the above idea cannot work, maybe a cheaty way of achieving this would be to constantly reduce some step size between each point, but I don't know how I would ensure that the specific number of points reaches the center.
Ex.
// this is a member function inside a class PointList
// where we fill a member variable list (vector) with nonuniform data
void PointList::FillListNonUniform(const int numPoints, const int numPerPoint)
{
    double step = 2;
    double decelerator = 0.01;

    // Do half the screen then duplicate and reverse the sign
    // so both sides of the screen mirror each other
    for (int i = 0; i < numPoints / 2; i++)
    {
        Eigen::Vector2d newData(step, 0);
        for (int j = 0; j < numPerPoint; j++)
        {
            list.push_back(newData);
        }
        decelerator += 0.01;
        step -= 0.05 + decelerator;
    }

    // Do whatever I need to, to mirror the points ...
}
Literally any help would be appreciated. I have briefly looked into std::normal_distribution, but it appears to rely on randomness, so I am unsure whether it would be a good option for what I am trying to do.
You can use something called rejection sampling. The idea is that you have some function of some parameters (in your case two parameters, x and y) which represents the probability density function. In your 2D case, you then generate an (x, y) pair along with a variable p representing the probability. If the probability density function is larger at those coordinates (i.e. f(x, y) > p), the sample is added; otherwise a new pair is generated. You can implement this like:
#include <functional>
#include <vector>
#include <utility>
#include <random>

std::vector<std::pair<double,double>> getDist(int num){
    std::random_device rd{};
    std::mt19937 gen{rd()};

    auto pdf = [] (double x, double y) {
        return /* Some probability density function */;
    };

    std::vector<std::pair<double,double>> ret;
    double x, y, p;
    while((int)ret.size() < num){   // stop once exactly num samples have been accepted
        x = (double)gen()/SOME_CONST_FOR_X;
        y = (double)gen()/SOME_CONST_FOR_Y;
        p = (double)gen()/SOME_CONST_FOR_P;
        if(pdf(x,y) > p) ret.push_back({x,y});
    }
    return ret;
}
This is a very crude draft but should give an idea as to how this might work.
Another option (if you want a normal distribution) would be std::normal_distribution. The example from the reference page can be adapted like so:
#include <random>
#include <vector>
#include <utility>

// x_center/y_center and x_std/y_std describe where the cluster sits and how
// tightly it concentrates (they were left undeclared in the original draft).
std::vector<std::pair<double,double>> getDist(int num, double x_center, double x_std,
                                              double y_center, double y_std){
    std::random_device rd{};
    std::mt19937 gen{rd()};
    std::normal_distribution<> d_x{x_center, x_std};
    std::normal_distribution<> d_y{y_center, y_std};

    std::vector<std::pair<double,double>> ret;
    while((int)ret.size() < num){
        ret.push_back({d_x(gen), d_y(gen)});
    }
    return ret;
}
There are various ways to approach this, depending on the exact distribution you want. Generally speaking, if you have a distribution function f(x) that gives you the probability of a point at a specific distance to the center, then you can integrate it to get the cumulative distribution function F(x). If the CDF can be inverted, you can use the inverse CDF to map a uniform random variable to distances from the center, such that you get the desired distribution. But not all functions are easily inverted.
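For example, with the cos(x) idea from the question: a density proportional to cos(x) on [-pi/2, pi/2] has CDF F(x) = (sin(x) + 1) / 2, so the inverse mapping is x = asin(2u - 1) for a uniform u. A rough sketch of mapping that onto screen x coordinates (function and parameter names are made up for illustration):
#include <cmath>
#include <random>
#include <vector>

// Inverse-CDF sampling for a density proportional to cos(x) on [-pi/2, pi/2]:
// F(x) = (sin(x) + 1) / 2, so x = asin(2u - 1) for uniform u in [0, 1).
std::vector<double> cosDistributedX(int numPoints, double screenWidth)
{
    std::mt19937 gen{std::random_device{}()};
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    const double halfPi = std::asin(1.0);

    std::vector<double> xs;
    xs.reserve(numPoints);
    for (int i = 0; i < numPoints; ++i)
    {
        double angle = std::asin(2.0 * uniform(gen) - 1.0);        // in [-pi/2, pi/2]
        xs.push_back((angle / halfPi * 0.5 + 0.5) * screenWidth);  // dense near the middle
    }
    return xs;
}
Because this mapping is exact, you get precisely numPoints samples, and the left and right halves are symmetric by construction.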
Another option would be to fake it a little bit: for example, make a loop that goes from 0 to the maximum distance from the center, and then for each distance you use the probability function to get the expected number of points at that distance. Then just add exactly that many points at randomly chosen angles. This is quite fast and the result might just be good enough.
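A sketch of that loop, with a cosine falloff standing in for the real probability function; the falloff and the rounding are illustrative, and the total will only be approximately totalPoints because of the rounding:
#include <cmath>
#include <cstdlib>
#include <vector>

struct Pt { double x, y; };

std::vector<Pt> radialPoints(double cx, double cy, double maxDist, int totalPoints)
{
    const double pi = 3.14159265358979323846;
    std::vector<Pt> pts;

    // normalise the weights so the expected counts sum to roughly totalPoints
    double weightSum = 0.0;
    for (int r = 0; r < (int)maxDist; ++r)
        weightSum += std::cos(r / maxDist * pi / 2.0);

    for (int r = 0; r < (int)maxDist; ++r)
    {
        int count = (int)std::round(totalPoints * std::cos(r / maxDist * pi / 2.0) / weightSum);
        for (int k = 0; k < count; ++k)
        {
            double angle = 2.0 * pi * std::rand() / RAND_MAX;      // random direction
            pts.push_back({cx + r * std::cos(angle), cy + r * std::sin(angle)});
        }
    }
    return pts;
}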
Rejection sampling as mentioned by Lala5th is another option, giving you the desired distribution, but potentially taking a long time if large areas of the screen have a very low probability. A way to ensure it finishes in bounded time is to not loop until you have num points added, but to loop over every pixel, and add the coordinates of that pixel if pdf(x,y) > p. The drawback of that is that you won't get exactly num points.
I have an enemy class called Slime, and each slime travels down the path (like in a tower defense game). I'm trying to get a random number which tells the slime when to change direction along the path. But I tested it with 3 Slimes and they all end up with the same random numbers. My enemy class has this code in it to generate random numbers for x and y:
Enemy::Enemy(Level* level, float x, float y, float speed, int direction, int width, int height)
    :
    Entity(level, x, y, width, height), // Each enemy is an entity
    speed(speed),
    direction(direction)
{
    srand((unsigned)time(0));
    rangeX = (level->GetTileWidth() * level->GetScale() - width * level->GetScale()) - (width * level->GetScale()) + 1;
    rangeY = (level->GetTileHeight() * level->GetScale() - height * level->GetScale()) - (height * level->GetScale()) + 1;
    randNumX = (rand() % rangeX) + (width * level->GetScale());
    randNumY = (rand() % rangeY) + (height * level->GetScale());
}
That code is called whenever I create a new Slime object. I'm testing with three different slimes and they all give me the same random numbers. When I restart the program, the numbers are different from the original run, but all three slimes still share the same random numbers. Am I doing something wrong? Should I be seeding rand outside of this class so it's only called once? The rangeX and rangeY just give me a number within the path so no enemy is on the grass or hanging off the path.
You are re-seeding rand() to the same value every time you create a new Slime object. This means that rand() produces the same number for each Slime.
If you only seed rand() once at the beginning of the program (in the main), you'll get different values.
Lose the srand((unsigned)time(0));. Do that only once when your current thread first starts.
If you really need each one of your objects to contain its own random number generator, then either equip each one of your objects with an instance of a class that implements such a random number generator, or roll your own; you will find some good ideas here: https://stackoverflow.com/a/1640399/773113
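If you go the per-object route, a minimal sketch with <random> (the method name is made up for illustration):
#include <random>

class Slime
{
    std::mt19937 rng{std::random_device{}()};   // each object carries its own engine

public:
    // inclusive random integer in [lo, hi]
    int randomInRange(int lo, int hi)
    {
        return std::uniform_int_distribution<int>(lo, hi)(rng);
    }
};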
I would like to generate sample points that randomly fill/cover a space (like in the attached image). I think there is a method called "quasi-random" that can generate such sample points. However, it's a little bit outside my knowledge. Can someone make suggestions or help me find a library that can do this? Or suggest how to start writing such a program?
In the image, 256 sample points are applied on the given space, placed at random positions to cover the whole given space.
Update:
I just tried some code from the Halton Quasi-random Sequence library and compared it with the pseudo-random result that a friend posted below. The result of Halton's method looks better to me. I would like to share some results below.
The code which I wrote is:
#include "halton.hpp"
#include "opencv2/opencv.hpp"
int main()
{
int m_dim_num = 2;
int m_n = 50;
int m_seed[2], m_leap[2], m_base[2];
double m_r[100];
for (int i = 0; i < m_dim_num; i++)
{
m_seed[i] = 0;
m_leap[i] = 1;
m_base[i] = 2+i;
}
cv::Mat out(100, 100, CV_8UC1);
i4_to_halton_sequence( m_dim_num, m_n, 0, m_seed, m_leap, m_base, m_r);
int displaced = 100;
for (int i = 0; i < 100; i=i+2)
{
cv::circle(out, cv::Point2d((m_r[i])*displaced, (m_r[i+1])*displaced), 1, cv::Scalar(0, 255, 0), 1, 8, 0);
}
cv::imshow("test", out);
cv::waitKey(0);
return 0;
}
As I am a little familiar with OpenCV, I wrote this code to plot onto an OpenCV matrix (cv::Mat). The i4_to_halton_sequence() function is from the library that I mentioned above.
The result is not perfect, but it might be usable for my work. Does anyone have another idea?
I am going to give an answer that will seem half-assed. However, this topic has been studied extensively in the literature, so I will just refer you to some summaries from Wikipedia and other places online.
What you want is also called low-discrepancy sequence (or quasi-random, as you pointed out). You can read more about it here: http://en.wikipedia.org/wiki/Low-discrepancy_sequence. It's useful for a number of things, which includes numerical integration and, more recently, simulating retinal ganglion mosaic.
There are many ways to generate low-discrepancy sequences (or pseudo quasi random sequences :p). Some of these are in ACM Collected Algorithms (http://www.netlib.org/toms/index.html).
The most common of these, I think, is the Sobol sequence (algorithm 659 from the ACM collection). You can get some details on it here: http://en.wikipedia.org/wiki/Sobol_sequence
For the most part, unless you are really into it, that stuff looks pretty scary. For quick results, I would use GNU's GSL (GNU Scientific Library): http://www.gnu.org/software/gsl/
This library includes code to generate quasi-random sequences (http://www.gnu.org/software/gsl/manual/html_node/Quasi_002dRandom-Sequences.html) including Sobol sequence (http://www.gnu.org/software/gsl/manual/html_node/Quasi_002drandom-number-generator-examples.html).
If you're still stuck, I can paste some code here, but you're better off digging into GSL.
Well here's another way to do quasi-random that covers the entire space.
Since you have 256 points to use, you can start by plotting those points as a 16x16 grid.
Then apply some function that gives some random offset to each point (say 0 to ±2 on each point's x and y coordinates).
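A rough sketch of that, assuming a 100x100 field (so each of the 16x16 cells is about 6x6) and a jitter of ±2 pixels; adjust the constants for your actual space:
#include <cstdlib>
#include <vector>

struct SamplePoint { int x, y; };

std::vector<SamplePoint> JitteredGrid(int fieldW, int fieldH)
{
    std::vector<SamplePoint> points;
    const int cells = 16;               // 16 x 16 = 256 points
    const int cellW = fieldW / cells;
    const int cellH = fieldH / cells;

    for (int gy = 0; gy < cells; ++gy)
        for (int gx = 0; gx < cells; ++gx)
        {
            int jitterX = std::rand() % 5 - 2;   // -2 .. +2
            int jitterY = std::rand() % 5 - 2;
            points.push_back({gx * cellW + cellW / 2 + jitterX,
                              gy * cellH + cellH / 2 + jitterY});
        }
    return points;
}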
You could create equidistant points (all points at the same distance to their neighbors) and then, in a second step, move each point randomly a bit so that they appear 'random'.
The second idea I have is:
1. Start with one area.
2. Create a random point P about the 'middle' of your area.
3. Divide the area into 4 areas by that point. P is the upper right corner of the lower left sub-area, the upper left corner of the lower right sub-area, and so on.
4. Repeat steps 2..4 for all 4 sub-areas. Of course, not forever, but until you're satisfied.
This algorithm ensures that each 'hole' (i.e. each new sub-area) is filled with a point.
Update: Your initial area should be twice as large as your target area, because of step (2). This ensures that there are points at the edges and corners as well.
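A hedged sketch of that subdivision idea; the jitter window ("about the middle") and the recursion depth are arbitrary choices, and the total number of points comes out to (4^depth - 1) / 3, so depth 4 gives 85 points and depth 5 gives 341:
#include <cstdlib>
#include <vector>

struct Pt { double x, y; };

// Drop one point near the middle of [x0,x1) x [y0,y1), then recurse into the
// four sub-areas that point defines.
void subdivide(double x0, double y0, double x1, double y1,
               int depth, std::vector<Pt>& out)
{
    if (depth == 0) return;

    // "about the middle": somewhere in the central half of the area
    double u = std::rand() / (double)RAND_MAX;
    double v = std::rand() / (double)RAND_MAX;
    double px = x0 + (0.25 + 0.5 * u) * (x1 - x0);
    double py = y0 + (0.25 + 0.5 * v) * (y1 - y0);
    out.push_back({px, py});

    subdivide(x0, y0, px, py, depth - 1, out);   // lower left
    subdivide(px, y0, x1, py, depth - 1, out);   // lower right
    subdivide(x0, py, px, y1, depth - 1, out);   // upper left
    subdivide(px, py, x1, y1, depth - 1, out);   // upper right
}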
This is called a "low discrepancy sequence". The linked Wikipage explains how you can generate them.
But I suspect you already knew this, as your image is very similar to the 2,3 Halton sequence example from Wikipedia
You just need the standard library rand() function:
#include <stdlib.h>
#include <time.h>

unsigned int N = 256;   // number of points
int RANGE_X = 100;      // x range to put sample points in
int RANGE_Y = 100;

void PutSamplePoint(int x, int y)
{
    // your code that places a sample point on the field
}

int main()
{
    srand((unsigned)time(0));   // initialize random generator - uses current time as seed
    for (unsigned int i = 0; i < N; i++)
    {
        int x = rand() % RANGE_X;   // random value in range [0, RANGE_X)
        int y = rand() % RANGE_Y;
        PutSamplePoint(x, y);
    }
    return 0;
}
I've written a little particle system for my 2D application. Here is the rain code:
// HPP -----------------------------------
struct Data
{
    float x, y, x_speed, y_speed;
    int timeout;
    Data();
};

std::vector<Data> mData;
bool mFirstTime;
void processDrops(float windPower, int i);

// CPP -----------------------------------
Data::Data()
    : x(rand() % ScreenResolutionX), y(0)
    , x_speed(0), y_speed(0), timeout(rand() % 130)
{ }

void Rain::processDrops(float windPower, int i)
{
    int posX = rand() % mWindowWidth;
    mData[i].x = posX;
    mData[i].x_speed = WindPower * 0.1; // WindPower is float
    mData[i].y_speed = Gravity * 0.1;   // Gravity is 9.8 * 19.2

    // If this is the first time, place drops randomly within the window height
    if (mFirstTime)
    {
        mData[i].timeout = 0;
        mData[i].y = rand() % mWindowHeight;
    }
    else
    {
        mData[i].timeout = rand() % 130;
        mData[i].y = 0;
    }
}
void Rain::update(float windPower, float elapsed)
{
    // If this is the first time - fill the array with new Data objects
    if (mFirstTime)
    {
        for (int i = 0; i < mMaxObjects; ++i)
        {
            mData.push_back(Data());
            processDrops(windPower, i);
        }
        mFirstTime = false;
    }

    for (int i = 0; i < mMaxObjects; i++)
    {
        // Sleep until timeout reaches 0 (so drops start falling at random times)
        if (mData[i].timeout > 0)
        {
            mData[i].timeout--;
        }
        else
        {
            // Find new x/y positions
            mData[i].x += mData[i].x_speed * elapsed;
            mData[i].y += mData[i].y_speed * elapsed;

            // Find new speeds
            mData[i].x_speed += windPower * elapsed;
            mData[i].y_speed += Gravity * elapsed;

            // Drawing here ...

            // If the drop has fallen out of the screen
            if (mData[i].y > mWindowHeight) processDrops(windPower, i);
        }
    }
}
So the main idea is: I have a structure which holds a drop's position and speed, and a function for processing the drop at some index in the vector. On the first run I create the array at its maximum size and then process it in a loop.
But this code runs slower than everything else I have. Please help me optimize it.
I tried to replace all int with uint16_t, but I don't think it matters.
Replacing int with uint16_t shouldn't make any difference (it'll take less memory, but shouldn't affect running time on most machines).
The code shown already seems pretty fast (it does only what it needs to do, and there are no particular mistakes); I don't see how you could optimize it much further (at most you could remove the check on mFirstTime, but that should make no difference).
If it's slow, it's because of something else. Maybe you've got too many drops, or the rest of your code is so slow that update only gets called a few times per second.
I'd suggest you to profile your program and see where most time is spent.
EDIT:
one thing that could speed up such an algorithm, especially if your system hasn't got an FPU (which is not the case on a personal computer...), would be to replace your floating point values with integers.
Just multiply the elapsed variable (and your constants, like that 0.1) by 1000 so that they represent milliseconds, and use only integers everywhere.
A few points:
The physics is incorrect: the wind force should decrease as the drop's speed approaches the wind speed; also, for simplicity, I would assume that the initial value of x_speed is the speed of the wind.
You don't take friction with the air into account at all, so drops get faster and faster; but that depends on what you want to model.
I would simply assume that a drop falls at constant speed in a constant direction, because this is really what happens very quickly.
You can also optimize all of this very simply, as you don't need to solve the equation of motion by integration; it can be solved directly:
x(t) := x_0 + wind_speed * t
y(t) := y_0 - fall_speed * t
This is the case of stable fall, when the force of gravity is balanced by friction.
x(t) := x_0 + wind_speed * t
y(t) := y_0 - 0.5 * g * t^2
Use this if you want to model drops that fall faster and faster.
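In code, the stable-fall case boils down to something like this (a sketch in screen coordinates where y grows downward, as in the question; the struct and names are illustrative):
// Track when/where a drop spawned and compute its position directly,
// instead of integrating the speed every frame.
struct Drop { float x0, y0, age; };   // spawn position and seconds since spawn

void dropPosition(const Drop& d, float windSpeed, float fallSpeed,
                  float& outX, float& outY)
{
    outX = d.x0 + windSpeed * d.age;   // constant horizontal drift
    outY = d.y0 + fallSpeed * d.age;   // constant (terminal) fall speed
}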
A few things to consider:
In your processDrops function, you pass in windPower but use what looks like a class member or global called WindPower; is that a typo? If the value of Gravity does not change, precompute Gravity * 0.1 once and use that directly.
In your update function, rather than calculating windPower * elapsed and Gravity * elapsed on every iteration, calculate and save them before the loop, then just add. Also, reorganise the loop: there is no need to do the speed calculation and render if the drop is out of the screen; do the check first, and only if the drop is still on screen update the speed and render.
Interestingly, you never check whether the drop is out of the screen in terms of its x coordinate; you check the height but not the width. You could save yourself some calculations and rendering time if you did this check as well.
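Roughly, the reorganised inner loop from those two points could look like this (a sketch reusing the names from the question; the drawing step stays a placeholder):
// Hoist the loop-invariant products and check bounds before doing more work.
const float windDelta    = windPower * elapsed;
const float gravityDelta = Gravity * elapsed;

for (int i = 0; i < mMaxObjects; ++i)
{
    Data& drop = mData[i];

    if (drop.timeout > 0) { drop.timeout--; continue; }

    drop.x += drop.x_speed * elapsed;
    drop.y += drop.y_speed * elapsed;

    // respawn drops that left the screen (checking the width as well)
    if (drop.y > mWindowHeight || drop.x < 0 || drop.x > mWindowWidth)
    {
        processDrops(windPower, i);
        continue;
    }

    drop.x_speed += windDelta;
    drop.y_speed += gravityDelta;

    // Drawing here ...
}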
In the loop, introduce a reference Data& current = mData[i] and use it instead of mData[i]. Use such a reference instead of an index in processDrops as well.
BTW, I thought that consulting mFirstTime in processDrops served no purpose because it would never be true. Hmm, I missed processDrops in the initialization loop. Never mind.
This looks pretty fast to me already.
You could get a tiny speedup by removing the "first time" code and putting it in its own function that is called once, rather than testing it on every call.
You are doing the same calculation on lots of similar data, so maybe you could look into using SSE intrinsics to process several items at once. You'd likely have to rearrange your data for that though, into a structure of vectors rather than a vector of structures as now. I doubt it would help too much, though. How many items are in your vector anyway?
It looks like maybe all your time goes into ... Drawing Here.
It's easy enough to find out for sure where the time is going.