Generated random points are too close together [closed] - c++

I'm trying to generate a uniform grid, then select random points out of it. The problem is that the chosen points are very close to each other. Here is my attempt.
Minimal example: https://ideone.com/80jFm2
int random(int min, int max) // range: [min, max]
{
    static bool first = true;
    if (first)
    {
        srand(time(NULL)); // seeding for the first time only!
        first = false;
    }
    return min + rand() % ((max + 1) - min);
}
jcv_point p1, p2, p3;
p1.x = 325;
p1.y = 239;
p2.x = 431;
p2.y = 448;
p3.x = 640;
p3.y = 685;
float radius = 100;

std::vector<jcv_point> grid;
std::vector<int> rand_nums;
for (int i = 20; i < 1000; i++)
{
    for (int j = 20; j < 1000; j++)
    {
        float x = (float)i;
        float y = (float)j;
        float distance1 = sqrt(pow(p1.x - x, 2) + pow(p1.y - y, 2));
        float distance2 = sqrt(pow(p2.x - x, 2) + pow(p2.y - y, 2));
        float distance3 = sqrt(pow(p3.x - x, 2) + pow(p3.y - y, 2));
        if (distance1 > radius && distance2 > radius && distance3 > radius)
        {
            jcv_point p;
            p.x = x;
            p.y = y;
            grid.push_back(p);
            int idx = random(0, grid.size());
            rand_nums.push_back(idx % width);
        }
    }
}

points = (jcv_point*)malloc(sizeof(jcv_point) * (size_t)count);
for (int i = 0; i < count; ++i)
{
    int idx = random(0, grid.size());
    points[i] = grid[rand_nums[idx]];
}

In the first part of your code you have, basically:
// create new grid
for (x = ...)
    for (y = ...)
        // add a point to the grid
        // generate a random number between 0 and grid.size
So your random numbers will at first be clustered at the low end. That is, the first time you generate a random number between 0 and 1, then between 0 and 2, and so on. As your grid size increases, the range of the random numbers grows, but overall your distribution will be skewed towards the low end.
I'm not sure what the point is of generating your random numbers and storing them while you build the grid. I also don't see the point of taking the generated random number modulo the width. It seems like you could get the desired effect by generating random points after the entire grid is built.
Build the grid, then generate random numbers between 0 and grid.size.
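A minimal sketch of that approach (reusing the question's jcv_point, p1/p2/p3, radius, points, count, and random() helper; not the only way to do it):

// Build the complete grid first (same distance filtering as in the question).
std::vector<jcv_point> grid;
for (int i = 20; i < 1000; i++)
{
    for (int j = 20; j < 1000; j++)
    {
        float x = (float)i;
        float y = (float)j;
        if (sqrt(pow(p1.x - x, 2) + pow(p1.y - y, 2)) > radius &&
            sqrt(pow(p2.x - x, 2) + pow(p2.y - y, 2)) > radius &&
            sqrt(pow(p3.x - x, 2) + pow(p3.y - y, 2)) > radius)
        {
            jcv_point p;
            p.x = x;
            p.y = y;
            grid.push_back(p);
        }
    }
}

// Only now draw random indices over the full, fixed grid size.
points = (jcv_point*)malloc(sizeof(jcv_point) * (size_t)count);
for (int i = 0; i < count; ++i)
{
    int idx = random(0, (int)grid.size() - 1); // -1 keeps the index inside the vector
    points[i] = grid[idx];
}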

Related

Limited float precision and infinitely harmonic signal generation problem

Suppose we need to generate a very long harmonic signal, ideally infinitely long. At first glance, the solution seems trivial:
Sample1:
float t = 0;
while (runned)
{
    float v = sinf(w * t);
    t += dt;
}
Unfortunately, this is not a working solution: for t >> dt, the limited float precision produces incorrect values. Fortunately, we can recall that sin(2*PI*n + x) = sin(x), where n is an arbitrary integer, so it is not difficult to modify the example into an "infinite" analogue
Sample2:
float t = 0;
float tau = 2 * M_PI / w;
while (runned)
{
    float v = sinf(w * t);
    t += dt;
    if (t > tau) t -= tau;
}
For one physical simulation, I needed to get an infinite signal that is the sum of harmonic signals, like this:
Sample3:
float getSignal(float x)
{
    float ret = 0;
    for (int i = 0; i < modNum; i++)
        ret += sin(w[i] * x);
    return ret;
}

float t = 0;
while (runned)
{
    float v = getSignal(t);
    t += dt;
}
In this form, the code does not work correctly for large t, for reasons similar to Sample1. The question is: how do I get an "infinite" implementation of the Sample3 algorithm? I assume the solution should look like Sample2. A very important note: generally speaking, the w[i] are arbitrary and not harmonics, i.e. the frequencies are not multiples of some base frequency, so I can't find a common tau. Using types with greater precision (double, long double) is not allowed.
Thanks for your advice!
You can choose an arbitrary tau and store the phase remainders for each mode when subtracting it from t (as #Damien suggested in the comments).
Also, representing the time as t = dt * it, where it is an integer, can improve numerical stability (I think).
Maybe something like this:
int ndt = 1000; // accumulate phase every 1000 steps for example
float tau = dt * ndt;
std::vector<float> phases(modNum, 0.0f);

int it = 0;
float t = 0.0f;
while (runned)
{
    t = dt * it;
    float v = 0.0f;
    for (int i = 0; i < modNum; i++)
    {
        v += sinf(w[i] * t + phases[i]);
    }
    if (++it >= ndt)
    {
        it = 0;
        for (int i = 0; i < modNum; ++i)
        {
            phases[i] = fmod(w[i] * tau + phases[i], 2 * M_PI);
        }
    }
}

Can I find a quadratic function when I'm given x and y values? [closed]

I was working on an algorithm where I'm given x and y values, and I need to find, if possible, the quadratic function for those values.
What I actually mean is:
If I'm given the output:
f(1) = 1
f(2) = 5
f(3) = 13
f(4) = 25
So, the function should return 1, 5, 13, 25.
The function that gives that output is 2x^2 - 2x + 1, but how do I find it?
If your y-values are exact, you can solve a system of linear equations
a*x1^2 + b*x1 + c = y1
a*x2^2 + b*x2 + c = y2
a*x3^2 + b*x3 + c = y3
Substitute the known values for three points and find the unknown coefficients a, b, c.
If the values are approximate, use the least squares method (more precisely, polynomial least squares) with all points.
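For instance, a minimal sketch of the exact three-point case using Cramer's rule (plain C++, no library; the points are the ones from the question):

#include <iostream>

// Solve a*x^2 + b*x + c = y through three exact points via Cramer's rule.
bool quadraticThrough3Points(double x1, double y1, double x2, double y2,
                             double x3, double y3,
                             double& a, double& b, double& c)
{
    // Determinant of the 3x3 system in the unknowns a, b, c.
    double det = x1 * x1 * (x2 - x3) - x2 * x2 * (x1 - x3) + x3 * x3 * (x1 - x2);
    if (det == 0) return false; // points do not determine a unique parabola

    a = (y1 * (x2 - x3) - y2 * (x1 - x3) + y3 * (x1 - x2)) / det;
    b = (x1 * x1 * (y2 - y3) - x2 * x2 * (y1 - y3) + x3 * x3 * (y1 - y2)) / det;
    c = (x1 * x1 * (x2 * y3 - x3 * y2) - x2 * x2 * (x1 * y3 - x3 * y1)
         + x3 * x3 * (x1 * y2 - x2 * y1)) / det;
    return true;
}

int main()
{
    double a, b, c;
    if (quadraticThrough3Points(1, 1, 2, 5, 3, 13, a, b, c))
        std::cout << a << " " << b << " " << c << "\n"; // prints 2 -2 1
}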
This is a piece of code for least squares analysis. It is written for the newmat matrix library; since I don't use it right now, I am too lazy to rewrite it for the armadillo library I am currently using. Just to avoid mistakes: newmat starts vector/matrix indexes from 1 instead of 0.
void polynomial_approx( const vector<double>& x, const vector<double>& fx, vector<double>& coeff, int pd)
{
    // x - input values of independent variable
    // fx - input values of dependent variable
    // coeff - output vector with polynomial coefficients
    // pd - polynomial degree
    if ( x.size() < (size_t)(pd + 1) ){ // degree pd needs at least pd+1 points
        cerr << "Not enough data for such high polynomial degree." << endl;
        return;
    }
    coeff.clear();

    Matrix A(x.size(), pd + 1);
    Matrix D(pd+1, pd+1);
    ColumnVector y(fx.size());
    ColumnVector dx;

    // converting vector from c++ type to newmat vector
    for (unsigned int i = 0; i < fx.size(); i++)
        y(i+1) = fx[i];

    // creating the design matrix (newmat is 1-based, std::vector is 0-based)
    for (unsigned int i = 1; i <= x.size(); i++ ){
        for (unsigned int j = 1; j <= (unsigned int)(pd + 1); j++ ){
            A(i,j) = pow(x[i-1], (double)(j-1));
        }
    }

    // compute the unknown coefficients (normal equations)
    dx = (A.t() * A ).i() * A.t() * y;
    for (int i = 1; i <= dx.Nrows(); i++)
        coeff.push_back( dx(i) );

    /* reconstruction of polynomial (sanity check of the fit) */
    vector<double> recon (x.size(), 0.0 );
    for ( unsigned int i = 0; i < x.size(); i++){
        for ( unsigned int j = 0; j < coeff.size(); j++){
            recon[i] += coeff[j]*pow( x[i], (double) j );
        }
    }
}
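A hypothetical call with the points from the question might look like this (assuming the newmat headers and the function above are available); the coefficients should come out as roughly 1, -2, 2, i.e. 1 - 2x + 2x^2:

vector<double> xs = {1, 2, 3, 4};
vector<double> ys = {1, 5, 13, 25};
vector<double> coeff;
polynomial_approx(xs, ys, coeff, 2); // fit a degree-2 polynomial
// coeff[0] ~ 1 (constant), coeff[1] ~ -2 (x), coeff[2] ~ 2 (x^2)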

calculating numerical integral in c++ [closed]

I want to write a function that calculates the integral of e^(cos x) over the range (a, b).
double integral(double(*f)(double x), double a, double b, int n) {
    double step = (b - a) / n; // width of each small rectangle
    double area = 0.0;         // signed area
    for (int i = 0; i < n; i++) {
        area += f(a + (i + 0.5) * step) * step; // sum up each small rectangle
    }
    return area;
}
This is what I have found, but I'm new to C++ and can't work with pointers.
If there is another way, please help me.
The function you found allows you to integrate any function you want, and that function is the first parameter of the integral method. You can just remove the first parameter ('double(*f)(double x)'), because the function you want to integrate is known (e^cos(x)), so you don't need to pass it as an argument. Then, in the for loop, you just replace the call to f with e^cos(x). The method will look like this:
#include <cmath> // for exp and cos

double integral(double a, double b, int n) {
    double step = (b - a) / n; // width of each small rectangle
    double area = 0.0;         // signed area
    for (int i = 0; i < n; i++) {
        area += exp(cos(a + (i + 0.5) * step)) * step; // sum up each small rectangle
    }
    return area;
}
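For example, a call like the following (a sketch; the point count 1000 is arbitrary) would approximate the integral of e^(cos x) over (0, 1):

double result = integral(0.0, 1.0, 1000); // midpoint rule with 1000 rectangles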
#include <cmath>
#include <functional>

template<typename T>
T integral(const std::function<T(T)>& f, T a, T b, int n) {
    auto step = (b - a) / n;       // width of each small rectangle
    auto area = static_cast<T>(0); // signed area
    for (auto i = 0; i < n; i++)
    {
        // sum up each small rectangle
        area += f(a + (i + static_cast<T>(0.5)) * step) * step;
    }
    return area;
}

int main()
{
    std::function<float(float)> f_sine = [](float in) { return sin(in); };
    auto two = integral(f_sine, 0.0f, 3.14f, 20);
    return 0;
}
That will be $3.50

Calculate the running standard deviation

I am converting equations to C++. Is this correct for a running standard deviation?
this->runningStandardDeviation = (this->sumOfProcessedSquaredSamples - sumSquaredDividedBySampleCount) / (sampleCount - 1);
Here is the full function:
void BM_Functions::standardDeviationForRunningSamples (float samples [], int sampleCount)
{
    // update the running process samples count
    this->totalSamplesProcessed += sampleCount;
    // get the mean of the samples
    double mean = meanForSamples(samples, sampleCount);
    // sum the deviations
    // sum the squared deviations
    for (int i = 0; i < sampleCount; i++)
    {
        // update the deviation sum of processed samples
        double deviation = samples[i] - mean;
        this->sumOfProcessedSamples += deviation;
        // update the squared deviations sum
        double deviationSquared = deviation * deviation;
        this->sumOfProcessedSquaredSamples += deviationSquared;
    }
    // get the sum squared
    double sumSquared = this->sumOfProcessedSamples * this->sumOfProcessedSamples;
    // get the sum/N
    double sumSquaredDividedBySampleCount = sumSquared / this->totalSamplesProcessed;
    this->runningStandardDeviation = sqrt((this->sumOfProcessedSquaredSamples - sumSquaredDividedBySampleCount) / (sampleCount - 1));
}
A numerically stable and efficient algorithm for computing the running mean and variance/SD is Welford's algorithm.
One C++ implementation would be:
std::pair<double, double> getMeanVariance(const std::vector<double>& vec) {
    double mean = 0, M2 = 0, variance = 0;
    size_t n = vec.size();
    for (size_t i = 0; i < n; ++i) {
        double delta = vec[i] - mean;
        mean += delta / (i + 1);
        M2 += delta * (vec[i] - mean);
        variance = M2 / (i + 1);
        if (i >= 2) {
            // <-- You can use the running mean and variance here
        }
    }
    return std::make_pair(mean, variance);
}
Note: to get the SD, just take sqrt(variance)
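A minimal usage sketch (assuming <vector>, <cmath>, and <utility> are included):

std::vector<double> samples = {2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0};
auto mv = getMeanVariance(samples);
double sd = sqrt(mv.second); // mean = 5, population SD = 2 for this data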
You may want to check for a sufficient sampleCount (1 would cause division by zero).
Make sure that the variables have a suitable data type (floating point).
Otherwise this looks correct...

normalizing a list of doubles to range -1 to 1 or 0 - 255

I have a list of doubles that range anywhere between -1.396655 and 1.74707, and could be even higher or lower; either way, I would know the Min and Max values before normalizing. My question is: how can I normalize these values to between -1 and 1, or, even better, convert them from double values to char values of 0 to 255?
Any help would be appreciated.
double range = (double)(max - min);
value = 255 * (value - min) / range;
You need a mapping of the form y = mx + c, and you need to find an m and a c. You have two fixed data-points, i.e.:
1 = m * max + c
-1 = m * min + c
From there, it's simple algebra: m = 2 / (max - min) and c = -(max + min) / (max - min).
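As a small sketch of that mapping (assuming max > min):

// Map value from [min, max] linearly onto [-1, 1].
double toUnitRange(double value, double min, double max)
{
    double m = 2.0 / (max - min);
    double c = -(max + min) / (max - min);
    return m * value + c;
}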
The easiest thing is to first shift all the values so that min is 0, by subtracting Min from each number. Then multiply by 255/(Max-Min), so that the shifted Max will get mapped to 255, and everything else will scale linearly. So I believe your equation would look like this:
newval = (unsigned char) ((oldval - Min)*(255/(Max-Min)))
You may want to round a bit more carefully before casting to char.
There are two changes to be made.
First, use 256 as the limit.
Second, make sure your range is scaled back slightly to avoid getting 256.
public int GetRangedValue(double value, double min, double max)
{
    int outputLimit = 256;
    double range = (max - min) - double.Epsilon; // Here we shorten the range slightly
    // Then we build a range such that value >= 0 and value < 1
    double rangedValue = (value - min) / range;
    return (int)(outputLimit * rangedValue);
}
With these two changes, you will get the correct distribution in your output.
I ran into this need when I was doing some convolution work in C++.
Hopefully my code can give you a useful reference :)
#include <cstdint>  // uint8_t
#include <cstring>  // memset
#include <limits>   // std::numeric_limits

bool normalize(uint8_t*& dst, double* src, int width, int height) {
    dst = new uint8_t[sizeof(uint8_t)*width*height];
    if (dst == NULL)
        return false;
    memset(dst, 0, sizeof(uint8_t)*width*height);

    double max = std::numeric_limits<double>::lowest(); // note: min() would be the smallest positive double
    double min = std::numeric_limits<double>::max();
    double range = std::numeric_limits<double>::max();
    double norm = 0.0;

    // find the boundary
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            if (src[i+j*width] > max)
                max = src[i+j*width];
            if (src[i+j*width] < min)
                min = src[i+j*width];
        }
    }

    // normalize the double matrix into a uint8_t matrix
    range = max - min;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            norm = src[i+j*width];
            norm = 255.0*(norm-min)/range;
            dst[i+j*width] = (uint8_t)norm;
        }
    }
    return true;
}
Basically, the output (which the caller receives through 'dst') is in the range [0, 255].
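A hypothetical usage sketch (the width, height, and data buffer names are just placeholders):

int width = 640, height = 480;
double* data = new double[width * height]; // fill with your values
uint8_t* bytes = nullptr;
if (normalize(bytes, data, width, height)) {
    // use bytes[...] here
}
delete[] bytes;
delete[] data;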