Generating random coordinates with a minimum distance between them - C++

So I want to generate 5 random coordinates within an area of roughly 640,000 points, with each axis value between 100 and 900. These coordinates must be more than 100 apart from each other to prevent overlap. Having searched previous answers, I attempted the piece of code below:
struct point
{
    int x;
    int y;
};

point pointarray[6];

srand ( time(NULL) );
pointarray[1].x = 100+(std::rand()% 801);
pointarray[1].y = 100+(std::rand()% 801);

for (int n=2; n <= 5 ; n++)
{
    double dist;
    int currentx;
    int currenty;
    double xch;
    double ych;

    while (dist < 100)
    {
        srand ( time(NULL) );
        currentx = (100+(std::rand()% 801));
        currenty = (100+(std::rand()% 801));
        xch = (currentx - (pointarray[(n-1)].x));
        ych = (currenty - (pointarray[(n-1)].y));
        dist = sqrt(xch*xch + ych*ych);

        if (dist >= 100 && dist <= 800 )
        {
            currentx = pointarray[n].x;
            currenty = pointarray[n].y;
        }
    }
}
I do not understand why the last 4 points end up as huge numbers (in the millions) while only the first is within the required range.

You use an uninitialized dist; it could be the problem:
double dist;
....
while (dist < 100)
Also, I see no place where you write to pointarray (except pointarray[1]). Should not
currentx = pointarray[n].x;
become
pointarray[n].x = currentx;
Also, if dist gets bigger than 800, nothing happens and we just go on to the next element. I guess the intention was to stay within the while loop instead.
Also, we only check the distance to the one previous point. I'm not sure, but it could be that the intention was to check the distances to all previous points. In that case, we need an inner loop. But make sure there is no possibility that already-placed points make it impossible to place the next one.
Also, perhaps you do not want the second srand ( time(NULL) ); inside the loop: reseeding with time(NULL) on every pass restarts the generator with the same seed within the same second, so rand() keeps producing the same values.
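Putting those fixes together (seed once, write into pointarray[n], retry while any previously placed point is too close, and drop the uninitialized dist), a rough sketch might look like this. It keeps the question's 1-based array convention and only enforces the minimum-distance rule stated in the question, not the <= 800 upper bound; with just 5 points in an 800x800 area the retry loop terminates quickly in practice:

#include <cmath>
#include <cstdlib>
#include <ctime>

struct point
{
    int x;
    int y;
};

point pointarray[6];

void placePoints()
{
    std::srand(std::time(NULL)); // seed once, outside all loops
    pointarray[1].x = 100 + (std::rand() % 801);
    pointarray[1].y = 100 + (std::rand() % 801);

    for (int n = 2; n <= 5; n++)
    {
        bool placed = false;
        while (!placed) // retry until the candidate is far enough from every placed point
        {
            int currentx = 100 + (std::rand() % 801);
            int currenty = 100 + (std::rand() % 801);

            placed = true;
            for (int m = 1; m < n; m++) // check against all previously placed points
            {
                double xch = currentx - pointarray[m].x;
                double ych = currenty - pointarray[m].y;
                if (std::sqrt(xch * xch + ych * ych) < 100)
                {
                    placed = false; // too close, generate a new candidate
                    break;
                }
            }

            if (placed)
            {
                pointarray[n].x = currentx; // note the assignment direction
                pointarray[n].y = currenty;
            }
        }
    }
}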


Difference between logspace generators

Looking through ncmpcpp's spectrum visualizer code, I found a method that generates a "logspace," a vector used to group frequencies into log-scaled bins after applying an FFT.
Here is the (isolated) code:
// Lowest frequency in display
const double HZ_MIN = 20;
// Highest frequency in display
const double HZ_MAX = 20000;
// Number of bars in spectrum
const size_t width = 100;

std::vector<double> dft_logspace;

void GenLogspace() {
    // Calculate number of extra bins needed between 0 HZ and HZ_MIN
    const size_t left_bins = (log10(HZ_MIN) - width*log10(HZ_MIN)) / (log10(HZ_MIN) - log10(HZ_MAX));
    // Generate logspaced frequencies
    dft_logspace.resize(width);
    const double log_scale = log10(HZ_MAX) / (left_bins + dft_logspace.size() - 1);
    for (size_t i = left_bins; i < dft_logspace.size() + left_bins; ++i) {
        dft_logspace[i - left_bins] = pow(10, i * log_scale);
    }
}
I spent a while trying to understand how this works... and it seems to be an awfully complicated way to get the same result as the following function, which works the way you'd expect:
Given limits a and b so that a < b, divide the interval [log10(a), log10(b)] into equal subintervals and exponential-map your way back.
// a = HZ_MIN, and
// b = HZ_MAX
void my_GenLogspace() {
    dft_logspace.resize(width);
    // Generate log-scaled frequency bins between HZ_MAX and HZ_MIN
    for (size_t i = 0; i < width; i++) {
        dft_logspace[i] = HZ_MIN * pow((HZ_MAX/HZ_MIN), ((double) i/(width-1)));
    }
}
I'm fairly sure that these are mathematically identical.
Are they? Is there any reason to use the original method over my rewrite? Does the author of the commit that introduced this code know something I don't?
Edit: (width-1), per Bob__'s suggestion
Got it. If anyone happens to need this later...
// Generate log-scaled vector of frequencies from HZ_MIN to HZ_MAX
void GenLogspace() {
    // Prepare vector
    dft_logspace.resize(width);

    // Calculate number of extra bins needed between 0 HZ and HZ_MIN
    // In logspace, divide the region between MAX and MIN into
    // w - 1 equal segments (by fencepost, this gives us w separators)
    const double d = (
        (log10(HZ_MAX) - log10(HZ_MIN))
        /
        (width - 1)
    );

    // Count how many of these segments will fit between
    // 0 and MIN (note that we're still in logspace).
    // This is how many log-scaled intervals are outside
    // our desired range of frequencies.
    const size_t skip_bins = log10(HZ_MIN) / d;

    // Calculate log scale size.
    // We can't use the value of d here, because d is "anchored" to both MIN and MAX.
    // The last bin should be equal to MAX, but there may not be a bin that is equal to MIN.
    //
    // So, we re-partition our logspace:
    // Divide the distance between 0 and MAX into equal partitions.
    const double log_scale = log10(HZ_MAX) / (skip_bins + width - 1);

    // Exponential-map bins out of logspace, skipping those that are outside our range.
    // Note that the first (skipped) bin is ALWAYS 1, since 10^0 = 1.
    // The last bin ALWAYS equals MAX.
    for (size_t i = skip_bins; i < width + skip_bins; ++i) {
        dft_logspace[i - skip_bins] = pow(10, i * log_scale);
    }
}
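For what it's worth, written out in logspace the two versions differ only in how the lower endpoint is anchored (this just restates the comments above, with a = HZ_MIN, b = HZ_MAX, w = width and i = 0, ..., w-1):

rewrite:  f_i = a \cdot (b/a)^{i/(w-1)} = 10^{\log_{10} a + i\,d}, \qquad d = \frac{\log_{10} b - \log_{10} a}{w - 1}

original: f_i = 10^{(s + i)\,\log_{10} b / (s + w - 1)}, \qquad s = \lfloor \log_{10} a / d \rfloor

Both give exactly b at i = w-1. The rewrite also gives exactly a at i = 0, while the original only does so when log10(a)/d happens to be an integer; otherwise the two agree at the top of the range and drift apart slightly toward the bottom.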

Alive neighbour cells not correctly counted

I know my title isn't very specific, but that's because I have no idea where the problem comes from. I've been stuck on this problem for 2 or 3 hours, and in theory everything should be working, but it's not.
This piece of code:
for ( int x = -1; x <= 1; x++ ) { //Iterate through the 8 neighbour cells plus the one indicated
    for ( int y = -1; y <= 1; y++ ) {
        neighbour = coords(locX + x, locY + y, width); //Get the cell index in the array
        if (existsInOrtho(ortho, neighbour)) { //If the index exists in the array
            if (ortho[neighbour] == 0) { //Cell is dead
                cnt--; //Remove one from the number of alive neighbour cells
            }
        } else { //Cell is not in the zone
            cnt--; //Remove one from the number of alive neighbour cells
        }
    }
}
Iterates through all the neighbour cells to get their value in the array (1 for alive, 0 for dead). The "coords" function, shown here:
int coords(int locX, int locY, int width)
{
    int res = -1;
    locX = locX - 1; //Remove one from both coordinates, since an index starts at 0 (and the zone starts at (1;1) )
    locY = locY - 1;
    res = locX * width + locY; //Small calculation to get the index of the pixel in the array
    return res;
}
Gets the index of the cell in the array. But when I run the code, it doesn't work: the number of alive neighbour cells is not correct (it's as if a cell is missed every time there are some alive in the neighbourhood). I tried stepping through everything manually, and it works, so I don't know what ruins everything in the final code... Here is the complete code. Sorry if I made any English mistakes; it's not my native language.
This code ...
for ( int x = -1; x <= 1; x++ ) { //Iterate through the 8 neighbour cells plus the one indicated
    for ( int y = -1; y <= 1; y++ ) {
actually checks 9 cells. Perhaps you forgot that it checks (x,y) = (0,0). That would include the cell itself as well as its neighbours.
A simple fix is:
for ( int x = -1; x <= 1; x++ ) { //Iterate through the 8 neighbour cells plus the one indicated
    for ( int y = -1; y <= 1; y++ ) {
        if (x || y) {
Also, the simulate function (from your link) makes the common mistake of updating the value of the cell in the same array before processing state changes required for the cells beside it. The easiest fix is to keep two arrays -- two complete copies of the grid (two ortho arrays, in your code). When reading from orthoA, update orthoB. And then on the next generation, flip. Read from orthoB and write to orthoA.
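A minimal sketch of that double-buffering idea, assuming the grid is stored as 0/1 values in a flat std::vector<int> of size width*height (which may not match the ortho layout in the linked code), and using the standard birth/survival rules:

#include <vector>

// Count live neighbours of (x, y) in a width*height grid, skipping the cell itself.
int countAliveNeighbours(const std::vector<int>& grid, int width, int height, int x, int y)
{
    int alive = 0;
    for (int dx = -1; dx <= 1; dx++)
    {
        for (int dy = -1; dy <= 1; dy++)
        {
            if (dx == 0 && dy == 0) continue; // skip the cell itself
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue; // outside the zone
            alive += grid[ny * width + nx];
        }
    }
    return alive;
}

// One generation: read from `current`, write into `next`, then swap the two grids.
void step(std::vector<int>& current, std::vector<int>& next, int width, int height)
{
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            int alive = countAliveNeighbours(current, width, height, x, y);
            int cell = current[y * width + x];
            next[y * width + x] = (cell == 1) ? (alive == 2 || alive == 3) : (alive == 3);
        }
    }
    current.swap(next); // the freshly written grid becomes the one read next generation
}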

3-D Plane Filtering EVD RANSAC... where am I going wrong?

Background
For a computer vision assignment I've been given the task of implementing RANSAC to fit a plane to a given set of points and filter that input list of points by the consensus model using Eigenvalue Decomposition.
I have spent days trying to tweak my code to achieve correct plane filtering behavior on an input set of test data. All you algorithm junkies, this one's for you.
My implementation uses a vector of a ROS data structure (Point32) as inputs, but this is transparent to the problem at hand.
What I've done
When I test for expected plane filtering behavior (correct elimination of outliers >95-99% of the time), I see in my implementation that I only eliminate outliers and extract the main plane of a test point cloud ~30-40% of the time. Other times, I filter a plane that ~somewhat~ fits the expected model, but leaves a lot of obvious outliers inside the consensus model. The fact that this works at all suggests that I'm doing some things right, and some things wrong.
I've tweaked my constants (distance threshold, max iterations, estimated % points fit) to London and back, and I only see small differences in the consensus model.
Implementation (long)
const float RANSAC_ESTIMATED_FIT_POINTS = .80f; // % points estimated to fit the model
const size_t RANSAC_MAX_ITER = 500;             // max RANSAC iterations
const size_t RANDOM_MAX_TRIES = 100;            // max RANSAC random point tries per iteration
const float RANSAC_THRESHOLD = 0.0000001f;      // threshold to determine what constitutes a close point to a plane

/*
    Helper to randomly select an item from a STL container, from stackoverflow.
*/
template <typename I>
I random_element(I begin, I end)
{
    const unsigned long n = std::distance(begin, end);
    const unsigned long divisor = ((long)RAND_MAX + 1) / n;

    unsigned long k;
    do { k = std::rand() / divisor; } while (k >= n);

    std::advance(begin, k);
    return begin;
}

bool run_RANSAC(const std::vector<Point32> all_points,
                Vector3f *out_p0, Vector3f *out_n,
                std::vector<Point32> *out_inlier_points)
{
    for (size_t iterations = 0; iterations < RANSAC_MAX_ITER; iterations ++)
    {
        Point32 p1,p2,p3;
        Vector3f v1;
        Vector3f v2;

        Vector3f n_hat; // keep track of the current plane model
        Vector3f P0;
        std::vector<Point32> points_agree; // list of points that agree with model within

        bool found = false;

        // try RANDOM_MAX_TRIES times to get random 3 points
        for (size_t tries = 0; tries < RANDOM_MAX_TRIES; tries ++) // try to get unique random points 100 times
        {
            // get 3 random points
            p1 = *random_element(all_points.begin(), all_points.end());
            p2 = *random_element(all_points.begin(), all_points.end());
            p3 = *random_element(all_points.begin(), all_points.end());

            v1 = Vector3f (p2.x - p1.x,
                           p2.y - p1.y,
                           p2.z - p1.z ); //Vector P1P2
            v2 = Vector3f (p3.x - p1.x,
                           p3.y - p1.y,
                           p3.z - p1.z); //Vector P1P3

            if (std::abs(v1.dot(v2)) != 1.f) // dot product != 1 means we've found 3 nonlinear points
            {
                found = true;
                break;
            }
        } // end try random element loop

        if (!found) // could not find 3 random nonlinear points in 100 tries, go to next iteration
        {
            ROS_ERROR("run_RANSAC(): Could not find 3 random nonlinear points in %ld tries, going on to iteration %ld", RANDOM_MAX_TRIES, iterations + 1);
            continue;
        }

        // nonlinear random points exist past here

        // fit a plane to p1, p2, p3
        Vector3f n = v1.cross(v2); // calculate normal of plane
        n_hat = n / n.norm();
        P0 = Vector3f(p1.x, p1.y, p1.z);

        // at some point, the original p0, p1, p2 will be iterated over and added to agreed points

        // loop over all points, find points that are inliers to plane
        for (std::vector<Point32>::const_iterator it = all_points.begin();
             it != all_points.end(); it++)
        {
            Vector3f M (it->x - P0.x(),
                        it->y - P0.y(),
                        it->z - P0.z()); // M = (P - P0)

            float d = M.dot(n_hat); // calculate distance

            if (d <= RANSAC_THRESHOLD)
            { // add to inlier points list
                points_agree.push_back(*it);
            }
        } // end points loop

        ROS_DEBUG("run_RANSAC() POINTS AGREED: %li=%f, RANSAC_ESTIMATED_FIT_POINTS: %f", points_agree.size(),
                  (float) points_agree.size() / all_points.size(), RANSAC_ESTIMATED_FIT_POINTS);

        if (((float) points_agree.size()) / all_points.size() > RANSAC_ESTIMATED_FIT_POINTS)
        { // if points agree / total points > estimated % points fitting
            // fit to points_agree.size() points
            size_t n = points_agree.size();

            Vector3f sum(0.0f, 0.0f, 0.0f);
            for (std::vector<Point32>::iterator iter = points_agree.begin();
                 iter != points_agree.end(); iter++)
            {
                sum += Vector3f(iter->x, iter->y, iter->z);
            }

            Vector3f centroid = sum / n; // calculate centroid

            Eigen::MatrixXf M(points_agree.size(), 3);

            for (size_t row = 0; row < points_agree.size(); row++)
            { // build distance vector matrix
                Vector3f point(points_agree[row].x,
                               points_agree[row].y,
                               points_agree[row].z);

                for (size_t col = 0; col < 3; col ++)
                {
                    M(row, col) = point(col) - centroid(col);
                }
            }

            Matrix3f covariance_matrix = M.transpose() * M;

            Eigen::EigenSolver<Matrix3f> eigen_solver;
            eigen_solver.compute(covariance_matrix);

            Vector3f eigen_values = eigen_solver.eigenvalues().real();
            Matrix3f eigen_vectors = eigen_solver.eigenvectors().real();

            // find eigenvalue that is closest to 0
            size_t idx;

            // find minimum eigenvalue, get index
            float closest_eval = eigen_values.cwiseAbs().minCoeff(&idx);

            // find corresponding eigenvector
            Vector3f closest_evec = eigen_vectors.col(idx);

            std::stringstream logstr;
            logstr << "Closest eigenvalue : " << closest_eval << std::endl <<
                      "Corresponding eigenvector : " << std::endl << closest_evec << std::endl <<
                      "Centroid : " << std::endl << centroid;

            ROS_DEBUG("run_RANSAC(): %s", logstr.str().c_str());

            Vector3f all_fitted_n_hat = closest_evec / closest_evec.norm();

            // invoke copy constructors for outbound
            *out_n = Vector3f(all_fitted_n_hat);
            *out_p0 = Vector3f(centroid);
            *out_inlier_points = std::vector<Point32>(points_agree);

            ROS_DEBUG("run_RANSAC():: Success, total_size: %li, inlier_size: %li, %% agreement %f",
                      all_points.size(), out_inlier_points->size(), (float) out_inlier_points->size() / all_points.size());

            return true;
        }
    } // end iterations loop

    return false;
}
Pseudocode from wikipedia for reference:
Given:
    data – a set of observed data points
    model – a model that can be fitted to data points
    n – minimum number of data points required to fit the model
    k – maximum number of iterations allowed in the algorithm
    t – threshold value to determine when a data point fits a model
    d – number of close data points required to assert that a model fits well to data

Return:
    bestfit – model parameters which best fit the data (or nul if no good model is found)

iterations = 0
bestfit = nul
besterr = something really large
while iterations < k {
    maybeinliers = n randomly selected values from data
    maybemodel = model parameters fitted to maybeinliers
    alsoinliers = empty set
    for every point in data not in maybeinliers {
        if point fits maybemodel with an error smaller than t
            add point to alsoinliers
    }
    if the number of elements in alsoinliers is > d {
        % this implies that we may have found a good model
        % now test how good it is
        bettermodel = model parameters fitted to all points in maybeinliers and alsoinliers
        thiserr = a measure of how well model fits these points
        if thiserr < besterr {
            bestfit = bettermodel
            besterr = thiserr
        }
    }
    increment iterations
}
return bestfit
The only difference between my implementation and the Wikipedia pseudocode is that I do not implement the following:
thiserr = a measure of how well model fits these points
if thiserr < besterr {
    bestfit = bettermodel
    besterr = thiserr
}
My guess is that I need to do something related to comparing the (closest_eval) with some sentinel value for the expected minimum eigenvalue corresponding to a normal for planes that tend to fit the model. However this was not covered in class and I have no idea where to start figuring out what's wrong.
Heh, it's funny how thinking about how to present the problem to others can actually solve the problem I'm having.
Solved by simply implementing this with a std::numeric_limits<float>::max() starting best-fit eigenvalue. This is because the best-fit plane extracted on any given iteration of RANSAC is not guaranteed to be THE best-fit plane and may have a huge error in consensus amongst its constituent points, so I need to converge on the best one across iterations. Whoops.
thiserr = a measure of how well model fits these points
if thiserr < besterr {
    bestfit = bettermodel
    besterr = thiserr
}
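In code, that fix amounts to seeding the best error with std::numeric_limits<float>::max() and only overwriting the outputs when a later iteration scores better, instead of returning the first model that clears the inlier threshold. A rough, self-contained sketch of the pattern (the names are illustrative; one_iteration stands in for the body of the loop in run_RANSAC above, and error would be something like the smallest covariance eigenvalue, closest_eval):

#include <cstddef>
#include <functional>
#include <limits>
#include <Eigen/Dense>

using Eigen::Vector3f;

// One successful RANSAC iteration produces a candidate plane plus a score.
struct PlaneCandidate
{
    Vector3f n_hat; // fitted plane normal
    Vector3f p0;    // point on the plane (centroid of the inliers)
    float error;    // fit quality, smaller is better (e.g. smallest eigenvalue)
};

// one_iteration returns true and fills in a candidate when consensus is reached.
bool runBestOfK(std::size_t max_iter,
                const std::function<bool(PlaneCandidate*)>& one_iteration,
                PlaneCandidate* best_out)
{
    float best_error = std::numeric_limits<float>::max();
    bool found_any = false;

    for (std::size_t i = 0; i < max_iter; ++i)
    {
        PlaneCandidate candidate;
        if (!one_iteration(&candidate))
            continue; // no consensus set this iteration

        if (candidate.error < best_error) // keep the best model seen so far,
        {                                 // not the first acceptable one
            best_error = candidate.error;
            *best_out = candidate;
            found_any = true;
        }
    }
    return found_any; // false if no iteration ever produced a model
}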

Implementing De Boor's algorithm for finding points on a B-spline

I've been working on this for several weeks but have been unable to get my algorithm working properly, and I'm at my wit's end. Here's an illustration of what I have achieved:
If everything were working, I would expect a perfect circle/oval at the end.
My sample points (in white) are recalculated every time a new control point (in yellow) is added. At 4 control points everything looks perfect; as I add a 5th on top of the 1st, things still look alright, but on the 6th it starts to go off to the side, and on the 7th it jumps up to the origin!
Below I'll post my code, where calculateWeightForPointI contains the actual algorithm. For reference, here is the information I'm trying to follow. I'd be so grateful if someone could take a look for me.
void updateCurve(const std::vector<glm::vec3>& controls, std::vector<glm::vec3>& samples)
{
    int subCurveOrder = 4; // = k = I want to break my curve into to cubics

    // De boor 1st attempt
    if(controls.size() >= subCurveOrder)
    {
        createKnotVector(subCurveOrder, controls.size());
        samples.clear();
        for(int steps=0; steps<=20; steps++)
        {
            // use steps to get a 0-1 range value for progression along the curve
            // then get that value into the range [k-1, n+1]
            // k-1 = subCurveOrder-1
            // n+1 = always the number of total control points
            float t = ( steps / 20.0f ) * ( controls.size() - (subCurveOrder-1) ) + subCurveOrder-1;

            glm::vec3 newPoint(0,0,0);
            for(int i=1; i <= controls.size(); i++)
            {
                float weightForControl = calculateWeightForPointI(i, subCurveOrder, controls.size(), t);
                newPoint += weightForControl * controls.at(i-1);
            }
            samples.push_back(newPoint);
        }
    }
}

//i = the weight we're looking for, i should go from 1 to n+1, where n+1 is equal to the total number of control points.
//k = curve order = power/degree +1. eg, to break whole curve into cubics use a curve order of 4
//cps = number of total control points
//t = current step/interp value
float calculateWeightForPointI( int i, int k, int cps, float t )
{
    //test if we've reached the bottom of the recursive call
    if( k == 1 )
    {
        if( t >= knot(i) && t < knot(i+1) )
            return 1;
        else
            return 0;
    }

    float numeratorA = ( t - knot(i) );
    float denominatorA = ( knot(i + k-1) - knot(i) );
    float numeratorB = ( knot(i + k) - t );
    float denominatorB = ( knot(i + k) - knot(i + 1) );

    float subweightA = 0;
    float subweightB = 0;

    if( denominatorA != 0 )
        subweightA = numeratorA / denominatorA * calculateWeightForPointI(i, k-1, cps, t);
    if( denominatorB != 0 )
        subweightB = numeratorB / denominatorB * calculateWeightForPointI(i+1, k-1, cps, t);

    return subweightA + subweightB;
}

//returns the knot value at the passed in index
//if i = 1 and we want Xi then we have to remember to index with i-1
float knot(int indexForKnot)
{
    // When getting the index for the knot function i remember to subtract 1 from i because of the difference caused by us counting from i=1 to n+1 and indexing a vector from 0
    return knotVector.at(indexForKnot-1);
}

//calculate the whole knot vector
void createKnotVector(int curveOrderK, int numControlPoints)
{
    int knotSize = curveOrderK + numControlPoints;
    for(int count = 0; count < knotSize; count++)
    {
        knotVector.push_back(count);
    }
}
Your algorithm seems to work for any inputs I tried it on. Your problem might be that a control point is not where it is supposed to be, or that they haven't been initialized properly. It looks like there are two control points, half the height below the bottom left corner.
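One way to rule the curve evaluation in or out is to drive it with hard-coded control points, away from any mouse/UI code. Here is a rough test sketch; it assumes the functions above live in the same file, together with the global std::vector<float> knotVector they rely on (an assumption, since its declaration isn't shown in the question), and uses an arbitrary octagon-like set of control points:

#include <cstddef>
#include <cstdio>
#include <vector>
#include <glm/glm.hpp>

// Assumes the following are declared earlier in this file:
//   std::vector<float> knotVector;
//   void updateCurve(const std::vector<glm::vec3>&, std::vector<glm::vec3>&);
int main()
{
    std::vector<glm::vec3> controls;
    controls.push_back(glm::vec3( 0.0f,  1.0f, 0.0f));
    controls.push_back(glm::vec3( 1.0f,  1.0f, 0.0f));
    controls.push_back(glm::vec3( 1.0f,  0.0f, 0.0f));
    controls.push_back(glm::vec3( 1.0f, -1.0f, 0.0f));
    controls.push_back(glm::vec3( 0.0f, -1.0f, 0.0f));
    controls.push_back(glm::vec3(-1.0f, -1.0f, 0.0f));
    controls.push_back(glm::vec3(-1.0f,  0.0f, 0.0f));
    controls.push_back(glm::vec3(-1.0f,  1.0f, 0.0f));

    std::vector<glm::vec3> samples;
    knotVector.clear(); // start fresh: createKnotVector() only ever appends
    updateCurve(controls, samples);

    // If the evaluation is right, the printed samples should trace a smooth
    // rounded path near the control polygon, with no jumps to the origin.
    for (std::size_t i = 0; i < samples.size(); ++i)
        std::printf("%zu: (%f, %f, %f)\n", i, samples[i].x, samples[i].y, samples[i].z);
    return 0;
}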

Finding local maxima in a grayscale image with OpenCV

I am trying to create my personal blob detection algorithm.
As far as I know, I first must create different Gaussian kernels with different sigmas (which I am doing using Mat kernel = getGaussianKernel(x,y);), then take the Laplacian of that kernel and filter the image with it, so that I create my scale space. Now I need to find the local maxima in each resulting image of the scale space, but I cannot seem to find a proper way to do so... My code so far is
vector <Point> GetLocalMaxima(const cv::Mat Src,int MatchingSize, int Threshold)
{
    vector <Point> vMaxLoc(0);

    if ((MatchingSize % 2 == 0) ) // MatchingSize has to be "odd" and > 0
    {
        return vMaxLoc;
    }

    vMaxLoc.reserve(100); // Reserve place for fast access
    Mat ProcessImg = Src.clone();
    int W = Src.cols;
    int H = Src.rows;
    int SearchWidth = W - MatchingSize;
    int SearchHeight = H - MatchingSize;
    int MatchingSquareCenter = MatchingSize/2;

    uchar* pProcess = (uchar *) ProcessImg.data; // The pointer to image Data

    int Shift = MatchingSquareCenter * ( W + 1);
    int k = 0;

    for(int y=0; y < SearchHeight; ++y)
    {
        int m = k + Shift;
        for(int x=0;x < SearchWidth ; ++x)
        {
            if (pProcess[m++] >= Threshold)
            {
                Point LocMax;
                Mat mROI(ProcessImg, Rect(x,y,MatchingSize,MatchingSize));
                minMaxLoc(mROI,NULL,NULL,NULL,&LocMax);
                if (LocMax.x == MatchingSquareCenter && LocMax.y == MatchingSquareCenter)
                {
                    vMaxLoc.push_back(Point( x+LocMax.x,y + LocMax.y ));
                    // imshow("W1",mROI);cvWaitKey(0); //For gebug
                }
            }
        }
        k += W;
    }
    return vMaxLoc;
}
which I found in this thread here, and which supposedly returns a vector of points where the maxima are. It does return a vector of points, but the x and y coordinates of each point are always -17891602... What to do?
Please, if you are going to point me to something other than correcting my code, be informative, because I know nothing about OpenCV. I am just learning.
The problem here is that your LocMax point is declared inside the inner loop and never initialized, so it's returning garbage data every time. If you look back at the StackOverflow question you linked, you'll see that their similar variable Point maxLoc(0,0) is declared at the top and constructed to point at the middle of the search window. It only needs to be initialized once. Subsequent loop iterations will replace the value with the minMaxLoc function result.
In summary, remove this line in your inner loop:
Point LocMax; // delete this
And add a slightly altered version near the top:
vector <Point> vMaxLoc(0); // This was your original first line
Point LocMax(0,0); // your new second line
That should get you started anyway.
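For the rest of the pipeline described in the question (building the scale space before looking for maxima), something along these lines might serve as a starting point. This is only a sketch, not the poster's code: it approximates the LoG response with GaussianBlur followed by Laplacian instead of building the kernel by hand, assumes an 8-bit grayscale input, reuses the GetLocalMaxima function above, and the sigma values, window size and threshold are illustrative only:

#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;

std::vector<Point> detectBlobs(const Mat& gray8u)
{
    std::vector<Point> all_maxima;
    const double sigmas[] = { 1.6, 3.2, 6.4 }; // illustrative scales

    for (double sigma : sigmas)
    {
        Mat smoothed, response, response8u;
        GaussianBlur(gray8u, smoothed, Size(0, 0), sigma); // Gaussian at this scale
        Laplacian(smoothed, response, CV_32F, 3);          // Laplacian of the smoothed image
        response = response * (sigma * sigma);             // scale-normalise the response
        convertScaleAbs(response, response8u);             // back to 8-bit so GetLocalMaxima can read it

        std::vector<Point> maxima = GetLocalMaxima(response8u, 5, 40); // window size / threshold: tune these
        all_maxima.insert(all_maxima.end(), maxima.begin(), maxima.end());
    }
    return all_maxima;
}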
I found it, guys. The problem was that my threshold was too high. I do not understand why it gave me negative points instead of zero points, but lowering the threshold worked.