I am trying to implement a function like the one below.
For this I am trying to use cubic interpolation and Catmull-Rom interpolation (checking both separately to compare the results). What I am not understanding is what impact these interpolations have on the image, and how we can get the values of the points where we clicked to set that curve. Do we need to define a function for these black points on the image separately?
I am getting help from these resources
Source 1
Source 2
Approx the same focus
Edit
#include <opencv2/opencv.hpp>
using namespace cv;

// ctrl, N and correction() are the control points and curve function from the answer below
int main (int argc, const char** argv)
{
    Mat input = imread("E:\\img2.jpg");
    for (int i = 0; i < input.rows; i++)
    {
        for (int p = 0; p < input.cols; p++)
        {
            //for(int t=0; t<input.channels(); t++)
            //{
            input.at<cv::Vec3b>(i,p)[0] = 255*correction(input.at<cv::Vec3b>(i,p)[0]/255.0, ctrl, N); //B
            input.at<cv::Vec3b>(i,p)[1] = 255*correction(input.at<cv::Vec3b>(i,p)[1]/255.0, ctrl, N); //G
            input.at<cv::Vec3b>(i,p)[2] = 255*correction(input.at<cv::Vec3b>(i,p)[2]/255.0, ctrl, N); //R
            //}
        }
    }
    imshow("image", input);
    waitKey();
    return 0;
}
So if your control points always sit at fixed x coordinates, spread linearly across the whole range, then you can do it like this:
//---------------------------------------------------------------------------
const int N=5; // number of control points (must be >= 4)
float ctrl[N]= // control point y values, initialized with the linear function y=x
{ // x value is index*1.0/(N-1)
0.00,
0.25,
0.50,
0.75,
1.00,
};
//---------------------------------------------------------------------------
float correction(float col,float *ctrl,int n)
{
float di=1.0/float(n-1);
int i0,i1,i2,i3;
float t,tt,ttt;
float a0,a1,a2,a3,d1,d2;
// find start control point
col*=float(n-1);
i1=col; col-=i1;
i0=i1-1; if (i0< 0) i0=0;
i2=i1+1; if (i2>=n) i2=n-1;
i3=i1+2; if (i3>=n) i3=n-1;
// compute interpolation coefficients
d1=0.5*(ctrl[i2]-ctrl[i0]);
d2=0.5*(ctrl[i3]-ctrl[i1]);
a0=ctrl[i1];
a1=d1;
a2=(3.0*(ctrl[i2]-ctrl[i1]))-(2.0*d1)-d2;
a3=d1+d2+(2.0*(-ctrl[i2]+ctrl[i1]));
// now interpolate new color intensity
t=col; tt=t*t; ttt=tt*t;
t=a0+(a1*t)+(a2*tt)+(a3*ttt);
return t;
}
//---------------------------------------------------------------------------
It uses 4-point 1D cubic interpolation (from the link in my comment above). To get the new color just do this:
new_col = correction(old_col,ctrl,N);
This is how it looks:
The green arrows show the derivative error (present only at the start and end points of the whole curve). It can be corrected by adding 2 more control points, one before and one after all the others ...
[Notes]
The color range is <0.0, 1.0>, so if you need a different range, just divide the input and multiply the result accordingly ...
[edit1] the start/end derivatives fixed a little
float correction(float col,float *ctrl,int n)
{
float di=1.0/float(n-1);
int i0,i1,i2,i3;
float t,tt,ttt;
float a0,a1,a2,a3,d1,d2;
// find start control point
col*=float(n-1);
i1=col; col-=i1;
i0=i1-1;
i2=i1+1; if (i2>=n) i2=n-1;
i3=i1+2;
// compute interpolation coefficients
if (i0>=0) d1=0.5*(ctrl[i2]-ctrl[i0]); else d1=ctrl[i2]-ctrl[i1];
if (i3< n) d2=0.5*(ctrl[i3]-ctrl[i1]); else d2=ctrl[i2]-ctrl[i1];
a0=ctrl[i1];
a1=d1;
a2=(3.0*(ctrl[i2]-ctrl[i1]))-(2.0*d1)-d2;
a3=d1+d2+(2.0*(-ctrl[i2]+ctrl[i1]));
// now interpolate new color intensity
t=col; tt=t*t; ttt=tt*t;
t=a0+(a1*t)+(a2*tt)+(a3*ttt);
return t;
}
[edit2] just some clarification on the coefficients
They are all derived from these conditions:
y(t) = a0 + a1*t + a2*t*t + a3*t*t*t // direct value
y'(t) = a1 + 2*a2*t + 3*a3*t*t // first derivation
Now you have points y0,y1,y2,y3, so I chose y(0)=y1 and y(1)=y2, which gives C0 continuity (the value is the same at the joint points between curves).
Now I need C1 continuity, so I add the condition that y'(0) must be the same as y'(1) of the previous curve.
For y'(0) I choose the average direction between points y0,y1,y2
For y'(1) I choose the average direction between points y1,y2,y3
These are the same for the next/previous segments, so it is enough. Now put it all together:
y(0)  = y1 = a0 + a1*0 + a2*0*0 + a3*0*0*0
y(1)  = y2 = a0 + a1*1 + a2*1*1 + a3*1*1*1
y'(0) = 0.5*(y2-y0) = a1 + 2*a2*0 + 3*a3*0*0
y'(1) = 0.5*(y3-y1) = a1 + 2*a2*1 + 3*a3*1*1
And solve this system of equations for a0,a1,a2,a3. You will get what I have in the source code above. If you need different properties of the curve, then just write different equations ...
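For reference, solving that system by hand (with d1 = 0.5*(y2-y0) and d2 = 0.5*(y3-y1)) gives exactly the coefficients used in correction() above:
a0 = y1
a1 = d1
a0 + a1 + a2 + a3 = y2   and   a1 + 2*a2 + 3*a3 = d2
=> a2 = 3*(y2-y1) - 2*d1 - d2
=> a3 = d1 + d2 - 2*(y2-y1)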
[edit3] usage
pic1=pic0; // copy source image to destination; pic is my own image class ...
for (y=0;y<pic1.ys;y++) // go through all pixels
for (x=0;x<pic1.xs;x++)
{
float i;
// read, convert, write pixel
i=pic1.p[y][x].db[0]; i=255.0*correction(i/255.0,red control points,5); pic1.p[y][x].db[0]=i;
i=pic1.p[y][x].db[1]; i=255.0*correction(i/255.0,green control points,5); pic1.p[y][x].db[1]=i;
i=pic1.p[y][x].db[2]; i=255.0*correction(i/255.0,blue control points,5); pic1.p[y][x].db[2]=i;
}
On top are the control points per R,G,B channel. On the bottom left is the original image, and on the bottom right is the corrected image.
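A side note for the OpenCV version in the question: since correction() only depends on an 8-bit channel value, a minimal sketch (assuming the ctrl, N and correction() definitions above) is to evaluate the curve 256 times into a lookup table and let cv::LUT apply it, instead of calling correction() three times per pixel:
// build a 256-entry LUT from the curve once, then apply it to all channels at once
cv::Mat buildCurveLUT(float* ctrl, int n)
{
    cv::Mat lut(1, 256, CV_8U);
    for (int v = 0; v < 256; v++)
        lut.at<uchar>(v) = cv::saturate_cast<uchar>(255.0f * correction(v / 255.0f, ctrl, n));
    return lut;
}
// usage:
// cv::Mat lut = buildCurveLUT(ctrl, N);
// cv::LUT(input, lut, input);   // applies the same curve to B, G and R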
Related
Problem: Generate an extrapolated local path which provides path points ahead of the maximum FOV.
Situation: A car moves around an unknown looped track of varying shape using a limited field of view, which can only reliably provide 3 points ahead of the car plus the car's current position. For more context, the track is defined by cone gates, and the information provided about the locations of said gates is 2D (x,y).
Background: I have successfully generated a vector of midpoints between gates, but I now wish to generate an extrapolated path for the motion control algorithm to use. The format of this path needs to be a sequence of PathPoint(s) which contain (x, y, velocity, gravity). Note that gravity is only used to cap the maximum acceleration and is not important to the situation, nor is velocity, as the post is only concerned with generating the respective (x,y) coordinates.
Attempted Solution Methodology: Fit two cubic functions for the X positions and Y positions using the set of four points, i.e. f(x) and g(y). These functions should then provide the desired (f(x), g(y)) positions as we vary the look-ahead distance to supply 20 path points.
Question: I do not believe this method to be correct, either in theory or in implementation. Can anyone think of an easy/simple methodology to achieve the outcome of having the x position and y position be functions of the overall distance from the car?
double PathPlanningClass::squared(double arg)
{
return arg*arg;
}
double PathPlanningClass::cubed(double arg)
{
return arg*arg*arg;
}
//https://eigen.tuxfamily.org/dox/group__TutorialLinearAlgebra.html
void PathPlanningClass::Coeffs()
{
    // fit y = f(x) through the car position and three midpoints
    Eigen::Matrix4f Aone;
    Eigen::Vector4f bone;
    Aone << 1.0f, _x, squared(_x), cubed(_x),
            1.0f, _midpoints[0].getX(), squared(_midpoints[0].getX()), cubed(_midpoints[0].getX()),
            1.0f, _midpoints[1].getX(), squared(_midpoints[1].getX()), cubed(_midpoints[1].getX()),
            1.0f, _midpoints[_midpoints.size()-1].getX(), squared(_midpoints[_midpoints.size()-1].getX()), cubed(_midpoints[_midpoints.size()-1].getX());
    bone << _y, _midpoints[0].getY(), _midpoints[1].getY(), _midpoints[_midpoints.size()-1].getY();
    Eigen::Vector4f x = Aone.colPivHouseholderQr().solve(bone);
    _Ax = x(0);   // Eigen vectors are 0-indexed
    _Bx = x(1);
    _Cx = x(2);
    _Dx = x(3);
    // fit x = g(y) the same way
    Eigen::Matrix4f Atwo;
    Eigen::Vector4f btwo;
    Atwo << 1.0f, _y, squared(_y), cubed(_y),
            1.0f, _midpoints[0].getY(), squared(_midpoints[0].getY()), cubed(_midpoints[0].getY()),
            1.0f, _midpoints[1].getY(), squared(_midpoints[1].getY()), cubed(_midpoints[1].getY()),
            1.0f, _midpoints[_midpoints.size()-1].getY(), squared(_midpoints[_midpoints.size()-1].getY()), cubed(_midpoints[_midpoints.size()-1].getY());
    btwo << _x, _midpoints[0].getX(), _midpoints[1].getX(), _midpoints[_midpoints.size()-1].getX();
    Eigen::Vector4f y = Atwo.colPivHouseholderQr().solve(btwo);
    _Ay = y(0);
    _By = y(1);
    _Cy = y(2);
    _Dy = y(3);
    return;
}
void PathPlanningClass::extrapolate()
{
// number of desired points
int numOfpoints = 20;
// distance to be extrapolated from car's location
double distance = 10;
// the argument for g(y) and f(x)
double arg = distance/numOfpoints;
for (int i = 0 ; i < numOfpoints; i++)
{
double farg = _Ax + _Bx*arg*i + _Cx*squared(arg*i) + _Dx*cubed(arg*i);
double garg = _Ay + _By*arg*i + _Cy*squared(arg*i) + _Dy*cubed(arg*i);
PathPoint newPoint(farg, garg, velocity(_x, _y, _yaw), 9.8);
_path.push_back(newPoint);
}
return;
}
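One way to get what the question asks for, x and y as functions of the distance from the car, is to parametrize both coordinates by the cumulative chord length s through the four known points and fit x(s) and y(s) separately. A minimal sketch of that idea (my own illustration, not the original code; fitByArcLength is a hypothetical helper reusing the same Eigen solver):
#include <Eigen/Dense>
#include <array>
#include <cmath>

// Fit cubic polynomials x(s) and y(s), where s is the cumulative chord length
// along the four points (car position followed by the three midpoints).
// Returns {coeffs_x, coeffs_y}, each ordered as (A, B, C, D) for A + B*s + C*s^2 + D*s^3.
std::array<Eigen::Vector4d, 2> fitByArcLength(const std::array<double,4>& px,
                                              const std::array<double,4>& py)
{
    // cumulative chord length: s[0] = 0, s[i] = s[i-1] + dist(p[i-1], p[i])
    std::array<double,4> s{0,0,0,0};
    for (int i = 1; i < 4; i++)
        s[i] = s[i-1] + std::hypot(px[i]-px[i-1], py[i]-py[i-1]);

    Eigen::Matrix4d A;
    Eigen::Vector4d bx, by;
    for (int i = 0; i < 4; i++)
    {
        A.row(i) << 1.0, s[i], s[i]*s[i], s[i]*s[i]*s[i];
        bx(i) = px[i];
        by(i) = py[i];
    }
    return { A.colPivHouseholderQr().solve(bx),    // x(s) coefficients
             A.colPivHouseholderQr().solve(by) };  // y(s) coefficients
}
With this parametrization, extrapolate() can keep stepping arg = distance/numOfpoints, but arg*i then really is a distance along the path fed to both polynomials.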
I'm fairly new to programming and would like to know how to start implementing the following algorithm in C++:
Given a binary image where pixels with intensity 255 show edges and pixels with intensity 0 show the background, find line segments longer than n pixels in the image. t is a counter showing the number of iterations without finding a line, and tm is the maximum number of iterations allowed before exiting the program.
1. Let t = 0.
2. Take two edge points randomly from the image and find the equation of the line passing through them.
3. Find m, the number of other edge points in the image that are within distance d pixels of the line.
4. If m > n, go to Step 5. Otherwise (m ≤ n), increment t by 1; if t < tm go to Step 2, and if t ≥ tm exit the program.
5. Draw the line and remove the edge points falling within distance d pixels of it from the image. Then go to Step 1.
Basically, I just want to randomly pick two points from the image, find the distance between them, and if that distance is too small, detect a line between them.
I would appreciate it if a small code snippet were provided to get me started.
This is more like RANSAC parametric line detection. I will also keep this post updated if I get it done.
/* Display Routine */
#include "define.h"
ByteImage bimg; //A copy of the image to be viewed
int width, height; //Window dimensions
GLfloat zoomx = 1.0, zoomy = 1.0; //Pixel zoom
int win; //Window index
void resetViewer();
void reshape(int w, int h) {
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
if ((w!=width) || (h!=height)) {
zoomx=(GLfloat)w/(GLfloat)bimg.nc;
zoomy=(GLfloat)h/(GLfloat)bimg.nr;
glPixelZoom(zoomx,zoomy);
}
width=w; height=h;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0.0, (GLdouble)w, 0.0, (GLdouble)h);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
void mouse(int button, int state, int x, int y) {
glutPostRedisplay();
if((button == GLUT_LEFT_BUTTON) && (state == GLUT_DOWN) &&
(zoomx==1.0) && (zoomy==1.0)){
printf(" row=%d, col=%d, int=%d.\n", y,x, (int)bimg.image[(bimg.nr-1-y)*bimg.nc+x]);
glutPostRedisplay();
}
}
void display() {
glClear(GL_COLOR_BUFFER_BIT);
glRasterPos2i(0, 0);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glDrawPixels((GLsizei)bimg.nc,(GLsizei)bimg.nr, GL_LUMINANCE,GL_UNSIGNED_BYTE, bimg.image);
glutSwapBuffers();
}
Let us assume you have an int image[XDIMENSION][YDIMENSION].
Let t=0.
int t = 0; // ;-)
Take two edge points randomly from the image and find equation of the line passing through them.
Brute force: you could randomly search the image for points and re-search when they are not edge points
struct Point {
int x;
int y;
};
bool is_edge(Point a) {
return image[a.x][a.y] == 255;
}
int randomUpto(int upto) {
int r = rand() % upto;
return r;
}
, which needs the pseudo-random number generator to be initialized via
srand(time(NULL));
To find edge points
Point a;
do {
a.x = randomUpto(XDIMENSION);
a.y = randomUpto(YDIMENSION);
} while ( ! is_edge(a) );
Find m, the number of other edge points in the image that are within distance d pixels of the line.
You need the line between the points. Some searching yields this fine answer, which leads to
std::vector<Point> getLineBetween(Point a, Point b) {
double dx = b.x - a.x;
double dy = b.y - a.y;
double dist = sqrt(dx * dx + dy * dy);
dx /= dist;
dy /= dist;
std::vector<Point> points;
points.push_back(a);
for ( int i = 0 ; i < 2*dist; i++ ) {
Point tmp;
tmp.x = a.x + (int)(i * dx /2.0);
tmp.y = a.y + (int)(i * dy /2.0);
if ( tmp.x != points.back().x
|| tmp.y != points.back().y ) {
points.push_back(tmp);
}
}
return points;
}
Do you see a pattern here? Isolate the steps into substeps, ask Google, look at the documentation, and try out stuff until it works.
Your next steps might be to
create a distance function; Euclidean should suffice
find all points close to the line (or close to a point, which is easier) based on the distance function; a sketch of both is shown below
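A minimal sketch of those two steps (my own illustration, building on the Point and is_edge helpers above): a point-to-line distance via the 2D cross product, and a count of the edge points within distance d, which is the m from Step 3 of the algorithm.
#include <cmath>

// perpendicular distance from point p to the infinite line through a and b
double distanceToLine(Point a, Point b, Point p) {
    double dx = b.x - a.x;
    double dy = b.y - a.y;
    double len = std::sqrt(dx * dx + dy * dy);
    // 2D cross product gives the parallelogram area; divide by the base length
    return std::fabs(dx * (p.y - a.y) - dy * (p.x - a.x)) / len;
}

// m = number of edge points within distance d of the line through a and b
int countEdgePointsNearLine(Point a, Point b, double d) {
    int m = 0;
    for (int x = 0; x < XDIMENSION; x++)
        for (int y = 0; y < YDIMENSION; y++) {
            Point p{x, y};
            if (is_edge(p) && distanceToLine(a, b, p) <= d)
                m++;
        }
    return m;
}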
Try out some and come back if you still need help.
Given a shape's original centroid + vertices, i.e. if it's a triangle, I know all three vertices' coordinates. How could I then create a scaling function with a scaling factor as a parameter, as below? However, my current code has an error and the results are huge shapes, much larger than what I'm scaling by (I only want a scale factor of 2).
void Shape::scale(double factor)
{
int x, y, xx, xy;
int disx, disy;
for (itr = vertices.begin(); itr != vertices.end(); ++itr) {
//translate obj to origin (0,0)
x = itr->getX() - centroid.getX();
y = itr->getY() - centroid.getY();
//finds distance between centroid and vertex
disx = x + itr->getX();
disy = y + itr->getY();
xx = disx * factor;
xy = disy * factor;
//translate obj back
xx = xx + centroid.getX();
xy = xy + centroid.getY();
//set new coord
itr->setX(xx);
itr->setY(xy);
}
}
I know how to use an iterator to run through the vertices; my main point of confusion is how to do the maths with the factor to scale my shape's size.
This is how I declare and initialise a vertex:
// could i possible do (scale*x,scale*y)? or would that be problematic..
vertices.push_back(Vertex(x, y));
Also, the grid is e.g. 100x100. If a scaled shape would be too big to fit into that grid, I want to exit the scale function so that the shape won't be enlarged at all. How can this be done effectively? So far I have a for loop, but that just loops over the vertices, so it will only stop the vertices that would fall outside the grid, instead of cancelling the entire shape, which would be ideal.
If my question is too broad, please ask and I shall edit it further.
The first thing you need to do is find the center of mass of your set of points, which is the arithmetic mean of the coordinates of your points. Then, for each point, calculate the line between the center of mass and that point. Now the only thing left is to put the point on that line, but at factor * current_distance away, where current_distance is the distance from the mass center to the given point before rescaling.
void Shape::scale(double factor)
{
Vertex mass_center = Vertex(0., 0.);
for(int i = 0; i < vertices.size(); i++)
{
mass_center.x += vertices[i].x;
mass_center.y += vertices[i].y;
}
mass_center.x /= vertices.size();
mass_center.y /= vertices.size();
for(int i = 0; i < vertices.size(); i++)
{
//this is a vector that leads from mass center to current vertex
Vertex vec = Vertex(vertices[i].x - mass_center.x, vertices[i].y - mass_center.y);
vertices[i].x = mass_center.x + factor * vec.x;
vertices[i].y = mass_center.y + factor * vec.y;
}
}
If you already know the centroid of a shape and the vertices are stored as offsets from that point, then scaling in rectangular coordinates is just multiplying the x and y components of each vertex by the appropriate scaling factor (with a negative value flipping the shape around the axis).
void Shape::scale(double x_factor, double y_factor){
    for(auto i = 0; i < vertices.size(); ++i){
        vertices[i].x *= x_factor;
        vertices[i].y *= y_factor;
    }
}
You could then just overload this function with one that takes a single parameter and calls this function with the same value for x and y.
void Shape::scale(double factor){
Shape::scale(factor, factor);
}
If your vertex values are not centered at the origin (i.e. they are absolute positions rather than offsets from the centroid), then you will also have to translate them relative to the centroid before scaling, as in the previous answer.
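Regarding the question's 100x100 grid requirement (cancel the whole scale if any vertex would land outside), one approach is to compute the scaled vertices into a temporary vector first and commit them only if they all fit. This is just a sketch on top of the centroid-based version above; the grid size, the centroid member and the Vertex accessors are taken from the question and may need adapting:
// Sketch: scale about the centroid, but abort without modifying the shape
// if any scaled vertex would fall outside the grid.
bool Shape::scaleIfFits(double factor, double grid_w, double grid_h)
{
    std::vector<Vertex> scaled;
    scaled.reserve(vertices.size());
    for (const Vertex& v : vertices) {
        double x = centroid.getX() + factor * (v.getX() - centroid.getX());
        double y = centroid.getY() + factor * (v.getY() - centroid.getY());
        if (x < 0 || x > grid_w || y < 0 || y > grid_h)
            return false;                 // cancel: shape left untouched
        scaled.push_back(Vertex(x, y));
    }
    vertices = scaled;                    // commit only when every vertex fits
    return true;
}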
I was wondering what the best approach would be for detecting 'figures' in an array of 2D points.
In this example I have two 'templates': figure 1 is a template and figure 2 is a template.
Each of these templates exists only as a vector of points with x,y coordinates.
Let's say we have a third vector of points with x,y coordinates.
What would be the best way to find and isolate the points in the third vector that match one of the first two arrays (including scaling and rotation)?
I have been trying nearest neighbours (FlannBasedMatcher) and even an SVM implementation, but it doesn't seem to get me any result; template matching doesn't seem to be the way to go either, I think. I am not working on images but only on 2D points in memory...
Especially because the input vector always has more points than the original data set to be compared with.
All it needs to do is find the points in the array that match a template.
I am not a 'specialist' in machine learning or OpenCV. I guess I have been overlooking something from the beginning...
Thank you very much for your help/suggestions.
Just for fun I tried this:
Choose two points of the point dataset and compute the transformation mapping the first two pattern points to those points.
Test whether all transformed pattern points can be found in the data set.
This approach is very naive and has a complexity of O(m*n²) with n data points and a single pattern of size m (points). This complexity might increase for some nearest neighbor search methods. So you have to consider whether it is efficient enough for your application.
Some improvements could include a heuristic to avoid testing all n² combinations of points, but for that you need background information such as the maximal pattern scaling or something like that.
For evaluation I first created a pattern:
Then I create random points and add the pattern somewhere within (scaled, rotated and translated):
After some computation this method recognizes the pattern. The red line shows the chosen points for transformation computation.
Here's the code:
// draw a set of points on a given destination image
void drawPoints(cv::Mat & image, std::vector<cv::Point2f> points, cv::Scalar color = cv::Scalar(255,255,255), float size=10)
{
for(unsigned int i=0; i<points.size(); ++i)
{
cv::circle(image, points[i], 0, color, size);
}
}
// assumes a 2x3 (affine) transformation (CV_32FC1). does not change the input points
std::vector<cv::Point2f> applyTransformation(std::vector<cv::Point2f> points, cv::Mat transformation)
{
for(unsigned int i=0; i<points.size(); ++i)
{
const cv::Point2f tmp = points[i];
points[i].x = tmp.x * transformation.at<float>(0,0) + tmp.y * transformation.at<float>(0,1) + transformation.at<float>(0,2) ;
points[i].y = tmp.x * transformation.at<float>(1,0) + tmp.y * transformation.at<float>(1,1) + transformation.at<float>(1,2) ;
}
return points;
}
const float PI = 3.14159265359;
// similarity transformation uses same scaling along both axes, rotation and a translation part
cv::Mat composeSimilarityTransformation(float s, float r, float tx, float ty)
{
cv::Mat transformation = cv::Mat::zeros(2,3,CV_32FC1);
// compute rotation matrix and scale entries
float rRad = PI*r/180.0f;
transformation.at<float>(0,0) = s*cosf(rRad);
transformation.at<float>(0,1) = s*sinf(rRad);
transformation.at<float>(1,0) = -s*sinf(rRad);
transformation.at<float>(1,1) = s*cosf(rRad);
// translation
transformation.at<float>(0,2) = tx;
transformation.at<float>(1,2) = ty;
return transformation;
}
// create random points
std::vector<cv::Point2f> createPointSet(cv::Size2i imageSize, std::vector<cv::Point2f> pointPattern, unsigned int nRandomDots = 50)
{
// subtract center of gravity to allow more intuitive rotation
cv::Point2f centerOfGravity(0,0);
for(unsigned int i=0; i<pointPattern.size(); ++i)
{
centerOfGravity.x += pointPattern[i].x;
centerOfGravity.y += pointPattern[i].y;
}
centerOfGravity.x /= (float)pointPattern.size();
centerOfGravity.y /= (float)pointPattern.size();
pointPattern = applyTransformation(pointPattern, composeSimilarityTransformation(1,0,-centerOfGravity.x, -centerOfGravity.y));
// create random points
//unsigned int nRandomDots = 0;
std::vector<cv::Point2f> pointset;
srand (time(NULL));
for(unsigned int i =0; i<nRandomDots; ++i)
{
pointset.push_back( cv::Point2f(rand()%imageSize.width, rand()%imageSize.height) );
}
cv::Mat image = cv::Mat::ones(imageSize,CV_8UC3);
image = cv::Scalar(255,255,255);
drawPoints(image, pointset, cv::Scalar(0,0,0));
cv::namedWindow("pointset"); cv::imshow("pointset", image);
// add point pattern to a random location
float scaleFactor = rand()%30 + 10.0f;
float translationX = rand()%(imageSize.width/2)+ imageSize.width/4;
float translationY = rand()%(imageSize.height/2)+ imageSize.height/4;
float rotationAngle = rand()%360;
std::cout << "s: " << scaleFactor << " r: " << rotationAngle << " t: " << translationX << "/" << translationY << std::endl;
std::vector<cv::Point2f> transformedPattern = applyTransformation(pointPattern,composeSimilarityTransformation(scaleFactor,rotationAngle,translationX,translationY));
//std::vector<cv::Point2f> transformedPattern = applyTransformation(pointPattern,trans);
drawPoints(image, transformedPattern, cv::Scalar(0,0,0));
drawPoints(image, transformedPattern, cv::Scalar(0,255,0),3);
cv::imwrite("dataPoints.png", image);
cv::namedWindow("pointset + pattern"); cv::imshow("pointset + pattern", image);
for(unsigned int i=0; i<transformedPattern.size(); ++i)
pointset.push_back(transformedPattern[i]);
return pointset;
}
void programDetectPointPattern()
{
cv::Size2i imageSize(640,480);
// create a point pattern, this can be in any scale and any relative location
std::vector<cv::Point2f> pointPattern;
pointPattern.push_back(cv::Point2f(0,0));
pointPattern.push_back(cv::Point2f(2,0));
pointPattern.push_back(cv::Point2f(4,0));
pointPattern.push_back(cv::Point2f(1,2));
pointPattern.push_back(cv::Point2f(3,2));
pointPattern.push_back(cv::Point2f(2,4));
// transform the pattern so it can be drawn
cv::Mat trans = cv::Mat::ones(2,3,CV_32FC1);
trans.at<float>(0,0) = 20.0f; // scale x
trans.at<float>(1,1) = 20.0f; // scale y
trans.at<float>(0,2) = 20.0f; // translation x
trans.at<float>(1,2) = 20.0f; // translation y
// draw the pattern
cv::Mat drawnPattern = cv::Mat::ones(cv::Size2i(128,128),CV_8U);
drawnPattern *= 255;
drawPoints(drawnPattern,applyTransformation(pointPattern, trans), cv::Scalar(0),5);
// display and save pattern
cv::imwrite("patternToDetect.png", drawnPattern);
cv::namedWindow("pattern"); cv::imshow("pattern", drawnPattern);
// draw the points and the included pattern
std::vector<cv::Point2f> pointset = createPointSet(imageSize, pointPattern);
cv::Mat image = cv::Mat(imageSize, CV_8UC3);
image = cv::Scalar(255,255,255);
drawPoints(image,pointset, cv::Scalar(0,0,0));
// normally we would have to use some nearest neighbor distance computation, but to make it easier here,
// we create a small area around every point, which allows to test for point existence in a small neighborhood very efficiently (for small images)
// in the real application this "inlier" check should be performed by k-nearest neighbor search and threshold the distance,
// efficiently evaluated by a kd-tree
cv::Mat pointImage = cv::Mat::zeros(imageSize,CV_8U);
float maxDist = 3.0f; // how exact must the pattern be recognized, can there be some "noise" in the position of the data points?
drawPoints(pointImage, pointset, cv::Scalar(255),maxDist);
cv::namedWindow("pointImage"); cv::imshow("pointImage", pointImage);
// choose two points from the pattern (can be arbitrary so just take the first two)
cv::Point2f referencePoint1 = pointPattern[0];
cv::Point2f referencePoint2 = pointPattern[1];
cv::Point2f diff1; // difference vector
diff1.x = referencePoint2.x - referencePoint1.x;
diff1.y = referencePoint2.y - referencePoint1.y;
float referenceLength = sqrt(diff1.x*diff1.x + diff1.y*diff1.y);
diff1.x = diff1.x/referenceLength; diff1.y = diff1.y/referenceLength;
std::cout << "reference: " << std::endl;
std::cout << referencePoint1 << std::endl;
// now try to find the pattern
for(unsigned int j=0; j<pointset.size(); ++j)
{
cv::Point2f targetPoint1 = pointset[j];
for(unsigned int i=0; i<pointset.size(); ++i)
{
cv::Point2f targetPoint2 = pointset[i];
cv::Point2f diff2;
diff2.x = targetPoint2.x - targetPoint1.x;
diff2.y = targetPoint2.y - targetPoint1.y;
float targetLength = sqrt(diff2.x*diff2.x + diff2.y*diff2.y);
diff2.x = diff2.x/targetLength; diff2.y = diff2.y/targetLength;
// with nearest-neighborhood search this line will be similar or the maximal neighbor distance must be relative to targetLength!
if(targetLength < maxDist) continue;
// scale:
float s = targetLength/referenceLength;
// rotation:
float r = -180.0f/PI*(atan2(diff2.y,diff2.x) + atan2(diff1.y,diff1.x));
// scale and rotate the reference point to compute the translation needed
std::vector<cv::Point2f> origin;
origin.push_back(referencePoint1);
origin = applyTransformation(origin, composeSimilarityTransformation(s,r,0,0));
// compute the translation which maps the two reference points on the two target points
float tx = targetPoint1.x - origin[0].x;
float ty = targetPoint1.y - origin[0].y;
std::vector<cv::Point2f> transformedPattern = applyTransformation(pointPattern,composeSimilarityTransformation(s,r,tx,ty));
// now test if all transformed pattern points can be found in the dataset
bool found = true;
for(unsigned int i=0; i<transformedPattern.size(); ++i)
{
cv::Point2f curr = transformedPattern[i];
// here we check whether there is a point drawn in the image. If you have no image you will have to perform a nearest neighbor search.
// this can be done with a balanced kd-tree in O(log n) time
// building such a balanced kd-tree has to be done once for the whole dataset and needs O(n*(log n)) afair
if((curr.x >= 0)&&(curr.x <= pointImage.cols-1)&&(curr.y>=0)&&(curr.y <= pointImage.rows-1))
{
if(pointImage.at<unsigned char>(curr.y, curr.x) == 0) found = false;
// if working with kd-tree: if nearest neighbor distance > maxDist => found = false;
}
else found = false;
}
if(found)
{
std::cout << composeSimilarityTransformation(s,r,tx,ty) << std::endl;
cv::Mat currentIteration;
image.copyTo(currentIteration);
cv::circle(currentIteration,targetPoint1,5, cv::Scalar(255,0,0),1);
cv::circle(currentIteration,targetPoint2,5, cv::Scalar(255,0,255),1);
cv::line(currentIteration,targetPoint1,targetPoint2,cv::Scalar(0,0,255));
drawPoints(currentIteration, transformedPattern, cv::Scalar(0,0,255),4);
cv::imwrite("detectedPattern.png", currentIteration);
cv::namedWindow("iteration"); cv::imshow("iteration", currentIteration); cv::waitKey(-1);
}
}
}
}
I'm attempting to determine if a specific point lies inside a polyhedron. In my current implementation, the method I'm working on takes the point we're looking for and an array of the faces of the polyhedron (triangles in this case, but it could be other polygons later). I've been trying to work from the info found here: http://softsurfer.com/Archive/algorithm_0111/algorithm_0111.htm
Below, you'll see my "inside" method. I know that the nrml/normal thing is kind of weird .. it's the result of old code. When I was running this it seemed to always return true no matter what input I gave it. (This is solved, please see my answer below -- this code is working now.)
bool Container::inside(Point* point, float* polyhedron[3], int faces) {
Vector* dS = Vector::fromPoints(point->X, point->Y, point->Z,
100, 100, 100);
float T_e = 0; // entering parameter (fractional, so float rather than int)
float T_l = 1; // leaving parameter
for (int i = 0; i < faces; i++) {
float* polygon = polyhedron[i];
float* nrml = normal(&polygon[0], &polygon[1], &polygon[2]);
Vector* normal = new Vector(nrml[0], nrml[1], nrml[2]);
delete nrml;
float N = -((point->X-polygon[0][0])*normal->X +
(point->Y-polygon[0][1])*normal->Y +
(point->Z-polygon[0][2])*normal->Z);
float D = dS->dot(*normal);
if (D == 0) {
if (N < 0) {
return false;
}
continue;
}
float t = N/D;
if (D < 0) {
T_e = (t > T_e) ? t : T_e;
if (T_e > T_l) {
return false;
}
} else {
T_l = (t < T_l) ? t : T_l;
if (T_l < T_e) {
return false;
}
}
}
return true;
}
This is in C++ but as mentioned in the comments, it's really very language agnostic.
The link in your question has expired and I could not understand the algorithm from your code. Assuming you have a convex polyhedron with counterclockwise oriented faces (seen from outside), it should be sufficient to check that your point is behind all faces. To do that, you can take the vector from the point to each face and check the sign of the scalar product with the face's normal. If it is positive, the point is behind the face; if it is zero, the point is on the face; if it is negative, the point is in front of the face.
Here is some complete C++11 code, that works with 3-point faces or plain more-point faces (only the first 3 points are considered). You can easily change bound to exclude the boundaries.
#include <vector>
#include <cassert>
#include <iostream>
#include <cmath>
struct Vector {
double x, y, z;
Vector operator-(Vector p) const {
return Vector{x - p.x, y - p.y, z - p.z};
}
Vector cross(Vector p) const {
return Vector{
y * p.z - p.y * z,
z * p.x - p.z * x,
x * p.y - p.x * y
};
}
double dot(Vector p) const {
return x * p.x + y * p.y + z * p.z;
}
double norm() const {
return std::sqrt(x*x + y*y + z*z);
}
};
using Point = Vector;
struct Face {
std::vector<Point> v;
Vector normal() const {
assert(v.size() > 2);
Vector dir1 = v[1] - v[0];
Vector dir2 = v[2] - v[0];
Vector n = dir1.cross(dir2);
double d = n.norm();
return Vector{n.x / d, n.y / d, n.z / d};
}
};
bool isInConvexPoly(Point const& p, std::vector<Face> const& fs) {
for (Face const& f : fs) {
Vector p2f = f.v[0] - p; // f.v[0] is an arbitrary point on f
double d = p2f.dot(f.normal());
d /= p2f.norm(); // for numeric stability
constexpr double bound = -1e-15; // use +1e-15 to exclude boundaries
if (d < bound)
return false;
}
return true;
}
int main(int argc, char* argv[]) {
assert(argc == 3+1);
char* end;
Point p;
p.x = std::strtod(argv[1], &end);
p.y = std::strtod(argv[2], &end);
p.z = std::strtod(argv[3], &end);
std::vector<Face> cube{ // faces with 4 points, last point is ignored
Face{{Point{0,0,0}, Point{1,0,0}, Point{1,0,1}, Point{0,0,1}}}, // front
Face{{Point{0,1,0}, Point{0,1,1}, Point{1,1,1}, Point{1,1,0}}}, // back
Face{{Point{0,0,0}, Point{0,0,1}, Point{0,1,1}, Point{0,1,0}}}, // left
Face{{Point{1,0,0}, Point{1,1,0}, Point{1,1,1}, Point{1,0,1}}}, // right
Face{{Point{0,0,1}, Point{1,0,1}, Point{1,1,1}, Point{0,1,1}}}, // top
Face{{Point{0,0,0}, Point{0,1,0}, Point{1,1,0}, Point{1,0,0}}}, // bottom
};
std::cout << (isInConvexPoly(p, cube) ? "inside" : "outside") << std::endl;
return 0;
}
Compile it with your favorite compiler
clang++ -Wall -std=c++11 code.cpp -o inpoly
and test it like
$ ./inpoly 0.5 0.5 0.5
inside
$ ./inpoly 1 1 1
inside
$ ./inpoly 2 2 2
outside
If your mesh is concave, and not necessarily watertight, that’s rather hard to accomplish.
As a first step, find the point on the surface of the mesh closest to the query point. You need to keep track of the location and the specific feature: whether the closest point is in the middle of a face, on an edge of the mesh, or at one of the vertices of the mesh.
If the feature is a face, you're lucky: you can use the winding to find whether it's inside or outside. Compute the normal to the face (you don't even need to normalize it, non-unit length will do), then compute dot( normal, pt - tri[0] ), where pt is your point and tri[0] is any vertex of the face. If the faces have consistent winding, the sign of that dot product will tell you whether the point is inside or outside.
If the feature is an edge, compute the normals of both adjacent faces (by normalizing a cross product), add them together, use that as the normal of the mesh, and compute the same dot product.
The hardest case is when a vertex is the closest feature. To compute the mesh normal at that vertex, you need to compute the sum of the normals of the faces sharing that vertex, weighted by the 2D angle of each face at that vertex. For example, for a vertex of a cube with 3 neighbouring triangles, the weights will be Pi/2. For a vertex of a cube with 6 neighbouring triangles, the weights will be Pi/4. For real-life meshes the weights will be different for each face, in the range [ 0 .. +Pi ]. This means you are going to need some inverse trigonometry for this case to compute the angle, probably acos().
If you want to know why that works, see e.g. “Generating Signed Distance Fields From Triangle Meshes” by J. Andreas Bærentzen and Henrik Aanæs.
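A minimal sketch of the sign test and the angle-weighted vertex normal described above (my own illustration; IncidentTri is a hypothetical type, and it assumes you have already found the closest point and its feature):
#include <algorithm>
#include <cmath>
#include <vector>

struct V3 { double x, y, z; };
static V3 sub(V3 a, V3 b)    { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static V3 add(V3 a, V3 b)    { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
static V3 mul(V3 a, double s){ return {a.x*s, a.y*s, a.z*s}; }
static double dot(V3 a, V3 b){ return a.x*b.x + a.y*b.y + a.z*b.z; }
static V3 cross(V3 a, V3 b)  { return {a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x}; }
static V3 normalize(V3 a)    { double n = std::sqrt(dot(a,a)); return {a.x/n, a.y/n, a.z/n}; }

// hypothetical triangle type: v is the shared vertex, a and b are the other two corners
struct IncidentTri { V3 v, a, b; };

// angle-weighted pseudo-normal at a vertex (weights in [0, Pi], hence acos)
V3 angleWeightedNormal(const std::vector<IncidentTri>& tris)
{
    V3 n{0, 0, 0};
    for (const IncidentTri& t : tris) {
        V3 e1 = normalize(sub(t.a, t.v));
        V3 e2 = normalize(sub(t.b, t.v));
        double angle = std::acos(std::max(-1.0, std::min(1.0, dot(e1, e2))));
        n = add(n, mul(normalize(cross(e1, e2)), angle));
    }
    return n; // no need to normalize it for a sign test
}

// sign test: positive => outside, negative => inside (consistent winding assumed)
bool isOutside(V3 queryPoint, V3 closestPoint, V3 pseudoNormal)
{
    return dot(pseudoNormal, sub(queryPoint, closestPoint)) > 0.0;
}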
I already answered this question a couple of years ago, but since that time I've discovered a much better algorithm. It was invented in 2018; here's the link.
The idea is rather simple. Given that specific point, compute the sum of the signed solid angles of all faces of the polyhedron as viewed from that point. If the point is outside, that sum should be zero. If the point is inside, that sum should be ±4·π steradians; + or − depends on the winding order of the faces of the polyhedron.
That particular algorithm packs the polyhedron into a tree, which dramatically improves performance when you need multiple inside/outside queries for the same polyhedron. The algorithm only computes solid angles for individual faces when the face is very close to the query point. For large sets of faces far away from the query point, the algorithm instead uses an approximation of these sets, based on some numbers kept in the nodes of the BVH tree built from the source mesh.
With the limited precision of FP math, and the losses from the BVH approximation if you use it, that angle will never be exactly 0 nor ±4·π. But still, the 2·π threshold works rather well in practice, at least in my experience: if the absolute value of that sum of solid angles is less than 2·π, consider the point to be outside.
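To make the idea concrete, here is a brute-force sketch of the solid-angle sum (my own illustration, without the BVH acceleration from the paper), using the Van Oosterom–Strackee formula for the signed solid angle of a triangle:
#include <cmath>
#include <vector>

struct Vec { double x, y, z; };
static Vec sub(Vec a, Vec b)    { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static double dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec cross(Vec a, Vec b)  { return {a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x}; }
static double len(Vec a)        { return std::sqrt(dot(a,a)); }

struct Tri { Vec a, b, c; };

// signed solid angle of triangle (a,b,c) as seen from p (Van Oosterom & Strackee)
double signedSolidAngle(const Tri& t, Vec p)
{
    Vec a = sub(t.a, p), b = sub(t.b, p), c = sub(t.c, p);
    double la = len(a), lb = len(b), lc = len(c);
    double numer = dot(a, cross(b, c)); // scalar triple product
    double denom = la*lb*lc + dot(a,b)*lc + dot(a,c)*lb + dot(b,c)*la;
    return 2.0 * std::atan2(numer, denom);
}

// winding-number test: |sum of solid angles| > 2*pi  =>  point is inside
bool insideByWindingNumber(const std::vector<Tri>& mesh, Vec p)
{
    const double PI = 3.14159265358979323846;
    double sum = 0.0;
    for (const Tri& t : mesh)
        sum += signedSolidAngle(t, p);
    return std::fabs(sum) > 2.0 * PI;
}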
It turns out that the problem was my reading of the algorithm referenced in the link above. I was reading:
N = - dot product of (P0-Vi) and ni;
as
N = - dot product of S and ni;
Having changed this, the code above now seems to work correctly. (I'm also updating the code in the question to reflect the correct solution).