I'm using findContours in OpenCV 2.9 (C++). What I obtain is a vector<vector<Point>> contours, which describes my contours. Let's say I've got a rectangle with its contour stored in that vector. What I would like to do next is connect the left and right side of the contour at an arbitrary point with a line, e.g. from 10 pixels below the upper-left corner to 10 pixels below the upper-right corner of the rectangle. The line should end where the contour does. Is there a better approach than just going scanline-wise through that lane and checking every pixel with pointPolygonTest?
Thanks in advance!
Suppose you have the corners (the top-left, top-right, bottom-right and bottom-left points); then you can easily calculate the intersection between two lines, each defined by two points.
For example, line(P1, P4) intersects with line(R1, R2) and the intersection point is I:
Here is a code snippet for calculating the intersection point, if the lines do intersect:
// Finds the intersection of two lines, or returns false.
// The lines are defined by (o1, p1) and (o2, p2).
bool intersection(cv::Point2f o1, cv::Point2f p1, cv::Point2f o2, cv::Point2f p2, cv::Point2f &r)
{
    cv::Point2f x  = o2 - o1;
    cv::Point2f d1 = p1 - o1;
    cv::Point2f d2 = p2 - o2;

    float cross = d1.x*d2.y - d1.y*d2.x;
    if (std::abs(cross) < /*EPS*/1e-8)
        return false;

    double t1 = (x.x * d2.y - x.y * d2.x) / cross;
    r = o1 + d1 * t1;
    return true;
}
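For the original question (connecting the left and right side of the contour 10 px below the top edge), here is a minimal usage sketch; the corner values and the image Mat are hypothetical placeholders:

// Hypothetical corners: top-left, top-right, bottom-right, bottom-left.
cv::Point2f P1(100, 50), P2(300, 50), P3(300, 200), P4(100, 200);

// Two points defining a horizontal line 10 px below the top edge.
cv::Point2f A(0, P1.y + 10), B(1, P1.y + 10);

cv::Point2f left, right;
if (intersection(P1, P4, A, B, left) &&   // left side of the rectangle
    intersection(P2, P3, A, B, right))    // right side of the rectangle
{
    cv::line(image, cv::Point(left), cv::Point(right), cv::Scalar(0, 255, 0)); // image is an assumed cv::Mat
}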
My idea was to implement a simple square detection using OpenCV & C++ (Objective-C++). I've already extracted the biggest areas of the image, as you can see below (the colored ones), but now I'd like to extract the corner points (TopLeft, TopRight, BottomLeft, BottomRight) of all the areas, to afterwards check whether the distance between the 4 corners is similar for each of them, or whether the angle between the lines is nearly 45°.
See the images I was talking about:
However, I got to the point where I've tried to extract the areas' corner points to get something like this afterwards:
This was my first idea of how to get the 4 corner points (see the steps below):
1. Calculate the contour's center:
double avgx = 0, avgy = 0;
for (size_t i = 0; i < contourPoints.size(); i++) {
    avgx += contourPoints[i].x;
    avgy += contourPoints[i].y;
}
avgx /= contourPoints.size(); // centerx
avgy /= contourPoints.size(); // centery
2. Loop through all contour points and keep the points with the highest distance to the center; if the contour is a square/rectangle, these are probably the corners:
std::vector<double> distvector;
for (size_t i = 0; i < contourPoints.size(); i++) {
    double dx = std::abs(avgx - contourPoints[i].x);
    double dy = std::abs(avgy - contourPoints[i].y);
    double dist = sqrt(dx*dx + dy*dy);
    distvector.push_back(dist);
}
// sort distvector and take the 4 points with the highest distance to the center -> hopefully the corners.
This procedure was my idea, but I'm pretty sure there must be a better way to detect squares and extract their corner points using just the given contour coordinates.
So any help on how to improve my code to get a better and more efficient detection would be very much appreciated. Thanks a million in advance, Tempi.
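For reference, a minimal sketch of one common route: cv::approxPolyDP collapses each contour to its dominant vertices, which for a square-ish blob are the four corners (here contours is assumed to be the output of findContours):

std::vector<cv::Point> approx;
for (const std::vector<cv::Point>& contour : contours)
{
    // Tolerance proportional to the contour length; tune the 0.02 factor as needed.
    double eps = 0.02 * cv::arcLength(contour, true);
    cv::approxPolyDP(contour, approx, eps, true);

    if (approx.size() == 4 && cv::isContourConvex(approx))
    {
        // approx[0..3] are the corner candidates; their order follows the
        // contour orientation and may still need sorting into TL/TR/BR/BL.
    }
}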
I have the following problem as shown in the figure. I have a point cloud and a mesh generated by a tetrahedral algorithm. How would I carve the mesh using that algorithm? Are the landmarks the point cloud?
Pseudocode of the algorithm:
for every 3D feature point
    convert it to 2D projected coordinates
for every 2D feature point
    cast a ray toward the polygons of the mesh
    get the intersection point
    if z_intersection < z of the 3D feature point
        for every vertex of that triangle
            cull that triangle
Here is a follow-up implementation of the algorithm mentioned by the guru Spektre :)
Updated code for the algorithm:
int i;
for (i = 0; i < out.numberofpoints; i++)
{
    Ogre::Vector3 ray_pos = pos; // camera position
    Ogre::Vector3 ray_dir = (Ogre::Vector3(out.pointlist[(i*3)], out.pointlist[(3*i)+1], out.pointlist[(3*i)+2]) - pos).normalisedCopy(); // vertex - camera pos
    Ogre::Ray ray;
    ray.setOrigin(Ogre::Vector3(ray_pos.x, ray_pos.y, ray_pos.z));
    ray.setDirection(Ogre::Vector3(ray_dir.x, ray_dir.y, ray_dir.z));
    Ogre::Vector3 result;
    unsigned int u1;
    unsigned int u2;
    unsigned int u3;
    bool rayCastResult = RaycastFromPoint(ray.getOrigin(), ray.getDirection(), result, u1, u2, u3);
    if (rayCastResult)
    {
        Ogre::Vector3 targetVertex(out.pointlist[(i*3)], out.pointlist[(3*i)+1], out.pointlist[(3*i)+2]);
        float distanceTargetFocus = targetVertex.squaredDistance(pos);
        float distanceIntersectionFocus = result.squaredDistance(pos);
        // Squared distances are already non-negative, so no abs() needed.
        if (distanceTargetFocus >= distanceIntersectionFocus)
        {
            if (u1 != (unsigned)-1 && u2 != (unsigned)-1 && u3 != (unsigned)-1)
            {
                std::cout << "Remove index " << "u1 ==> " << u1 << " u2 ==> " << u2 << " u3 ==> " << u3 << std::endl;
                // Erase the largest position first so the earlier erases
                // do not shift the remaining positions (needs <algorithm>).
                unsigned v[3] = { u1, u2, u3 };
                std::sort(v, v + 3);
                updatedIndices.erase(updatedIndices.begin() + v[2]);
                updatedIndices.erase(updatedIndices.begin() + v[1]);
                updatedIndices.erase(updatedIndices.begin() + v[0]);
            }
        }
    }
}
if (updatedIndices.size() <= out.numberoftrifaces * 3) // 3 indices per triangle face
{
    std::cout << "current face list===> " << out.numberoftrifaces << std::endl;
    std::cout << "remaining index list===> " << updatedIndices.size() << std::endl;
    manual->begin("Pointcloud", Ogre::RenderOperation::OT_TRIANGLE_LIST);
    for (int n = 0; n < out.numberofpoints; n++)
    {
        Ogre::Vector3 vertexTransformed = Ogre::Vector3(out.pointlist[3*n+0], out.pointlist[3*n+1], out.pointlist[3*n+2]) - mReferencePoint;
        vertexTransformed *= 1000.0;
        vertexTransformed = mDeltaYaw * vertexTransformed;
        manual->position(vertexTransformed);
    }
    for (size_t n = 0; n + 2 < updatedIndices.size(); n += 3) // one triangle = 3 indices
    {
        int n0 = updatedIndices[n+0];
        int n1 = updatedIndices[n+1];
        int n2 = updatedIndices[n+2];
        if (n0 < 0 || n1 < 0 || n2 < 0)
        {
            std::cout << "negative indices" << std::endl;
            break;
        }
        manual->triangle(n0, n1, n2);
    }
    manual->end();
}
Follow-up on the algorithm:
I now have two versions: one is the triangulated one and the other is the carved version.
It's not a surface mesh.
Here are the two files
http://www.mediafire.com/file/cczw49ja257mnzr/ahmed_non_triangulated.obj
http://www.mediafire.com/file/cczw49ja257mnzr/ahmed_triangulated.obj
I see it like this:
So you have an image from a camera with known matrix, FOV and focal length.
From that you know exactly where the focal point is and where the image is projected onto the camera chip (Z_near plane). So any vertex, its corresponding pixel and the focal point lie on the same line.
So for each view cast a ray from the focal point to each visible vertex of the pointcloud and test if any face of the mesh is hit before the face containing the target vertex. If yes, remove it, as it would block the visibility.
A landmark in this context is just a feature point corresponding to a vertex of the pointcloud. It can be anything detectable (change of intensity, color, pattern, whatever); usually SIFT/SURF is used for this. You should have them located already, as that is the input for the pointcloud generation. If not, you can pick the pixel corresponding to each vertex and test it against the background color.
Not sure how you want to do this without the input images. For that you need to decide which vertex is visible from which side/view. Maybe it is doable from nearby vertices somehow (like using vertex density or correspondence to a planar face...), or the algorithm is changed somehow to find unused vertices inside the mesh.
To cast a ray do this:
ray_pos=tm_eye*vec4(imgx/aspect,imgy,0.0,1.0);
ray_dir=ray_pos-tm_eye*vec4(0.0,0.0,-focal_length,1.0);
where tm_eye is the camera direct transform matrix, and imgx, imgy is the 2D pixel position in the image normalized to <-1,+1>, where (0,0) is the middle of the image. The focal_length determines the FOV of the camera, and aspect is the ratio of the image resolution image_ys/image_xs.
Ray triangle intersection equation can be found here:
Reflection and refraction impossible without recursive ray tracing?
If I extract it:
vec3 v0,v1,v2; // input triangle vertexes
vec3 e1,e2,n,p,q,r;
float t,u,v,det,idet;
//compute ray triangle intersection
e1=v1-v0;
e2=v2-v0;
// Calculate planes normal vector
p=cross(ray[i0].dir,e2);
det=dot(e1,p);
// Ray is parallel to plane
if (abs(det)<1e-8) no intersection;
idet=1.0/det;
r=ray[i0].pos-v0;
u=dot(r,p)*idet;
if ((u<0.0)||(u>1.0)) no intersection;
q=cross(r,e1);
v=dot(ray[i0].dir,q)*idet;
if ((v<0.0)||(u+v>1.0)) no intersection;
t=dot(e2,q)*idet;
if ((t>_zero)&&(t<=tt)) // _zero is a small epsilon; tt is the distance to the target vertex
{
// intersection
}
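For reference, here is the same test wrapped into a self-contained function. This is a sketch using GLM for the vector math (an assumption; the original snippet is library-agnostic), with the ray passed in explicitly instead of ray[i0]:

#include <cmath>
#include <glm/glm.hpp>

// Moller-Trumbore ray/triangle test: returns true and sets t (distance along
// dir, which is assumed normalized) if the ray (orig, dir) hits triangle (v0, v1, v2).
bool rayTriangle(const glm::vec3& orig, const glm::vec3& dir,
                 const glm::vec3& v0, const glm::vec3& v1, const glm::vec3& v2,
                 float& t)
{
    const float EPS = 1e-8f;
    glm::vec3 e1 = v1 - v0;
    glm::vec3 e2 = v2 - v0;
    glm::vec3 p  = glm::cross(dir, e2);
    float det = glm::dot(e1, p);
    if (std::fabs(det) < EPS) return false;        // ray parallel to the triangle plane
    float idet = 1.0f / det;
    glm::vec3 r = orig - v0;
    float u = glm::dot(r, p) * idet;
    if (u < 0.0f || u > 1.0f) return false;        // outside along edge e1
    glm::vec3 q = glm::cross(r, e1);
    float v = glm::dot(dir, q) * idet;
    if (v < 0.0f || u + v > 1.0f) return false;    // outside along edge e2
    t = glm::dot(e2, q) * idet;
    return t > EPS;                                // hit must be in front of the ray origin
}

To reproduce the culling test, compare the returned t against the distance to the target vertex (the tt in the snippet above).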
Follow-ups:
To move between normalized image (imgx, imgy) and raw image (rawx, rawy) coordinates for an image of size (imgxs, imgys), where (0,0) is the top-left corner and (imgxs-1, imgys-1) is the bottom-right corner, you need:
imgx = (2.0*rawx / (imgxs-1)) - 1.0
imgy = 1.0 - (2.0*rawy / (imgys-1))
rawx = (imgx + 1.0)*(imgxs-1)/2.0
rawy = (1.0 - imgy)*(imgys-1)/2.0
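The same conversions as tiny helper functions (a sketch; imgxs, imgys are the raw image size):

void rawToNorm(float rawx, float rawy, int imgxs, int imgys, float& imgx, float& imgy)
{
    imgx = (2.0f * rawx / (imgxs - 1)) - 1.0f;
    imgy = 1.0f - (2.0f * rawy / (imgys - 1));
}

void normToRaw(float imgx, float imgy, int imgxs, int imgys, float& rawx, float& rawy)
{
    rawx = (imgx + 1.0f) * (imgxs - 1) / 2.0f;
    rawy = (1.0f - imgy) * (imgys - 1) / 2.0f;
}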
[Progress update 1]
I finally got to the point where I can put together sample test input data for this, to even get started (as you are unable to share valid data at all):
I created a small app with a hard-coded table mesh (gray) and pointcloud (aqua) and simple camera control. I can save any number of views (screenshot + camera direct matrix). When loaded back, it aligns with the mesh itself (the yellow ray goes through the aqua dot in the image and through the table mesh too). The blue lines are cast from the camera focal point to its corners. This will emulate the input you got. The second part of the app will use only these images and matrices together with the point cloud (no mesh surface anymore), tetrahedronize it (already finished), then cast a ray through each landmark in each view (aqua dot) and remove all tetrahedrons hit before the target vertex in the pointcloud is hit (this part is not even started yet, maybe on the weekend)... And lastly store only the surface triangles (easy: just use all triangles which are used only once; this is also already finished except the save part, but writing a Wavefront OBJ from it is easy...).
[Progress update 2]
I added landmark detection and matching with the point cloud.
As you can see, only valid rays are cast (those that are visible in the image), so some points of the point cloud do not cast rays (the singular aqua dots). So now just the ray/triangle intersection and the tetrahedron removal from the list is what is missing...
I would like to draw my Plane with OpenGL to debug my program, but I don't know how to do that (I'm not very good at math).
I've got a Plane with 2 attributes:
A constant
A normal
Here is what I've got:
////////////////////////////////////////////////////////////
Plane::Plane( const glm::vec3& a, const glm::vec3& b, const glm::vec3& c )
{
    glm::vec3 edge1 = b - a;
    glm::vec3 edge2 = c - a;
    this->normal   = glm::cross(edge1, edge2);
    this->constant = -glm::dot( this->normal, a );
    this->normalize();
}
////////////////////////////////////////////////////////////
Plane::Plane( const glm::vec4& values )
{
    this->normal   = glm::vec3( values.x, values.y, values.z );
    this->constant = values.w;
}
////////////////////////////////////////////////////////////
Plane::Plane( const glm::vec3& normal, const float constant ) :
    constant (constant),
    normal   (normal)
{
}
////////////////////////////////////////////////////////////
Plane::Plane( const glm::vec3& normal, const glm::vec3& point )
{
    this->normal   = normal;
    this->constant = -glm::dot(normal, point);
    this->normalize();
}
I would like to draw it to see if everything is ok. How can I do that?
(I need to compute vertices and indices to draw it)
When you want to draw, you need to find two vectors that are perpendicular to the normal and a point on the plane. That's not so hard. First, let's get a vector that is not the normal. Call it some_vect. For example:
if normal == [0, 0, 1]
    some_vect = [0, 1, 0]
else
    some_vect = [0, 0, 1]
Then, calculating vect1 = cross(normal, some_vect) gives you a vector perpendicular to the normal. Calculating vect2 = cross(normal, vect1) gives you another vector that is perpendicular to the normal.
Having two perpendicular vectors vect1 and vect2 and one point on the plane, drawing the plane becomes trivial. For example, take the square with the following four points (remember to normalize the vectors):
point + vect1 * SIZE
point + vect2 * SIZE
point - vect1 * SIZE
point - vect2 * SIZE
where point is a point on the plane. If your constant is distance from the origin, then one point would be constant * normal.
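A minimal sketch of that construction (using GLM, to match the Plane class above; SIZE is an arbitrary half-extent and the normal is assumed to be normalized):

#include <cmath>
#include <vector>
#include <glm/glm.hpp>

// Build a square of half-extent SIZE centered on the plane, as four vertices
// plus the indices of two triangles, ready for a vertex/index buffer.
void planeQuad(const glm::vec3& normal, float constant, float SIZE,
               std::vector<glm::vec3>& vertices, std::vector<unsigned>& indices)
{
    glm::vec3 some_vect = (std::fabs(normal.z) < 0.9f) ? glm::vec3(0, 0, 1)
                                                       : glm::vec3(0, 1, 0);
    glm::vec3 vect1 = glm::normalize(glm::cross(normal, some_vect));
    glm::vec3 vect2 = glm::normalize(glm::cross(normal, vect1));

    // With the class convention constant = -dot(normal, point),
    // a point on the plane is -constant * normal.
    glm::vec3 point = -constant * normal;

    vertices = { point + vect1 * SIZE, point + vect2 * SIZE,
                 point - vect1 * SIZE, point - vect2 * SIZE };
    indices  = { 0, 1, 2,  0, 2, 3 };   // two triangles forming the square
}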
The difficulty with drawing a plane is that it's an infinite surface; i.e. by definition it has no edges or vertices. If you want to show the plane in a typical polygonal fashion then you'll have to crop it to a particular area, such as a square.
A fairly easy approach is this:
Pick an arbitrary unit vector which is perpendicular to the normal. Store it as v1.
Use the cross product of v1 and the plane normal to get v2.
Negate v1 to get v3.
Negate v2 to get v4.
The points v1-4 now express the four corners of a square which has the same orientation as your plane. All you need to do is multiply them up to whatever size you want, and draw it relative to any point on your plane.
I am trying to create an arrow class that takes two Points. This is what I have:
struct Arrow : Lines {   // an Arrow is a Line that has an arrow at the end
    Arrow(Point p1, Point p2)   // construct an Arrow from two points
        : p1(p1), p2(p2) {}
    void draw_lines() const;
    Point p1;
    Point p2;
};
This is what I have for the draw_lines() const function:
void Arrow::draw_lines() const
{
    Lines arrow;
    arrow.add(p1, p2);
    arrow.draw_lines();
}
It works by typing Arrow a(Point(x,y), Point(x1,y1));. It is then supposed to calculate the arrow using (x,y), which becomes p1, and (x1,y1), which becomes p2, as guides for the direction, with (x1,y1) as the arrowhead base point. I would like the arrow to look like this, only solid: --->. How can I calculate the angle the arrow points at? The lines of the arrowhead need to be given as pairs of x,y coordinates, such as (p2.x, p2.y) and (the x and y coordinates of the back point of the arrowhead relative to p2). The .x and .y notation returns the x and y coordinates of a Point p.
Thank you for any help.
Here's what it looks like using atan2.
const double pi = 3.1415926535897931;
const int r = 5;
const double join_angle = pi / 6.0; // 30 degrees
const double stem_angle = atan2(p2.y-p1.y, p2.x-p1.x);
Lines arrow;
arrow.add(p1, p2);
arrow.add(p2, Point(p2.x - r*cos(stem_angle+join_angle), p2.y - r*sin(stem_angle+join_angle)));
arrow.add(p2, Point(p2.x - r*cos(stem_angle-join_angle), p2.y - r*sin(stem_angle-join_angle)));
This is exactly the approach you described in your comment:
Ideally, I would like to enter a const int to be added or subtracted to the angle of the slope to create the angle of the arrowhead lines, and another for the length, and it takes care of the nitty gritty point coordinates itself.
Except that double is much better than int for storing the join_angle in radians.
You don't need trigonometric functions like atan. Just normalize the vector from p1 to p2 (you need sqrt for that), get the orthogonal vector to it (easy in 2D) and get the missing two points by adding multiples of these two unit vectors to the end point; the factors determine the length and width of the arrowhead.
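A sketch of that idea as a draw_lines() body (assuming the same Point/Lines types used above; r and w are arbitrary head length and half-width):

void Arrow::draw_lines() const
{
    const double r = 8.0;   // arrowhead length
    const double w = 4.0;   // arrowhead half-width

    double dx = p2.x - p1.x, dy = p2.y - p1.y;
    double len = std::sqrt(dx*dx + dy*dy);     // needs <cmath>
    if (len == 0) return;                      // degenerate arrow, nothing to draw

    double ux = dx / len, uy = dy / len;       // unit vector along the stem
    double px = -uy, py = ux;                  // unit vector perpendicular to the stem

    Lines arrow;
    arrow.add(p1, p2);                                                       // stem
    arrow.add(p2, Point(int(p2.x - r*ux + w*px), int(p2.y - r*uy + w*py)));  // head, one side
    arrow.add(p2, Point(int(p2.x - r*ux - w*px), int(p2.y - r*uy - w*py)));  // head, other side
    arrow.draw_lines();
}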
So I decided to write a ray tracer the other day, but I got stuck because I forgot all my vector math.
I've got a point behind the screen (the eye/camera, 400,300,-1000) and then a point on the screen (a plane, from 0,0,0 to 800,600,0), which I'm getting just by using the x and y values of the current pixel I'm looking for (using SFML for rendering, so it's something like 267,409,0)
Problem is, I have no idea how to cast the ray correctly. I'm using this for testing sphere intersection (C++):
bool SphereCheck(Ray& ray, Sphere& sphere, float& t)
{
    // operator * between 2 vec3s is a dot product
    Vec3 dist = ray.start - sphere.pos;    // both vec3s
    float B = -1 * (ray.dir * dist);
    float D = B*B - dist * dist + sphere.radius * sphere.radius;   // radius is float
    if (D < 0.0f)
        return false;

    float t0 = B - sqrtf(D);
    float t1 = B + sqrtf(D);

    bool ret = false;
    if ((t0 > 0.1f) && (t0 < t))
    {
        t = t0;
        ret = true;
    }
    if ((t1 > 0.1f) && (t1 < t))
    {
        t = t1;
        ret = true;
    }
    return ret;
}
So I get that the start of the ray would be the eye position, but what is the direction?
Or, failing that, is there a better way of doing this? I've heard of some people using the ray start as (x, y, -1000) and the direction as (0,0,1) but I don't know how that would work.
On a side note, how would you do transformations? I'm assuming that to change the camera angle you just adjust the x and y of the camera (or the screen if you need a drastic change).
The parameter "ray" in the function,
bool SphereCheck(Ray& ray, Sphere& sphere, float& t)
{
...
}
should already contain the direction information, and with this direction you need to check if the ray intersects the sphere or not. (The incoming "ray" parameter is the vector between the camera point and the pixel the ray is sent through.)
Therefore the local "dist" variable seems obsolete.
One thing I can see is that when you create your rays you are not using the center of each pixel on the screen as the point for building the direction vector. You do not want to use just the (x, y) grid coordinates for building those vectors.
I've taken a look at your sample code and the calculation is indeed incorrect. This is what you want.
http://www.csee.umbc.edu/~olano/435f02/ray-sphere.html (I took this course in college, this guy knows his stuff)
Essentially it means you have this ray, which has an origin and a direction. You have a sphere with a center point and a radius. You use the ray equation, plug it into the sphere equation and solve for t. That t is the distance between the ray origin and the intersection point on the sphere's surface. I do not think your code does this.
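For reference, the substitution works out like this (with dir assumed normalized):

|start + t*dir - center|^2 = radius^2
t^2 + 2*t*(dir . (start - center)) + |start - center|^2 - radius^2 = 0
t = -(dir . (start - center)) +/- sqrt( (dir . (start - center))^2 - |start - center|^2 + radius^2 )

The smaller positive root is the distance from the ray origin to the first hit on the sphere's surface.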
So I get that the start of the ray would be the eye position, but what is the direction?
You have a camera defined by the vectors front, up, and right (perpendicular to each other and normalized) and a position (the eye position).
You also have the width and height of the viewport (in pixels), the vertical field of view (vfov) and the horizontal field of view (hfov), in degrees or radians.
There are also the 2D x and y coordinates of the pixel. The X axis (2D) points to the right, the Y axis (2D) points down.
For a flat screen the ray can be calculated like this:
startVector = eyePos;
endVector   = startVector
              + front
              + right * tan(hfov/2) * (((x + 0.5)/width)*2.0 - 1.0)
              + up * tan(vfov/2) * (1.0 - ((y + 0.5)/height)*2.0);
rayStart = startVector;
rayDir   = normalize(endVector - startVector);
That assumes that the screen plane is flat. For extreme field of view angles (fov >= 180 degrees) you might want to make the screen plane spherical and use different formulas.
how would you do transformations
Matrices.
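For example, a minimal sketch (using GLM here, which is an assumption): build the camera basis from a rotation matrix and feed front/right/up into the ray construction above, instead of nudging the eye's x and y.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// yaw and pitch are hypothetical camera angles in radians.
glm::mat4 rot = glm::rotate(glm::mat4(1.0f), yaw,   glm::vec3(0, 1, 0))
              * glm::rotate(glm::mat4(1.0f), pitch, glm::vec3(1, 0, 0));

glm::vec3 front = glm::vec3(rot * glm::vec4(0, 0, -1, 0));   // view direction
glm::vec3 right = glm::vec3(rot * glm::vec4(1, 0,  0, 0));
glm::vec3 up    = glm::vec3(rot * glm::vec4(0, 1,  0, 0));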