Better square detection - C++

My idea was to implement a simple square detection using OpenCV & C++ (Objective-C++). I've already extracted the biggest areas of the image, as you can see below (the colored ones), but now I'd like to extract the corner points (TopLeft, TopRight, BottomLeft, BottomRight) of each of the areas, so that afterwards I can check whether the distance between the 4 corners is similar for each of them, or whether the angle between the lines is nearly 45°.
See the images I was talking about:
However, I got to the point where I've tried to extract the areas' corner points to get something like this afterwards:
This was my first idea of how to get the 4 corner points (see the steps below):
1. Calculate the contour's center:
double avgx = 0.0, avgy = 0.0;
for (size_t i = 0; i < contourPoints.size(); i++) {
    avgx += contourPoints[i].x;
    avgy += contourPoints[i].y;
}
avgx /= contourPoints.size(); // center x
avgy /= contourPoints.size(); // center y
2. Loop through all contour points to get the points with the highest distance to the center --> probably the corners if the contour is a square/rectangle:
std::vector<double> distvector;
for (size_t i = 0; i < contourPoints.size(); i++) {
    double dx = std::abs(avgx - contourPoints[i].x);
    double dy = std::abs(avgy - contourPoints[i].y);
    double dist = sqrt(dx*dx + dy*dy);
    distvector.push_back(dist);
}
// sort distvector in descending order and take the 4 points with the highest distance to the center -> hopefully the corners.
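A minimal sketch of that sorting step, assuming the contourPoints and distvector built above; pairing distances with indices and the final corners vector are my additions, not part of the original code:
// Pair each distance with its point index so the points can be recovered after sorting.
std::vector<std::pair<double, size_t>> distIdx;
for (size_t i = 0; i < distvector.size(); i++)
    distIdx.push_back(std::make_pair(distvector[i], i));
// Sort descending by distance; the first 4 entries are hopefully the corners.
std::sort(distIdx.begin(), distIdx.end(),
          [](const std::pair<double, size_t> &a, const std::pair<double, size_t> &b) {
              return a.first > b.first;
          });
std::vector<cv::Point> corners;
for (size_t i = 0; i < 4 && i < distIdx.size(); i++)
    corners.push_back(contourPoints[distIdx[i].second]);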
This procedure was my idea, but I'm pretty sure there must be a better way to detect squares and extract their corner points using just the given contour coordinates.
So any help on how to improve my code to get a much better & more efficient detection would be very appreciated. Thanks a million in advance, Tempi.

Related

Perspective Transformation for bird's eye view opencv c++

I am interested in the perspective transformation to a bird's eye view. So far I have tried getPerspectiveTransform and findHomography and then passed the result on to warpPerspective. The results are quite close, but a skew in TL and BR is present. Also, the contour areas are not scaled equally after the transformation.
The contour is a square with multiple shapes inside.
Any suggestions on how to go ahead?
Code block of what I have done so far:
std::vector<Point2f> quad_pts;
std::vector<Point2f> squre_pts;
cv::approxPolyDP( Mat(validContours[largest_contour_index]), contours_poly[0], epsilon, true );
if (contours_poly[0].size() != 4) return false; // need exactly 4 corners for a quadrilateral
for (int i=0; i< 4; i++)
quad_pts.push_back(contours_poly[0][i]);
if (! orderRectPoints(quad_pts))
return false;
float widthTop = (float)distanceBetweenPoints(quad_pts[1], quad_pts[0]); // sqrt( pow(quad_pts[1].x - quad_pts[0].x, 2) + pow(quad_pts[1].y - quad_pts[0].y, 2));
float widthBottom = (float)distanceBetweenPoints(quad_pts[2], quad_pts[3]); // sqrt( pow(quad_pts[2].x - quad_pts[3].x, 2) + pow(quad_pts[2].y - quad_pts[3].y, 2));
float maxWidth = max(widthTop, widthBottom);
float heightLeft = (float)distanceBetweenPoints(quad_pts[1], quad_pts[2]); // sqrt( pow(quad_pts[1].x - quad_pts[2].x, 2) + pow(quad_pts[1].y - quad_pts[2].y, 2));
float heightRight = (float)distanceBetweenPoints(quad_pts[0], quad_pts[3]); // sqrt( pow(quad_pts[0].x - quad_pts[3].x, 2) + pow(quad_pts[0].y - quad_pts[3].y, 2));
float maxHeight = max(heightLeft, heightRight);
int mDist = (int)max(maxWidth, maxHeight);
// transform TO points
const int offset = 50;
squre_pts.push_back(Point2f(offset, offset));
squre_pts.push_back(Point2f(mDist-1, offset));
squre_pts.push_back(Point2f(mDist-1, mDist-1));
squre_pts.push_back(Point2f(offset, mDist-1));
maxWidth += offset; maxHeight += offset;
Size matSize ((int)maxWidth, (int)maxHeight);
Mat transmtx = getPerspectiveTransform(quad_pts, squre_pts);
// Mat homo = findHomography(quad_pts, squre_pts);
warpPerspective(mRgba, mRgba, transmtx, matSize);
return true;
Link to transformed image
Image pre-transformation
corner on pre-transformed image
Corners from CornerSubPix
Your original pre-transformation image is not so good: the squares have different sizes there and it looks wavy. The results you get are quite good given the quality of your input.
You could try to calibrate your camera (https://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html) to compensate for lens distortion, and your results may improve.
EDIT: Just to summarize the comments below, approxPolyDP may not locate the corners properly if the square has rounded corners or is blurred. You may need to improve the corner locations by other means, such as a sharper original image, different preprocessing (median filter or threshold, as you suggest in the comments), or other algorithms for finer corner location (such as the cornerSubPix function, or detecting the sides with a Hough transform and then calculating their intersections).
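A minimal sketch of the cornerSubPix refinement mentioned above, assuming a grayscale cv::Mat named gray and the four approximated corners in quad_pts (names taken from the question); the window size and termination criteria are only illustrative:
// Refine the approximate quad corners to sub-pixel accuracy.
std::vector<cv::Point2f> refined = quad_pts;
cv::cornerSubPix(gray, refined,
                 cv::Size(11, 11),  // half-size of the search window (illustrative)
                 cv::Size(-1, -1),  // no dead zone in the middle of the window
                 cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
// 'refined' can then replace quad_pts before calling getPerspectiveTransform.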

Oriented bounding box defined by angle of orientation

I know how to find the minimum bounding box using regionprops() in Matlab and boundingRect() in OpenCV. How do I find a bounding box of points (not the minimum bounding box) given the angle of orientation? For example, in the image below, the line is the major axis of the rectangle.
Well, there are many possible solutions. I'll mention three of them:
1. Rotate all points so that the red line becomes parallel to the x (or y) axis; then find the minimum axis-aligned bounding box of the transformed points; then rotate the box back to obtain the desired one.
2. Find the projection of every point onto the red line and onto a blue line (perpendicular to the red one), storing the minimum and maximum values per coordinate.
3. Find the distance of all points from the red line; with some consideration you should easily find two coordinates of the box; then repeat the computation with a line perpendicular to the red one to find the missing two coordinates.
You can transform the points such that the given axis aligns with the x-axis or y-axis, find the bounding box of the transformed points, and transform that bounding box back to the original space by the inverse transform.
Details of the transformation:
https://en.wikipedia.org/wiki/Rotation_(mathematics)
You can find the bounding box of transformed points like this.
double xmin = 1e+37, ymin = 1e+37;
double xmax = -1e+37, ymax = -1e+37;
for (size_t i = 0; i < numPoints; ++i)
{
    if (x[i] < xmin) xmin = x[i];
    if (y[i] < ymin) ymin = y[i];
    if (x[i] > xmax) xmax = x[i];
    if (y[i] > ymax) ymax = y[i];
}
You can replace 1e+37 with any sufficiently large number, or use DBL_MAX (std::numeric_limits<double>::max()).
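A minimal sketch of the rotate / find-the-box / rotate-back idea described above, assuming the points are available as a std::vector<cv::Point2f> and the orientation angle (in radians) of the red line is known; the function name and signature are mine, not from the question:
#include <opencv2/core/core.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Returns the 4 corners of the oriented box, expressed in the original coordinate frame.
std::vector<cv::Point2f> orientedBox(const std::vector<cv::Point2f>& pts,
                                     cv::Point2f pivot, double angle)
{
    // Rotate every point by -angle around the pivot so the red line becomes horizontal.
    double c = std::cos(-angle), s = std::sin(-angle);
    double xmin = 1e+37, ymin = 1e+37, xmax = -1e+37, ymax = -1e+37;
    for (size_t i = 0; i < pts.size(); ++i)
    {
        double dx = pts[i].x - pivot.x, dy = pts[i].y - pivot.y;
        double x = c * dx - s * dy;
        double y = s * dx + c * dy;
        xmin = std::min(xmin, x); xmax = std::max(xmax, x);
        ymin = std::min(ymin, y); ymax = std::max(ymax, y);
    }
    // Corners of the axis-aligned box in the rotated frame, rotated back by +angle.
    double xs[4] = { xmin, xmax, xmax, xmin };
    double ys[4] = { ymin, ymin, ymax, ymax };
    double cb = std::cos(angle), sb = std::sin(angle);
    std::vector<cv::Point2f> corners;
    for (int i = 0; i < 4; ++i)
        corners.push_back(cv::Point2f((float)(cb * xs[i] - sb * ys[i] + pivot.x),
                                      (float)(sb * xs[i] + cb * ys[i] + pivot.y)));
    return corners;
}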
More details of an oriented-bounding-box implementation based on Stefan Gottschalk's idea can also be found in this blog post: Oriented bounding box.

Raytracing - Rays shot from camera through screen don't deviate on the y axis - C++

So I am trying to write a Raytracer as a personal project, and I have got the basic recursion, mesh geometry, and ray triangle intersection down.
I am trying to get a plausible image out of it but encounter the problem that all pixel rows are the same, giving me straight vertical lines.
I found that all pixel positions generated by the camera function are the same on the y axis, but I cannot find the problem in my vector math here (I use my Vertex structure as a vector too; it's lazy, I know):
void Renderer::CameraShader()
{
//compute the width and height of the screen based on angle and distance of the near clip plane
double widthRad = tan(0.5*m_Cam.angle)*m_Cam.nearClipPlane;
double heightRad = ((double)m_Cam.pixelRows / (double)m_Cam.pixelCols)*widthRad;
//get the horizontal vector of the camera by crossing the view direction with an up vector (0, 1, 0)
Vertex cross = ((m_Cam.direction - m_Cam.origin).CrossProduct(Vertex(0, 1, 0)).Normalized(0.0001))*widthRad;
//get the up/down vector of the camera by crossing the horizontal vector with the direction vector
Vertex crossDown = m_Cam.direction.CrossProduct(cross).Normalized(0.0001)*heightRad;
//generate rays per pixel row and column
for (int i = 0; i < m_Cam.pixelCols;i++)
{
for (int j = 0; j < m_Cam.pixelRows; j++)
{
Vertex pixelPos = m_Cam.origin + (m_Cam.direction - m_Cam.origin).Normalized(0.0001)*m_Cam.nearClipPlane //vector of the screen center
- cross + (cross*((i / (double)m_Cam.pixelCols)*widthRad*2)) //horizontal vector based on i
+ crossDown - (crossDown*((j / (double)m_Cam.pixelRows)*heightRad*2)); //vertical vector based on j
//cast a ray through according screen pixel to get color
m_Image[i][j] = raycast(m_Cam.origin, pixelPos - m_Cam.origin, p_MaxBounces);
}
}
}
I hope the comments in the code make clear what is happening.
If anyone sees the problem, help would be nice.
The problem was that I had to subtract the camera origin from the direction point. It now actually renders silhouettes, so I guess I can say it's fixed :)
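In code terms (using the names from the question), the fix amounts to building crossDown from the view direction vector instead of the direction point; a sketch of the corrected line:
//get the up/down vector of the camera by crossing the view direction (direction point minus origin) with the horizontal vector
Vertex crossDown = (m_Cam.direction - m_Cam.origin).CrossProduct(cross).Normalized(0.0001)*heightRad;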

sorting pixels in picture

I have a picture in which I need to detect a shape. To do this, I used a template from which I extracted points, and then tried to fit these points to my image for the best match. When I found the best match, I need to draw a curve around these points.
The problem is that the taken points are not in order (you can see this in the picture).
How do I sort the points so that the curve is smooth, with no "jumping"?
I tried with stable_sort() but without success.
stable_sort(points.begin(), points.end(), Ordering_Functor_y());
stable_sort(points.begin(), points.end(), Ordering_Functor_x());
Using stable_sort()
For drawing I used this function:
polylines(result_image, &pts, &npts, 1, true, Scalar(0, 255, 0), 3, CV_AA);
Any idea how to solve this problem? Thank you.
Edit:
Here is the code for getting points from template
for (int model_row = 0; model_row < model.rows; model_row++)
{
    uchar *curr_point = model.ptr<uchar>(model_row);
    for (int model_column = 0; model_column < model.cols; model_column++)
    {
        if (*curr_point > 0)
        {
            Point new_point(model_column, model_row);
            model_points.push_back(new_point);
        }
        curr_point += image_channels;
    }
}
In this part of the code you can see where the problem with the point order comes from. Is there a better option for saving the points in the correct order, so that I won't have problems with drawing the contour?
Your current approach is to sort by either the x or y values, and this is not the proper ordering for drawing the contour.
One approach could be the following:
1. Calculate the center of mass of the detected object.
2. Place a polar coordinate system at the center of mass.
3. Calculate the direction (angle) from the center to every point.
4. Sort the points by that angle.
It will be better than your current sorting, but might not be perfect; a sketch follows below.
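A minimal sketch of that angle-based sort, assuming the points are stored in a std::vector<cv::Point> named model_points as in the question; the helper function is my addition:
// Sorts the points by angle around their center of mass, giving a rough contour order.
static void sortByAngle(std::vector<cv::Point>& pts)
{
    cv::Point2f center(0.f, 0.f);
    for (size_t i = 0; i < pts.size(); ++i)
        center += cv::Point2f((float)pts[i].x, (float)pts[i].y);
    center *= 1.f / (float)pts.size();

    std::sort(pts.begin(), pts.end(),
              [center](const cv::Point& a, const cv::Point& b) {
                  return std::atan2(a.y - center.y, a.x - center.x)
                       < std::atan2(b.y - center.y, b.x - center.x);
              });
}
// Usage, before drawing:
// sortByAngle(model_points);
// polylines(result_image, &pts, &npts, 1, true, Scalar(0, 255, 0), 3, CV_AA);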
A more involved approach is to determine the shortest path going through all the detected points. If the points are spread evenly around the contour this approach will locate the contour. The search term for this approach is the travelling salesman problem.
I think you are searching for a concave hull algorithm.
See here:
http://ubicomp.algoritmi.uminho.pt/local/concavehull.html
Java implementation: here

CDC::Ellipse doesn't draw correctly circles

In a project of mine (VC++ 2010, MFC), I want to draw a circle using CDC::Ellipse. I set two points: the first one is the center of the circle, the second one is a point I want to be on the circumference.
I pass to CDC::Ellipse( int x1, int y1, int x2, int y2 ) the coordinates of the upper-left corner and the lower-right one.
Briefly: with the Pythagorean theorem I calculate the distance between the two points (the radius), then I subtract this value from the coordinates of the center to obtain the upper-left corner and add it to obtain the lower-right one.
When I draw the circle and the points and zoom in, I see that the second point isn't on the circumference as expected; it is slightly inside, unless I set it at 0°, 45°, 90° and so on with respect to the absolute coordinate system.
Then I tried to draw the same circle using CDC::Polyline; I gave this method the points obtained by rotating another point around the center, at a distance equal to the radius. In this case the point is on the circumference everywhere I set it.
The overlap of these two circles has shown that they perfectly overlap at 0°, 45°, 90° and so on, but the gap is largest at 22.5°, 67.5° and so on.
Has anyone ever noticed a similar behavior?
Thanks to everybody that can help me!
Code snippet:
this is how I calculate the radius given 2 points:
centerPX = vvFPoint( 1380, 845 );
secondPointPX = vvFPoint( 654,654 );
double radiusPX = (sqrt( (secondPointPX.x - centerPX.x) * (secondPointPX.x - centerPX.x) + (secondPointPX.y - centerPX.y) * (secondPointPX.y - centerPX.y) ));
( vvFPoint is a custom type derived from CPoint )
this is how I draw the "circle" with the CDC::Ellipse:
int up = (int)(((double)(m_p1.y-(double)originY - m_radius) / zoom) + 0.5) + offY;
int left = (int)(((double)(m_p1.x-(double)originX - m_radius) / zoom) + 0.5) + offX;
int down = (int)(((double)(m_p1.y-(double)originY + m_radius) / zoom) + 0.5) + offY;
int right = (int)(((double)(m_p1.x-(double)originX + m_radius) / zoom) + 0.5) + offX;
pDC->Ellipse( left, up, right, down);
(m_p1 is the center of the circle, originX/Y is the origin of the image, m_radius is the radius of the circle, zoom is the scale factor, offX/Y is an offset in the client area of my SW)
this is how I draw the circle "manually" (quite a trivial method) using a custom polyline class:
1) create the array of points:
pt1.x = centerPX.x + radiusPX;
pt1.y = centerPX.y;
for ( i=0; i < 3600; i++ )
{
pt1.RotateDeg ( centerPX, (double)0.1 );
poly->AddPoint( pt1 );
}
(RotateDeg is a custom method to rotate a point using first argument as a pivot and second argument as angle value in degrees, AddPoint is a custom method to create the array of points, poly is my custom polyline object).
2) draw it:
When I call the Draw( CDC* pDC ) I use the previous array to draw the polyline:
pDC->MoveTo(p);
I hope this can help you to reproduce my weird observations!
code snippet 2:
template <class Tipo>
void vvPoint<Tipo>::RotateDeg(const vvPoint<Tipo> &center, double angle)
{
    vvPoint<Tipo> ptB;
    angle *= -(M_PI / 180);
    *this -= center;
    ptB.x = ((this->x * cos(angle)) - (this->y * sin(angle)));
    ptB.y = ((this->x * sin(angle)) + (this->y * cos(angle)));
    *this = ptB + center;
}
But to let you better understand my observations I would like to add a few images, so you can see where my whole question started from. The problem is: I can't add images since I need 10 reputation. I uploaded a .zip file to Dropbox and, if you want, I can send you the URL of this file. Let me know if this is the correct (and safe) way to work around this problem.
Thanks!
This might be a possible explanation. As MSDN says about CDC::Ellipse (with my emphasis):
The center of the ellipse is the center of the bounding rectangle specified by x1, y1, x2, and y2, or lpRect. The ellipse is drawn with the current pen, and its interior is filled with the current brush.
The figure drawn by this function extends up to, but does not include, the right and bottom coordinates. This means that the height of the figure is y2 – y1 and the width of the figure is x2 – x1.
The way you described how you calculate the bounding rectangle is not entirely clear (some source code would have helped), but, given the second paragraph quoted above, you possibly need to add 1 to your x2 and y2 values to make sure you have a circle with the desired radius.
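In terms of the Ellipse call from the question, that suggestion would look roughly like this (only the +1 on the right and bottom values is new):
// CDC::Ellipse excludes the right and bottom coordinates, so extend them by one pixel.
pDC->Ellipse( left, up, right + 1, down + 1 );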
It's also worth noting that there may be slight rounding differences between your two drawing methods where you have an odd-sized bounding box (i.e. so the centre point falls logically on a half-pixel).
UPDATE
Using your code snippets (thanks), and assuming no zoom and zero offsets etc., I get a radius of 750.704 pixels and the following parameters for the ellipse:
pDC->Ellipse(629, 94, 2131, 1596);
According to MSDN, this means that the ellipse will be drawn in a figure of the following dimensions:
width = (2131 - 629) = 1502
height = (1596 - 94) = 1502
So as far as I can see, this should produce a circle rather than an ellipse.
The next thing to do is to find out how you're drawing the polygon - for that we need to see the implementation of RotateDeg - can you post that code? I'm suspecting some simple rounding error here, that maybe gets magnified when you zoom.
UPDATE 2
Just looking at this code:
for ( i=0; i < 3600; i++ )
{
pt1.RotateDeg ( centerPX, (double)0.1 );
poly->AddPoint( pt1 );
}
You are rotating your polygon points incrementally by 0.1 degrees each time. This will possibly accumulate some errors, so it may be worth doing something like this instead:
for ( i=0; i < 3600; i++ )
{
vvFPoint ptNew = pt1;
ptNew.RotateDeg ( centerPX, (double)i * 0.1 );
poly->AddPoint( ptNew );
}
Maybe this will mean you have to change your RotateDeg function to take care of the correct quadrants.
One other point: you mentioned that you see the problem when you zoom into the image. If this means you are using your zoom variable, it is worth checking in this line ...:
pDC->Ellipse( left, up, right, down);
... that the parameters still form a square shape, so (right - left) == (down - up).
UPDATE 3
I just ran your RotateDeg function, in its current form, to see how the error accumulates (by feeding in the previous result to the next iteration). At each step, I calculated the distance between the new point and the centre and compared this with the required radius.
The chart below shows the result, where you can see an error of 4 pixels by the time the points have been calculated.
I think that this at least explains part of the difference (i.e. your polygon drawing is flawed) and - depending on zoom - you may introduce asymmetry into the ellipse parameters, which you can debug by comparing the width to the height as I described above.
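For reference, a minimal sketch of the error measurement described in this update, assuming the vvFPoint type and RotateDeg from the question, and assuming vvFPoint can be constructed from two coordinates; the radius comparison itself is my addition:
vvFPoint pt1( centerPX.x + radiusPX, centerPX.y );
double maxError = 0.0;
for ( int i = 0; i < 3600; i++ )
{
    // Incremental rotation, exactly as in the original loop.
    pt1.RotateDeg( centerPX, 0.1 );
    // Compare the current distance from the centre with the intended radius.
    double dx = pt1.x - centerPX.x;
    double dy = pt1.y - centerPX.y;
    double err = fabs( sqrt( dx*dx + dy*dy ) - radiusPX );
    if ( err > maxError ) maxError = err;
}
// maxError shows how far the incremental rotation has drifted from the true circle.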