How to find the length of the upper and lower arc from an ellipse image - C++

Here I try to find the upper arc and lower arc using the image contours, but it doesn't give the exact result. Please suggest any other method to find the upper and lower arcs from images and their lengths.
Here is my code:
Mat image = cv::imread("thinning/20d.jpg");
int i = 0, j = 0, k = 0, x = 320;
// x1,y1,x2,y2,x3,y3 are assumed to be pre-declared integer arrays
// scan the column x = 320 for (near) white pixels
for (int y = 0; y < image.rows; y++)
{
    if (image.at<Vec3b>(Point(x, y))[0] >= 250 &&
        image.at<Vec3b>(Point(x, y))[1] >= 250 &&
        image.at<Vec3b>(Point(x, y))[2] >= 250)
    {
        qDebug() << x << y;
        x1[i] = x;
        y1[i] = y;
        i = i + 1;
    }
}
for (i = 0; i <= 1; i++)
    qDebug() << x1[i] << y1[i];
qDebug() << "UPPER ARC";
for (int x = 0; x < image.cols; x++)
{
    for (int y = 0; y <= (y1[0] + 20); y++)
    {
        if (image.at<Vec3b>(Point(x, y))[0] >= 240 &&
            image.at<Vec3b>(Point(x, y))[1] >= 240 &&
            image.at<Vec3b>(Point(x, y))[2] >= 240)
        {
            x2[j] = x;
            y2[j] = y;
            j = j + 1;
            qDebug() << x << y;
        }
    }
}
qDebug() << "LOWER ARC";
for (int x = 0; x < image.cols; x++)
{
    for (int y = (y1[1] - 20); y < image.rows; y++) // keep y inside the image
    {
        if (image.at<Vec3b>(Point(x, y))[0] >= 240 &&
            image.at<Vec3b>(Point(x, y))[1] >= 240 &&
            image.at<Vec3b>(Point(x, y))[2] >= 240)
        {
            x3[k] = x;
            y3[k] = y;
            k = k + 1;
            qDebug() << x << y;
        }
    }
}
With the above code I get the coordinates; from those coordinate points I can compute the arc length, but it doesn't match the exact result.
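For reference, once the arc pixels are collected, their polyline length can be measured with OpenCV's cv::arcLength instead of summing the distances by hand. A minimal sketch, assuming upperArc holds the upper-arc points sorted along the curve (the variable and function names are illustrative):

#include <opencv2/opencv.hpp>
#include <vector>

// Length of an open polyline through the collected arc points.
// The points must be ordered along the curve, otherwise the chord sum is meaningless.
double arcLengthOf(const std::vector<cv::Point>& upperArc)
{
    if (upperArc.size() < 2) return 0.0;
    return cv::arcLength(upperArc, false); // false = open curve, not closed
}

Note that the scanning loops above collect points column by column rather than in curve order, so the points would have to be sorted or traced along the contour first.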
Here is the actual image:
Image 1:
After thinning I got:
Expected output:

As you do not define what exactly the upper/lower arc is, I will assume you cut the ellipse into halves by a horizontal line going through the ellipse's middle point. If that is not the case then you have to adapt this on your own... OK, now how to do it:
1. binarize the image
As you provide a JPG, the colors are distorted by compression, so there is more than just black and white.
2. thin the border to 1 pixel
Fill the inside with white and then recolor all white pixels not neighboring any black pixel to some unused color or to black. There are many other variations of how to achieve this...
3. find the bounding box
Search all pixels and remember the min and max x,y coordinates of all white pixels. Let's call them x0,y0,x1,y1.
4. compute the center of the ellipse
Simply take the middle point of the bounding box:
cx = (x0+x1)/2
cy = (y0+y1)/2
5. count the pixels of each elliptic arc
Keep a counter for each arc and simply increment the upper-arc counter for any white pixel that has y<=cy, and the lower one if y>=cy. If your coordinate system is different, the conditions may be reversed.
6. find the ellipse parameters
Simply find the white pixel closest to (cx,cy); this will be the endpoint of the minor semi-axis b, let's call it (bx,by). Also find the white pixel farthest from (cx,cy); that will be the major semi-axis endpoint (ax,ay). The distances between them and the center give you a,b, and their positions subtracted from the center give you vectors carrying the rotation of your ellipse. The angle can be obtained by atan2, or use the basis vectors as I do. You can test orthogonality by the dot product. There can be more than 2 candidates for the closest and farthest point; in that case you should find the middle of each group to enhance precision (a small sketch of this step follows).
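As a minimal sketch of this step, assuming the axis endpoint vectors (ax,ay) and (bx,by) have already been made relative to the center (the function name is illustrative):

#include <cmath>

// Derive the semi-axis lengths and the ellipse rotation from the axis
// endpoint vectors (ax,ay) and (bx,by), both relative to the center.
void ellipseParams(double ax, double ay, double bx, double by,
                   double &a, double &b, double &angle)
{
    a = std::sqrt(ax * ax + ay * ay); // major semi-axis length
    b = std::sqrt(bx * bx + by * by); // minor semi-axis length
    angle = std::atan2(ay, ax);       // rotation of the major axis [rad]
    // orthogonality check: ax*bx + ay*by should be near zero;
    // a large magnitude indicates a bad axis detection
}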
7. integrate the fitted ellipse
You first need to find the angles at which the ellipse points have y=cy, then integrate the ellipse between these two angles. The other half is the same, just integrate the angles + PI. To determine which half is which, compute the point in the middle of the angle range and decide according to y>=cy ...
[Edit2] Here is the updated C++ code I put together for this:
picture pic0,pic1,pic2;
// pic0 - source
// pic1 - output
float a,b,a0,a1,da,xx0,xx1,yy0,yy1,ll0,ll1;
int x,y,i,threshold=127,x0,y0,x1,y1,cx,cy,ax,ay,bx,by,aa,bb,dd,l0,l1;
pic1=pic0;
// bbox,center,recolor (white,black)
x0=pic1.xs; x1=0;
y0=pic1.ys; y1=0;
for (y = 0; y < pic1.ys; y++)
    for (x = 0; x < pic1.xs; x++)
        if (pic1.p[y][x].db[0] >= threshold)
        {
            if (x0 > x) x0 = x;
            if (y0 > y) y0 = y;
            if (x1 < x) x1 = x;
            if (y1 < y) y1 = y;
            pic1.p[y][x].dd = 0x00FFFFFF;
        }
        else pic1.p[y][x].dd = 0x00000000;
cx=(x0+x1)/2; cy=(y0+y1)/2;
// fill the inside (gray), leaving a single-pixel-wide border (thinning)
for (y = y0; y <= y1; y++)
{
    for (x = x0; x <= x1; x++) if (pic1.p[y][x].dd)
    {
        // found the leftmost border pixel; look for the rightmost one on this row
        for (i = x1; i >= x; i--) if (pic1.p[y][i].dd)
        {
            // fill everything between them with gray
            for (x++; x < i; x++) pic1.p[y][x].dd = 0x00202020;
            break;
        }
        break;
    }
}
// keep the topmost and bottommost outline pixel of each column white
for (x = x0; x <= x1; x++)
{
    for (y = y0; y <= y1; y++) if (pic1.p[y][x].dd) { pic1.p[y][x].dd = 0x00FFFFFF; break; }
    for (y = y1; y >= y0; y--) if (pic1.p[y][x].dd) { pic1.p[y][x].dd = 0x00FFFFFF; break; }
}
// find the min,max radius (semi-axes endpoints)
bb=pic1.xs+pic1.ys; bb*=bb; aa=0;
ax=cx; ay=cy; bx=cx; by=cy;
for (y = y0; y <= y1; y++)
    for (x = x0; x <= x1; x++)
        if (pic1.p[y][x].dd == 0x00FFFFFF)
        {
            dd = ((x - cx) * (x - cx)) + ((y - cy) * (y - cy));
            if (aa < dd) { ax = x; ay = y; aa = dd; } // farthest white pixel: major axis endpoint
            if (bb > dd) { bx = x; by = y; bb = dd; } // closest white pixel: minor axis endpoint
        }
aa=sqrt(aa); ax-=cx; ay-=cy;
bb=sqrt(bb); bx-=cx; by-=cy;
//a=float((ax*bx)+(ay*by))/float(aa*bb); // if (fabs(a)>zero_threshold) not perpendicular semiaxes
// separate/count upper,lower arc by horizontal line
l0=0; l1=0;
for (y = y0; y <= y1; y++)
    for (x = x0; x <= x1; x++)
        if (pic1.p[y][x].dd == 0x00FFFFFF)
        {
            if (y >= cy) { l0++; pic1.p[y][x].dd = 0x000000FF; } // red
            if (y <= cy) { l1++; pic1.p[y][x].dd = 0x00FF0000; } // blue
        }
// here is just VCL/GDI info layer output so you can ignore it...
// arc separator axis
pic1.bmp->Canvas->Pen->Color=0x00808080;
pic1.bmp->Canvas->MoveTo(x0,cy);
pic1.bmp->Canvas->LineTo(x1,cy);
// draw analytical ellipse to compare
pic1.bmp->Canvas->Pen->Color=0x0000FF00;
pic1.bmp->Canvas->MoveTo(cx,cy);
pic1.bmp->Canvas->LineTo(cx+ax,cy+ay);
pic1.bmp->Canvas->MoveTo(cx,cy);
pic1.bmp->Canvas->LineTo(cx+bx,cy+by);
pic1.bmp->Canvas->Pen->Color=0x00FFFF00;
da=0.01*M_PI; // dash step [rad]
a0=0.0; // start
a1=2.0*M_PI; // end
for (i = 1, a = a0; i; )
{
    a += da; if (a >= a1) { a = a1; i = 0; }
    x = cx + (ax * cos(a)) + (bx * sin(a));
    y = cy + (ay * cos(a)) + (by * sin(a));
    pic1.bmp->Canvas->MoveTo(x, y);
    a += da; if (a >= a1) { a = a1; i = 0; }
    x = cx + (ax * cos(a)) + (bx * sin(a));
    y = cy + (ay * cos(a)) + (by * sin(a));
    pic1.bmp->Canvas->LineTo(x, y);
}
// integrate the arclengths from fitted ellipse
da=0.001*M_PI; // integration step [rad] (accuracy)
// find start-end angles
ll0=M_PI; ll1=M_PI;
for (i = 1, a = 0.0; i; )
{
    a += da; if (a >= 2.0 * M_PI) { a = 0.0; i = 0; }
    xx1 = (ax * cos(a)) + (bx * sin(a));
    yy1 = (ay * cos(a)) + (by * sin(a));
    b = atan2(yy1, xx1);
    xx0 = fabs(b - 0.0);  if (xx0 > M_PI) xx0 = 2.0 * M_PI - xx0;
    xx1 = fabs(b - M_PI); if (xx1 > M_PI) xx1 = 2.0 * M_PI - xx1;
    if (ll0 > xx0) { ll0 = xx0; a0 = a; } // parametric angle closest to direction 0
    if (ll1 > xx1) { ll1 = xx1; a1 = a; } // parametric angle closest to direction PI
}
// [upper half]
ll0=0.0;
xx0=cx+(ax*cos(a0))+(bx*sin(a0));
yy0=cy+(ay*cos(a0))+(by*sin(a0));
for (i = 1, a = a0; i; )
{
    a += da; if (a >= a1) { a = a1; i = 0; }
    xx1 = cx + (ax * cos(a)) + (bx * sin(a));
    yy1 = cy + (ay * cos(a)) + (by * sin(a));
    // sum arc-line sizes
    xx0 -= xx1; xx0 *= xx0;
    yy0 -= yy1; yy0 *= yy0;
    ll0 += sqrt(xx0 + yy0);
    // pic1.p[int(yy1)][int(xx1)].dd=0x0000FF00; // recolor to visually check the arc selection
    xx0 = xx1; yy0 = yy1;
}
// lower half
a0+=M_PI; a1+=M_PI; ll1=0.0;
xx0=cx+(ax*cos(a0))+(bx*sin(a0));
yy0=cy+(ay*cos(a0))+(by*sin(a0));
for (i = 1, a = a0; i; )
{
    a += da; if (a >= a1) { a = a1; i = 0; }
    xx1 = cx + (ax * cos(a)) + (bx * sin(a));
    yy1 = cy + (ay * cos(a)) + (by * sin(a));
    // sum arc-line sizes
    xx0 -= xx1; xx0 *= xx0;
    yy0 -= yy1; yy0 *= yy0;
    ll1 += sqrt(xx0 + yy0);
    // pic1.p[int(yy1)][int(xx1)].dd=0x00FF00FF; // recolor to visually check the arc selection
    xx0 = xx1; yy0 = yy1;
}
// handle if the upper/lower parts are swapped
a=a0+0.5*(a1-a0);
if ((ay*cos(a))+(by*sin(a))<0.0) { a=ll0; ll0=ll1; ll1=a; }
// info texts
pic1.bmp->Canvas->Font->Color=0x00FFFF00;
pic1.bmp->Canvas->Brush->Style=bsClear;
x=5; y=5; i=16; y-=i;
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("center = (%i,%i) px",cx,cy));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("a = %i px",aa));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("b = %i px",bb));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("upper = %i px",l0));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("lower = %i px",l1));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("upper`= %.3lf px",ll0));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("lower`= %.3lf px",ll1));
pic1.bmp->Canvas->Brush->Style=bsSolid;
It uses my own picture class with these members:
xs,ys: resolution of the image
p[y][x].dd: pixel access as a 32-bit unsigned integer color
p[y][x].db[4]: pixel access as 4 x 8-bit unsigned integer color channels
You can look at the picture::p member as a simple 2D array of
union color
{
    DWORD dd; WORD dw[2]; byte db[4];
    int i; short int ii[2];
    color() {}
    color(color& a) { *this = a; }
    ~color() {}
    color* operator = (const color *a) { dd = a->dd; return this; }
    //color* operator = (const color &a) { ...copy... return this; }
};
int xs, ys;
color p[ys][xs];
Graphics::TBitmap *bmp; // VCL GDI Bitmap object you do not need this...
where each cell can be accessed as a 32-bit pixel p[][].dd in 0xAABBGGRR or 0xAARRGGBB order (I am not sure now which). You can also access the channels directly as 8-bit BYTEs with p[][].db[4].
The bmp member is a GDI bitmap, so bmp->Canvas-> accesses all the GDI stuff, which is not important for you.
Here is the result for your second image:
The gray horizontal line is the arc boundary line going through the center.
Red and blue are the arc halves (recolored during counting).
Green are the semi-axes basis vectors.
The aqua dash-dash line is the analytical ellipse overlay to compare the fit.
As you can see, the fit is pretty close (+/- 1 pixel). The counted arc lengths upper,lower are pretty close to the approximated average circle half perimeter (circumference).
You should add a range check on a0 to decide whether the start is the upper or lower half, because there is no guarantee which side of the major axis this will find. The integration of both halves is almost the same and saturates around an integration step of 0.001*M_PI at about 307.3 pixels per arc length, which is only 17 and 22 pixels away from the direct pixel counts; that is even better than I anticipated, given the aliasing...
For more eccentric ellipses the fit is not as good, but the results are still good enough:

Related

cocos2dx detect intersection with polygon sprite

I am using cocos2d-x 3.8.
I am trying to create two polygon sprites with the following code.
I know we can detect intersections with BoundingBox, but that is too rough.
Also, I know we can use the cocos2d-x C++ physics engine to detect collisions, but doesn't that waste a lot of the mobile device's resources? The game I am developing does not need a physics engine.
Is there a way to detect the intersection of polygon sprites?
Thank you.
auto pinfoTree = AutoPolygon::generatePolygon("Tree.png");
auto treeSprite = Sprite::create(pinfoTree);
treeSprite->setPosition(width / 4 * 3 - 30, height / 2 - 200);
this->addChild(treeSprite);
auto pinfoBird = AutoPolygon::generatePolygon("Bird.png");
auto Bird = Sprite::create(pinfoBird);
Bird->setPosition(width / 4 * 3, height / 2);
this->addChild(Bird);
This is a bit more complicated: AutoPolygon gives you a bunch of triangles, while PhysicsBody::createPolygon requires a convex polygon with clockwise winding… so these are 2 different things. The vertex count might even be limited; I think Box2D's maximum count for 1 polygon is 8.
If you want to try this you'll have to merge the triangles to form polygons. An option would be to start with one triangle and add more as long as the whole thing stays convex; if you can't add any more triangles, start a new polygon (a convexity test is sketched below). Add all the polygons as PhysicsShapes to your physics body to form a compound object.
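A minimal sketch of such a convexity test, assuming the vertices are given in order (the type and function names are illustrative, not cocos2d-x or Box2D API):

#include <vector>

struct Vec2 { float x, y; };

// A polygon with ordered vertices (either winding) is convex iff all
// consecutive edge cross products share one sign (zeros are allowed).
bool isConvex(const std::vector<Vec2>& poly)
{
    const size_t n = poly.size();
    if (n < 4) return true; // a triangle is always convex
    bool hasPos = false, hasNeg = false;
    for (size_t i = 0; i < n; i++)
    {
        const Vec2& a = poly[i];
        const Vec2& b = poly[(i + 1) % n];
        const Vec2& c = poly[(i + 2) % n];
        float cross = (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x);
        if (cross > 0) hasPos = true;
        if (cross < 0) hasNeg = true;
    }
    return !(hasPos && hasNeg); // mixed signs mean a reflex vertex
}

With this you can greedily append a candidate triangle's free vertex to the current polygon and keep it only if isConvex still holds.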
I would propose that you don't follow this path, because:
AutoPolygon is optimized for rendering, not for best-fitting physics; that is a difference. A polygon traced with AutoPolygon will always be bigger than the original sprite, otherwise you would see rendering artifacts.
You have close to no control over the generated polygons
Tracing the shape in the app will increase your startup time
Triangle meshes and physics outlines are 2 different things
I would try a different approach: generate the collision shapes offline. This gives you a bunch of advantages:
You can generate and tweak the polygons in a visual editor, e.g. by using PhysicsEditor
Loading the prepared polygons is way faster
You can set additional parameters like mass etc.
The solution is battle proven and works out of the box
But if you want to know how polygon intersection works, you can look at this code.
// Calculate the projection of a polygon on an axis
// and return it as a [min, max] interval
public void ProjectPolygon(Vector axis, Polygon polygon, ref float min, ref float max) {
    // To project a point on an axis use the dot product
    float dotProduct = axis.DotProduct(polygon.Points[0]);
    min = dotProduct;
    max = dotProduct;
    for (int i = 1; i < polygon.Points.Count; i++) {
        float d = polygon.Points[i].DotProduct(axis);
        if (d < min) {
            min = d;
        } else if (d > max) {
            max = d;
        }
    }
}

// Calculate the distance between [minA, maxA] and [minB, maxB]
// The distance will be negative if the intervals overlap
public float IntervalDistance(float minA, float maxA, float minB, float maxB) {
    if (minA < minB) {
        return minB - maxA;
    } else {
        return minA - maxB;
    }
}

// Check if polygon A intersects polygon B (separating axis theorem).
public bool PolygonCollision(Polygon polygonA, Polygon polygonB) {
    bool result = true;
    int edgeCountA = polygonA.Edges.Count;
    int edgeCountB = polygonB.Edges.Count;
    Vector edge;
    // Loop through all the edges of both polygons
    for (int edgeIndex = 0; edgeIndex < edgeCountA + edgeCountB; edgeIndex++) {
        if (edgeIndex < edgeCountA) {
            edge = polygonA.Edges[edgeIndex];
        } else {
            edge = polygonB.Edges[edgeIndex - edgeCountA];
        }
        // ===== Find if the polygons are currently intersecting =====
        // Find the axis perpendicular to the current edge
        Vector axis = new Vector(-edge.Y, edge.X);
        axis.Normalize();
        // Find the projection of the polygons on the current axis
        float minA = 0; float minB = 0; float maxA = 0; float maxB = 0;
        ProjectPolygon(axis, polygonA, ref minA, ref maxA);
        ProjectPolygon(axis, polygonB, ref minB, ref maxB);
        // If the projections do not overlap, we found a separating axis
        if (IntervalDistance(minA, maxA, minB, maxB) > 0) {
            result = false;
            break;
        }
    }
    return result;
}
The function can be used this way:
bool result = PolygonCollision(polygonA, polygonB);
I once had to program a collision-detection algorithm where a ball was to collide with a rotating polygon obstacle. In my case the obstacles were arcs of a certain thickness moving around an origin; basically each was rotating in an orbit. The ball was also rotating around an orbit about the same origin and could move between orbits. To check the collision I just had to check whether the ball's angle with respect to the origin was between the lower and upper bound angles of the arc obstacle, and whether the ball and the obstacle were in the same orbit.
In other words, I used the various constraints and properties of the objects involved in the collision to make it more efficient. So use the properties of your objects to handle the collision; try a similar approach depending on your objects (a sketch of the idea follows).
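As a minimal sketch of that idea (all names, and the assumption that angles are measured around the common origin, are illustrative):

#include <cmath>

// Ball vs. arc-shaped obstacle, both orbiting the same origin.
// The arc spans [startAngle, endAngle] in radians on orbit 'arcRing';
// the ball is treated as a point on orbit 'ballRing'.
bool ballHitsArc(float ballX, float ballY, int ballRing, int arcRing,
                 float startAngle, float endAngle)
{
    if (ballRing != arcRing) return false;    // different orbits: no collision
    const float TWO_PI = 6.283185307f;
    auto norm = [&](float v) {
        v = std::fmod(v, TWO_PI);
        return v < 0.0f ? v + TWO_PI : v;     // map into [0, 2*PI)
    };
    float a = norm(std::atan2(ballY, ballX)); // ball angle w.r.t. origin
    startAngle = norm(startAngle);
    endAngle   = norm(endAngle);
    if (startAngle <= endAngle)
        return a >= startAngle && a <= endAngle;
    return a >= startAngle || a <= endAngle;  // the arc wraps past 0
}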

Dynamic Body gets stuck in static body after flipping

I have a dynamic body with many polygon shapes for my game character. In order to turn the game character around I flip the vertices using this code:
void Box2dManager::flipFixtures(bool horizontally, b2Body* physBody)
{
    b2Fixture* fix = physBody->GetFixtureList();
    while (fix)
    {
        b2Shape* shape = fix->GetShape();
        if (shape->GetType() == b2Shape::e_polygon)
        {
            // flip the x or y coordinates
            b2PolygonShape* ps = (b2PolygonShape*)shape;
            for (int i = 0; i < ps->GetVertexCount(); i++)
                horizontally ? ps->m_vertices[i].x *= -1 : ps->m_vertices[i].y *= -1;
            // reverse the vertices to restore the winding
            // (no need after Box2D 2.3.0, as polygon creation computes the convex hull)
            b2Vec2* reVert = new b2Vec2[ps->GetVertexCount()];
            int j = ps->GetVertexCount() - 1;
            for (int i = 0; i < ps->GetVertexCount(); i++)
                reVert[i] = ps->m_vertices[j--];
            ps->Set(&reVert[0], ps->GetVertexCount());
            delete[] reVert; // free the temporary array
        }
        fix = fix->GetNext();
    }
}
I also have static edge shapes as walls. It happens that when I flip the character, vertices of the same polygon sometimes end up on different sides of the same static edge shape. As a result my character sticks to the wall (it gets trapped in the static edge shape). How should I handle this situation?

Only drawing contours that exist over several frames to remove flickering

I've been researching here and the rest of the web for over a week now and am unable to come up with anything.
I'm coding using C++ and opencv on linux.
I have this video in black and white of a cloud chamber (http://youtu.be/40wnB8ukI7s). I want to draw contours around the moving particle tracks. Currently I'm using findContours and drawContours; however, it draws contours around all of the white pixels, including the ones that quickly appear and disappear. I don't want to draw contours around my background, the flickering white pixels.
My problem is that the background is also moving, so background subtraction doesn't work. Is there a way to:
a) only draw a contour if it exists roughly in the same location over several frames
b) remove a white pixel if it doesn't exist for multiple frames (probably at least 4 or 5 frames)
Thank you for any help you can provide.
Edit: code for comparing two frames (firstFrame and secondFrame):
Vec3b frameColour;
Vec3b frameColour2;
for (int x = 0; x < firstFrame.cols; x++) {
    for (int y = 0; y < firstFrame.rows; y++) {
        frameColour = firstFrame.at<Vec3b>(Point(x, y));
        frameColour2 = secondFrame.at<Vec3b>(Point(x, y));
        if (frameColour == white && frameColour2 == white) {
            secondFrameAfter.at<Vec3b>(Point(x, y)) = white;
        } else {
            secondFrameAfter.at<Vec3b>(Point(x, y)) = black;
        }
    }
}
You could implement your idea:

For each frame do:
    For each white pixel do:
        If the pixels in the neighbourhood of the last N frames are *mostly* white
            Set the current pixel to white
        Else
            Set the current pixel to black
The neighbourhood can be defined as a 3x3 mask around the pixel.
*Mostly* refers to an appropriate threshold; let's say 80% of the N frames should support (be white at) the pixel position.
The red pixel is the current pixel (x,y) and the green pixels are its neighbourhood.
Comparing the neighbouring pixels of a pixel (x,y) can be achieved as follows:
const int MASK_SIZE = 3;
int numberOfSupportingFrames = 0;
for (int k = 0; k < N; k++)
{
    Mat currentPreviousFrame = previousFrames.at(k);
    bool whitePixelAvailable = false;
    // scan the 3x3 neighbourhood; note the <= so both sides of the mask are covered
    for (int i = x - (MASK_SIZE / 2); i <= x + (MASK_SIZE / 2) && !whitePixelAvailable; i++)
    {
        for (int j = y - (MASK_SIZE / 2); j <= y + (MASK_SIZE / 2) && !whitePixelAvailable; j++)
        {
            if (currentPreviousFrame.at<Vec3b>(Point(i, j)) == white)
            {
                whitePixelAvailable = true;
                numberOfSupportingFrames++;
            }
        }
    }
}
if ((float)numberOfSupportingFrames / (float)N > 0.8)
    secondFrameAfter.at<Vec3b>(Point(x, y)) = white;
else
    secondFrameAfter.at<Vec3b>(Point(x, y)) = black;
The previous frames are stored inside a std::vector<cv::Mat> previousFrames.
The algorithm checks the spatio-temporal neighbourhood of the pixel (x,y). The outer loop iterates over the previous frames (temporal neighbourhood), while the inner two loops iterate over the eight neighbouring pixels (spatial neighbourhood). If there is a white pixel in that spatial neighbourhood, the previous frame supports the current pixel (x,y). At the end it is checked whether enough frames support the current pixel (80% of the previous frames should contain at least one white pixel in the 8-neighbourhood).
This code should be nested inside your two for loops, with some modifications (variable names, border handling). A sketch of maintaining the previousFrames buffer follows.
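For completeness, a minimal sketch of keeping the last N frames around, assuming frames arrive one at a time (names are illustrative):

#include <opencv2/opencv.hpp>
#include <vector>

const int N = 5; // number of previous frames to keep

// Call once per incoming frame, before running the filter above.
void pushFrame(std::vector<cv::Mat>& previousFrames, const cv::Mat& frame)
{
    previousFrames.push_back(frame.clone());          // clone: cv::Mat is ref-counted
    if ((int)previousFrames.size() > N)
        previousFrames.erase(previousFrames.begin()); // drop the oldest frame
}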

Finding furthest away pixel (Open CV)

I've been struggling with a small problem for some time now and just can't figure out what is wrong.
So I have a black 126 x 126 image with a 1-pixel blue border ([B,G,R] = [255, 0, 0]).
What I want is the pixel which is furthest away from all blue pixels (such as the border). I understand how this is done: iterate through every pixel; if it is black, compute the distance to every blue pixel, keeping the minimum; then select the black pixel with the largest minimum distance to any blue pixel.
Note: I don't need to know the true distance, so when doing the sum of the squares for the distance I don't take the square root; I only want to know which distance is larger (less expensive).
First I loop through every pixel, and if it is blue I add its row and column to a vector. I can confirm this part works correctly. Next, I loop through all pixels again and compare every black pixel's distance to every pixel in the blue-pixel vector.
Here blue is a vector of Blue objects (each has a row and column), and image is the image region:
int distance;
int localShortest = 0;
int bestDist = 0;
int posX = 0;
int posY = 0;
for (int i = 0; i < image.rows; i++)
{
    for (int j = 0; j < image.cols; j++)
    {
        // make sure the pixel is black
        if (image.at<cv::Vec3b>(i, j)[0] == 0
            && image.at<cv::Vec3b>(i, j)[1] == 0
            && image.at<cv::Vec3b>(i, j)[2] == 0)
        {
            for (int k = 0; k < blue.size(); k++)
            {
                // squared distance between the pixels
                distance = (i - blue.at(k).row) * (i - blue.at(k).row)
                         + (j - blue.at(k).col) * (j - blue.at(k).col);
                if (k == 0)
                {
                    localShortest = distance;
                }
                if (distance < localShortest)
                {
                    localShortest = distance;
                }
            }
            if (localShortest > bestDist)
            {
                posX = i;
                posY = j;
                bestDist = localShortest;
            }
        }
    }
}
This works absolutely fine for a 1 pixel border around the edge.
https://dl.dropboxusercontent.com/u/3879939/works.PNG
Similarly, if I add more blue but keep a square-ish black region, then it also works.
https://dl.dropboxusercontent.com/u/3879939/alsoWorks.PNG
But as soon as I make the black portion of the image not square but, say, rectangular, the 'furthest away' pixel is off. Sometimes it even reports a blue pixel as the furthest away from blue, which is just not right.
https://dl.dropboxusercontent.com/u/3879939/off.PNG
Any help much appreciated! Hurting my head a bit.
One possibility, given that you're using OpenCV anyway, is to just use the supplied distance transform function.
For your particular case, you would need to do the following (a code sketch follows the steps):
Convert your input to a single-channel binary image (e.g. map black to white and blue to black)
Run the cv::distanceTransform function with CV_DIST_L2 (Euclidean distance)
Examine the resulting greyscale image to get the results.
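A minimal sketch of those three steps, assuming img is the BGR input described in the question (black region, pure-blue border); cv::DIST_L2 is the OpenCV 3+ name of CV_DIST_L2:

#include <opencv2/opencv.hpp>

// Find the black pixel farthest from all blue pixels via a distance transform.
cv::Point farthestFromBlue(const cv::Mat& img)
{
    // 1) single-channel binary image: blue -> 0, everything else -> 255
    cv::Mat mask(img.size(), CV_8U);
    for (int i = 0; i < img.rows; i++)
        for (int j = 0; j < img.cols; j++)
            mask.at<uchar>(i, j) =
                (img.at<cv::Vec3b>(i, j) == cv::Vec3b(255, 0, 0)) ? 0 : 255;
    // 2) Euclidean distance of every non-zero pixel to the nearest zero pixel
    cv::Mat dist;
    cv::distanceTransform(mask, dist, cv::DIST_L2, 3);
    // 3) the brightest pixel has the largest minimum distance
    //    (ties are resolved arbitrarily, see the note below)
    cv::Point maxLoc;
    cv::minMaxLoc(dist, nullptr, nullptr, nullptr, &maxLoc);
    return maxLoc;
}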
Note that there may be more than one pixel at the maximum distance from the border, so you need to handle this case according to your application.
The brightest pixels in the distance transform will be the ones that you need. For example, here is a white rectangle and its distance transform:
In a square, due to its symmetry, the furthest black point (the center) is the furthest no matter in which direction you look from there. But now imagine a very long rectangle with a very short height. There will be multiple points on its horizontal axis for which the largest minimum distance is the short distance to both the top and bottom sides, because the left and right sides are far away. In this case the pixel your algorithm finds can be any one on this line, and the result will depend on your pixel scanning order.
It is because there is a line (more than one pixel) meeting your condition for a rectangle.

Extracting geometric forms with opencv

I'm running a video in which I want to detect 3 geometric forms (triangle, circle and pentagon) and nothing else. Here is a frame from this video and the results that I get:
The correct result.
One bad result from many.
And here's my code:
img = src;
cv::Moments mom;
cv::Mat result(img.size(), CV_8U, cv::Scalar(255));
cv::threshold(img, img, 127, 255, CV_THRESH_BINARY_INV);
cv::findContours(img, contours, /*hierarchy,*/ CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
for (int i = 0; i < contours.size(); i++)
{
    cv::approxPolyDP(cv::Mat(contours[i]), approx,
                     cv::arcLength(cv::Mat(contours[i]), true) * .02, true);
    switch (approx.size())
    {
    case 8: // should be a circle
        cv::minEnclosingCircle(cv::Mat(contours[i]), center, radius);
        cv::circle(src, cv::Point(center), static_cast<int>(radius), cv::Scalar(255, 255, 0), 3, 3);
        mom = cv::moments(cv::Mat(contours[i]));
        // draw the mass center (position converted to integer)
        cv::circle(result, cv::Point(mom.m10 / mom.m00, mom.m01 / mom.m00),
                   2, cv::Scalar(0, 0, 255), 2);
        break;
    case 3: // should be a triangle
        poly.clear();
        cv::approxPolyDP(cv::Mat(contours[i]), poly,
                         5,     // accuracy of the approximation
                         true); // yes, it is a closed shape
        // iterate over each segment and draw it
        itp = poly.begin();
        while (itp != (poly.end() - 1))
        {
            cv::line(src, *itp, *(itp + 1), cv::Scalar(0, 255, 255), 2);
            ++itp;
        }
        // link the last point to the first point
        cv::line(src, *(poly.begin()), *(poly.end() - 1), cv::Scalar(100, 255, 100), 2);
        mom = cv::moments(cv::Mat(contours[i]));
        // draw the mass center (position converted to integer)
        cv::circle(result, cv::Point(mom.m10 / mom.m00, mom.m01 / mom.m00),
                   2, cv::Scalar(0, 0, 255), 2);
        break;
    case 5: // should be a pentagon
        poly.clear();
        cv::approxPolyDP(cv::Mat(contours[i]), poly,
                         5,     // accuracy of the approximation
                         true); // yes, it is a closed shape
        // iterate over each segment and draw it
        itp = poly.begin();
        while (itp != (poly.end() - 1))
        {
            cv::line(src, *itp, *(itp + 1), cv::Scalar(0, 0, 255), 2);
            ++itp;
        }
        // link the last point to the first point
        cv::line(src, *(poly.begin()), *(poly.end() - 1), cv::Scalar(255, 0, 0), 2);
        mom = cv::moments(cv::Mat(contours[i]));
        // draw the mass center (position converted to integer)
        cv::circle(result, cv::Point(mom.m10 / mom.m00, mom.m01 / mom.m00),
                   2, cv::Scalar(0, 0, 255), 2);
        break;
    default:
        contours[i].clear();
    }
}
// count the contours that were recognized as one of the three shapes
int j = 0;
for (int i = 0; i < contours.size(); i++)
    if (!contours[i].empty())
        j++; // at the end j should be 3
if (j == 3)
{
    cv::drawContours(result, contours, -1, // draw all contours
                     cv::Scalar(0),        // in black
                     2);                   // with a thickness of 2
    cv::imshow("result", result);
}
std::cout << j << std::endl;
return j;
}
Any idea how to solve this?
Thanks!
If the geometric shapes are always located within such a checkerboard pattern, and the shapes have the same size and orientation within that pattern all the time, I would (see the sketch after this list):
detect the checkerboard pattern
remove the perspective distortion with a homography
scale and rotate so that the checkerboard pattern is axis-aligned and at a given scale
use chamfer matching to detect the specified shapes
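A minimal sketch of the first three steps with OpenCV; the board size (9x6 inner corners) and the target square size (90 px) are placeholder assumptions, and the chamfer-matching step is left out:

#include <opencv2/opencv.hpp>
#include <vector>

// Warp the frame so the checkerboard becomes axis-aligned at a fixed scale.
// boardSize counts inner corners; squarePx is the desired square size in pixels.
bool rectifyBoard(const cv::Mat& gray, cv::Mat& rectified,
                  cv::Size boardSize = cv::Size(9, 6), float squarePx = 90.0f)
{
    std::vector<cv::Point2f> corners;
    if (!cv::findChessboardCorners(gray, boardSize, corners))
        return false; // board not found in this frame

    // target corner positions: a regular, axis-aligned grid
    std::vector<cv::Point2f> ideal;
    for (int r = 0; r < boardSize.height; r++)
        for (int c = 0; c < boardSize.width; c++)
            ideal.push_back(cv::Point2f((c + 1) * squarePx, (r + 1) * squarePx));

    // one homography removes perspective, rotation and scale at once
    cv::Mat H = cv::findHomography(corners, ideal);
    cv::Size outSize(int((boardSize.width + 1) * squarePx),
                     int((boardSize.height + 1) * squarePx));
    cv::warpPerspective(gray, rectified, H, outSize);
    return true;
}

After rectification every frame shows the shapes at the same scale and orientation, which is the precondition for matching them against a fixed template.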