I am extremely new to computer vision and the opencv library.
I've done some googling around to try to find how to make a new image from a vector of Point2fs, and haven't found any examples that work. I've seen examples converting vector<Point> to Mat, but when I use them I always get errors.
I'm working from this example and any help would be appreciated.
Code: I pass in occludedSquare.
resize(occludedSquare, occludedSquare, Size(0, 0), 0.5, 0.5);
Mat occludedSquare8u;
cvtColor(occludedSquare, occludedSquare8u, CV_BGR2GRAY);
//convert to a binary image. pixel values greater than 170 turn to white, otherwise black
Mat thresh;
threshold(occludedSquare8u, thresh, 170.0, 255.0, THRESH_BINARY);
GaussianBlur(thresh, thresh, Size(7, 7), 2.0, 2.0);
//Do edge detection
Mat edges;
Canny(thresh, edges, 45.0, 160.0, 3);
//Do straight line detection
vector<Vec2f> lines;
HoughLines( edges, lines, 1.5, CV_PI/180, 50, 0, 0 );
//imshow("thresholded", edges);
cout << "Detected " << lines.size() << " lines." << endl;
// compute the intersection from the lines detected...
vector<Point2f> intersections;
for (size_t i = 0; i < lines.size(); i++)
{
    for (size_t j = i + 1; j < lines.size(); j++) // start at i + 1 so each pair is tested only once
    {
        Vec2f line1 = lines[i];
        Vec2f line2 = lines[j];
        if (acceptLinePair(line1, line2, CV_PI / 32))
        {
            Point2f intersection = computeIntersect(line1, line2);
            intersections.push_back(intersection);
        }
    }
}
if (intersections.size() > 0)
{
    vector<Point2f>::iterator i;
    for (i = intersections.begin(); i != intersections.end(); ++i)
    {
        cout << "Intersection is " << i->x << ", " << i->y << endl;
        circle(occludedSquare8u, *i, 1, Scalar(0, 255, 0), 3);
    }
}
//Make new matrix bounded by the intersections
...
imshow("localized", localized);
Should be as simple as
std::vector<cv::Point2f> points;
cv::Mat image(points);
//or
cv::Mat image = cv::Mat(points);
The probable confusion is that a cv::Mat is an image (width * height * number of channels), but it is also a mathematical matrix (rows * columns * other dimension).
If you make a Mat from a vector of 'n' 2D points, it will create a 2 column by 'n' rows matrix. You are passing this to a function which expects an image.
If you just have a scattered set of 2D points and want to display them as an image, you need to make an empty cv::Mat of large enough size (covering whatever your maximum x,y point is) and then draw the dots using the drawing functions: http://docs.opencv.org/doc/tutorials/core/basic_geometric_drawing/basic_geometric_drawing.html
If you just want to set the pixel values at those point coordinates, search SO for "opencv setting pixel values"; there are lots of answers.
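For the drawing approach, here is a minimal sketch; the 500x500 canvas size and the sample points are assumptions, so size the canvas from your own maximum coordinates:
#include <opencv2/opencv.hpp>
#include <vector>
int main()
{
    std::vector<cv::Point2f> points;
    points.push_back(cv::Point2f(10.5f, 20.0f));
    points.push_back(cv::Point2f(100.0f, 250.0f));
    points.push_back(cv::Point2f(400.0f, 30.0f));
    // Blank canvas large enough to contain the largest x,y coordinate (assumed 500x500 here)
    cv::Mat canvas = cv::Mat::zeros(500, 500, CV_8UC3);
    // Draw each point as a small filled green circle
    for (size_t i = 0; i < points.size(); i++)
        cv::circle(canvas, points[i], 3, cv::Scalar(0, 255, 0), -1);
    cv::imshow("points", canvas);
    cv::waitKey();
    return 0;
}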
Martin's answer is right, but IMO it depends on how the cv::Mat image is used further down the line. I had some issues, and Haofeng's comment helped me fix them. Here is my attempt to explain it in detail:
Let's say the code looks like this:
std::vector<cv::Point2f> points = {cv::Point2f(1.0, 2.0), cv::Point2f(3.0, 4.0), cv::Point2f(5.0, 6.0), cv::Point2f(7.0, 8.0), cv::Point2f(9.0, 10.0)};
cv::Mat image(points); // or cv::Mat image = cv::Mat(points)
std::cout << image << std::endl;
This will print:
[1, 2;
3, 4;
5, 6;
7, 8;
9, 10]
So, at first glance, this looks perfectly correct and as expected: for the five 2D points in the given vector, we got a cv::Mat with 5 rows and 2 columns, right? However, that's not the case here!
If further properties are inspected:
std::cout << image.rows << std::endl; // 5
std::cout << image.cols << std::endl; // 1
std::cout << image.channels() << std::endl; // 2
it can be seen that the above cv::Mat has 5 rows, 1 column, and 2 channels. Depending on the pipeline, we may not want that. Most of the time, we want a matrix with 5 rows, 2 columns, and just 1 channel.
To fix this problem, all we need to do is reshape the matrix:
cv::Mat image(points).reshape(1);
In the above code, 1 is for 1 channel. Check out OpenCV reshape() documentation for further information.
If this matrix is printed out, it will look the same as the previous one. However, that's not the whole picture (metaphorically!). The new matrix has 5 rows, 2 columns, and 1 channel.
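A quick way to verify the difference in code, reusing the points vector from above (a minimal sketch):
cv::Mat m1(points);                      // 5 x 1, 2 channels
cv::Mat m2 = cv::Mat(points).reshape(1); // 5 x 2, 1 channel
std::cout << m1.rows << "x" << m1.cols << " ch=" << m1.channels() << std::endl; // prints 5x1 ch=2
std::cout << m2.rows << "x" << m2.cols << " ch=" << m2.channels() << std::endl; // prints 5x2 ch=1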
I wish OpenCV had different ways of printing out these two similar yet different matrices (from the OpenCV data structure point of view)!
Related
I have obtained a labeling with the connectedComponents function of C++ OpenCV, which looks like the picture below:
This is the output of the ccLabels variable, which is a cv::Mat of the same size as the original image.
So what I need to do is:
1. Count the occurrences of each number, and select the ones that occur more than N times, which are the "big" ones.
2. Segment the areas of the "big" components, and then count the number of 4's and 0's inside each area.
My ultimate aim is to count the number of holes in the image, so I aim to infer the number of holes from (number of 0's / number of 4's). This is probably not the prettiest way, but the images are very uniform in terms of size and illumination, so it will meet my needs.
But I'm new to OpenCV and I don't have much idea how to accomplish this task.
Here is what I've done so far:
cv::Mat1b outImg;
cv::threshold(grayImg, outImg, 150, 255, 0); // Thresholded -binary- image
cv::Mat ccLabels;
cv::connectedComponents(outImg, ccLabels); // Each non-zero pixel is labeled with their connectedComponent ID's
// write the labels to file:
std::ofstream myfile;
myfile.open("ccLabels.txt");
cv::Size s = ccLabels.size();
myfile << "Size: " << s.height << " , " << s.width <<"\n";
for (int r1 = 0; r1 < s.height; r1++) {
    for (int c1 = 0; c1 < s.width; c1++) { // iterate columns up to the width, not the height
        myfile << ccLabels.at<int>(r1, c1);
    }
    myfile << "\n";
}
myfile.close();
Since I know how to iterate inside the matrix, counting the numbers should be OK, but first I have to separate (eliminate / ignore) the "background" pixels, which are the 0's outside the connected components. Then counting should be easy.
How can I segment these "big" components? Maybe by obtaining a mask, and only considering pixels where mask(x, y) == 1? A sketch of that idea follows below.
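For reference, a minimal sketch of that masking idea, assuming ccLabels is the CV_32S label image from connectedComponents, label 0 is the background, and N is an assumed size threshold (requires <map>):
std::map<int, int> counts; // component label -> pixel count
for (int r = 0; r < ccLabels.rows; r++) {
    for (int c = 0; c < ccLabels.cols; c++) {
        int label = ccLabels.at<int>(r, c);
        if (label != 0) // skip the background pixels
            counts[label]++;
    }
}
const int N = 100; // assumed threshold for a "big" component
for (std::map<int, int>::iterator it = counts.begin(); it != counts.end(); ++it) {
    if (it->second > N) {
        cv::Mat mask = (ccLabels == it->first); // 255 where this component is, 0 elsewhere
        // restrict any further counting to pixels where mask is non-zero
    }
}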
Thanks for any help !
Edit
This is the thresholded image:
And this is what I get after Canny edge detection :
This is the actual image (thresholded) :
Here is a simple procedure to find the number on the dice, starting from your thresholded image:
find external contours
for each contour:
    optionally discard small blobs
    draw the filled mask
    use AND and XOR to isolate internal holes
    find contours, again
    count contours
Result:
Number: 5
Number: 2
Image:
Code:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>
using namespace std;
using namespace cv;
int main(void)
{
// Grayscale image
Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);
// Minimum area of the contour
double minContourArea = 10;
// Prepare output
Mat3b result;
cvtColor(img, result, COLOR_GRAY2BGR);
// Find contours
vector<vector<Point>> contours;
findContours(img.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
for (int i = 0; i < (int)contours.size(); ++i)
{
    // Check area
    if (contourArea(contours[i]) < minContourArea) continue;
    // Black mask
    Mat1b mask(img.rows, img.cols, uchar(0));
    // Draw filled contour
    drawContours(mask, contours, i, Scalar(255), CV_FILLED);
    // Inside the filled contour, keep only the pixels that are black in the image
    // (the pips): (mask & img) is the white part of the die, XOR with the full
    // mask leaves the internal holes
    mask = (mask & img) ^ mask;
    vector<vector<Point>> cntrs;
    findContours(mask, cntrs, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
    cout << "Number: " << cntrs.size() << endl;
    // Just for showing results
    drawContours(result, cntrs, -1, Scalar(0,0,255), CV_FILLED);
}
imshow("Result", result);
waitKey();
return 0;
}
An easier way is the findContours method: find the inner contours and calculate their areas (since the inner contours will be holes), then process this information accordingly. A minimal sketch follows below.
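A sketch of that, assuming binary is your thresholded image; with CV_RETR_CCOMP, a contour whose hierarchy entry has a parent is an inner contour, i.e. a hole:
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(binary.clone(), contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
int holes = 0;
for (size_t i = 0; i < contours.size(); i++) {
    // hierarchy[i][3] is the index of the parent contour; >= 0 means this is an inner contour
    if (hierarchy[i][3] >= 0 && cv::contourArea(contours[i]) > 5.0) // 5.0 is an assumed noise filter
        holes++;
}
std::cout << "Holes: " << holes << std::endl;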
To solve your first problem, suppose you have the set of values in values. Count the occurrences of each number that has appeared:
int m = 0;
for (int n = 0; n < 256; n++)
{
    int c = 0;
    for (size_t q = 0; q < values.size(); q++)
    {
        if (n == values[q])
        {
            c++;
            m++;
        }
    }
    cout << n << "= " << c << endl;
}
cout << "Total number of elements " << m << endl;
To solve your second problem, find the largest contour in the image using findContours, draw a bounding rectangle around it, and then crop it. Again, use the above code to count the pixels with value "4" and "0". You can find a link for it here: https://stackoverflow.com/a/32998275/3853072
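A minimal sketch of that cropping step, assuming binImg is the thresholded image and ccLabels the label matrix from the question:
std::vector<std::vector<cv::Point> > contours;
cv::findContours(binImg.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
// Find the contour with the largest area
int largest = -1;
double maxArea = 0.0;
for (size_t i = 0; i < contours.size(); i++) {
    double a = cv::contourArea(contours[i]);
    if (a > maxArea) {
        maxArea = a;
        largest = (int)i;
    }
}
if (largest >= 0) {
    cv::Rect box = cv::boundingRect(contours[largest]); // bounding rectangle around it
    cv::Mat cropped = ccLabels(box).clone();            // crop the label image to that region
}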
The complete error:
OpenCV Error: Assertion failed (nimages > 0 && nimages ==
(int)imagePoints1.total() && (!imgPtMat2 || nimages ==
(int)imagePoints2.total())) in collectCalibrationData, file C:\OpenCV
\sources\modules\calib3d\src\calibration.cpp, line 3164
The code:
cv::VideoCapture kalibrowanyPlik; //the video
cv::Mat frame;
cv::Mat testTwo; //undistorted
cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << 2673.579, 0, 1310.689, 0, 2673.579, 914.941, 0, 0, 1);
cv::Mat distortMat = (cv::Mat_<double>(1, 4) << -0.208143, 0.235290, 0.001005, 0.001339);
cv::Mat intrinsicMatrix = (cv::Mat_<double>(3, 3) << 1, 0, 0, 0, 1, 0, 0, 0, 1);
cv::Mat distortCoeffs = cv::Mat::zeros(8, 1, CV_64F);
//there are two sets for testing purposes. Values for the first two came from GML camera calibration app.
std::vector<cv::Mat> rvecs;
std::vector<cv::Mat> tvecs;
std::vector<std::vector<cv::Point2f> > imagePoints;
std::vector<std::vector<cv::Point3f> > objectPoints;
kalibrowanyPlik.open("625.avi");
//cv::namedWindow("Distorted", CV_WINDOW_AUTOSIZE); //gotta see things
//cv::namedWindow("Undistorted", CV_WINDOW_AUTOSIZE);
int maxFrames = kalibrowanyPlik.get(CV_CAP_PROP_FRAME_COUNT);
int success = 0; //so we can do the calibration only after we've got a bunch
for(int i=0; i<maxFrames-1; i++) {
    kalibrowanyPlik.read(frame);
    std::vector<cv::Point2f> corners; //creating these here so they're effectively reset each time
    std::vector<cv::Point3f> objectCorners;
    int sizeX = kalibrowanyPlik.get(CV_CAP_PROP_FRAME_WIDTH); //imageSize
    int sizeY = kalibrowanyPlik.get(CV_CAP_PROP_FRAME_HEIGHT);
    cv::cvtColor(frame, frame, CV_BGR2GRAY); //must be gray
    cv::Size patternsize(9,6); //interior number of corners
    bool patternfound = cv::findChessboardCorners(frame, patternsize, corners, cv::CALIB_CB_ADAPTIVE_THRESH + cv::CALIB_CB_NORMALIZE_IMAGE + cv::CALIB_CB_FAST_CHECK); //finding them corners
    if(patternfound == false) { //gotta know
        qDebug() << "failure";
    }
    if(patternfound) {
        qDebug() << "success!";
        std::vector<cv::Point3f> objectCorners; //low priority issue - if I don't do this here, it becomes empty. Not sure why.
        for(int y=0; y<6; ++y) {
            for(int x=0; x<9; ++x) {
                objectCorners.push_back(cv::Point3f(x*28, y*28, 0)); //filling the array
            }
        }
        cv::cornerSubPix(frame, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
        cv::cvtColor(frame, frame, CV_GRAY2BGR); //I don't want gray lines
        imagePoints.push_back(corners); //filling array of arrays with pixel coord array
        objectPoints.push_back(objectCorners); //filling array of arrays with real life coord array, or rather copies of the same thing over and over
        cout << corners << endl << objectCorners;
        cout << endl << objectCorners.size() << "___" << objectPoints.size() << "___" << corners.size() << "___" << imagePoints.size() << endl;
        cv::drawChessboardCorners(frame, patternsize, cv::Mat(corners), patternfound); //drawing.
        if(success > 5) {
            double rms = cv::calibrateCamera(objectPoints, corners, cv::Size(sizeX, sizeY), intrinsicMatrix, distortCoeffs, rvecs, tvecs, cv::CALIB_USE_INTRINSIC_GUESS);
            //error - caused by passing CORNERS instead of IMAGEPOINTS. Also, imageSize is 640x480, and I've set the central point to 1310... etc
            cout << endl << intrinsicMatrix << endl << distortCoeffs << endl;
            cout << "\nrms - " << rms << endl;
        }
        success = success + 1;
        //cv::imshow("Distorted", frame);
        //cv::imshow("Undistorted", testTwo);
    }
}
I've done a little bit of reading (this was an especially informative read), including over a dozen threads made here on StackOverflow, and all I found is that this error is produced either by uneven imagePoints and objectPoints, or by them being partially null, empty, or zero (plus links to tutorials that don't help). None of that is the case; the output from the .size() check is:
54___7___54___7
For objectCorners (real-life coords) and objectPoints (number of arrays inserted), and the same for corners (pixel coords) and imagePoints. They're not empty either; the output is:
(...)
277.6792, 208.92903;
241.83429, 208.93048;
206.99866, 208.84637;
(...)
84, 56, 0;
112, 56, 0;
140, 56, 0;
168, 56, 0;
(...)
A sample frame:
I know it's a mess, but so far I'm trying to complete the code rather than get an accurate reading.
Each one has exactly 54 lines of that. Does anyone have any idea what is causing the error? I'm using OpenCV 2.4.8 and Qt Creator 5.4 on Windows 7.
First of all, corners and imagePoints have to be switched, as you have already noticed.
In most cases (if not all), 25 images or fewer are enough to get a good result. A focal length around 633 is not weird: it means the focal length is 633 × the pixel size. The CCD or CMOS specifications must be somewhere in the instructions that came with your camera. Find the pixel size, multiply it by 633, and the result is your focal length.
One suggestion to reduce the number of images used: use images taken from different viewpoints. 10 images from 10 different viewpoints bring a much better result than 100 images from the same (or nearby) viewpoints. That is one of the reasons why video is not a good input: I guess that with your code, all the images passed to calibrateCamera may be from nearby viewpoints. If so, the calibration accuracy degrades.
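For reference, a minimal sketch of the corrected call: the accumulated imagePoints is passed instead of the last frame's corners, and the intrinsic guess is dropped since its principal point does not match the 640x480 frames:
if (success > 5) {
    double rms = cv::calibrateCamera(objectPoints, imagePoints,
                                     cv::Size(sizeX, sizeY),
                                     intrinsicMatrix, distortCoeffs,
                                     rvecs, tvecs);
    std::cout << "rms - " << rms << std::endl;
}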
I am writing a C++ Application using the OpenCV library to detect objects in images. These images look like this:
http://fs1.directupload.net/images/150311/my6uczfn.png
The upper part of the image, which is black, can be ignored.
I know that every pixel which is not part of a desired object will be colored white. What I am trying to do is find out how many objects of interest are in an image and where they are.
Up until now I wrote the following code:
Mat image = imread("2.png", CV_LOAD_IMAGE_COLOR);
if(!image.data)
{
    std::cout << "Could not open or find the image." << std::endl;
    return -1; // bail out: nothing sensible can be done without the image
}
Range range_rows(0, image.size().height);
Range range_columns_left(0, image.size().width);
Range range_columns_middle(image.size().width, image.size().width * 2);
Range range_columns_right(image.size().width * 2, image.size().width * 3);
Mat display_mat(image.size().height, image.size().width * 3, CV_8UC3);
Mat left(display_mat, range_rows, range_columns_left);
image.copyTo(left);
Mat classified_image;
threshold(image, classified_image, 254, 255, THRESH_BINARY);
Mat middle(display_mat, range_rows, range_columns_middle);
classified_image.copyTo(middle);
Mat cimage = Mat::zeros(image.size(), CV_8UC3);
Mat classified_grayscale_image;
cvtColor(classified_image, classified_grayscale_image, CV_RGB2GRAY);
std::vector< std::vector<cv::Point> > contours;
findContours(classified_grayscale_image, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
for(size_t counter = 0; counter < contours.size(); counter++)
{
    std::cout << "Contours size: " << contours[counter].size() << std::endl;
    if(contours[counter].size() < 6)
        continue;
    Mat pointsf;
    Mat(contours[counter]).convertTo(pointsf, CV_32F);
    RotatedRect box = fitEllipse(pointsf);
    drawContours(cimage, contours, (int)counter, Scalar::all(255), 1, 8);
    ellipse(cimage, box, Scalar(0,0,255), 1, CV_AA);
    std::cout << "Ellipse Parameter:\t";
    ellipse(cimage, box.center, box.size*0.5f, box.angle, 0, 360, Scalar(0,255,255), 1, CV_AA);
    Point2f vtx[4];
    box.points(vtx);
    for( int j = 0; j < 4; j++ )
        line(cimage, vtx[j], vtx[(j+1)%4], Scalar(0,255,0), 1, CV_AA);
}
Mat right(display_mat, range_rows, range_columns_right);
cimage.copyTo(right);
namedWindow("Results", CV_WINDOW_AUTOSIZE);
imshow("Results", display_mat);
waitKey(0);
return 0;
The result looks like this:
http://fs1.directupload.net/images/150311/toiy3aes.png
As you can see, the classification of what is an object and what is not is not perfect, so two objects are recognized as one. The classification will be improved, but something like this can happen when objects are very close, and it is even more of a problem when they are touching each other.
How can I do a proper object recognition in the case shown above? Any ideas?
You have got a few options:
use some filtering/thresholding method on your result image to split the objects from each other. Otsu binarization should be enough in this case; alternatively, you can try the dilate operation.
invert your result image and then use the distance transform and Otsu binarization (or some other kind of thresholding; most of them should work fine). It will make your objects smaller but will make counting them much easier (see the sketch after this list).
if you need to mark the objects as precisely as possible, you need to use a more complicated method. Here is a tutorial which uses the techniques I've described above plus connected components and watershed.
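A minimal sketch of the second option; the variable names are assumptions, and binary is the thresholded result with dark objects on a white background:
cv::Mat inverted;
cv::bitwise_not(binary, inverted); // objects become white on black
// Distance to the nearest background pixel: bright ridges at object centers
cv::Mat dist;
cv::distanceTransform(inverted, dist, CV_DIST_L2, 3);
// Bring it into 8-bit range so Otsu thresholding can be applied
cv::normalize(dist, dist, 0, 255, cv::NORM_MINMAX);
dist.convertTo(dist, CV_8U);
// Otsu picks the threshold; the result is shrunken, well-separated blobs
cv::Mat peaks;
cv::threshold(dist, peaks, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
// Counting the separated blobs is now easy
std::vector<std::vector<cv::Point> > contours;
cv::findContours(peaks, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
std::cout << "Objects: " << contours.size() << std::endl;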
I have been working on the analysis of images with OpenCV. What I'm trying to do is recognize the lane dividing lines. I do the following:
1. Receive an image
2. Transform it to grayscale
3. Apply a GaussianBlur
4. Select the region of interest (ROI)
5. Apply Canny
6. Look for lines with the Hough line transform
7. Draw the lines obtained from Hough
But I've run into a problem: it does not recognize all the lane dividing lines, and it does not recognize the yellow lines at all.
I hope you can help me solve this problem; I would be very grateful.
Here is the code:
#include "opencv2/highgui/highgui.hpp"
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <vector>
#include <stdio.h>
#include "linefinder.h"
using namespace cv;
int main(int argc, char* argv[]) {
int houghVote = 200;
string arg = argv[1];
Mat image;
image = imread(argv[1]);
Mat gray;
cvtColor(image,gray,CV_RGB2GRAY);
GaussianBlur( gray, gray, Size( 5, 5 ), 0, 0 );
vector<string> codes;
Mat corners;
findDataMatrix(gray, codes, corners);
drawDataMatrixCodes(image, codes, corners);
//Mat image = imread("");
//Rect region_of_interest = Rect(x, y, w, h);
//Mat image_roi = image(region_of_interest);
std::cout << image.cols << "\n";
std::cout << image.rows << "\n";
Rect roi(0,290,640,190);// set the ROI for the image
Mat imgROI = image(roi);
// Display the image
imwrite("original.bmp", imgROI);
// Canny algorithm
Mat contours;
Canny(imgROI, contours, 120, 300, 3);
imwrite("canny.bmp", contours);
Mat contoursInv;
threshold(contours,contoursInv,128,255,THRESH_BINARY_INV);
// Display Canny image
imwrite("contours.bmp", contoursInv);
/*
Hough transform for line detection with feedback.
Increase by 25 for the next frame if we found some lines,
so we don't miss other lines that may crop up in the next frame,
but at the same time we don't want to start the feedback loop from scratch.
*/
std::vector<Vec2f> lines;
if (houghVote < 1 || lines.size() > 2) { // we lost all lines. reset
    houghVote = 200;
} else {
    houghVote += 25;
}
while (lines.size() < 5 && houghVote > 0) {
    HoughLines(contours, lines, 1, CV_PI/180, houghVote); // was PI/180: PI is undefined, CV_PI is the OpenCV constant
    houghVote -= 5;
}
std::cout << houghVote << "\n";
Mat result(imgROI.size(),CV_8U,Scalar(255));
imgROI.copyTo(result);
// Draw the lines
std::vector<Vec2f>::const_iterator it= lines.begin();
Mat hough(imgROI.size(),CV_8U,Scalar(0));
while (it != lines.end()) {
    float rho = (*it)[0];   // first element is distance rho
    float theta = (*it)[1]; // second element is angle theta
    if ( (theta > 0.09 && theta < 1.48) || (theta < 3.14 && theta > 1.66) ) {
        // filter to remove vertical and horizontal lines
        // point of intersection of the line with first row
        Point pt1(rho/cos(theta), 0);
        // point of intersection of the line with last row
        Point pt2((rho - result.rows*sin(theta))/cos(theta), result.rows);
        // draw a white line
        line(result, pt1, pt2, Scalar(255), 8);
        line(hough, pt1, pt2, Scalar(255), 8);
    }
    ++it;
}
// Display the detected line image
std::cout << "line image:"<< "\n";
namedWindow("Detected Lines with Hough");
imwrite("hough.bmp", result);
// Create LineFinder instance
LineFinder ld;
// Set probabilistic Hough parameters
ld.setLineLengthAndGap(60,10);
ld.setMinVote(4);
// Detect lines
std::vector<Vec4i> li= ld.findLines(contours);
Mat houghP(imgROI.size(),CV_8U,Scalar(0));
ld.setShift(0);
ld.drawDetectedLines(houghP);
std::cout << "First Hough" << "\n";
imwrite("houghP.bmp", houghP);
// bitwise AND of the two hough images
bitwise_and(houghP,hough,houghP);
Mat houghPinv(imgROI.size(),CV_8U,Scalar(0));
Mat dst(imgROI.size(),CV_8U,Scalar(0));
threshold(houghP,houghPinv,150,255,THRESH_BINARY_INV); // threshold and invert to black lines
namedWindow("Detected Lines with Bitwise");
imshow("Detected Lines with Bitwise", houghPinv);
Canny(houghPinv,contours,100,350);
li= ld.findLines(contours);
// Display Canny image
imwrite("contours.bmp", contoursInv);
// Set probabilistic Hough parameters
ld.setLineLengthAndGap(5,2);
ld.setMinVote(1);
ld.setShift(image.cols/3);
ld.drawDetectedLines(image);
std::stringstream stream;
stream << "Lines Segments: " << lines.size();
putText(image, stream.str(), Point(10,image.rows-10), 2, 0.8, Scalar(0,0,255),0);
imwrite("processed.bmp", image);
char key = (char) waitKey(10);
lines.clear();
}
The following are the input images, respectively:
Here I show two photos: one where the white line is recognized, and another where the yellow line is not. I need to recognize the dividing lines because I am monitoring the lane, but it is proving complicated and not all dividing lines are recognized. I hope you can help me; I have honestly tried everything but have not had good results.
I think it's because you are combining both the probabilistic Hough and the regular Hough transforms with a bitwise AND. This means that the output image will only contain lines that appear in both of these transforms. I'm pretty sure the line is not detected in the regular transform, but it is detected in the probabilistic Hough output. Your best bet is to output both transforms separately and debug. I'm doing a similar project; I imagine you could include a separate ROI to exclude from the bitwise AND, and that area would be along the centre of the lane markings.
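In the posted code, that debugging step could look like this, showing each Hough result before they are combined:
// Inspect both line images separately to see which transform misses a line
imshow("Standard Hough", hough);
imshow("Probabilistic Hough", houghP);
waitKey(0);
bitwise_and(houghP, hough, houghP); // only lines present in BOTH images survive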
My goal is to recognize all the shapes present in an image.
The idea is:
Extract contours
Fit each contour with different shapes
The correct shape should be the one with area closest to the contour's area.
Example image:
I use fitEllipse() to find the best fit ellipse to the contours, but the result is a bit messy:
The likely-correct ellipses are filled with blue, and the bounding ellipses are yellow.
The likely-incorrect contours are filled with green, and the (wrong) bounding ellipses are cyan.
As you can see, the ellipse bounding the triangle in the first row looks pretty good as a best fit. The bounding ellipse of the triangle in the third row doesn't seem to be the best fit, but it is still acceptable as a criterion for rejecting an incorrect ellipse.
But I can't understand why the remaining triangles have bounding ellipses completely outside their contours.
And the worst case is the third triangle in the last row: the ellipse is completely wrong, but it happens to have an area close to the contour's area, so the triangle is wrongly recognized as an ellipse.
Am I missing anything? My code:
#include <iostream>
#include <opencv/cv.h>
#include <opencv/highgui.h>
using namespace std;
using namespace cv;
void getEllipses(vector<vector<Point> >& contours, vector<RotatedRect>& ellipses) {
    ellipses.clear();
    Mat img(Size(800,500), CV_8UC3);
    for (unsigned i = 0; i < contours.size(); i++) {
        if (contours[i].size() >= 5) {
            RotatedRect temp = fitEllipse(Mat(contours[i]));
            // area() is presumably a user-defined helper returning the ellipse area
            if (area(temp) <= 1.1 * contourArea(contours[i])) {
                //cout << area(temp) << " < 1.1* " << contourArea(contours[i]) << endl;
                ellipses.push_back(temp);
                drawContours(img, contours, i, Scalar(255,0,0), -1, 8);
                ellipse(img, temp, Scalar(0,255,255), 2, 8);
                imshow("Ellipses", img);
                waitKey();
            } else {
                //cout << "Reject ellipse " << i << endl;
                drawContours(img, contours, i, Scalar(0,255,0), -1, 8);
                ellipse(img, temp, Scalar(255,255,0), 2, 8);
                imshow("Ellipses", img);
                waitKey();
            }
        }
    }
}
int main() {
Mat img = imread("image.png", CV_8UC1);
threshold(img, img, 127,255,CV_THRESH_BINARY);
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(img, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
vector<RotatedRect> ellipses;
getEllipses(contours, ellipses);
return 0;
}
Keep in mind that fitEllipse is not the computation of a bounding ellipse but a least-squares optimization that assumes the points lie on an ellipse.
I can't tell you why it fails so badly on the 3 triangles in the last row but "works" on the triangle one row above; one thing I've seen, though, is that all 3 triangles in the last row were fitted to a RotatedRect with angle 0. Probably the least-squares fitting just failed there.
But I don't know whether there is a bug in the OpenCV implementation or whether the algorithm can't handle those cases. This is the algorithm used: http://www.bmva.org/bmvc/1995/bmvc-95-050.pdf
My advice is to only use fitEllipse if you are quite sure that the points really belong to an ellipse. You wouldn't expect reasonable results from fitLine on random data points either. Other functions you might want to look at are minAreaRect and minEnclosingCircle.
If you use RotatedRect temp = minAreaRect(Mat(contours[i])); instead of fitEllipse, you will get an image like this:
Maybe you can even use both methods: reject all candidates that fail both tests, accept all that pass both, and investigate further the ones where the two methods differ, as in the sketch below.
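A minimal sketch of that cross-check; ellipseArea() is a hypothetical helper (presumably what the question's area() computes), and 1.1 is the question's own acceptance threshold:
// Hypothetical helper: area of the ellipse described by a RotatedRect
double ellipseArea(const RotatedRect& r) {
    return CV_PI * 0.25 * r.size.width * r.size.height;
}
// Inside the contour loop:
RotatedRect fitted  = fitEllipse(Mat(contours[i]));
RotatedRect bounded = minAreaRect(Mat(contours[i]));
double cArea = contourArea(contours[i]);
bool okFit  = ellipseArea(fitted)  <= 1.1 * cArea;
bool okRect = ellipseArea(bounded) <= 1.1 * cArea;
if (okFit && okRect) {
    // accept: both methods agree this is an ellipse
} else if (!okFit && !okRect) {
    // reject: both methods agree it is not
} else {
    // the two methods differ: investigate further
}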
Changing cv::CHAIN_APPROX_SIMPLE to cv::CHAIN_APPROX_NONE in the call to cv::findContours() gives me much more reasonable results.
It makes sense that we would get a better ellipse approximation with more points included in the contour, but I am still not sure why the results are so far off with the simple chain approximation. See the OpenCV docs for an explanation of the difference.
It appears that when using cv::CHAIN_APPROX_SIMPLE, the relatively horizontal edges of the triangles are almost completely removed from the contour.
As to your classification of best fit, as others have pointed out, using only the area will give you the results you observe, since positioning is not taken into account at all.
If you are having problems with cv::fitEllipse(), this post discusses a few methods to minimize the errors that occur when the cv::RotatedRect is drawn directly without any further tests. It turns out cv::fitEllipse() is not perfect and can have issues, as noted in the question.
Now, it's not entirely clear what the constraints of the project are, but another way to solve this problem is to separate these shapes based on the area of the contours:
This approach is extremely simple yet effective in this specific case: the area of a circle varies between 1300 and 1699, and the area of a triangle between 1 and 1299.
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
int main()
{
cv::Mat img = cv::imread("input.png");
if (img.empty())
{
    std::cout << "!!! Failed to open image" << std::endl;
    return -1;
}
/* Convert to grayscale */
cv::Mat gray;
cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
/* Convert to binary */
cv::Mat thres;
cv::threshold(gray, thres, 127, 255, cv::THRESH_BINARY);
/* Find contours */
std::vector<std::vector<cv::Point> > contours;
cv::findContours(thres, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);
int circles = 0;
int triangles = 0;
for (size_t i = 0; i < contours.size(); i++)
{
    // Draw a contour based on the size of its area:
    //   - Area > 0 and < 1300 means it's a triangle;
    //   - Area >= 1300 and < 1700 means it's a circle;
    double area = cv::contourArea(contours[i]);
    if (area > 0 && area < 1300)
    {
        std::cout << "* Triangle #" << ++triangles << " area: " << area << std::endl;
        cv::drawContours(img, contours, (int)i, cv::Scalar(0, 255, 0), -1, 8); // filled (green)
        cv::drawContours(img, contours, (int)i, cv::Scalar(0, 0, 255), 2, 8);  // outline (red)
    }
    else if (area >= 1300 && area < 1700)
    {
        std::cout << "* Circle #" << ++circles << " area: " << area << std::endl;
        cv::drawContours(img, contours, (int)i, cv::Scalar(255, 0, 0), -1, 8); // filled (blue)
        cv::drawContours(img, contours, (int)i, cv::Scalar(0, 0, 255), 2, 8);  // outline (red)
    }
    else
    {
        std::cout << "* Ignoring area: " << area << std::endl;
        continue;
    }
    cv::imshow("OBJ", img);
    cv::waitKey(0);
}
cv::imwrite("output.png", img);
return 0;
}
You can invoke other functions to draw more precise outlines (borders) of the shapes.
It may be a better idea to do a pixel-by-pixel comparison, i.e. compute what percentage of overlap there is between the contour and the "fitted" ellipse.
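A minimal sketch of such an overlap measure, with names assumed from the question's code: draw the contour and the fitted ellipse filled on blank masks, then compare them:
Mat contourMask = Mat::zeros(img.size(), CV_8UC1);
Mat ellipseMask = Mat::zeros(img.size(), CV_8UC1);
drawContours(contourMask, contours, (int)i, Scalar(255), -1); // filled contour
ellipse(ellipseMask, fitted, Scalar(255), -1);                // filled fitted ellipse
Mat overlap, combined;
bitwise_and(contourMask, ellipseMask, overlap);
bitwise_or(contourMask, ellipseMask, combined);
// Fraction of the union covered by the intersection: close to 1.0 means a good fit
double score = (double)countNonZero(overlap) / (double)countNonZero(combined);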
Another, simpler idea is to also compare the centroids of the contour and its ellipse fit.
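And a sketch of the centroid comparison, again with assumed names; moments give the contour's center of mass, which can be compared with the fitted ellipse's center:
Moments mom = moments(contours[i]);
// Contour centroid (assumes mom.m00 != 0, i.e. a non-degenerate contour)
Point2f centroid((float)(mom.m10 / mom.m00), (float)(mom.m01 / mom.m00));
// Distance between contour centroid and ellipse center; a value that is large
// relative to the contour size suggests a bad fit
double dx = centroid.x - fitted.center.x;
double dy = centroid.y - fitted.center.y;
double dist = std::sqrt(dx*dx + dy*dy);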