How to segment objects after OpenCV connectedComponents function - C++

I have obtained a labeling with the connectedComponents function of OpenCV in C++, which looks like the picture below:
This is the output of the ccLabels variable, which is a cv::Mat of the same size as the original image.
So what I need to do is:
Count the occurrences of each number, and select the ones that occur more than N times, which are the "big" ones.
Segment the areas of the "big" components, and then count the number of 4's and 0's inside each area.
My ultimate aim is to count the number of holes in the image, so I aim to infer the number of holes from (number of 0's / number of 4's). This is probably not the prettiest way, but the images are very uniform in terms of size and illumination, so it will meet my needs.
But I'm new to OpenCV and I don't have much idea how to accomplish this task.
Here is what I've done so far:
cv::Mat1b outImg;
cv::threshold(grayImg, outImg, 150, 255, 0); // Thresholded -binary- image
cv::Mat ccLabels;
cv::connectedComponents(outImg, ccLabels); // Each non-zero pixel is labeled with its connected component ID
// write the labels to file:
std::ofstream myfile;
myfile.open("ccLabels.txt");
cv::Size s = ccLabels.size();
myfile << "Size: " << s.height << " , " << s.width <<"\n";
for (int r1 = 0; r1 < s.height; r1++) {
    for (int c1 = 0; c1 < s.width; c1++) { // was s.height: columns run up to the width
        myfile << ccLabels.at<int>(r1, c1);
    }
    myfile << "\n";
}
myfile.close();
Since I know how to iterate inside the matrix, counting the numbers should be OK, but first I have to separate (eliminate/ignore) the "background" pixels, which are the 0's outside the connected components. Then counting should be easy.
How can I segment these "big" components? Maybe by obtaining a mask and only considering pixels where mask(x, y) == 1?
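A minimal sketch of this mask idea (assuming OpenCV 3.0+, where connectedComponentsWithStats is available; N is the occurrence threshold from above):
cv::Mat ccLabels, stats, centroids;
int nLabels = cv::connectedComponentsWithStats(outImg, ccLabels, stats, centroids);
for (int label = 1; label < nLabels; ++label) // label 0 is the background
{
    // keep only the "big" components
    if (stats.at<int>(label, cv::CC_STAT_AREA) <= N) continue;
    cv::Mat mask = (ccLabels == label); // 255 where the pixel belongs to this component
    // ... count the values of interest only where mask is non-zero ...
}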
Thanks for any help!
Edit
This is the thresholded image:
And this is what I get after Canny edge detection:
This is the actual image (thresholded):

Here is a simple procedure to find the numbers on the dice, starting from your thresholded image:
find external contours
for each contour:
optionally discard small blobs
draw the filled mask
use AND and XOR to isolate internal holes
find contours again
count contours
Result:
Number: 5
Number: 2
Image:
Code:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>
using namespace std;
using namespace cv;
int main(void)
{
    // Grayscale image
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);

    // Minimum area of the contour
    double minContourArea = 10;

    // Prepare output
    Mat3b result;
    cvtColor(img, result, COLOR_GRAY2BGR);

    // Find contours
    vector<vector<Point>> contours;
    findContours(img.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

    for (int i = 0; i < contours.size(); ++i)
    {
        // Check area
        if (contourArea(contours[i]) < minContourArea) continue;

        // Black mask
        Mat1b mask(img.rows, img.cols, uchar(0));

        // Draw filled contour
        drawContours(mask, contours, i, Scalar(255), CV_FILLED);

        // AND keeps the die's white pixels inside the contour; XOR with the filled mask leaves only the internal holes (the pips)
        mask = (mask & img) ^ mask;

        vector<vector<Point>> cntrs;
        findContours(mask, cntrs, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
        cout << "Number: " << cntrs.size() << endl;

        // Just for showing results
        drawContours(result, cntrs, -1, Scalar(0, 0, 255), CV_FILLED);
    }

    imshow("Result", result);
    waitKey();

    return 0;
}

The easier way is the findContours method. Find the inner contours and calculate their areas (since the inner contours will be holes), then process this information accordingly.
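A minimal sketch of this idea, assuming outImg is the thresholded binary image from the question:
// With RETR_CCOMP the hierarchy has two levels, so every contour that has a
// parent is an inner contour, i.e. a hole.
std::vector<std::vector<cv::Point>> contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(outImg.clone(), contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
int holes = 0;
for (size_t i = 0; i < contours.size(); ++i)
{
    if (hierarchy[i][3] >= 0) // index 3 is the parent; >= 0 means "has a parent"
    {
        double holeArea = cv::contourArea(contours[i]);
        ++holes;
        // ... process holeArea as needed ...
    }
}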

To solve your first problem, suppose you have a set of values in a vector called values. Count the occurrences of each number that has appeared:
int m = 0;
for (int n = 0; n < 256; n++)
{
    int c = 0;
    for (int q = 0; q < values.size(); q++)
    {
        if (n == values[q])
        {
            c++; // occurrences of the current value n
            m++; // total number of elements counted
        }
    }
    cout << n << " = " << c << endl;
}
cout << "Total number of elements " << m << endl;
To solve your second problem, find the largest contour in the image using findContours, draw a bounding rectangle around it, and then crop to it. Again, use the above code to count the pixels with value "4" and "0". You can find a related example here: https://stackoverflow.com/a/32998275/3853072
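A rough sketch of that second step (contours from findContours on the thresholded image, ccLabels from the question; names are illustrative):
// Find the largest contour, take its bounding rectangle, and crop to it.
int largest = -1;
double largestArea = 0;
for (int i = 0; i < (int)contours.size(); ++i) // assumes contours is non-empty
{
    double a = cv::contourArea(contours[i]);
    if (a > largestArea) { largestArea = a; largest = i; }
}
cv::Rect box = cv::boundingRect(contours[largest]);
cv::Mat cropped = ccLabels(box); // count the 4's and 0's inside this crop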

Related

Detect the coin and from there measure the leaf in the image

Friends, I am trying to detect the coin first, since I already know its area is 1.011 cm². And then measure the leaves in the image.
I am using findContours, but I am not always able to distinguish the coin first. I have also tried to use HoughCircles, but it is not working in my case. Would anyone have any ideas?
OpenCV 4.5.0, C++
My code:
//variables for segmentation image
cv::Mat imagem_original, imagem_gray, imagem_binaria, imagem_inRange, imagem_threshold, dst, src;
vector<Vec3f> circles;
cv::Scalar min_color = Scalar(50, 50, 50);
cv::Scalar max_color = Scalar(90, 120, 180);
imagem_original = load_image("IMG_1845.jpg");
//imshow("Imagem Original", imagem_original);
cv::cvtColor(imagem_original, imagem_gray, COLOR_BGR2GRAY);
//imshow("imagem_gray", imagem_gray);
//cv::inRange(imagem_gray, min_color, max_color, imagem_inRange);
cv::threshold(imagem_gray, imagem_threshold, 0, 255, THRESH_BINARY_INV | THRESH_OTSU);
imshow(" Threshold", imagem_threshold);
// find outer contours in the image; these should be the circles!
cv::Mat conts = imagem_threshold.clone();
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(conts, contours, hierarchy, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE, cv::Point(0, 0));
int total_IAF = 0;
cout << "\n\n";
cout << contours.size() << "\n\n";
for (int i = 0; i < contours.size(); i++) {
    double area = contourArea(contours[i]); // contourArea returns a double
    if (area <= 10) {
        cv::drawContours(imagem_original, contours, i, Scalar(0, 0, 255));
    }
    else {
        cout << area << "\n";
        cv::drawContours(imagem_original, contours, i, Scalar(255, 0, 0));
    }
    if (area > 5000) {
        total_IAF += contourArea(contours[i]);
    }
}
imshow(" ORIGINAL ", imagem_original);
double iAF_cm2 = total_IAF / 4658.0; // 4658.0: avoid integer division
cout << "\n\n TOTAL AREA IAF: " << total_IAF;
cout << "\n IAF in cm2: " << iAF_cm2 << " cm2\n\n";
If your setup has a constant white-ish/gray-ish background and green leaves, I'd use the HSV color space: detect all objects using the S channel (the green leaves and the golden part of the coin will have significantly more saturation than the background), and then distinguish between the coin and the leaves using the H channel (the green leaves will have hue values around 45). What remains is to determine the image areas of all contours, and use the coin's image area as a reference to convert the object areas, given the coin's known area of 1.011 cm².
That's the saturation channel of the given image:
The saturation channel thresholded at 64:
That's the hue channel of the image:
Here's some code executing the above idea:
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main()
{
    // Read image
    cv::Mat img = cv::imread("Wcj1R.jpg", cv::IMREAD_COLOR);

    // Convert image to HSV color space, and split H, S, V channels
    cv::Mat img_hsv;
    cv::cvtColor(img, img_hsv, cv::COLOR_BGR2HSV);
    std::vector<cv::Mat> hsv;
    cv::split(img_hsv, hsv);

    // Binary threshold S channel at fixed threshold
    cv::Mat img_thr;
    cv::threshold(hsv[1], img_thr, 64, 255, cv::THRESH_BINARY);

    // Find most outer contours only
    std::vector<std::vector<cv::Point>> cnts;
    std::vector<cv::Vec4i> hier;
    cv::findContours(img_thr.clone(), cnts, hier, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);

    // Iterate found contours
    std::vector<cv::Point> cnt_centers;
    std::vector<double> cnt_areas;
    double ref_area = -1;
    for (int i = 0; i < cnts.size(); i++)
    {
        // Current contour
        std::vector<cv::Point> cnt = cnts[i];

        // If contour is too small, discard
        if (cnt.size() < 100)
            continue;

        // Calculate and store center (just for visualization) and area of contour
        cv::Moments m = cv::moments(cnt);
        cnt_centers.push_back(cv::Point(m.m10 / m.m00 - 30, m.m01 / m.m00));
        cnt_areas.push_back(cv::contourArea(cnt));

        // Check H channel, whether the contour's image parts are mostly green
        cv::Mat mask = hsv[0].clone().setTo(cv::Scalar(0));
        cv::drawContours(mask, cnts, i, cv::Scalar(255), cv::FILLED);
        double h_mean = cv::mean(hsv[0], mask)[0];

        // If it's not mostly green, that's the coin, thus the reference area
        if (h_mean < 40 || h_mean > 50)
            ref_area = cv::contourArea(cnt);
    }

    // Iterate all contours again
    for (int i = 0; i < cnt_centers.size(); i++)
    {
        // Calculate actual object area
        double area = cnt_areas[i] / ref_area * 1.011;

        // Put area on image w.r.t. the contour's center
        cv::putText(img, std::to_string(area), cnt_centers[i], cv::FONT_HERSHEY_COMPLEX_SMALL, 1, cv::Scalar(255, 255, 255));
    }

    return 0;
}
And, that'd be the output:
Your code finds all contours in an image and shows them, so I'm confused about the meaning of "detect the coin first".
If you want to draw the contour of the coin first, sort the contours vector by size. The coin is the smallest object, so it would be the first element of the vector after sorting. (Of course, some unwanted contours should be removed before sorting.)
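A minimal sketch of that sorting (requires <algorithm>; assumes the unwanted tiny contours were already filtered out):
// Sort contours by area, ascending: the coin (smallest object) comes first.
std::sort(contours.begin(), contours.end(),
          [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
              return cv::contourArea(a) < cv::contourArea(b);
          });
// contours[0] should now be the coin
cv::drawContours(imagem_original, contours, 0, cv::Scalar(0, 255, 255), 2);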

How to detect contour self-intersection with C++ and OpenCV?

I need to test a contour for self-intersection, but I don't know how to implement it. Or how can I detect only contours without self-intersection in a cv::Mat?
For example, the left contour should be matched, the right contour should not be matched.
Here is a solution:
Skeleton + pruning => reduce the contours to a single pixel width
For each pixel, compute the number of neighbors
If a pixel has more than 2 neighbors, then it is in the middle of an intersection (a minimal sketch of this test follows after this list).
(optional) Connected component labeling in order to separate the different shapes.
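A minimal sketch of the neighbor-count test, assuming skel is a 0/255, one-pixel-wide skeleton produced by the first step:
// Sum each pixel's 8-neighborhood in one pass with a correlation filter.
cv::Mat skel01 = skel / 255; // convert to a 0/1 image
cv::Mat kernel = (cv::Mat_<float>(3, 3) << 1, 1, 1,
                                           1, 0, 1,
                                           1, 1, 1);
cv::Mat neighbors;
cv::filter2D(skel01, neighbors, -1, kernel);
// Branch points are skeleton pixels with more than 2 neighbors.
cv::Mat branchPoints = (neighbors > 2) & (skel01 > 0);
bool selfIntersecting = cv::countNonZero(branchPoints) > 0;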
You can also use a Hough transform.
If the lines are represented by a polygon (you know the corner points), you may draw the lines on an accumulation matrix.
Declare an new blank cv::Mat of type CV_8UC1 and initialize it with zero values. For every pixel between the two lines, increment the matrix by 1.
I am not sure if using the cv::line method is the best way to accomplish this task (you may create a new image for every line and sum up all the images as the final step). The best way that I can think of is to increment the points by using the equation of the line.
When you draw lines that intersect, in the accumulation matrix you'll have values of 2. If you find them, you'll know that the contour has self-intersections and you also know where they are.
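Here is a minimal sketch of this accumulation idea, assuming the contour is given as a closed polygon of corner points (and that <opencv2/opencv.hpp> and <vector> are included). One caveat: adjacent edges legitimately meet at their shared vertex, so vertex pixels are cleared before checking for overlaps:
bool hasSelfIntersection(const std::vector<cv::Point>& poly, cv::Size imgSize)
{
    cv::Mat acc = cv::Mat::zeros(imgSize, CV_8UC1);
    for (size_t i = 0; i < poly.size(); ++i)
    {
        // draw each edge on its own layer, then accumulate the layers
        cv::Mat layer = cv::Mat::zeros(imgSize, CV_8UC1);
        cv::line(layer, poly[i], poly[(i + 1) % poly.size()], cv::Scalar(1), 1);
        acc += layer;
    }
    // ignore the polygon's own corners, where neighboring edges meet
    for (size_t i = 0; i < poly.size(); ++i)
        cv::circle(acc, poly[i], 1, cv::Scalar(0), -1);
    // any remaining pixel crossed by two or more edges is a self-intersection
    double minVal, maxVal;
    cv::minMaxLoc(acc, &minVal, &maxVal);
    return maxVal >= 2;
}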
If you have the image as an input, then the previously mentioned solution might work.
Best regards!
I tried my best to implement it but couldn't, as I lack the logic to code it. The logic I tried: you have the set of points of the contours. Now check the occurrence of each point, i.e. how many times each point has appeared; if it has appeared more than one time, that indicates an intersection point.
Let me know if I'm wrong.
The code I tried isn't working for this logic; maybe someone can help you with it.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;
using namespace std;
RNG rng(12345);
int main()
{
    Mat image;
    image = imread("0.png", CV_LOAD_IMAGE_COLOR); // Read the file
    if (!image.data) // Check for invalid input
    {
        cout << "Could not open or find the image" << std::endl;
        return -1;
    }
    cvtColor(image, image, CV_BGR2GRAY);
    namedWindow("Display window12", WINDOW_AUTOSIZE); // Create a window for display.
    imshow("Display window12", image);

    Mat drawing;
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    findContours(image, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

    int m = 1;
    vector<Point> contours1;
    // flatten all contour points into a single vector
    for (int i = 0; i < contours.size(); i++)
    {
        for (int j = 0; j < contours[i].size(); j++)
        {
            contours1.push_back(Point(contours[i][j]));
        }
    }
    cout << contours.size();

    // Finding the occurrence of each point (attempt, not working):
    //for (int i = 0; i < contours.size(); i++)
    //{
    //    for (int j = 0; j < contours[i].size(); j++)
    //    {
    //        if (contours[i] == contours1.at(i).x)
    //            cout << "hi";
    //    }
    //}

    namedWindow("Display window", WINDOW_AUTOSIZE); // Create a window for display.
    imshow("Display window", image);
    waitKey(0); // Wait for a keystroke in the window
    return 0;
}

Lane Detector divider lines c ++ with OpenCV

I have been working on image analysis with OpenCV. What I'm trying to do is recognize the lane dividing lines. What I do is the following:
1. I receive an image
2. Then transform it to grayscale
3. I apply GaussianBlur
4. Then I set the ROI
5. I apply Canny
6. Then I look for lines with the Hough line transform
7. Draw the lines obtained from Hough
But I've run into a problem: it does not recognize all of the lane dividing lines, and it does not recognize the yellow lines at all.
I hope you can help me solve this problem; I would be very grateful.
Here is the code:
#include "opencv2/highgui/highgui.hpp"
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <vector>
#include <stdio.h>
#include "linefinder.h"
using namespace cv;
int main(int argc, char* argv[]) {
    int houghVote = 200;
    string arg = argv[1];
    Mat image;
    image = imread(argv[1]);
    Mat gray;
    cvtColor(image, gray, CV_BGR2GRAY); // imread loads BGR, so convert from BGR
    GaussianBlur(gray, gray, Size(5, 5), 0, 0);
    vector<string> codes;
    Mat corners;
    findDataMatrix(gray, codes, corners);
    drawDataMatrixCodes(image, codes, corners);
    //Rect region_of_interest = Rect(x, y, w, h);
    //Mat image_roi = image(region_of_interest);
    std::cout << image.cols << "\n";
    std::cout << image.rows << "\n";
    Rect roi(0, 290, 640, 190); // set the ROI for the image
    Mat imgROI = image(roi);
    // Save the ROI image
    imwrite("original.bmp", imgROI);
    // Canny algorithm
    Mat contours;
    Canny(imgROI, contours, 120, 300, 3);
    imwrite("canny.bmp", contours);
    Mat contoursInv;
    threshold(contours, contoursInv, 128, 255, THRESH_BINARY_INV);
    // Save Canny image
    imwrite("contours.bmp", contoursInv);
    /*
    Hough transform for line detection with feedback:
    increase by 25 for the next frame if we found some lines,
    so we don't miss other lines that may crop up in the next frame,
    but at the same time we don't want to start the feedback loop from scratch.
    */
    std::vector<Vec2f> lines;
    if (houghVote < 1 || lines.size() > 2) { // we lost all lines; reset
        houghVote = 200;
    } else {
        houghVote += 25;
    }
    while (lines.size() < 5 && houghVote > 0) {
        HoughLines(contours, lines, 1, CV_PI/180, houghVote); // CV_PI instead of the undefined PI
        houghVote -= 5;
    }
    std::cout << houghVote << "\n";
    Mat result(imgROI.size(), CV_8U, Scalar(255));
    imgROI.copyTo(result);
    // Draw the lines
    std::vector<Vec2f>::const_iterator it = lines.begin();
    Mat hough(imgROI.size(), CV_8U, Scalar(0));
    while (it != lines.end()) {
        float rho = (*it)[0];   // first element is distance rho
        float theta = (*it)[1]; // second element is angle theta
        // filter to remove vertical and horizontal lines
        if ((theta > 0.09 && theta < 1.48) || (theta < 3.14 && theta > 1.66)) {
            // point of intersection of the line with first row
            Point pt1(rho/cos(theta), 0);
            // point of intersection of the line with last row
            Point pt2((rho - result.rows*sin(theta))/cos(theta), result.rows);
            // draw a white line
            line(result, pt1, pt2, Scalar(255), 8);
            line(hough, pt1, pt2, Scalar(255), 8);
        }
        ++it;
    }
    // Save the detected line image
    std::cout << "line image:" << "\n";
    namedWindow("Detected Lines with Hough");
    imwrite("hough.bmp", result);
    // Create LineFinder instance
    LineFinder ld;
    // Set probabilistic Hough parameters
    ld.setLineLengthAndGap(60, 10);
    ld.setMinVote(4);
    // Detect lines
    std::vector<Vec4i> li = ld.findLines(contours);
    Mat houghP(imgROI.size(), CV_8U, Scalar(0));
    ld.setShift(0);
    ld.drawDetectedLines(houghP);
    std::cout << "First Hough" << "\n";
    imwrite("houghP.bmp", houghP);
    // bitwise AND of the two Hough images
    bitwise_and(houghP, hough, houghP);
    Mat houghPinv(imgROI.size(), CV_8U, Scalar(0));
    Mat dst(imgROI.size(), CV_8U, Scalar(0));
    threshold(houghP, houghPinv, 150, 255, THRESH_BINARY_INV); // threshold and invert to black lines
    namedWindow("Detected Lines with Bitwise");
    imshow("Detected Lines with Bitwise", houghPinv);
    Canny(houghPinv, contours, 100, 350);
    li = ld.findLines(contours);
    // Save Canny image
    imwrite("contours.bmp", contoursInv);
    // Set probabilistic Hough parameters
    ld.setLineLengthAndGap(5, 2);
    ld.setMinVote(1);
    ld.setShift(image.cols/3);
    ld.drawDetectedLines(image);
    std::stringstream stream;
    stream << "Line Segments: " << lines.size();
    putText(image, stream.str(), Point(10, image.rows - 10), 2, 0.8, Scalar(0, 0, 255), 0);
    imwrite("processed.bmp", image);
    char key = (char) waitKey(10);
    lines.clear();
}
The following are the input images respectively:
Here I show two photos: one where the white line is recognized and another where the yellow line is not recognized. What I need is to recognize all the dividing lines because I monitor the lane, but it is complicated for me and it does not recognize the presence of all dividing lines. I hope you can help me because I have honestly tried everything but have not had good results.
I think it's because you are doing a bitwise AND of the probabilistic Hough and regular Hough outputs. This means that the output image will only contain lines that appear in both of these transforms. I'm pretty sure that in the regular transform the line is not detected, but in the probabilistic Hough output the line is detected. Your best bet is to output both transforms separately and debug. I'm doing a similar project; I imagine you could include a separate ROI to exclude from the bitwise AND, and that area would be along the center of the lane markings.

Glasses detection

What I'm trying to do is measure the thickness of the eyeglasses frames. I had the idea to measure the thickness of the frame's contours (maybe there is a better way?). I have so far outlined the frame of the glasses, but there are gaps where the lines don't meet. I thought about using HoughLinesP, but I'm not sure if this is what I need.
So far I have conducted the following steps:
Convert image to grayscale
Create ROI around the eye/glasses area
Blur the image
Dilate the image (have done this to remove any thin framed glasses)
Conduct Canny edge detection
Found contours
These are the results:
This is my code so far:
//convert to grayscale
cv::Mat grayscaleImg;
cv::cvtColor( img, grayscaleImg, CV_BGR2GRAY );
//create ROI
cv::Mat eyeAreaROI(grayscaleImg, centreEyesRect);
cv::imshow("roi", eyeAreaROI);
//blur
cv::Mat blurredROI;
cv::blur(eyeAreaROI, blurredROI, Size(3,3));
cv::imshow("blurred", blurredROI);
//dilate thin lines
cv::Mat dilated_dst;
int dilate_elem = 0;
int dilate_size = 1;
int dilate_type = MORPH_RECT;
cv::Mat element = getStructuringElement(dilate_type,
    cv::Size(2*dilate_size + 1, 2*dilate_size + 1),
    cv::Point(dilate_size, dilate_size));
cv::dilate(blurredROI, dilated_dst, element);
cv::imshow("dilate", dilated_dst);
//edge detection
int lowThreshold = 100;
int ratio = 3;
int kernel_size = 3;
cv::Canny(dilated_dst, dilated_dst, lowThreshold, lowThreshold*ratio, kernel_size);
//create matrix of the same type and size as ROI
Mat dst;
dst.create(eyeAreaROI.size(), dilated_dst.type());
dst = Scalar::all(0);
dilated_dst.copyTo(dst, dilated_dst);
cv::imshow("edges", dst);
//join the lines and fill in
vector<Vec4i> hierarchy;
vector<vector<Point>> contours;
cv::findContours(dilated_dst, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
cv::imshow("contours", dilated_dst);
I'm not entirely sure what the next steps would be, or as I said above, if I should use HoughLinesP and how to implement it. Any help is very much appreciated!
I think there are two main problems:
segment the glasses frame
find the thickness of the segmented frame
I'll now post a way to segment the glasses of your sample image. Maybe this method will work for different images too, but you'll probably have to adjust parameters, or you might be able to use the main ideas.
Main idea is:
First, find the biggest contour in the image, which should be the glasses. Second, find the two biggest contours within the previously found biggest contour, which should be the glasses within the frame!
I use this image as input (which should be your blurred but not dilated image):
// this function finds the biggest X contours. Probably there are faster ways, but it should work...
std::vector<std::vector<cv::Point>> findBiggestContours(std::vector<std::vector<cv::Point>> contours, int amount)
{
    std::vector<std::vector<cv::Point>> sortedContours;
    if (amount <= 0) amount = contours.size();
    if (amount > contours.size()) amount = contours.size();

    for (int chosen = 0; chosen < amount; )
    {
        double biggestContourArea = 0;
        int biggestContourID = -1;
        for (unsigned int i = 0; i < contours.size(); ++i)
        {
            double tmpArea = cv::contourArea(contours[i]);
            if (tmpArea > biggestContourArea)
            {
                biggestContourArea = tmpArea;
                biggestContourID = i;
            }
        }
        if (biggestContourID >= 0)
        {
            // found the biggest remaining contour: add it to the sorted vector...
            sortedContours.push_back(contours[biggestContourID]);
            chosen++;
            // ...and remove it from the original vector:
            contours[biggestContourID] = contours.back();
            contours.pop_back();
        }
        else
        {
            // should never happen except for broken contours with size 0?!?
            return sortedContours;
        }
    }
    return sortedContours;
}
int main()
{
    cv::Mat input = cv::imread("../Data/glass2.png", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat inputColors = cv::imread("../Data/glass2.png"); // used for displaying later
    cv::imshow("input", input);

    //edge detection
    int lowThreshold = 100;
    int ratio = 3;
    int kernel_size = 3;
    cv::Mat canny;
    cv::Canny(input, canny, lowThreshold, lowThreshold*ratio, kernel_size);
    cv::imshow("canny", canny);

    // close gaps with "close operator"
    cv::Mat mask = canny.clone();
    cv::dilate(mask, mask, cv::Mat());
    cv::dilate(mask, mask, cv::Mat());
    cv::dilate(mask, mask, cv::Mat());
    cv::erode(mask, mask, cv::Mat());
    cv::erode(mask, mask, cv::Mat());
    cv::erode(mask, mask, cv::Mat());
    cv::imshow("closed mask", mask);
    // extract outermost contour
    std::vector<cv::Vec4i> hierarchy;
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    // find biggest contour which should be the outer contour of the frame
    std::vector<std::vector<cv::Point>> biggestContour;
    biggestContour = findBiggestContours(contours, 1); // find the one biggest contour
    if (biggestContour.size() < 1)
    {
        std::cout << "Error: no outer frame of glasses found" << std::endl;
        return 1;
    }

    // draw contour on an empty image
    cv::Mat outerFrame = cv::Mat::zeros(mask.rows, mask.cols, CV_8UC1);
    cv::drawContours(outerFrame, biggestContour, 0, cv::Scalar(255), -1);
    cv::imshow("outer frame border", outerFrame);

    // now find the glasses which should be the outer contours within the frame. therefore erode the outer border ;)
    cv::Mat glassesMask = outerFrame.clone();
    cv::erode(glassesMask, glassesMask, cv::Mat());
    cv::imshow("eroded outer", glassesMask);

    // after erosion, if we dilate it's an open operator which can be used to clean the image.
    cv::Mat cleanedOuter;
    cv::dilate(glassesMask, cleanedOuter, cv::Mat());
    cv::imshow("cleaned outer", cleanedOuter);
    // use the outer frame mask as a mask for copying canny edges. The result should be the inner edges inside the frame only
    cv::Mat glassesInner;
    canny.copyTo(glassesInner, glassesMask);

    // there is a small gap in the contour which unfortunately can't be closed with a closing operator...
    cv::dilate(glassesInner, glassesInner, cv::Mat());
    //cv::erode(glassesInner, glassesInner, cv::Mat());
    // this part was cheated... in fact we would like to erode directly after dilation to not modify the thickness but just close small gaps.
    cv::imshow("innerCanny", glassesInner);

    // extract contours from within the frame
    std::vector<cv::Vec4i> hierarchyInner;
    std::vector<std::vector<cv::Point>> contoursInner;
    cv::findContours(glassesInner, contoursInner, hierarchyInner, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    // find the two biggest contours which should be the glasses within the frame
    std::vector<std::vector<cv::Point>> biggestInnerContours;
    biggestInnerContours = findBiggestContours(contoursInner, 2); // find the two biggest contours
    if (biggestInnerContours.size() < 1)
    {
        std::cout << "Error: no inner frames of glasses found" << std::endl;
        return 1;
    }

    // draw the 2 biggest contours which should be the inner glasses
    cv::Mat innerGlasses = cv::Mat::zeros(mask.rows, mask.cols, CV_8UC1);
    for (unsigned int i = 0; i < biggestInnerContours.size(); ++i)
        cv::drawContours(innerGlasses, biggestInnerContours, i, cv::Scalar(255), -1);
    cv::imshow("inner frame border", innerGlasses);

    // since we dilated earlier and didn't erode right afterwards, we have to erode here... this is a bit of cheating :-(
    cv::erode(innerGlasses, innerGlasses, cv::Mat());

    // remove the inner glasses from the frame mask
    cv::Mat fullGlassesMask = cleanedOuter - innerGlasses;
    cv::imshow("complete glasses mask", fullGlassesMask);
    // color code the result to get an impression of segmentation quality
    cv::Mat outputColors1 = inputColors.clone();
    cv::Mat outputColors2 = inputColors.clone();
    for (int y = 0; y < fullGlassesMask.rows; ++y)
        for (int x = 0; x < fullGlassesMask.cols; ++x)
        {
            if (!fullGlassesMask.at<unsigned char>(y, x))
                outputColors1.at<cv::Vec3b>(y, x)[1] = 255;
            else
                outputColors2.at<cv::Vec3b>(y, x)[1] = 255;
        }
    cv::imshow("output", outputColors1);

    /*
    cv::imwrite("../Data/Output/face_colored.png", outputColors1);
    cv::imwrite("../Data/Output/glasses_colored.png", outputColors2);
    cv::imwrite("../Data/Output/glasses_fullMask.png", fullGlassesMask);
    */

    cv::waitKey(-1);
    return 0;
}
I get this result for segmentation:
the overlay in original image will give you an impression of quality:
and inverse:
There are some tricky parts in the code and it's not tidied up yet. I hope it's understandable.
The next step would be to compute the thickness of the segmented frame. My suggestion is to compute the distance transform of the inverted mask. From this you will want to compute a ridge detection, or skeletonize the mask to find the ridge. After that, use the median value of the ridge distances.
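A rough sketch of that suggestion (assuming fullGlassesMask from the code above, with frame pixels non-zero, and <algorithm> included for std::nth_element):
// Distance transform: each non-zero (frame) pixel gets its distance to the
// nearest background pixel.
cv::Mat dist;
cv::distanceTransform(fullGlassesMask, dist, CV_DIST_L2, 3);

// Approximate the ridge (medial axis) as the local maxima of the distance map.
cv::Mat dilated;
cv::dilate(dist, dilated, cv::Mat());
cv::Mat ridge = (dist >= dilated) & (fullGlassesMask > 0);

// The median ridge distance is half the typical frame thickness.
std::vector<float> values;
for (int y = 0; y < dist.rows; ++y)
    for (int x = 0; x < dist.cols; ++x)
        if (ridge.at<unsigned char>(y, x))
            values.push_back(dist.at<float>(y, x));
if (!values.empty())
{
    std::nth_element(values.begin(), values.begin() + values.size() / 2, values.end());
    std::cout << "frame thickness ~ " << 2.0f * values[values.size() / 2] << " px" << std::endl;
}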
Anyways I hope this posting can help you a little, although it's not a solution yet.
Depending on lighting, frame color, etc., this may or may not work, but how about simple color detection to separate the frame? Frame color will usually be a lot darker than human skin. You'll end up with a binary image (just black and white), and by calculating the number (area) of black pixels you get the area of the frame.
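A minimal sketch of that idea, assuming img is the BGR input image; the threshold of 60 is a guess that would need tuning per image:
// Dark pixels (the frame) become white in the binary mask.
cv::Mat gray, frameMask;
cv::cvtColor(img, gray, CV_BGR2GRAY);
cv::threshold(gray, frameMask, 60, 255, CV_THRESH_BINARY_INV);
int frameArea = cv::countNonZero(frameMask); // pixel count (area) of the frame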
Another possible way is to get better edge detection by adjusting, dilating, eroding, or both until you get better contours. You will also need to differentiate the contour from the lenses and then apply cvContourArea.

OpenCV's fitEllipse() sometimes returns completely wrong ellipses

My goal is to recognize all the shapes present in an image.
The idea is:
Extract contours
Fit each contour with different shapes
The correct shape should be the one with area closest to the contour's area.
Example image:
I use fitEllipse() to find the best fit ellipse to the contours, but the result is a bit messy:
The likely-correct ellipses are filled with blue, and the bounding ellipses are yellow.
The likely-incorrect contours are filled with green, and the (wrong) bounding ellipses are cyan.
As you can see, the ellipse bounding the triangle in the first row looks pretty good for the best fit. The bounding ellipse of the triangle in the third row doesn't seem to be the best fit, but still acceptable as a criteria for rejecting an incorrect ellipse.
But I can't understand why the remaining triangles have bounding ellipses completely outside their contours.
And the worst case is the third triangle in the last row: the ellipse is completely wrong, but it happens to have an area close to the contour's area, so the triangle is wrongly recognized as an ellipse.
Am I missing anything? My code:
#include <iostream>
#include <opencv/cv.h>
#include <opencv/highgui.h>
using namespace std;
using namespace cv;
// Note: the original post calls an area() helper that is not shown; presumably it
// computes the area of the ellipse described by the RotatedRect, e.g.:
static double area(const RotatedRect& r) {
    return CV_PI / 4.0 * r.size.width * r.size.height;
}

void getEllipses(vector<vector<Point> >& contours, vector<RotatedRect>& ellipses) {
    ellipses.clear();
    Mat img(Size(800, 500), CV_8UC3);
    for (unsigned i = 0; i < contours.size(); i++) {
        if (contours[i].size() >= 5) { // fitEllipse needs at least 5 points
            RotatedRect temp = fitEllipse(Mat(contours[i]));
            if (area(temp) <= 1.1 * contourArea(contours[i])) {
                // accept the ellipse
                ellipses.push_back(temp);
                drawContours(img, contours, i, Scalar(255, 0, 0), -1, 8);
                ellipse(img, temp, Scalar(0, 255, 255), 2, 8);
            } else {
                // reject the ellipse
                drawContours(img, contours, i, Scalar(0, 255, 0), -1, 8);
                ellipse(img, temp, Scalar(255, 255, 0), 2, 8);
            }
            imshow("Ellipses", img);
            waitKey();
        }
    }
}
int main() {
    Mat img = imread("image.png", CV_LOAD_IMAGE_GRAYSCALE); // imread's 2nd argument is a flag, not a Mat type
    threshold(img, img, 127, 255, CV_THRESH_BINARY);
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    findContours(img, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    vector<RotatedRect> ellipses;
    getEllipses(contours, ellipses);
    return 0;
}
Keep in mind that fitEllipse does not compute a bounding ellipse but performs a least-squares optimization that assumes the points lie on an ellipse.
I can't tell you why it fails so badly on the 3 triangles in the last row but "works" on the triangle one line above; one thing I've seen is that all 3 triangles in the last row were fitted to a RotatedRect with angle 0. Probably the least-squares fitting just failed there.
But I don't know whether there is a bug in the OpenCV implementation or whether the algorithm can't handle those cases. This is the algorithm used: http://www.bmva.org/bmvc/1995/bmvc-95-050.pdf
My advice is to only use fitEllipse if you are quite sure that the points really belong to an ellipse. You wouldn't expect reasonable results from fitLine on random data points either. Other functions you might want to look at are minAreaRect and minEnclosingCircle.
If you use RotatedRect temp = minAreaRect(Mat(contours[i])); instead of fitEllipse, you will get an image like this:
Maybe you can even use both methods: reject all ellipses that fail in both versions, accept all that are accepted in both versions, and investigate further the ones where the two disagree.
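A sketch of that combined check, inside the question's contour loop; the 1.1 tolerance is taken from the question's code, and the area() helper is the ellipse-area function defined above:
RotatedRect fitted = fitEllipse(Mat(contours[i]));
RotatedRect rotRect = minAreaRect(Mat(contours[i]));
double cArea = contourArea(contours[i]);
bool ellipseOk = area(fitted) <= 1.1 * cArea;                          // ellipse area vs. contour area
bool rectOk = rotRect.size.width * rotRect.size.height <= 1.1 * cArea; // rectangle area vs. contour area
if (ellipseOk && rectOk) {
    // accept: both methods agree this contour is ellipse-like
} else if (!ellipseOk && !rectOk) {
    // reject: both methods agree it is not
} else {
    // the methods disagree: investigate further
}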
Changing cv::CHAIN_APPROX_SIMPLE to cv::CHAIN_APPROX_NONE in the call to cv::findContours() gives me much more reasonable results.
It makes sense that we would get a better ellipse approximation with more points included in the contour, but I am still not sure why the results are so far off with the simple chain approximation. See the OpenCV docs for an explanation of the difference.
It appears that when using cv::CHAIN_APPROX_SIMPLE, the relatively horizontal edges of the triangles are almost completely removed from the contour.
As to your classification of best fit, as others have pointed out, using only the area will give you the results you observe, since positioning is not taken into account at all.
If you are having problems with cv::fitEllipse(), this post discusses a few methods to minimize those errors that happen when the cv::RotatedRect is drawn directly without any further tests. It turns out cv::fitEllipse() is not perfect and can have issues, as noted in the question.
Now, it's not entirely clear what the constraints of the project are, but another way to solve this problem is to separate these shapes based on the area of the contours:
This approach is extremely simple yet effective in this specific case: the area of a circle varies between 1300 and 1699, and the area of a triangle between 1 and 1299.
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
int main()
{
    cv::Mat img = cv::imread("input.png");
    if (img.empty())
    {
        std::cout << "!!! Failed to open image" << std::endl;
        return -1;
    }

    /* Convert to grayscale */
    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);

    /* Convert to binary */
    cv::Mat thres;
    cv::threshold(gray, thres, 127, 255, cv::THRESH_BINARY);

    /* Find contours */
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(thres, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    int circles = 0;
    int triangles = 0;
    for (size_t i = 0; i < contours.size(); i++)
    {
        // Draw a contour based on the size of its area:
        //   - Area > 0 and < 1300 means it's a triangle;
        //   - Area >= 1300 and < 1700 means it's a circle;
        double area = cv::contourArea(contours[i]);
        if (area > 0 && area < 1300)
        {
            std::cout << "* Triangle #" << ++triangles << " area: " << area << std::endl;
            cv::drawContours(img, contours, i, cv::Scalar(0, 255, 0), -1, 8); // filled (green)
            cv::drawContours(img, contours, i, cv::Scalar(0, 0, 255), 2, 8);  // outline (red)
        }
        else if (area >= 1300 && area < 1700)
        {
            std::cout << "* Circle #" << ++circles << " area: " << area << std::endl;
            cv::drawContours(img, contours, i, cv::Scalar(255, 0, 0), -1, 8); // filled (blue)
            cv::drawContours(img, contours, i, cv::Scalar(0, 0, 255), 2, 8);  // outline (red)
        }
        else
        {
            std::cout << "* Ignoring area: " << area << std::endl;
            continue;
        }

        cv::imshow("OBJ", img);
        cv::waitKey(0);
    }

    cv::imwrite("output.png", img);
    return 0;
}
You can invoke other functions to draw more precise outline (borders) of the shapes.
It may be a better idea to do a pixel-by-pixel comparison, i.e., measure what percentage of overlap there is between the contour and the "fitted" ellipse.
Another, simpler idea is to also compare the centroids of the contour and its ellipse fit.
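A minimal sketch of the pixel-overlap idea, reusing the names from the question's getEllipses (img, contours, i, and the fitted RotatedRect temp):
// Rasterize the contour and the fitted ellipse, then measure their overlap
// (intersection over union): a value near 1.0 means the ellipse really matches.
Mat contourMask = Mat::zeros(img.size(), CV_8UC1);
Mat ellipseMask = Mat::zeros(img.size(), CV_8UC1);
drawContours(contourMask, contours, i, Scalar(255), -1);
ellipse(ellipseMask, temp, Scalar(255), -1);
double inter = countNonZero(contourMask & ellipseMask);
double uni = countNonZero(contourMask | ellipseMask);
double overlap = inter / uni;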