I am working in C++ with OpenCV.
I am detecting the biggest contour in an image because it contains a black area.
In this case the black area runs horizontally, but it can be anywhere in the image.
Mat resultGray;
cvtColor(result,resultGray, COLOR_BGR2GRAY);
medianBlur(resultGray,resultGray,3);
Mat resultTh;
Mat canny_output;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
Canny( resultGray, canny_output, 100, 100*2, 3 );
findContours( canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
vector<Point> best = contours[0];
double max_area = -1;
for( size_t i = 0; i < contours.size(); i++ ) {
    double area = contourArea(contours[i]);
    if(area > max_area)
    {
        max_area = area;
        best = contours[i];
    }
}
Mat approxCurve;
approxPolyDP(Mat(best),approxCurve,0.01*arcLength(Mat(best),true),true);
With this I have the biggest contour and its approximation (in approxCurve). Now I want to obtain the corners of this approximation and extract the image inside this contour, but I don't know how to do it.
I am using this: How to remove black part from the image?
But I don't understand the last part very well.
Does anyone know how I can obtain the corners? Is there another, simpler way than this?
Thanks for your time,
One much simpler way you could do that is to check the image pixels and find the minimum/maximum coordinates of non-black pixels.
Something like this:
int maxx, maxy, minx, miny;
maxx = maxy = std::numeric_limits<int>::min(); // start low so any x/y raises it (requires <limits>)
minx = miny = std::numeric_limits<int>::max(); // start high so any x/y lowers it
for(int y = 0; y < img.rows; ++y)
{
    for(int x = 0; x < img.cols; ++x)
    {
        const cv::Vec3b &px = img.at<cv::Vec3b>(y, x);
        if(px(0) == 0 && px(1) == 0 && px(2) == 0)
            continue; // skip black pixels
        if(x < minx) minx = x;
        if(x > maxx) maxx = x;
        if(y < miny) miny = y;
        if(y > maxy) maxy = y;
    }
}
cv::Mat subimg;
// cv::Rect(pt1, pt2) treats pt2 as exclusive, hence the +1 to include the last row/column
img(cv::Rect(cv::Point(minx, miny), cv::Point(maxx + 1, maxy + 1))).copyTo(subimg);
In my opinion, this approach is more reliable since you don't have to detect any contour, which could lead to false detections depending on the input image.
A very efficient alternative: sample the original image until you find a non-black pixel, and from there move along its row and its column to find the first (0,0,0) pixel in each direction. This works unless the good part of the image can also contain (0,0,0) pixels. If that is the case (e.g. a dead pixel), you can add a double check on the neighbourhood of the (0,0,0) pixel (it should contain other (0,0,0) pixels).
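A minimal sketch of that inside-out scan, assuming a BGR image img whose valid region is a single axis-aligned rectangle containing the centre pixel; the helper names isBlack and findValidRect are made up for illustration:
#include <opencv2/opencv.hpp>

// Sketch: starting from a known-good pixel, walk outwards along its row and
// column until the first (0,0,0) pixel in each direction.
static bool isBlack(const cv::Mat &img, int y, int x)
{
    const cv::Vec3b &px = img.at<cv::Vec3b>(y, x);
    return px[0] == 0 && px[1] == 0 && px[2] == 0;
}

cv::Rect findValidRect(const cv::Mat &img)
{
    int cy = img.rows / 2, cx = img.cols / 2; // assumed non-black starting point
    int left = cx, right = cx, top = cy, bottom = cy;
    while (left > 0 && !isBlack(img, cy, left - 1)) --left;
    while (right < img.cols - 1 && !isBlack(img, cy, right + 1)) ++right;
    while (top > 0 && !isBlack(img, top - 1, cx)) --top;
    while (bottom < img.rows - 1 && !isBlack(img, bottom + 1, cx)) ++bottom;
    return cv::Rect(left, top, right - left + 1, bottom - top + 1);
}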
How can I grow bright pixel in grey region?
Input: [image]
Output: [image]
My answer is somewhat less helpful than my usual efforts, but it is hard to get up enthusiasm for questions with so little effort...
You can solve your issue by using OpenCV findContours() - documentation here. You will need to be sure to use the retrieval mode CV_RETR_TREE.
You then need to write a loop that iterates through all the contours found. In the loop, you need to look for a contour that:
a) is coloured white, and
b) has a parent that is coloured grey.
There is a decent explanation of how the hierarchy works here.
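As a rough sketch of that parent check (with RETR_TREE, hierarchy[i][3] holds the index of contour i's parent, or -1 at the top level), the loop skeleton could look like this:
// Sketch: hierarchy[i][3] is the parent contour's index (-1 = no parent)
for (size_t i = 0; i < contours.size(); i++)
{
    int parent = hierarchy[i][3];
    if (parent >= 0)
    {
        // contours[i] is nested inside contours[parent]; sample a pixel on
        // each one to decide whether this is a white contour with a grey parent
    }
}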
Mat im = imread("ask.png", 0); // load as greyscale
Mat mat = (im == 255);         // binary mask of the pure-white pixels
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours( mat, contours, hierarchy, RETR_TREE, CHAIN_APPROX_SIMPLE);
for( size_t i = 0; i < contours.size(); i++ )
{
    // flood-fill from a point on each contour; the fixed range keeps the fill local
    floodFill(mat, contours[i].at(0), 255, 0, Scalar(128), Scalar(255), FLOODFILL_FIXED_RANGE);
}
mat = (mat == 255); // output image
I have an image with a curved line, like this:
I couldn't find a technique to straighten a curved line using OpenCV. It is similar to this post, Straightening a curved contour, but my question is specifically about coding it with OpenCV (C++ preferred).
So far, I'm only able to find the contour of the curved line.
int main()
{
    Mat src, src_gray;
    src = imread("D:/2.jpg");
    cvtColor(src, src_gray, COLOR_BGR2GRAY);
    cv::blur(src_gray, src_gray, Size(1, 15));
    Canny(src_gray, src_gray, 100, 200, 3);

    /// Find contours
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    findContours(src_gray, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

    /// Draw contours
    Mat drawing = Mat::zeros(src_gray.size(), CV_8UC3);
    for (size_t i = 0; i < contours.size(); i++)
    {
        drawContours(drawing, contours, (int)i, Scalar(255, 255, 255), 1, 8, hierarchy, 0, Point()); // draw in white
    }

    imshow("Result window", drawing);
    imwrite("D:/C_Backup_Folder/Ivan_codes/VideoStitcher/result/2_res.jpg", drawing);
    cv::waitKey();
    return 0;
}
But I have no idea how to determine which line is curved and which is not, and how to straighten it. Is it possible? Any help would be appreciated.
Here is my suggestion:
Before everything, resize your image into a much bigger one (for example, 5 times bigger). Then do what you did before and get the contours. Find the right-most pixel of each contour, then survey all pixels of that contour, compute the horizontal distance of each pixel to the right-most pixel, and shift that pixel's entire row by this distance. This method shifts some rows to the right and others to the left.
If you have multiple contours, calculate this shift value for every one of them in every single row, compute their mean, and shift each row according to that mean value.
At the end, resize your image back down. This is the simplest and fastest thing I could think of.
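A minimal, untested sketch of that per-row shift, assuming a single contour in an 8-bit single-channel image; the function name straightenByRowShift is made up for illustration:
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Sketch: shift every row so the contour lines up with its right-most pixel.
// Rows without a contour pixel keep a shift of 0.
cv::Mat straightenByRowShift(const cv::Mat &img, const std::vector<cv::Point> &contour)
{
    // find the right-most contour pixel
    cv::Point rightMost = *std::max_element(contour.begin(), contour.end(),
        [](const cv::Point &a, const cv::Point &b) { return a.x < b.x; });

    // per-row shift = horizontal distance from that row's contour pixel to rightMost
    std::vector<int> rowShift(img.rows, 0);
    for (const cv::Point &p : contour)
        rowShift[p.y] = rightMost.x - p.x;

    cv::Mat straightened = cv::Mat::zeros(img.size(), img.type());
    for (int y = 0; y < img.rows; ++y)
    {
        for (int x = 0; x < img.cols; ++x)
        {
            int nx = x + rowShift[y];
            if (nx >= 0 && nx < img.cols)
                straightened.at<uchar>(y, nx) = img.at<uchar>(y, x);
        }
    }
    return straightened;
}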
I am trying to detect a rectangle using findContours, but I don't get any contours from the following image.
I can't detect any contours in the image. Is findContours a bad fit for this image, or should I use the Hough transform instead?
UPDATE: I have updated the source code to use an approximated polygon,
but I still get the outlier bounding rect; I can't find the smallest rectangle that is in the screenshot.
I have another case where the current solution doesn't work, even when adding erosion or dilation.
image 2
And here is the code:
using namespace cv;
using namespace std;
cv::Mat input = cv::imread("heightmap.png");
RNG rng(12345);
// convert to grayscale (you could load as grayscale instead)
cv::Mat gray;
cv::cvtColor(input,gray, CV_BGR2GRAY);
// compute mask (you could use a simple threshold if the image is always as good as the one you provided)
cv::Mat mask;
cv::threshold(gray, mask, 0, 255,CV_THRESH_OTSU);
cv::namedWindow("threshold");
cv::imshow("threshold",mask);
// find contours (if always so easy to segment as your image, you could just add the black/rect pixels to a vector)
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(mask,contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
cv::Mat drawing = cv::Mat::zeros( mask.size(), CV_8UC3 );
vector<vector<cv::Point> > contours_poly( contours.size() );
vector<cv::Rect> boundRect( contours.size() );
for( size_t i = 0; i < contours.size(); i++ )
{
    approxPolyDP( cv::Mat(contours[i]), contours_poly[i], 3, true );
    boundRect[i] = boundingRect( cv::Mat(contours_poly[i]) );
}
for( size_t i = 0; i < contours.size(); i++ )
{
    cv::Scalar color = cv::Scalar( rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255) );
    rectangle( drawing, boundRect[i].tl(), boundRect[i].br(), color, 2, 8, 0 );
}
// display
cv::imshow("input", input);
cv::imshow("drawing", drawing);
cv::waitKey(0);
The code you are using looks like it's from this question.
It uses the BinaryInv threshold type because it's detecting a black shape on a white background.
Your example is the opposite, so you should tweak your code to use the Binary threshold type instead (or negate the image).
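A minimal sketch of that tweak, reusing gray and mask from your code; pick the flag that leaves the shape white on a black background:
// white shape on black background:
cv::threshold(gray, mask, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
// or, for a black shape on white background:
// cv::threshold(gray, mask, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);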
Without this fix, findContours will detect the perimeter of the image, which will be the biggest contour.
So I don't think the code is failing to detect contours; it's just not detecting the "biggest contour" you expect.
Even with that fixed, the code you posted won't fit a rectangle to the rectangle in your example image, as the most obvious rectangular feature doesn't have a clean border. The approxPolyDP suggestion in the linked question might help, but you'll have to improve the source image.
See this question for a comparison of this and Hough methods for finding rectangles.
Edit
You should be able to separate the rectangle in your example image from the other blob by calling erode with a 3x3 kernel twice.
You'll have to replace selecting the biggest contour with selecting the squarest; a rough sketch follows.
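A sketch of both steps, reusing mask from your code; the "squarest" criterion here (contour area divided by bounding-rect area) is my own illustrative choice:
// erode twice with a 3x3 kernel to detach the rectangle from the other blob
cv::Mat eroded;
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
cv::erode(mask, eroded, kernel, cv::Point(-1, -1), 2);

// pick the contour that best fills its bounding rect instead of the biggest one
std::vector<std::vector<cv::Point> > cs;
cv::findContours(eroded, cs, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
int bestIdx = -1;
double bestRatio = 0;
for (size_t i = 0; i < cs.size(); i++)
{
    cv::Rect r = cv::boundingRect(cs[i]);
    double ratio = cv::contourArea(cs[i]) / (double)r.area();
    if (ratio > bestRatio) { bestRatio = ratio; bestIdx = (int)i; }
}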
I have a photo where a person holds a sheet of paper. I'd like to detect the rectangle of that sheet of paper.
I have tried following different tutorials from OpenCV and various SO answers and sample code for detecting squares / rectangles, but the problem is that they all rely on contours of some kind.
If I follow the squares.cpp example, I get the following results from contours:
As you can see, the fingers are part of the contour, so the algorithm does not find the square.
I also tried the HoughLines() approach, but I get results similar to the above:
I can detect the corners reliably, though:
There are other corners in the image, but I'm limiting the total corners found to < 50, and the corners of the sheet of paper are always found.
Is there some algorithm for finding a rectangle from multiple corners in an image? I can't seem to find an existing approach.
You can apply a morphological filter to close the gaps in your edge image. Then, if you find the contours, you can detect an inner closed contour, as shown below. Then find the convex hull of this contour to get the rectangle.
Closed edges:
Contour:
Convex hull:
In the code below I've just used an arbitrary kernel size for the morphological filter and filtered out the contour of interest using an area-ratio threshold. You can use your own criteria instead.
Code
Mat im = imread("Sh1Vp.png", 0); // the edge image
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(11, 11));
Mat morph;
morphologyEx(im, morph, CV_MOP_CLOSE, kernel);
int rectIdx = 0;
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
findContours(morph, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
for (size_t idx = 0; idx < contours.size(); idx++)
{
    RotatedRect rect = minAreaRect(contours[idx]);
    double areaRatio = abs(contourArea(contours[idx])) / (rect.size.width * rect.size.height);
    if (areaRatio > .95)
    {
        rectIdx = idx;
        break;
    }
}
// get the convexhull of the contour
vector<Point> hull;
convexHull(contours[rectIdx], hull, false, true);
// visualization
Mat rgb;
cvtColor(im, rgb, CV_GRAY2BGR);
drawContours(rgb, contours, rectIdx, Scalar(0, 0, 255), 2);
for (size_t i = 0; i < hull.size(); i++)
{
    line(rgb, hull[i], hull[(i + 1) % hull.size()], Scalar(0, 255, 0), 2);
}
What I'm trying to do is measure the thickness of eyeglasses frames. I had the idea to measure the thickness of the frame's contours (there may be a better way?). So far I have outlined the frame of the glasses, but there are gaps where the lines don't meet. I thought about using HoughLinesP, but I'm not sure if this is what I need.
So far I have conducted the following steps:
Convert image to grayscale
Create ROI around the eye/glasses area
Blur the image
Dilate the image (have done this to remove any thin framed glasses)
Conduct Canny edge detection
Found contours
These are the results:
This is my code so far:
//convert to grayscale
cv::Mat grayscaleImg;
cv::cvtColor( img, grayscaleImg, CV_BGR2GRAY );
//create ROI
cv::Mat eyeAreaROI(grayscaleImg, centreEyesRect);
cv::imshow("roi", eyeAreaROI);
//blur
cv::Mat blurredROI;
cv::blur(eyeAreaROI, blurredROI, Size(3,3));
cv::imshow("blurred", blurredROI);
//dilate thin lines
cv::Mat dilated_dst;
int dilate_elem = 0;
int dilate_size = 1;
int dilate_type = MORPH_RECT;
cv::Mat element = getStructuringElement(dilate_type,
                                        cv::Size(2*dilate_size + 1, 2*dilate_size + 1),
                                        cv::Point(dilate_size, dilate_size));
cv::dilate(blurredROI, dilated_dst, element);
cv::imshow("dilate", dilated_dst);
//edge detection
int lowThreshold = 100;
int ratio = 3;
int kernel_size = 3;
cv::Canny(dilated_dst, dilated_dst, lowThreshold, lowThreshold*ratio, kernel_size);
//create matrix of the same type and size as ROI
Mat dst;
dst.create(eyeAreaROI.size(), dilated_dst.type());
dst = Scalar::all(0);
dilated_dst.copyTo(dst, dilated_dst);
cv::imshow("edges", dst);
//join the lines and fill in
vector<Vec4i> hierarchy;
vector<vector<Point>> contours;
cv::findContours(dilated_dst, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
cv::imshow("contours", dilated_dst);
I'm not entirely sure what the next steps would be, or as I said above, if I should use HoughLinesP and how to implement it. Any help is very much appreciated!
I think there are two main problems:
1. segment the glasses frame
2. find the thickness of the segmented frame
I'll now post a way to segment the glasses in your sample image. Maybe this method will work for different images too, but you'll probably have to adjust the parameters, or you might be able to use the main ideas.
Main idea is:
First, find the biggest contour in the image, which should be the glasses. Second, find the two biggest contours within the previously found biggest contour, which should be the two glasses within the frame!
I use this image as input (which should be your blurred but not dilated image):
// this function finds the biggest X contours. Probably there are faster ways, but it should work...
std::vector<std::vector<cv::Point>> findBiggestContours(std::vector<std::vector<cv::Point>> contours, int amount)
{
    std::vector<std::vector<cv::Point>> sortedContours;

    if (amount <= 0) amount = contours.size();
    if (amount > (int)contours.size()) amount = contours.size();

    for (int chosen = 0; chosen < amount; )
    {
        double biggestContourArea = 0;
        int biggestContourID = -1;
        for (unsigned int i = 0; i < contours.size(); ++i)
        {
            double tmpArea = cv::contourArea(contours[i]);
            if (tmpArea > biggestContourArea)
            {
                biggestContourArea = tmpArea;
                biggestContourID = i;
            }
        }

        if (biggestContourID >= 0)
        {
            //std::cout << "found area: " << biggestContourArea << std::endl;
            // found the biggest remaining contour; add it to the sorted contours vector:
            sortedContours.push_back(contours[biggestContourID]);
            chosen++;

            // remove the biggest contour from the original vector (swap-and-pop):
            contours[biggestContourID] = contours.back();
            contours.pop_back();
        }
        else
        {
            // should never happen except for broken contours with size 0?!?
            return sortedContours;
        }
    }
    return sortedContours;
}
int main()
{
    cv::Mat input = cv::imread("../Data/glass2.png", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat inputColors = cv::imread("../Data/glass2.png"); // used for displaying later
    cv::imshow("input", input);

    // edge detection
    int lowThreshold = 100;
    int ratio = 3;
    int kernel_size = 3;

    cv::Mat canny;
    cv::Canny(input, canny, lowThreshold, lowThreshold*ratio, kernel_size);
    cv::imshow("canny", canny);

    // close gaps with "close operator"
    cv::Mat mask = canny.clone();
    cv::dilate(mask, mask, cv::Mat());
    cv::dilate(mask, mask, cv::Mat());
    cv::dilate(mask, mask, cv::Mat());
    cv::erode(mask, mask, cv::Mat());
    cv::erode(mask, mask, cv::Mat());
    cv::erode(mask, mask, cv::Mat());
    cv::imshow("closed mask", mask);
    // extract outermost contour
    std::vector<cv::Vec4i> hierarchy;
    std::vector<std::vector<cv::Point>> contours;
    //cv::findContours(mask, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
    cv::findContours(mask, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    // find biggest contour which should be the outer contour of the frame
    std::vector<std::vector<cv::Point>> biggestContour;
    biggestContour = findBiggestContours(contours, 1); // find the one biggest contour
    if (biggestContour.size() < 1)
    {
        std::cout << "Error: no outer frame of glasses found" << std::endl;
        return 1;
    }

    // draw contour on an empty image
    cv::Mat outerFrame = cv::Mat::zeros(mask.rows, mask.cols, CV_8UC1);
    cv::drawContours(outerFrame, biggestContour, 0, cv::Scalar(255), -1);
    cv::imshow("outer frame border", outerFrame);
    // now find the glasses, which should be the outer contours within the frame. Therefore erode the outer border ;)
    cv::Mat glassesMask = outerFrame.clone();
    cv::erode(glassesMask, glassesMask, cv::Mat());
    cv::imshow("eroded outer", glassesMask);

    // after erosion, if we dilate, it's an open operator which can be used to clean the image.
    cv::Mat cleanedOuter;
    cv::dilate(glassesMask, cleanedOuter, cv::Mat());
    cv::imshow("cleaned outer", cleanedOuter);

    // use the outer frame mask as a mask for copying canny edges. The result should be the inner edges inside the frame only
    cv::Mat glassesInner;
    canny.copyTo(glassesInner, glassesMask);

    // there is a small gap in the contour which unfortunately can't be closed with a closing operator...
    cv::dilate(glassesInner, glassesInner, cv::Mat());
    //cv::erode(glassesInner, glassesInner, cv::Mat());
    // this part was cheated... in fact we would like to erode directly after dilation to not modify the thickness but just close small gaps.
    cv::imshow("innerCanny", glassesInner);
    // extract contours from within the frame
    std::vector<cv::Vec4i> hierarchyInner;
    std::vector<std::vector<cv::Point>> contoursInner;
    //cv::findContours(glassesInner, contoursInner, hierarchyInner, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
    cv::findContours(glassesInner, contoursInner, hierarchyInner, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    // find the two biggest contours which should be the glasses within the frame
    std::vector<std::vector<cv::Point>> biggestInnerContours;
    biggestInnerContours = findBiggestContours(contoursInner, 2); // find the two biggest contours
    if (biggestInnerContours.size() < 1)
    {
        std::cout << "Error: no inner frames of glasses found" << std::endl;
        return 1;
    }

    // draw the 2 biggest contours which should be the inner glasses
    cv::Mat innerGlasses = cv::Mat::zeros(mask.rows, mask.cols, CV_8UC1);
    for (unsigned int i = 0; i < biggestInnerContours.size(); ++i)
        cv::drawContours(innerGlasses, biggestInnerContours, i, cv::Scalar(255), -1);
    cv::imshow("inner frame border", innerGlasses);

    // since we dilated earlier and didn't erode right afterwards, we have to erode here... this is a bit of cheating :-(
    cv::erode(innerGlasses, innerGlasses, cv::Mat());

    // remove the inner glasses from the frame mask
    cv::Mat fullGlassesMask = cleanedOuter - innerGlasses;
    cv::imshow("complete glasses mask", fullGlassesMask);
    // color code the result to get an impression of segmentation quality
    cv::Mat outputColors1 = inputColors.clone();
    cv::Mat outputColors2 = inputColors.clone();
    for (int y = 0; y < fullGlassesMask.rows; ++y)
    {
        for (int x = 0; x < fullGlassesMask.cols; ++x)
        {
            if (!fullGlassesMask.at<unsigned char>(y, x))
                outputColors1.at<cv::Vec3b>(y, x)[1] = 255;
            else
                outputColors2.at<cv::Vec3b>(y, x)[1] = 255;
        }
    }
    cv::imshow("output", outputColors1);

    /*
    cv::imwrite("../Data/Output/face_colored.png", outputColors1);
    cv::imwrite("../Data/Output/glasses_colored.png", outputColors2);
    cv::imwrite("../Data/Output/glasses_fullMask.png", fullGlassesMask);
    */

    cv::waitKey(-1);
    return 0;
}
I get this result for segmentation:
The overlay on the original image will give you an impression of the quality:
and the inverse:
There are some tricky parts in the code and it isn't tidied up yet. I hope it's understandable.
The next step would be to compute the thickness of the segmented frame. My suggestion is to compute the distance transform of the inverted mask. From this you will want to run a ridge detection or skeletonize the mask to find the ridge. After that, use the median value of the ridge distances.
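A rough, untested sketch of that measurement, assuming fullGlassesMask from the code above is the binary frame mask; here the skeleton/ridge step is approximated by keeping local maxima of the distance map, and the helper name estimateThickness is made up:
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Sketch: thickness ~ 2 * median distance-to-background along the mask's ridge.
double estimateThickness(const cv::Mat &frameMask) // 8-bit, frame pixels = 255
{
    cv::Mat dist;
    cv::distanceTransform(frameMask, dist, cv::DIST_L2, 3);

    // keep ridge pixels: points equal to the local maximum of the distance map
    cv::Mat dilated;
    cv::dilate(dist, dilated, cv::Mat());

    std::vector<float> ridgeDistances;
    for (int y = 0; y < dist.rows; ++y)
        for (int x = 0; x < dist.cols; ++x)
        {
            float d = dist.at<float>(y, x);
            if (d > 0 && d == dilated.at<float>(y, x))
                ridgeDistances.push_back(d);
        }
    if (ridgeDistances.empty())
        return 0.0;

    std::nth_element(ridgeDistances.begin(),
                     ridgeDistances.begin() + ridgeDistances.size() / 2,
                     ridgeDistances.end());
    // distance is measured to the nearest background pixel, so double it
    return 2.0 * ridgeDistances[ridgeDistances.size() / 2];
}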
Anyway, I hope this post can help you a little, although it's not a full solution yet.
Depending on lighting, frame colour, etc., this may or may not work, but how about simple colour detection to separate the frame? Frame colour will usually be a lot darker than human skin. You'll end up with a binary image (just black and white), and by calculating the number (area) of black pixels you get the area of the frame.
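A minimal sketch of that idea, assuming a BGR crop img of the eye region; the darkness threshold of 60 is an arbitrary assumption you would need to tune:
// Sketch: treat sufficiently dark pixels as "frame", then count them
cv::Mat frameMask;
cv::inRange(img, cv::Scalar(0, 0, 0), cv::Scalar(60, 60, 60), frameMask);
int frameArea = cv::countNonZero(frameMask); // number of dark ("frame") pixels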
Another possible way is to get better edge detection by adjusting/dilating/eroding (or both) until you get better contours. You will also need to differentiate the contour from the lenses and then apply cvContourArea.