Say I have the following binary image created from the output of cv::watershed():
Now I want to find and fill the contours, so I can separate the corresponding objects from the background in the original image (that was segmented by the watershed function).
To segment the image and find the contours I use the code below:
cv::Mat bgr = cv::imread("test.png");
// Some function that provides the rough outline for the segmented regions.
cv::Mat markers = find_markers(bgr);
cv::watershed(bgr, markers);
cv::Mat_<bool> boundaries(bgr.size());
for (int i = 0; i < bgr.rows; i++) {
for (int j = 0; j < bgr.cols; j++) {
boundaries.at<bool>(i, j) = (markers.at<int>(i, j) == -1);
}
}
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(
boundaries, contours, hierarchy,
CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE
);
So far so good. However, if I pass the contours acquired above to cv::drawContours() as below:
cv::Mat regions = cv::Mat::zeros(bgr.size(), CV_32S);
cv::drawContours(
regions, contours, -1, cv::Scalar::all(255),
CV_FILLED, 8, hierarchy, INT_MAX
);
This is what I get:
The leftmost contour was left open by cv::findContours(), and as a result it is not filled by cv::drawContours().
Now I know this is a consequence of cv::findContours() clipping off the 1-pixel border around the image (as mentioned in the documentation), but what can be done about it? It seems an awful waste to discard a contour just because it happened to brush against the image's border. And anyway, how can I even find which contour(s) fall into this category? cv::isContourConvex() is not a solution in this case; a region can be concave but "closed" and thus not have this problem.
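(For illustration, a minimal sketch of one possible test, not from the original post: since cv::findContours() ignores the outermost row and column of pixels, a contour clipped at the border ends up with a bounding rectangle that comes within one pixel of the image border. The helper name touches_border is hypothetical.)
// Hypothetical helper: flags contours whose bounding rectangle comes
// within one pixel of the image border (where clipping can occur).
bool touches_border(const std::vector<cv::Point> &contour, const cv::Size &size) {
    cv::Rect r = cv::boundingRect(contour);
    return r.x <= 1 || r.y <= 1 ||
           r.x + r.width >= size.width - 1 ||
           r.y + r.height >= size.height - 1;
}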
Edit: About the suggestion to duplicate the pixels from the borders. The problem is that my marking function is also painting all pixels in the "background", i.e. those regions I'm sure aren't part of any object:
This results in a boundary being drawn around the output. If I somehow prevent cv::findContours() from clipping off that boundary:
The boundary for the background gets merged with that leftmost object:
Which results in a nice white-filled box.
Solution number 1: use an image extended by one pixel in each direction:
Mat extended(bgr.size() + Size(2, 2), CV_32S, Scalar(0));
Mat markers = extended(Rect(1, 1, bgr.cols, bgr.rows));
// all your calculation part (fill markers, run cv::watershed, then
// compute `boundaries` over the whole *extended* image, so that the
// watershed borders can never touch the extended image's edge)
std::vector<std::vector<Point> > contours;
findContours(boundaries, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
Mat regions = Mat::zeros(bgr.size(), CV_8U);
drawContours(regions, contours, -1, Scalar(255), CV_FILLED, 8, Mat(), INT_MAX, Point(-1, -1));
Note that the contours were extracted from the extended image, i.e. their x and y values are larger by 1 than they should be. This is why I call drawContours with a (-1,-1) pixel offset.
Solution number 2: OR the white pixels on the image border into the neighboring row/column:
bitwise_or(boundaries.row(0), boundaries.row(1), boundaries.row(1));
bitwise_or(boundaries.col(0), boundaries.col(1), boundaries.col(1));
bitwise_or(boundaries.row(boundaries.rows - 1), boundaries.row(boundaries.rows - 2), boundaries.row(boundaries.rows - 2));
bitwise_or(boundaries.col(boundaries.cols - 1), boundaries.col(boundaries.cols - 2), boundaries.col(boundaries.cols - 2));
Both solutions are somewhat dirty workarounds, but this is all I could think of.
Following Burdinov's suggestions I came up with the code below, which correctly fills all extracted regions while ignoring the all-enclosing boundary:
cv::Mat fill_regions(const cv::Mat &bgr, const cv::Mat &prospective) {
static cv::Scalar WHITE = cv::Scalar::all(255);
int rows = bgr.rows;
int cols = bgr.cols;
// For the given prospective markers, finds
// object boundaries on the given BGR image.
cv::Mat markers = prospective.clone();
cv::watershed(bgr, markers);
// Copies the boundaries of the objects segmented by cv::watershed().
// Ensures there is a minimum distance of 1 pixel between boundary
// pixels and the image border.
cv::Mat borders = cv::Mat::zeros(rows + 2, cols + 2, CV_8U);
for (int i = 0; i < rows; i++) {
uchar *u = borders.ptr<uchar>(i + 1) + 1;
int *v = markers.ptr<int>(i);
for (int j = 0; j < cols; j++, u++, v++) {
*u = (*v == -1);
}
}
// Calculates contour vectors for the boundaries extracted above.
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(
borders, contours, hierarchy,
CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE
);
int area = bgr.size().area();
cv::Mat regions = cv::Mat::zeros(borders.size(), CV_32S);
for (int i = 0, n = contours.size(); i < n; i++) {
// Ignores contours for which the bounding rectangle's
// area equals the area of the original image.
std::vector<cv::Point> &contour = contours[i];
if (cv::boundingRect(contour).area() == area) {
continue;
}
// Draws the selected contour.
cv::drawContours(
regions, contours, i, WHITE,
CV_FILLED, 8, hierarchy, INT_MAX
);
}
// Removes the 1 pixel-thick border added when the boundaries
// were first copied from the output of cv::watershed().
return regions(cv::Rect(1, 1, cols, rows));
}
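For completeness, a minimal usage sketch (find_markers is the marker-providing function from the question):
cv::Mat bgr = cv::imread("test.png");
cv::Mat regions = fill_regions(bgr, find_markers(bgr));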
Related
I have a binary image from which I need to extract only the white regions as contours, but it also picks up a black region that is surrounded by the white part as a contour.
I don't want to use the contour area; can we ignore the black regions while finding contours?
Here is the binary image. The part marked in orange is also considered a contour, so I do not want the black region surrounded by white to be considered a contour.
Contour image is:
My contouring code:
//contouring
vector<vector<Point> > contours;
findContours(img, contours, RETR_LIST, CHAIN_APPROX_SIMPLE);
vector<vector<Point> > contours_poly(contours.size());
vector<Rect> boundRect(contours.size());
vector<Point2f>centers(contours.size());
vector<float>radius(contours.size());
for (size_t i = 0; i < contours.size(); i++)
{
approxPolyDP(contours[i], contours_poly[i], 3, true);
boundRect[i] = boundingRect(contours_poly[i]);
minEnclosingCircle(contours_poly[i], centers[i], radius[i]);
}
Mat drawing = Mat::zeros(img.size(), CV_8UC3);
for (size_t i = 0; i < contours.size(); i++)
{
Scalar color = Scalar(rng.uniform(0, 256), rng.uniform(0, 256), rng.uniform(0, 256));
drawContours(drawing, contours_poly, (int)i, color);
}
Your code snippets do not make up a minimal reproducible example (https://stackoverflow.com/help/minimal-reproducible-example).
Therefore I could not run them for testing.
However, you might be able to achieve what you want by "playing" with the last 2 parameters of cv::findContours.
From the OpenCV documentation of cv::findContours:
mode – Contour retrieval mode, see RetrievalModes
method – Contour approximation method, see ContourApproximationModes
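For instance, RETR_EXTERNAL retrieves only the outermost contours, so a black region enclosed by a white one would not be reported. A minimal sketch, assuming img is your binary 8-bit image:
// Retrieve only the outer contours; holes inside white regions are skipped.
vector<vector<Point> > contours;
findContours(img, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);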
I am using OpenCV C++ to find the contours in a video. I want to count the number of contours present in a specified region of the video, or between two lines drawn on it. For example, a stream of contours is moving through the video and I want to count them when they reach a specific region, and decrease the count as they leave that region. I know the basic stuff: finding contours, calculating the area, etc. But I haven't found any programming tips on counting the number of contours within a specified region. Please help me with the related topics and some tips on programming. (I do not want to use the cvBlob.h library.)
Basically I am counting the number of cars that enter that region. If a car enters I will increment the count, and if it leaves the region I will decrease the count.
You can approximate each contour you found with a polygon or circle:
for (size_t i = 0; i < contours.size(); i++)
{
approxPolyDP(Mat(contours[i]), contours_poly[i], 3, true);
boundRect[i] = boundingRect(Mat(contours_poly[i]));
}
After that, use the line equation y = a and compare the coordinates of the rectangle's corners, or the circle's center and radius, to determine whether the contour has passed the line.
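A minimal sketch of that test (the line position lineY and the helper name isPastLine are assumptions for illustration):
// True once the bounding rectangle's top edge has moved below the
// line y = lineY; comparing this flag between frames detects a crossing.
bool isPastLine(const Rect &box, int lineY)
{
    return box.y > lineY;
}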
1. Use a part of your Mat image (if your ROI is a rectangle):
Mat src; // suppose this is your source frame
Mat src_of_interest = src(Rect(Point(x1, y1), Point(x2, y2)));
// Do the rest of finding contour..
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
findContours(src_of_interest.clone(), contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(x1, y1)); // offset so the contour points line up with the full frame
//now you can do your job with contour vector.
int count = contours.size();
Mat dst = Mat::zeros(src.size(), CV_8UC3);
for(int i=0; i< count; i++)
{
drawContours(dst, contours, i, CV_RGB(255, 0, 0)); // draw contour in red
}
2. If your region of interest is not a rectangle, try this approach:
vector<Point> contour_of_interest; // this contour is where you want to check
vector<vector<Point>> contours; // this is the contours you found
Mat dst = Mat::zeros(src.size(), CV_8U);
for (size_t i = 0; i < contours.size(); i++)
{
drawContours(dst, contours, (int)i, Scalar(255), CV_FILLED); // fill each contour area with 255
}
Mat roi = Mat::zeros(src.size(), CV_8U);
vector<vector<Point>> coi_vector;
coi_vector.push_back(contour_of_interest);
drawContours(roi, coi_vector, 0, Scalar(255), CV_FILLED); // fill the contour so the mask covers its whole area
Mat overlap = dst & roi; // where the two masks are both non-zero ("and" is a reserved word in C++, so it cannot be used as a variable name)
findContours(overlap.clone(), contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
//now you can do your job with contour vector.
int count = contours.size();
3. If you want to count contours between 4 points, try this approach:
vector<Point> contour_of_interest; // this contour is the area where you want to check
Point p1(x1, y1);
Point p2(x1, y2);
Point p3(x2, y2);
Point p4(x2, y1);
contour_of_interest.push_back(p1);
contour_of_interest.push_back(p2);
contour_of_interest.push_back(p3);
contour_of_interest.push_back(p4);
vector<vector<Point>> coi_list;
coi_list.push_back(contour_of_interest);
Mat src; // suppose this is your source frame (assumed single-channel here; for a color frame use src.copyTo(src_of_interest, mask) instead of the & below)
Mat mask = Mat::zeros(src.size(), CV_8U);
drawContours(mask, coi_list, 0, Scalar(255), CV_FILLED); // fill the region so the mask covers its whole area
Mat src_of_interest = src & mask; // remove any pixels outside the mask area
// Do the rest of finding contour..
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
findContours(src_of_interest.clone(), contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
//now you can do your job with contour vector.
int count = contours.size();
Mat dst = Mat::zeros(src.size(), CV_8UC3);
for(int i=0; i< count; i++)
{
drawContours(dst, contours, i, CV_RGB(255, 0, 0)); // draw contour in red
}
I have a photo where a person holds a sheet of paper. I'd like to detect the rectangle of that sheet of paper.
I have tried following different tutorials from OpenCV and various SO answers and sample code for detecting squares / rectangles, but the problem is that they all rely on contours of some kind.
If I follow the squares.cpp example, I get the following results from contours:
As you can see, the fingers are part of the contour, so the algorithm does not find the square.
I, also, tried using HoughLines() approach, but I get similar results to above:
I can detect the corners, reliably though:
There are other corners in the image, but I'm limiting total corners found to < 50 and the corners for the sheet of paper are always found.
Is there some algorithm for finding a rectangle from multiple corners in an image? I can't seem to find an existing approach.
You can apply a morphological filter to close the gaps in your edge image. Then, if you find the contours, you can detect an inner closed contour as shown below. Then find the convex hull of this contour to get the rectangle.
Closed edges:
Contour:
Convexhull:
In the code below I've just used an arbitrary kernel size for the morphological filter, and filtered out the contour of interest using an area-ratio threshold. You can use your own criteria instead.
Code
Mat im = imread("Sh1Vp.png", 0); // the edge image
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(11, 11));
Mat morph;
morphologyEx(im, morph, CV_MOP_CLOSE, kernel);
int rectIdx = 0;
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
findContours(morph, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
for (size_t idx = 0; idx < contours.size(); idx++)
{
RotatedRect rect = minAreaRect(contours[idx]);
double areaRatio = abs(contourArea(contours[idx])) / (rect.size.width * rect.size.height);
if (areaRatio > .95)
{
rectIdx = idx;
break;
}
}
// get the convexhull of the contour
vector<Point> hull;
convexHull(contours[rectIdx], hull, false, true);
// visualization
Mat rgb;
cvtColor(im, rgb, CV_GRAY2BGR);
drawContours(rgb, contours, rectIdx, Scalar(0, 0, 255), 2);
for(size_t i = 0; i < hull.size(); i++)
{
line(rgb, hull[i], hull[(i + 1)%hull.size()], Scalar(0, 255, 0), 2);
}
What I'm trying to do is measure the thickness of the eyeglasses frames. I had the idea to measure the thickness of the frame's contours (there may be a better way?). I have so far outlined the frame of the glasses, but there are gaps where the lines don't meet. I thought about using HoughLinesP, but I'm not sure if this is what I need.
So far I have conducted the following steps:
Convert image to grayscale
Create ROI around the eye/glasses area
Blur the image
Dilate the image (have done this to remove any thin framed glasses)
Conduct Canny edge detection
Found contours
These are the results:
This is my code so far:
//convert to grayscale
cv::Mat grayscaleImg;
cv::cvtColor( img, grayscaleImg, CV_BGR2GRAY );
//create ROI
cv::Mat eyeAreaROI(grayscaleImg, centreEyesRect);
cv::imshow("roi", eyeAreaROI);
//blur
cv::Mat blurredROI;
cv::blur(eyeAreaROI, blurredROI, Size(3,3));
cv::imshow("blurred", blurredROI);
//dilate thin lines
cv::Mat dilated_dst;
int dilate_elem = 0;
int dilate_size = 1;
int dilate_type = MORPH_RECT;
cv::Mat element = getStructuringElement(dilate_type,
cv::Size(2*dilate_size + 1, 2*dilate_size+1),
cv::Point(dilate_size, dilate_size));
cv::dilate(blurredROI, dilated_dst, element);
cv::imshow("dilate", dilated_dst);
//edge detection
int lowThreshold = 100;
int ratio = 3;
int kernel_size = 3;
cv::Canny(dilated_dst, dilated_dst, lowThreshold, lowThreshold*ratio, kernel_size);
//create matrix of the same type and size as ROI
Mat dst;
dst.create(eyeAreaROI.size(), dilated_dst.type());
dst = Scalar::all(0);
dilated_dst.copyTo(dst, dilated_dst);
cv::imshow("edges", dst);
//join the lines and fill in
vector<Vec4i> hierarchy;
vector<vector<Point>> contours;
cv::findContours(dilated_dst, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
cv::imshow("contours", dilated_dst);
I'm not entirely sure what the next steps would be, or as I said above, if I should use HoughLinesP and how to implement it. Any help is very much appreciated!
I think there are 2 main problems.
segment the glasses frame
find the thickness of the segmented frame
I'll now post a way to segment the glasses of your sample image. Maybe this method will work for different images too, but you'll probably have to adjust parameters, or you might be able to use the main ideas.
The main idea is:
First, find the biggest contour in the image, which should be the glasses. Second, find the two biggest contours within the previously found biggest contour, which should be the glasses within the frame!
I use this image as input (which should be your blurred but not dilated image):
// this functions finds the biggest X contours. Probably there are faster ways, but it should work...
std::vector<std::vector<cv::Point>> findBiggestContours(std::vector<std::vector<cv::Point>> contours, int amount)
{
std::vector<std::vector<cv::Point>> sortedContours;
if(amount <= 0) amount = contours.size();
if(amount > contours.size()) amount = contours.size();
for(int chosen = 0; chosen < amount; )
{
double biggestContourArea = 0;
int biggestContourID = -1;
for(unsigned int i=0; i<contours.size(); ++i)
{
double tmpArea = cv::contourArea(contours[i]);
if(tmpArea > biggestContourArea)
{
biggestContourArea = tmpArea;
biggestContourID = i;
}
}
if(biggestContourID >= 0)
{
//std::cout << "found area: " << biggestContourArea << std::endl;
// found biggest contour
// add contour to sorted contours vector:
sortedContours.push_back(contours[biggestContourID]);
chosen++;
// remove biggest contour from original vector:
contours[biggestContourID] = contours.back();
contours.pop_back();
}
else
{
// should never happen except for broken contours with size 0?!?
return sortedContours;
}
}
return sortedContours;
}
int main()
{
cv::Mat input = cv::imread("../Data/glass2.png", CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat inputColors = cv::imread("../Data/glass2.png"); // used for displaying later
cv::imshow("input", input);
//edge detection
int lowThreshold = 100;
int ratio = 3;
int kernel_size = 3;
cv::Mat canny;
cv::Canny(input, canny, lowThreshold, lowThreshold*ratio, kernel_size);
cv::imshow("canny", canny);
// close gaps with "close operator"
cv::Mat mask = canny.clone();
cv::dilate(mask,mask,cv::Mat());
cv::dilate(mask,mask,cv::Mat());
cv::dilate(mask,mask,cv::Mat());
cv::erode(mask,mask,cv::Mat());
cv::erode(mask,mask,cv::Mat());
cv::erode(mask,mask,cv::Mat());
cv::imshow("closed mask",mask);
// extract outermost contour
std::vector<cv::Vec4i> hierarchy;
std::vector<std::vector<cv::Point>> contours;
//cv::findContours(mask, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
cv::findContours(mask, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
// find biggest contour which should be the outer contour of the frame
std::vector<std::vector<cv::Point>> biggestContour;
biggestContour = findBiggestContours(contours,1); // find the one biggest contour
if(biggestContour.size() < 1)
{
std::cout << "Error: no outer frame of glasses found" << std::endl;
return 1;
}
// draw contour on an empty image
cv::Mat outerFrame = cv::Mat::zeros(mask.rows, mask.cols, CV_8UC1);
cv::drawContours(outerFrame,biggestContour,0,cv::Scalar(255),-1);
cv::imshow("outer frame border", outerFrame);
// now find the glasses which should be the outer contours within the frame. therefore erode the outer border ;)
cv::Mat glassesMask = outerFrame.clone();
cv::erode(glassesMask,glassesMask, cv::Mat());
cv::imshow("eroded outer",glassesMask);
// erosion followed by dilation is an open operator, which can be used to clean the image.
cv::Mat cleanedOuter;
cv::dilate(glassesMask,cleanedOuter, cv::Mat());
cv::imshow("cleaned outer",cleanedOuter);
// use the outer frame mask as a mask for copying canny edges. The result should be the inner edges inside the frame only
cv::Mat glassesInner;
canny.copyTo(glassesInner, glassesMask);
// there is a small gap in the contour which unfortunately can't be closed with a closing operator...
cv::dilate(glassesInner, glassesInner, cv::Mat());
//cv::erode(glassesInner, glassesInner, cv::Mat());
// this part was cheated... in fact we would like to erode directly after dilation to not modify the thickness but just close small gaps.
cv::imshow("innerCanny", glassesInner);
// extract contours from within the frame
std::vector<cv::Vec4i> hierarchyInner;
std::vector<std::vector<cv::Point>> contoursInner;
//cv::findContours(glassesInner, contoursInner, hierarchyInner, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
cv::findContours(glassesInner, contoursInner, hierarchyInner, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
// find the two biggest contours which should be the glasses within the frame
std::vector<std::vector<cv::Point>> biggestInnerContours;
biggestInnerContours = findBiggestContours(contoursInner,2); // find the one biggest contour
if(biggestInnerContours.size() < 1)
{
std::cout << "Error: no inner frames of glasses found" << std::endl;
return 1;
}
// draw the 2 biggest contours which should be the inner glasses
cv::Mat innerGlasses = cv::Mat::zeros(mask.rows, mask.cols, CV_8UC1);
for(unsigned int i=0; i<biggestInnerContours.size(); ++i)
cv::drawContours(innerGlasses,biggestInnerContours,i,cv::Scalar(255),-1);
cv::imshow("inner frame border", innerGlasses);
// since we dilated earlier and didn't erode right afterwards, we have to erode here... this is a bit of cheating :-(
cv::erode(innerGlasses,innerGlasses,cv::Mat() );
// remove the inner glasses from the frame mask
cv::Mat fullGlassesMask = cleanedOuter - innerGlasses;
cv::imshow("complete glasses mask", fullGlassesMask);
// color code the result to get an impression of segmentation quality
cv::Mat outputColors1 = inputColors.clone();
cv::Mat outputColors2 = inputColors.clone();
for(int y=0; y<fullGlassesMask.rows; ++y)
for(int x=0; x<fullGlassesMask.cols; ++x)
{
if(!fullGlassesMask.at<unsigned char>(y,x))
outputColors1.at<cv::Vec3b>(y,x)[1] = 255;
else
outputColors2.at<cv::Vec3b>(y,x)[1] = 255;
}
cv::imshow("output", outputColors1);
/*
cv::imwrite("../Data/Output/face_colored.png", outputColors1);
cv::imwrite("../Data/Output/glasses_colored.png", outputColors2);
cv::imwrite("../Data/Output/glasses_fullMask.png", fullGlassesMask);
*/
cv::waitKey(-1);
return 0;
}
I get this result for segmentation:
The overlay on the original image will give you an impression of the quality:
and the inverse:
There are some tricky parts in the code and it's not tidied up yet. I hope it's understandable.
The next step would be to compute the thickness of the segmented frame. My suggestion is to compute the distance transform of the segmentation mask. From this you would want to run a ridge detection, or skeletonize the mask, to find the ridge. After that, use the median of the ridge's distance values.
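A minimal sketch of the distance-transform part, assuming fullGlassesMask from the code above (255 on frame pixels):
// Each frame pixel gets its distance to the nearest background pixel;
// on the ridge (medial axis) this equals half the local frame thickness.
cv::Mat dist;
cv::distanceTransform(fullGlassesMask, dist, CV_DIST_L2, 3);
double maxHalfThickness;
cv::minMaxLoc(dist, 0, &maxHalfThickness);
std::cout << "thickness at the widest point ~ " << 2.0 * maxHalfThickness << " px" << std::endl;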
Anyways I hope this posting can help you a little, although it's not a solution yet.
Depending on lighting, frame color, etc., this may or may not work, but how about simple color detection to separate the frame? The frame color will usually be a lot darker than human skin. You'll end up with a binary image (just black and white), and by calculating the number (area) of black pixels you get the area of the frame.
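A minimal sketch of that idea (the threshold value 60 and the input name img are assumptions and would need tuning per image):
// Dark pixels (likely the frame) become 255 in the mask;
// counting them gives a rough frame area in pixels.
cv::Mat gray, frameMask;
cv::cvtColor(img, gray, CV_BGR2GRAY);
cv::threshold(gray, frameMask, 60, 255, cv::THRESH_BINARY_INV);
int frameArea = cv::countNonZero(frameMask);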
Another possible way is to get better edge detection by adjusting the parameters and dilating/eroding (or both) until you get better contours. You will also need to differentiate the contour from the lenses and then apply cvContourArea.
I found this square detection code online and I'm trying to understand it. Does anyone understand what the line at the end does? It states: "gray = gray0 >= (l+1)*255/N;"
Mat pyr, timg, gray0(image.size(), CV_8U), gray;
// down-scale and upscale the image to filter out the noise
pyrDown(image, pyr, Size(image.cols/2, image.rows/2));
pyrUp(pyr, timg, image.size());
vector<vector<Point> > contours;
// find squares in every color plane of the image
for( int c = 0; c < 3; c++ )
{
int ch[] = {c, 0};
mixChannels(&timg, 1, &gray0, 1, ch, 1);
// try several threshold levels
for( int l = 0; l < N; l++ )
{
// hack: use Canny instead of zero threshold level.
// Canny helps to catch squares with gradient shading
if( l == 0 )
{
// apply Canny. Take the upper threshold from slider
// and set the lower to 0 (which forces edges merging)
Canny(gray0, gray, 0, thresh, 5);
// dilate canny output to remove potential
// holes between edge segments
dilate(gray, gray, Mat(), Point(-1,-1));
}
else
{
// apply threshold if l!=0:
// gray(x,y) = gray0(x,y) >= (l+1)*255/N ? 255 : 0
gray = gray0 >= (l+1)*255/N;
}
This code snippet is part of the git repository CVSquares. From experience, I know that this library does not work well on real images, though it works well on computer-generated images. Also, this method of detection on the RGB planes, without converting to grayscale, is computationally expensive.
Anyway, the line of code you are asking about is basically a threshold filter: it produces a binary Mat gray by applying a level-dependent threshold to the array gray0. Where the condition holds, the output contains a white pixel (255) at that location; elsewhere, a black pixel (0).
A more generic detection code based on the grayscale image would work better like this:
Mat binary_img(color_img.size(),CV_8UC1);
vector<Vec4i> hierarchy;
vector<vector<Point> > contours;
vector<cv::Point> approx;
cv::GaussianBlur(color_img, color_img, cv::Size(9,9), 3);
cv::medianBlur(color_img, color_img, 9);
cv::medianBlur(color_img, color_img, 9);
cvtColor(color_img,binary_img,CV_BGR2GRAY);
Mat range_mask;
inRange(binary_img, Scalar(20), Scalar(100), range_mask); // keep gray values in [20, 100]
findContours( range_mask, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
for(int i = 0; i < contours.size(); i++ )
{
approxPolyDP(Mat(contours[i]), approx, arcLength(Mat(contours[i]), true)*0.0005, true);
if(approx.size()==4)
//draw approx
}
The second half of the line, gray0 >= (l+1)*255/N, is a comparison, and gray receives its result. Because gray0 is a cv::Mat, this does not yield a single boolean: the comparison is applied element-wise and produces a mask Mat of the same size, containing 255 where the condition holds and 0 where it does not.
About this condition:
(l+1)*255/N maps the loop index l in [0, N-1] to a threshold in [255/N, 255]. For l = 0 the threshold is 255/N; for l = N-1 it is exactly 255.
Each pixel of gray0 is compared against this threshold (the l+1 makes the numerator range from 1 to N rather than 0 to N-1): where the pixel value is greater than or equal to the threshold, the corresponding pixel of gray is set to 255 (pure white); otherwise it is set to 0 (pure black).
gray = gray0 >= (l+1)*255/N;
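For illustration, a roughly equivalent explicit form (cv::threshold uses a strict greater-than, hence the cutoff is shifted down by one gray level to preserve the >= semantics):
int t = (l + 1) * 255 / N;
threshold(gray0, gray, t - 1, 255, THRESH_BINARY); // 255 where gray0 >= t, else 0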