OpenCV convexHull doesn't give correct output - C++

This is my input binary image:
Now I want to get its convex hull using OpenCV. For that, I wrote the following code:
cv::Mat input = cv::imread("input.jpg", CV_LOAD_IMAGE_GRAYSCALE);
std::vector<std::vector<cv::Point>> contours;
std::vector<cv::Vec4i> hierarchy;
// Find contours
cv::findContours(input, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// Find the convex hull of each contour
std::vector<std::vector<cv::Point>> hull(contours.size());
for (size_t i = 0; i < contours.size(); i++)
{
cv::convexHull(cv::Mat(contours[i]), hull[i], false);
}
cv::Mat drawing = cv::Mat::zeros(input.size(), CV_8UC3);
cv::Scalar color = cv::Scalar(0, 0, 255);
for (size_t j = 0; j < hull.size(); j++)
{
cv::drawContours(drawing, hull, (int)j, color, 1, 8, std::vector<cv::Vec4i>(), 0, cv::Point());
}
cv::imshow("Convex hull", drawing);
cv::waitKey();
And this is the output:
In Matlab however, when I write the following code:
input = imread('input.jpg');
[x, y] = find(input);
k = convhull(x, y);
plot(y(k), x(k), 'r-', y, x, 'b.');
This gives me exactly what I want (the red line represents the convex hull that I want):
So, how can I obtain the same result in OpenCV? What have I done incorrectly here? Thank you.

Maybe this answer is late, but for those who are still searching, here it is.
You don't need to extract contours at all. Just take all the non-zero points from the binary image using the findNonZero() method and then apply convexHull to that set of points. It will work perfectly.
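A minimal sketch of that approach (assuming the same input image as in the question; the drawing part just mirrors the original visualisation code):
cv::Mat input = cv::imread("input.jpg", CV_LOAD_IMAGE_GRAYSCALE);
// Collect every non-zero pixel location of the binary image.
std::vector<cv::Point> points;
cv::findNonZero(input, points);
// Compute a single convex hull over all of those points.
std::vector<cv::Point> hull;
cv::convexHull(points, hull);
// Draw the hull as a red outline.
cv::Mat drawing = cv::Mat::zeros(input.size(), CV_8UC3);
std::vector<std::vector<cv::Point>> hulls(1, hull);
cv::drawContours(drawing, hulls, 0, cv::Scalar(0, 0, 255), 1);
cv::imshow("Convex hull", drawing);
cv::waitKey();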

Related

How to consider only the white region of the image as a contour

I have a binary image from which I need to extract only the white regions as contours, but the black regions that are enclosed by white parts are also returned as contours.
I don't want to use the contour area; can the black regions be ignored while finding contours?
Here is the binary image; the region marked in orange is also detected as a contour, and I do not want the black regions surrounded by white to be considered as contours.
Contour image is:
My contouring code:
//contouring
vector<vector<Point> > contours;
findContours(img, contours, RETR_LIST, CHAIN_APPROX_SIMPLE);
vector<vector<Point> > contours_poly(contours.size());
vector<Rect> boundRect(contours.size());
vector<Point2f>centers(contours.size());
vector<float>radius(contours.size());
for (size_t i = 0; i < contours.size(); i++)
{
approxPolyDP(contours[i], contours_poly[i], 3, true);
boundRect[i] = boundingRect(contours_poly[i]);
minEnclosingCircle(contours_poly[i], centers[i], radius[i]);
}
Mat drawing = Mat::zeros(img.size(), CV_8UC3);
RNG rng(12345); // random colour generator used below
for (size_t i = 0; i < contours.size(); i++)
{
Scalar color = Scalar(rng.uniform(0, 256), rng.uniform(0, 256), rng.uniform(0, 256));
drawContours(drawing, contours_poly, (int)i, color);
}
Your code snippets do not make up a minimal reproducible example (https://stackoverflow.com/help/minimal-reproducible-example), so I could not run them for testing.
However, you might be able to achieve what you want by "playing" with the last two parameters of cv::findContours.
From the OpenCV documentation of cv::findContours:
mode - contour retrieval mode, see RetrievalModes
method - contour approximation method, see ContourApproximationModes
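For example, switching the retrieval mode to RETR_EXTERNAL makes findContours return only the extreme outer contours, so the black holes enclosed by white regions are not reported as separate contours. A minimal sketch based on your snippet (not tested on your image):
// Retrieve only the outermost boundaries of the white regions;
// holes inside them are ignored with RETR_EXTERNAL.
vector<vector<Point> > contours;
findContours(img, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);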

OpenCV: Is it possible to detect a rectangle from corners?

I have a photo where a person holds a sheet of paper. I'd like to detect the rectangle of that sheet of paper.
I have tried following different tutorials from OpenCV and various SO answers and sample code for detecting squares / rectangles, but the problem is that they all rely on contours of some kind.
If I follow the squares.cpp example, I get the following results from contours:
As you can see, the fingers are part of the contour, so the algorithm does not find the square.
I, also, tried using HoughLines() approach, but I get similar results to above:
I can, however, detect the corners reliably:
There are other corners in the image, but I'm limiting the total number of corners found to fewer than 50, and the corners of the sheet of paper are always among them.
Is there some algorithm for finding a rectangle from multiple corners in an image? I can't seem to find an existing approach.
You can apply a morphological filter to close the gaps in your edge image. If you then find the contours, you can detect an inner closed contour as shown below. Finally, find the convex hull of this contour to get the rectangle.
Closed edges:
Contour:
Convex hull:
In the code below I've just used an arbitrary kernel size for the morphological filter and selected the contour of interest using an area-ratio threshold. You can use your own criteria instead of those.
Code
Mat im = imread("Sh1Vp.png", 0); // the edge image
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(11, 11));
Mat morph;
morphologyEx(im, morph, CV_MOP_CLOSE, kernel);
int rectIdx = 0;
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
findContours(morph, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
for (size_t idx = 0; idx < contours.size(); idx++)
{
RotatedRect rect = minAreaRect(contours[idx]);
double areaRatio = abs(contourArea(contours[idx])) / (rect.size.width * rect.size.height);
if (areaRatio > .95)
{
rectIdx = idx;
break;
}
}
// get the convexhull of the contour
vector<Point> hull;
convexHull(contours[rectIdx], hull, false, true);
// visualization
Mat rgb;
cvtColor(im, rgb, CV_GRAY2BGR);
drawContours(rgb, contours, rectIdx, Scalar(0, 0, 255), 2);
for(size_t i = 0; i < hull.size(); i++)
{
line(rgb, hull[i], hull[(i + 1)%hull.size()], Scalar(0, 255, 0), 2);
}

Filtering image with OpenCV

I am pretty new to programming, and I want to make a program that filters small objects and non-convex objects out of an image, so that only shapes such as rectangles, triangles, circles, etc. remain.
What have I done so far?
I managed to obtain a binary image in two separate ways (color detection and the Canny function), and then I extracted contours with the findContours function. That part works flawlessly.
here's the code:
vector<Point> approxShape;
vector<vector<Point>> FiltredContours;
vector<vector<Point>> TRI;
vector<vector<Point>> RECT;
vector<vector<Point>> PENTA;
Mat WSO= Mat::zeros(im.size(), CV_8UC3); //Without Small Objects
for (int j = 0; j < contours.size(); j++)
{
if (fabs(contourArea(contours[j])) > 100) // ignore small objects below the area threshold
{
drawContours(WSO, contours, j, Scalar(128, 128, 128), 2, 8, hiearchy, 0, Point()); // to see how it looks before it goes further
approxPolyDP(Mat(contours[j]), approxShape, arcLength(Mat(contours[j]), true) * 0.02, true);
if (isContourConvex(approxShape))
{
FiltredContours.push_back(approxShape);
}
}
}
///-------- Show image after filtering small objects -----
imshow("WSO", WSO);
////-------- Filtered-image drawing ---------------------
Mat approxmat = Mat::zeros(imHSV.size(), CV_8UC3);
Scalar barva = Scalar(255, 0, 0); // drawing colour
drawContours(approxmat, FiltredContours, -1, barva, 2, 8, hiearchy, 0, Point());
namedWindow("Filtred objects",CV_WINDOW_AUTOSIZE);
imshow("Filtred objects",approxmat);
I tried changing the parameters of contourArea and approxPolyDP as well, but it still doesn't work the way I thought it would.

How to get corners in a contour in OpenCV

I am working in C++ with OpenCV.
I am detecting the biggest contour in an image because there is a black area in it.
In this case, the black area only runs horizontally, but it could be anywhere in the image.
Mat resultGray;
cvtColor(result,resultGray, COLOR_BGR2GRAY);
medianBlur(resultGray,resultGray,3);
Mat resultTh;
Mat canny_output;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
Canny( resultGray, canny_output, 100, 100*2, 3 );
findContours( canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
vector<Point> best = contours[0];
double max_area = -1;
for (size_t i = 0; i < contours.size(); i++) {
if (contourArea(contours[i]) > max_area)
{
max_area = contourArea(contours[i]);
best = contours[i];
}
}
Mat approxCurve;
approxPolyDP(Mat(best),approxCurve,0.01*arcLength(Mat(best),true),true);
With this, I have the big contour and its approximation (in approxCurve). Now I want to obtain the corners of this approximation and get the image inside this contour, but I don't know how to do it.
I am using this: How to remove black part from the image?
But I don't understand the last part very well.
Does anyone know how I can obtain the corners? Is there another, simpler way than this?
Thanks for your time.
One much simpler way you could do that is to check the image pixels and find the minimum/maximum coordinates of non-black pixels.
Something like this:
int maxx, maxy, minx, miny;
maxx = maxy = -std::numeric_limits<int>::max(); // maxima start as small as possible
minx = miny = std::numeric_limits<int>::max();  // minima start as large as possible
for(int y=0; y<img.rows; ++y)
{
for(int x=0; x<img.cols; ++x)
{
const cv::Vec3b &px = img.at<cv::Vec3b>(y,x);
if(px(0)==0 && px(1)==0 && px(2)==0)
continue;
if(x<minx) minx=x;
if(x>maxx) maxx=x;
if(y<miny) miny=y;
if(y>maxy) maxy=y;
}
}
cv::Mat subimg;
img(cv::Rect(cv::Point(minx, miny), cv::Point(maxx + 1, maxy + 1))).copyTo(subimg); // +1: the second corner of cv::Rect is exclusive
In my opinion, this approach is more reliable since you don't have to detect any contour, which could lead to false detections depending on the input image.
In a very efficient way, you can sample the original image until you find a pixel that is on, and from there move along its row and its column to find the first (0,0,0) pixel. This will work unless the good part of the image can also contain (0,0,0) pixels. If that is the case (e.g. a dead pixel), you can add a double check on the neighbourhood of that (0,0,0) pixel (it should contain other (0,0,0) pixels).
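A rough sketch of that idea, assuming the non-black region is a single axis-aligned rectangle that contains the image centre (the function name and the starting point are my own choices):
#include <opencv2/opencv.hpp>

cv::Rect findNonBlackRect(const cv::Mat &img)
{
    // True if the pixel at (y, x) is exactly (0,0,0).
    auto isBlack = [&img](int y, int x) {
        const cv::Vec3b &px = img.at<cv::Vec3b>(y, x);
        return px[0] == 0 && px[1] == 0 && px[2] == 0;
    };
    // Start from the centre and walk along its row and its column
    // until the first (0,0,0) pixel (or the image border) is hit.
    int cy = img.rows / 2, cx = img.cols / 2;
    int left = cx, right = cx, top = cy, bottom = cy;
    while (left > 0 && !isBlack(cy, left - 1)) --left;
    while (right < img.cols - 1 && !isBlack(cy, right + 1)) ++right;
    while (top > 0 && !isBlack(top - 1, cx)) --top;
    while (bottom < img.rows - 1 && !isBlack(bottom + 1, cx)) ++bottom;
    return cv::Rect(left, top, right - left + 1, bottom - top + 1);
}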

How to fill contours that touch the image border?

Say I have the following binary image created from the output of cv::watershed():
Now I want to find and fill the contours, so I can separate the corresponding objects from the background in the original image (that was segmented by the watershed function).
To segment the image and find the contours I use the code below:
cv::Mat bgr = cv::imread("test.png");
// Some function that provides the rough outline for the segmented regions.
cv::Mat markers = find_markers(bgr);
cv::watershed(bgr, markers);
cv::Mat_<bool> boundaries(bgr.size());
for (int i = 0; i < bgr.rows; i++) {
for (int j = 0; j < bgr.cols; j++) {
boundaries.at<bool>(i, j) = (markers.at<int>(i, j) == -1);
}
}
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(
boundaries, contours, hierarchy,
CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE
);
So far so good. However, if I pass the contours acquired above to cv::drawContours() as below:
cv::Mat regions(bgr.size(), CV_32S);
cv::drawContours(
regions, contours, -1, cv::Scalar::all(255),
CV_FILLED, 8, hierarchy, INT_MAX
);
This is what I get:
The leftmost contour was left open by cv::findContours(), and as a result it is not filled by cv::drawContours().
Now I know this is a consequence of cv::findContours() clipping off the 1-pixel border around the image (as mentioned in the documentation), but what to do then? It seems an awful waste to discard a contour just because it happened to brush off the image's border. And anyway how can I even find which contour(s) fall in this category? cv::isContourConvex() is not a solution in this case; a region can be concave but "closed" and thus not have this problem.
Edit: About the suggestion to duplicate the pixels from the borders. The problem is that my marking function is also painting all pixels in the "background", i.e. those regions I'm sure aren't part of any object:
This results in a boundary being drawn around the output. If I somehow prevent cv::findContours() from clipping off that boundary:
The boundary for the background gets merged with that leftmost object:
Which results in a nice white-filled box.
Solution number 1: use an image extended by one pixel in each direction:
Mat extended(bgr.size()+Size(2,2), bgr.type());
Mat markers = extended(Rect(1, 1, bgr.cols, bgr.rows));
// all your calculation part
std::vector<std::vector<Point> > contours;
findContours(boundaries, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
Mat regions(bgr.size(), CV_8U);
drawContours(regions, contours, -1, Scalar(255), CV_FILLED, 8, Mat(), INT_MAX, Point(-1,-1));
Note that contours were extracted from extended image, i.e. their x and y values are bigger by 1 from what they should be. This is why I use drawContours with (-1,-1) pixel offset.
Solution number 2: add the white pixels from the image boundary to the neighboring row/column:
bitwise_or(boundaries.row(0), boundaries.row(1), boundaries.row(1));
bitwise_or(boundaries.col(0), boundaries.col(1), boundaries.col(1));
bitwise_or(boundaries.row(bgr.rows-1), boundaries.row(bgr.rows-2), boundaries.row(bgr.rows-2));
bitwise_or(boundaries.col(bgr.cols-1), boundaries.col(bgr.cols-2), boundaries.col(bgr.cols-2));
Both solutions are half-dirty workarounds, but this is all I could think of.
Following Burdinov's suggestions I came up with the code below, which correctly fills all extracted regions while ignoring the all-enclosing boundary:
cv::Mat fill_regions(const cv::Mat &bgr, const cv::Mat &prospective) {
static cv::Scalar WHITE = cv::Scalar::all(255);
int rows = bgr.rows;
int cols = bgr.cols;
// For the given prospective markers, finds
// object boundaries on the given BGR image.
cv::Mat markers = prospective.clone();
cv::watershed(bgr, markers);
// Copies the boundaries of the objects segmented by cv::watershed().
// Ensures there is a minimum distance of 1 pixel between boundary
// pixels and the image border.
cv::Mat borders(rows + 2, cols + 2, CV_8U);
for (int i = 0; i < rows; i++) {
uchar *u = borders.ptr<uchar>(i + 1) + 1;
int *v = markers.ptr<int>(i);
for (int j = 0; j < cols; j++, u++, v++) {
*u = (*v == -1);
}
}
// Calculates contour vectors for the boundaries extracted above.
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(
borders, contours, hierarchy,
CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE
);
int area = bgr.size().area();
cv::Mat regions = cv::Mat::zeros(borders.size(), CV_32S);
for (int i = 0, n = contours.size(); i < n; i++) {
// Ignores contours for which the bounding rectangle's
// area equals the area of the original image.
std::vector<cv::Point> &contour = contours[i];
if (cv::boundingRect(contour).area() == area) {
continue;
}
// Draws the selected contour.
cv::drawContours(
regions, contours, i, WHITE,
CV_FILLED, 8, hierarchy, INT_MAX
);
}
// Removes the 1 pixel-thick border added when the boundaries
// were first copied from the output of cv::watershed().
return regions(cv::Rect(1, 1, cols, rows));
}
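For reference, a minimal sketch of how this helper could be called, assuming the same find_markers() function from the question:
cv::Mat bgr = cv::imread("test.png");
cv::Mat prospective = find_markers(bgr);           // rough markers, as in the question
cv::Mat regions = fill_regions(bgr, prospective);  // filled regions, same size as bgr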