How to detect open and closed shapes in OpenCV?
These are simple sample shapes I want to detect. I have detected the rectangle using findContours and approxPolyDP, then checking the angles between the vectors.
Now I want to detect the open shape. approxPolyDP has a bool for closed shapes set to true, there is also an isContourConvex check on the points returned, plus a contourArea limitation.
Any ideas on how to go about detecting images of this kind?
Just use findContours() on your image, then decide whether each contour is closed or not by examining the hierarchy returned by the findContours() function. Comparing the second figure with the first, it is clear that no contour in it has a child contour. You get this data from the hierarchy parameter, an optional output vector containing information about the image topology; it has as many elements as there are contours.
Here we will use hierarchy as
vector< Vec4i > hierarchy
where for an i-th contour
hierarchy[i][0] = next contour at the same hierarchical level
hierarchy[i][1] = previous contour at the same hierarchical level
hierarchy[i][2] = denotes its first child contour
hierarchy[i][3] = denotes index of its parent contour
If for the contour i there are no next, previous, parent, or nested contours, the corresponding elements of hierarchy[i] will be negative. See findContours() function for more details.
So by checking the value of hierarchy[i][2] you can decide whether a contour is closed or not: if hierarchy[i][2] = -1 for contour i, it has no child and is considered open.
One more thing: in findContours() you should use CV_RETR_CCOMP, which retrieves all of the contours and organizes them into a two-level hierarchy.
Here is C++ code implementing this.
Mat tmp, thr;
Mat src = imread("1.png", 1);
cvtColor(src, tmp, CV_BGR2GRAY);
threshold(tmp, thr, 200, 255, THRESH_BINARY_INV);
vector< vector<Point> > contours; // vector for storing contours
vector< Vec4i > hierarchy;
findContours(thr, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
// walk the top hierarchy level: hierarchy[i][0] is the next contour at the
// same level, or -1 when there is none
for (int i = 0; i >= 0 && i < (int)contours.size(); i = hierarchy[i][0])
{
    Rect r = boundingRect(contours[i]);
    if (hierarchy[i][2] < 0) // no child contour -> open
        rectangle(src, Point(r.x - 10, r.y - 10), Point(r.x + r.width + 10, r.y + r.height + 10), Scalar(0, 0, 255), 2, 8, 0);
    else // has a child contour -> closed
        rectangle(src, Point(r.x - 10, r.y - 10), Point(r.x + r.width + 10, r.y + r.height + 10), Scalar(0, 255, 0), 2, 8, 0);
}
Result:
While correct for the problem posed, @Haris's helpful answer should not be taken as a general solution for identifying closed contours using findContours().
One reason is that a filled object will have no internal contour and so would return hierarchy[i][2] = -1, meaning this test on its own would wrongly label such contours as 'open'.
The contour of a filled object should have no child or parent in the contour hierarchy, i.e. be at top level. So to detect for closed contours of filled objects would at least require an additional test: if(hierarchy[i][2] < 0 && hierarchy[i][3] < 0).
I think @Haris's answer may have made this point obliquely, but I thought it worth clarifying for people who, like myself, are learning how to use OpenCV.
A Python implementation of the same is below.
import cv2

src = cv2.imread('test.png', cv2.IMREAD_COLOR)

# transform the source image to gray if it is not already
if len(src.shape) != 2:
    gray = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
else:
    gray = src

ret, thresh = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY_INV)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
hierarchy = hierarchy[0]

for i, c in enumerate(contours):
    if hierarchy[i][2] < 0 and hierarchy[i][3] < 0:
        cv2.drawContours(src, contours, i, (0, 0, 255), 2)  # open: red
    else:
        cv2.drawContours(src, contours, i, (0, 255, 0), 2)  # closed: green

# write to the same directory
cv2.imwrite("result.png", src)
The answer depends on your image: more specifically, on how many contours are present, whether there are other objects, noise, etc. In the simple case of a single contour, a flood fill started inside a closed contour won't spill over the whole image; started outside, it won't get into the middle. So you would preserve some white area in either case.
Simplified Python code from above
import cv2

# get contours from the image
img = cv2.imread("image.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY_INV)
# findContours returns 2 or 3 values depending on the OpenCV version;
# the last two are always (contours, hierarchy)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)[-2:]

# draw all contours on the image (green if open, else red)
for i in range(len(contours)):
    opened = hierarchy[0][i][2] < 0 and hierarchy[0][i][3] < 0
    cv2.drawContours(img, contours, i, (0, 255, 0) if opened else (0, 0, 255), 2)

cv2.imshow("Contours", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Related
A few days back I found code that sorts the contours inside a vector by their areas, but I could not understand the code very well, especially the comparator function. Why are contour1 and contour2 converted into Mat in the contourArea parameter? Can anyone explain the whole code to me step by step? Your explanation will be useful for my learning process.
// comparison function object
bool compareContourAreas ( std::vector<cv::Point> contour1, std::vector<cv::Point> contour2 )
{
    double i = fabs( contourArea(cv::Mat(contour1)) );
    double j = fabs( contourArea(cv::Mat(contour2)) );
    return ( i < j );
}

[...]

// find contours
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours( binary_image, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE,
                  cv::Point(0, 0) );

// sort contours
std::sort(contours.begin(), contours.end(), compareContourAreas);

// grab contours
std::vector<cv::Point> biggestContour = contours[contours.size() - 1];
std::vector<cv::Point> smallestContour = contours[0];
Your input image is binary, so it consists only of 0s and 1s. cv::findContours traces the boundaries of the connected regions of non-zero pixels; each such boundary makes one contour. It then puts all contours into std::vector<std::vector<cv::Point> > contours.
If you took a grid and drew the points of one contour on it, you would get a 2D array of points, which is basically a single-channel cv::Mat.
In 'compareContourAreas' you take two contours out of your vector 'contours' and compare their enclosed areas. contourArea computes the area enclosed by a contour, not a sum of points; the fabs simply guards against a negative, orientation-signed result. Older versions of the function expected a cv::Mat as input, which is why the contour, a vector of points, is first wrapped in a cv::Mat; recent OpenCV versions accept the vector directly.
Then the sort function sorts all the contours from small to big.
I have a problem with filtering some contours by the colors inside them. I want to remove all contours that have black pixels inside and keep only the contours with white pixels (see the pictures below).
Code to create the contours list. I've used the RETR_TREE contour retrieval mode with CHAIN_APPROX_SIMPLE point approximation to avoid lots of points inside the contours.
cv::cvtColor(src_img, gray_img, cv::COLOR_BGR2GRAY);
cv::threshold(gray_img, bin_img, minRGB, maxRGB, cv::THRESH_BINARY_INV);
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(bin_img, contours, hierarchy, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE);
Then, using these contours, I've built closed paths and displayed them on the screen.
An input image:
My current results:
What I need: fill only the contours which have white content.
I've tried to scale all contours down by 1 pixel and check whether all the pixels inside are dark, but it doesn't work as I expected. See the code below.
double scaleX = (double(src_img.cols) - 2) / double(src_img.cols);
double scaleY = (double(src_img.rows) - 2) / double(src_img.rows);
for (int i = 0; i < contours.size(); i++) {
    std::vector<cv::Point> contour = contours[i];
    cv::Moments M = cv::moments(contour);
    if (M.m00 == 0) continue; // skip degenerate contours to avoid division by zero
    int cx = int(M.m10 / M.m00);
    int cy = int(M.m01 / M.m00);
    std::vector<cv::Point> scaledContour(contour.size());
    for (int j = 0; j < contour.size(); j++) {
        cv::Point point = contour[j];
        point = cv::Point(point.x - cx, point.y - cy);         // move centroid to origin
        point = cv::Point(point.x * scaleX, point.y * scaleY); // shrink
        point = cv::Point(point.x + cx, point.y + cy);         // move back
        scaledContour[j] = point;
    }
    contours[i] = scaledContour;
}
I will be very grateful if you help with any ideas or solutions, thank you very much!
Hopefully one thing is clear: the objects in the image should be white and the background black when finding contours, which you have done by using THRESH_BINARY_INV.
So we are essentially trying to find white lines, not black ones. I am not providing code since I work in Python, but I'll list out how it can be done.
Create a black array of the size of the input image. Let's call it mask.
After finding the contours, draw them on mask with white i.e. 255, while providing thickness=-1. This means we are essentially filling the contour.
Now we need to remove the boundary of the contour so the only portion left is the part inside the contour. This can be achieved by again drawing the contour on mask, this time with black with a thickness of 1.
Perform bitwise_and between the image and mask. Only areas having white inside the contour will be left.
Now you just need to see whether the output is completely black or not. If it is not that means you don't need to fill that contour as it contains something inside it.
EDIT
Ohh, I didn't realize that your images would have around 600 contours; yes, that will take a lot of time, and I don't know why I didn't think of using the hierarchy before.
You can use RETR_TREE itself; the hierarchy values are [next, previous, first_child, parent]. So we just need to check whether first_child = -1, which would mean there are no contours inside and you can fill it.
I've changed the mode to RETR_CCOMP and added a region filter on hierarchy[contour index][3] != -1 (i.e., the contour has a parent), and my problem was solved.
Thank you!
How can I use the Hungarian algorithm to associate corresponding targets between two consecutive frames, so that I can finally judge whether a moving target in the next frame already exists or is new? I do not know how to start the actual implementation of the program using C++ and the OpenCV 3 library.
For now, I can get the centroids of the moving targets detected by the Kalman filter in each frame.
// This is part of the main function.
kalmanv.clear();
initKalman(0, 0);
Point s, p; // s: kalmanCorrect, p: kalmanPredict

// variable definitions
vector<vector<Point>> contours; // rectangular frame positions around the contours
vector<Vec4i> hierarchy;
int count = 0;
Mat frame, gray, mogMask;
char numText[8];

while (capture.read(frame)) {
    ...
    findContours(mogMask, contours, hierarchy, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE, Point(0, 0));
    // moments and centroids
    vector<Moments> mu(contours.size());
    vector<Point2f> mc(contours.size());
    ...
    mu[t] = moments(contours[t], false);                           // compute moments
    mc[t] = Point2f(mu[t].m10 / mu[t].m00, mu[t].m01 / mu[t].m00); // compute centroid
    measurement(0) = mc[t].x;
    measurement(1) = mc[t].y;
    p = kalmanPredict();
    Point center = Point(selection.x + (selection.width / 2), selection.y + (selection.height / 2)); // centroid of the selection
    s = kalmanCorrect(center.x, center.y);
    // I don't know what to do next.
}
I have the set of centroid points for the moving targets; how do I use this data in conjunction with the Hungarian algorithm?
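One hedged sketch of the assignment step, using SciPy's linear_sum_assignment as a ready-made Hungarian solver rather than hand-rolling one (the function name, the max_dist gate, and the sample points are all mine):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_centroids(prev_pts, curr_pts, max_dist=50.0):
    """Assign current-frame centroids to previous-frame tracks by
    minimising the total Euclidean distance; detections matched farther
    than max_dist, or left unmatched, count as new targets."""
    prev = np.asarray(prev_pts, dtype=float)
    curr = np.asarray(curr_pts, dtype=float)
    # pairwise distance matrix: rows = previous tracks, cols = detections
    cost = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
    matched_curr = {c for _, c in matches}
    new_targets = sorted(set(range(len(curr_pts))) - matched_curr)
    return matches, new_targets

# two existing tracks, three detections: the third is a new target
matches, new = match_centroids([(10, 10), (100, 100)],
                               [(102, 98), (12, 11), (200, 200)])
```

The idea would be to feed the Kalman-predicted positions in as prev_pts and the per-frame centroids mc as curr_pts: matched pairs go to kalmanCorrect, unmatched detections spawn new tracks.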
I have a photo where a person holds a sheet of paper. I'd like to detect the rectangle of that sheet of paper.
I have tried following different tutorials from OpenCV and various SO answers and sample code for detecting squares / rectangles, but the problem is that they all rely on contours of some kind.
If I follow the squares.cpp example, I get the following results from contours:
As you can see, the fingers are part of the contour, so the algorithm does not find the square.
I also tried the HoughLines() approach, but I get similar results to the above:
I can, however, detect the corners reliably:
There are other corners in the image, but I'm limiting total corners found to < 50 and the corners for the sheet of paper are always found.
Is there some algorithm for finding a rectangle from multiple corners in an image? I can't seem to find an existing approach.
You can apply a morphological filter to close the gaps in your edge image. Then, if you find the contours, you can detect an inner closed contour as shown below. Find the convex hull of this contour to get the rectangle.
Closed edges:
Contour:
Convex hull:
In the code below I've used an arbitrary kernel size for the morphological filter and filtered out the contour of interest using an area-ratio threshold. You can substitute your own criteria.
Code
Mat im = imread("Sh1Vp.png", 0); // the edge image
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(11, 11));
Mat morph;
morphologyEx(im, morph, CV_MOP_CLOSE, kernel); // close gaps in the edges

int rectIdx = 0;
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
findContours(morph, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
for (size_t idx = 0; idx < contours.size(); idx++)
{
    RotatedRect rect = minAreaRect(contours[idx]);
    double areaRatio = abs(contourArea(contours[idx])) / (rect.size.width * rect.size.height);
    if (areaRatio > .95) // nearly rectangular contour
    {
        rectIdx = idx;
        break;
    }
}

// get the convex hull of the contour
vector<Point> hull;
convexHull(contours[rectIdx], hull, false, true);

// visualization
Mat rgb;
cvtColor(im, rgb, CV_GRAY2BGR);
drawContours(rgb, contours, rectIdx, Scalar(0, 0, 255), 2);
for (size_t i = 0; i < hull.size(); i++)
{
    line(rgb, hull[i], hull[(i + 1) % hull.size()], Scalar(0, 255, 0), 2);
}
I have two contours, A and B, and I want to check if they intersect. Both A and B are vectors of type cv::Point and are of different sizes.
To check for intersection, I was attempting a bitwise_and. This throws an exception because the inputs are of different sizes. How do I fix this?
Edit:
The attached image should give a better idea about the issue. The car is tracked by a blue contour and the obstacle by a pink contour. I need to check for the intersection.
A simple, though perhaps not the most efficient, way would be to use drawContours to create two images: one with the contour of the car and one with the contour of the obstacle.
Then AND them together, and any point that is still positive will be a point of intersection.
Some Python code (I use the Python interface, so the C++ syntax would differ, but it should be simple enough for you to convert):
import cv2
import numpy as np  # just for matrix manipulation; in C/C++ use cv::Mat

# find contours; suppose this yields the contours of just the car and the obstacle
contours = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]

# create an image filled with zeros, single-channel, same size as img
blank = np.zeros(img.shape[0:2])

# copy each of the contours (assuming there are just two) to its own image,
# filled with a '1'
img1 = blank.copy()
cv2.drawContours(img1, contours, 0, 1)
img2 = blank.copy()
cv2.drawContours(img2, contours, 1, 1)

# now AND the two together
intersection = np.logical_and(img1, img2)

# or we could just add img1 to img2 and pick all points that sum to 2 (1+1=2):
intersection2 = (img1 + img2) == 2
If I look at intersection I will get an image that is 1 where the contours intersect and 0 everywhere else.
Alternatively you could fill in the entire contour (not just the contour but fill in the inside too) with drawContours( blank.copy(), contours, 0, 1, thickness=-1 ) and then the intersection image will contain the area of intersection between the contours.
If you first sort your vectors, using pretty much any consistent sorting criterion you can come up with, you can use std::set_intersection directly on the vectors. This may be faster than the accepted answer when the contours are short compared to the image size.
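A Python analogue of that idea, hashing the points rather than sorting them (note this only detects contours that share actual boundary pixels, not contours whose interiors overlap; it also assumes plain (x, y) point lists rather than OpenCV's Nx1x2 arrays):

```python
def common_contour_points(contour1, contour2):
    """Return the points present in both contours: a set-based
    equivalent of sorting both vectors and running set_intersection."""
    pts1 = {tuple(p) for p in contour1}
    return sorted(p for p in map(tuple, contour2) if p in pts1)

a = [(0, 0), (1, 0), (2, 0)]
b = [(2, 0), (3, 0), (0, 0)]
print(common_contour_points(a, b))  # [(0, 0), (2, 0)]
```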
I have found the Clipper library quite useful for these kinds of purposes. (It's straightforward to transform vectors of cv::Point to Clipper Path objects.)
C++ tested code, based on mathematical.coffee's answer:
// Note: shift_contour(contour, offset) is assumed to translate every point of
// the contour by the given offset (and shift back with a negated offset).
vector<Point> merge_contours(vector<Point>& contour1, vector<Point>& contour2, int type) {
    // get work area
    Rect work_area = boundingRect(contour1) | boundingRect(contour2);
    Mat merged = Mat::zeros(work_area.size(), CV_8UC1);
    Mat contour1_im = Mat::zeros(work_area.size(), CV_8UC1);
    Mat contour2_im = Mat::zeros(work_area.size(), CV_8UC1);

    // draw each contour, filled, on its own image
    vector<vector<Point> > shifted1;
    shifted1.push_back(shift_contour(contour1, work_area.tl()));
    drawContours(contour1_im, shifted1, -1, 255, -1);
    vector<vector<Point> > shifted2;
    shifted2.push_back(shift_contour(contour2, work_area.tl()));
    drawContours(contour2_im, shifted2, -1, 255, -1);
    //imshow("contour1 debug", contour1_im);
    //imshow("contour2 debug", contour2_im);

    if (type == 0)
        // intersect
        bitwise_and(contour1_im, contour2_im, merged);
    else
        // unite
        bitwise_or(contour1_im, contour2_im, merged);
    //imshow("merge contour debug", merged);

    // find the contour of the merged region
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    findContours(merged, contours, hierarchy, RETR_TREE, CHAIN_APPROX_SIMPLE, Point(0, 0));
    if (contours.size() > 1) {
        printf("Warn: merge_contours produced more than one contour.\n");
    }
    return shift_contour(contours[0], work_area.tl() * -1);
}