I am trying to detect a rectangle using findContours, but I don't get any contours from the following image.
I can't detect any contours in the image. Is findContours a bad fit for this image, or should I use the Hough transform instead?
UPDATE: I have updated the source code to use an approximated polygon,
but I still get an outlier bounding rect; I can't find the smallest rectangle that is in the screenshot.
I have another case in which the current solution doesn't work, even when adding erosion or dilation.
image 2
and here is the code
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main()
{
cv::Mat input = cv::imread("heightmap.png");
RNG rng(12345);
// convert to grayscale (you could load as grayscale instead)
cv::Mat gray;
cv::cvtColor(input,gray, CV_BGR2GRAY);
// compute mask (you could use a simple threshold if the image is always as good as the one you provided)
cv::Mat mask;
cv::threshold(gray, mask, 0, 255,CV_THRESH_OTSU);
cv::namedWindow("threshold");
cv::imshow("threshold",mask);
// find contours (if always so easy to segment as your image, you could just add the black/rect pixels to a vector)
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(mask,contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
cv::Mat drawing = cv::Mat::zeros( mask.size(), CV_8UC3 );
vector<vector<cv::Point> > contours_poly( contours.size() );
vector<cv::Rect> boundRect( contours.size() );
for( int i = 0; i < contours.size(); i++ )
{
approxPolyDP( cv::Mat(contours[i]), contours_poly[i], 3, true );
boundRect[i] = boundingRect( cv::Mat(contours_poly[i]) );
}
for( int i = 0; i< contours.size(); i++ )
{
cv::Scalar color = cv::Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );
rectangle( drawing, boundRect[i].tl(), boundRect[i].br(), color, 2, 8, 0 );
}
// display
cv::imshow("input", input);
cv::imshow("drawing", drawing);
cv::waitKey(0);
return 0;
}
The code you are using looks like it's from this question.
It uses the BinaryInv threshold type because it's detecting a black shape on a white background.
Your example is the opposite, so you should tweak your code to use the Binary threshold type instead (or negate the image).
Without this fix, findContours will detect the perimeter of the image, which will be the biggest contour.
So I don't think the code is failing to detect contours; it's just not finding the "biggest contour" you expect.
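For example, a minimal tweak of the threshold call from your snippet (assuming the same gray and mask variables) could look like this:
// Plain binary threshold (bright shape stays white) combined with Otsu,
// instead of the inverted threshold used in the linked answer:
cv::threshold(gray, mask, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
// ...or keep your current call and simply negate the result afterwards:
// cv::bitwise_not(mask, mask);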
Even with that fixed, the code you posted won't fit a rectangle to the rectangle in your example image, as the most obvious rectangular feature doesn't have a clean border. The approxPolyDP suggestion in the linked question might help, but you'll have to improve the source image.
See this question for a comparison of this and Hough methods for finding rectangles.
Edit
You should be able to separate the rectangle in your example image from the other blob by calling Erode (3x3) twice.
You'll have to replace selecting the biggest contour with selecting the squarest.
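A rough sketch of those two steps (this is not the poster's code; it scores "squareness" as how well each contour fills its rotated bounding rectangle, and the variable names are just illustrative):
// Erode twice with the default 3x3 kernel to detach the rectangle from the neighbouring blob.
cv::Mat eroded;
cv::erode(mask, eroded, cv::Mat(), cv::Point(-1, -1), 2);

std::vector<std::vector<cv::Point> > contours;
cv::findContours(eroded, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

// Instead of the biggest contour, keep the one that best fills its rotated bounding rect.
int bestIdx = -1;
double bestRatio = 0.0;
for (size_t i = 0; i < contours.size(); i++)
{
    cv::RotatedRect rr = cv::minAreaRect(contours[i]);
    double rectArea = rr.size.width * rr.size.height;
    if (rectArea <= 0)
        continue;
    double ratio = cv::contourArea(contours[i]) / rectArea;
    if (ratio > bestRatio)
    {
        bestRatio = ratio;
        bestIdx = (int)i;
    }
}
// bestIdx now indexes the most rectangle-like contour (or -1 if none was found).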
Related
I have an image with a curved line like this:
I couldn't find a technique to straighten a curved line using OpenCV. It is similar to this post, Straightening a curved contour, but my question is specifically about coding it with OpenCV (C++ preferred).
So far, I'm only able to find the contour of the curved line.
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
Mat src; Mat src_gray;
src = imread("D:/2.jpg");
cvtColor(src, src_gray, COLOR_BGR2GRAY);
cv::blur(src_gray, src_gray, Size(1, 15));
Canny(src_gray, src_gray, 100, 200, 3);
/// Find contours
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
RNG rng(12345);
findContours(src_gray, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
/// Draw contours
Mat drawing = Mat::zeros(src_gray.size(), CV_8UC3);
for (int i = 0; i < contours.size(); i++)
{
drawContours(drawing, contours, i, (255), 1, 8, hierarchy, 0, Point());
}
imshow("Result window", drawing);
imwrite("D:/C_Backup_Folder/Ivan_codes/VideoStitcher/result/2_res.jpg", drawing);
cv::waitKey();
return 0;
}
But I have no idea how to determine which lines are curved and which are not, or how to straighten them. Is it possible? Any help would be appreciated.
Here is my suggestion:
Before everything, resize your image into a much bigger image (for example, 5 times bigger). Then do what you did before and get the contours. Find the right-most pixel of each contour, then go over all the pixels of that contour, compute the horizontal distance of each pixel to the right-most pixel, and shift that pixel's entire row by that distance. This method applies a right shift to some rows and a left shift to others.
If you have multiple contours, calculate this shift value for each of them in every row, compute their mean, and shift each row according to that mean value.
At the end, resize your image back. This is the simplest and fastest thing I could think of.
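A minimal sketch of that per-row shift, assuming a grayscale input with a single dominant contour (the file names and the 5x factor are placeholders, not part of the suggestion above):
#include <opencv2/opencv.hpp>
#include <map>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    Mat src = imread("curve.jpg", IMREAD_GRAYSCALE);  // placeholder file name
    Mat big;
    resize(src, big, Size(), 5, 5, INTER_CUBIC);      // work on an enlarged copy

    Mat edges;
    Canny(big, edges, 100, 200);
    vector<vector<Point> > contours;
    findContours(edges, contours, RETR_EXTERNAL, CHAIN_APPROX_NONE);

    // For every row, remember the right-most contour x, and the overall right-most x.
    map<int, int> rowToX;
    int globalMaxX = 0;
    for (size_t i = 0; i < contours.size(); i++)
        for (size_t j = 0; j < contours[i].size(); j++)
        {
            const Point& p = contours[i][j];
            if (!rowToX.count(p.y) || p.x > rowToX[p.y]) rowToX[p.y] = p.x;
            if (p.x > globalMaxX) globalMaxX = p.x;
        }

    // Shift each row so that its contour pixel lines up with the right-most column.
    Mat straightened = Mat::zeros(big.size(), big.type());
    for (map<int, int>::iterator it = rowToX.begin(); it != rowToX.end(); ++it)
    {
        int y = it->first;
        int shift = globalMaxX - it->second;          // how far this row must move
        for (int x = 0; x + shift < big.cols; x++)
            straightened.at<uchar>(y, x + shift) = big.at<uchar>(y, x);
    }

    resize(straightened, straightened, src.size());   // scale back down
    imwrite("straightened.jpg", straightened);
    return 0;
}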
I have a photo where a person holds a sheet of paper. I'd like to detect the rectangle of that sheet of paper.
I have tried following different tutorials from OpenCV and various SO answers and sample code for detecting squares / rectangles, but the problem is that they all rely on contours of some kind.
If I follow the squares.cpp example, I get the following results from contours:
As you can see, the fingers are part of the contour, so the algorithm does not find the square.
I also tried the HoughLines() approach, but I get similar results to the above:
I can detect the corners reliably, though:
There are other corners in the image, but I'm limiting total corners found to < 50 and the corners for the sheet of paper are always found.
Is there some algorithm for finding a rectangle from multiple corners in an image? I can't seem to find an existing approach.
You can apply a morphological filter to close the gaps in your edge image. Then, if you find the contours, you can detect an inner closed contour as shown below. Then find the convex hull of this contour to get the rectangle.
Closed edges:
Contour:
Convexhull:
In the code below I've just used an arbitrary kernel size for the morphological filter and filtered out the contour of interest using an area-ratio threshold. You can use your own criteria instead of those.
Code
Mat im = imread("Sh1Vp.png", 0); // the edge image
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(11, 11));
Mat morph;
morphologyEx(im, morph, CV_MOP_CLOSE, kernel);
int rectIdx = 0;
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
findContours(morph, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
for (size_t idx = 0; idx < contours.size(); idx++)
{
RotatedRect rect = minAreaRect(contours[idx]);
double areaRatio = abs(contourArea(contours[idx])) / (rect.size.width * rect.size.height);
if (areaRatio > .95)
{
rectIdx = idx;
break;
}
}
// get the convexhull of the contour
vector<Point> hull;
convexHull(contours[rectIdx], hull, false, true);
// visualization
Mat rgb;
cvtColor(im, rgb, CV_GRAY2BGR);
drawContours(rgb, contours, rectIdx, Scalar(0, 0, 255), 2);
for(size_t i = 0; i < hull.size(); i++)
{
line(rgb, hull[i], hull[(i + 1)%hull.size()], Scalar(0, 255, 0), 2);
}
I am a bit new to opencv and could use some help. I want to detect ASL hand signs.
For detecting hands, I can use either detection by skin color or a Haar classifier. I already detect hands, but the problem is detecting the hand shape.
I can get the current hand shape using the algorithm described here, so the problem is how do I compare this shape to my database of shapes?
I tried comparing them using the algorithm described here, which detects similar features between images. The problem is that this will match it with all the hands, since... well, it detects them as hands. For instance, check this image: it should point only to V, but it detects features in W and R, too.
I want my final result to be like here, so how can I compare image shapes? Is my approach wrong?
I was thinking that detection by convex hull won't work, because most of the signs are closed fists. Check O, for instance: it has no open fingers. So I thought that trying to compare contours would be best. How do I compare them, though? FLANN doesn't seem to work, or I'm doing it wrong.
Would a Haar cascade classifier work? Or would it detect two hands in different positions as hands as well?
Or is there another way to match shapes? That could solve my problem, but I couldn't find any example that does this for custom shapes, only for ones like rectangles, circles and triangles.
Update
Ok, I've been playing a bit with matchShapes as berak told me. Here's my code below (it's a bit messy, as I'm currently testing).
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace cv;
using namespace std;
Mat src; Mat src_gray;
int thresh = 10;
int max_thresh = 300;
/// Function header
void thresh_callback(int, void* );
/** @function main */
int main( int argc, char** argv )
{
/// Load source image and convert it to gray
src = imread( argv[1], 1 );
/// Convert image to gray and blur it
cvtColor( src, src_gray, CV_BGR2GRAY );
blur( src_gray, src_gray, Size(3,3) );
/// Create Window
const char* source_window = "Source";
namedWindow( source_window, CV_WINDOW_AUTOSIZE );
imshow( source_window, src );
createTrackbar( " Canny thresh:", "Source", &thresh, max_thresh, thresh_callback );
thresh_callback( 0, 0 );
waitKey(0);
return(0);
}
/** @function thresh_callback */
void thresh_callback(int, void* )
{
Mat canny_output;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
double largest_area=0;
int largest_contour_index=0;
Rect bounding_rect;
/// Detect edges using canny
Canny( src_gray, canny_output, thresh, thresh*2, 3 );
/// Find contours
findContours( canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
cout<<contours.size()<<endl;
/// Draw contours
Mat drawing = Mat::zeros( canny_output.size(), CV_8UC3 );
vector<vector<Point> >hull( contours.size() );
for( int i = 0; i< contours.size(); i++ )
{ Scalar color = Scalar( 255,255,255 );
convexHull( Mat(contours[i]), hull[i], false );
// imshow("conturul"+to_string(i), drawing );
double a=contourArea( hull[i],false); // Find the area of contour
if(a>largest_area){
largest_area=a;
largest_contour_index=i; //Store the index of largest contour
bounding_rect=boundingRect(hull[i]);}
}
cout<<"zaindex "<<largest_contour_index<<endl;
Scalar color = Scalar( 255,255,255 );
drawContours( drawing, hull, largest_contour_index, color, 2, 8, hierarchy, 0, Point() );
namedWindow( "maxim", CV_WINDOW_AUTOSIZE );
imshow( "maxim", drawing );
Mat rects=imread( "scene.png", 1 );
rectangle(rects, bounding_rect, Scalar(0,255,0),1, 8,0);
imshow( "maxim2", rects );
/// Show in a window
}
The problem with it is the definition of a contour. These hand 'contours' are actually made up of multiple contours themselves, and the image that I showed earlier is actually made of these multiple contours overlapping each other. matchShapes accepts an array of Points as a parameter, but the contours are arrays of arrays of Points.
So my question is: how can I merge my contours vector into a single contour that I can pass to matchShapes? In other words, how can I make a single contour from multiple overlapping contours?
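To illustrate the kind of merge I mean, a naive sketch would just concatenate all the points into one vector (whether this is the right approach is exactly my question; referenceContour is a placeholder for a shape from my database):
// Naive merge: concatenate every contour's points into a single point set.
vector<Point> merged;
for (size_t i = 0; i < contours.size(); i++)
    merged.insert(merged.end(), contours[i].begin(), contours[i].end());

// matchShapes can then be given the single point set directly.
double score = matchShapes(merged, referenceContour, CV_CONTOURS_MATCH_I1, 0);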
I am trying to track a custom circular marker in an image, and I need to check that a circle contains a minimum number of other circles/objects. My code for finding circles is below:
void findMarkerContours( int, void* )
{
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
vector<Point> approx;
cv::Mat dst = src.clone();
cv::Mat src_gray;
cv::cvtColor(src, src_gray, CV_BGR2GRAY);
//Reduce noise with a 3x3 kernel
blur( src_gray, src_gray, Size(3,3));
//Convert to binary using canny
cv::Mat bw;
cv::Canny(src_gray, bw, thresh, 3*thresh, 3);
imshow("bw", bw);
findContours(bw.clone(), contours, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
Mat drawing = Mat::zeros( bw.size(), CV_8UC3 );
for (int i = 0; i < contours.size(); i++)
{
Scalar color = Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );
// contour
drawContours( drawing, contours, i, color, 1, 8, vector<Vec4i>(), 0, Point() );
//Approximate the contour with accuracy proportional to contour perimeter
cv::approxPolyDP(cv::Mat(contours[i]), approx, cv::arcLength(cv::Mat(contours[i]), true) *0.02, true);
//Skip small or non-convex objects
if(fabs(cv::contourArea(contours[i])) < 100 || !cv::isContourConvex(approx))
continue;
if (approx.size() >= 8) // More than 6-8 vertices means it's likely a circle
{
drawContours( dst, contours, i, Scalar(0,255,0), 2, 8);
}
imshow("Hopefully we should have circles! Yay!", dst);
}
namedWindow( "Contours", CV_WINDOW_AUTOSIZE );
imshow( "Contours", drawing );
}
As you can see the code to detect circles works quite well:
But now I need to filter out markers that I do not want. My marker is the bottom one. So once I have found a contour that is a circle, I want to check if there are other circular contours that exist within the region of the first circle and finally check the color of the smallest circle.
What method can I take to say if (circle contains 3+ smaller circles || smallest circle is [color] ) -> do stuff?
Take a look at the documentation for
findContours(InputOutputArray image, OutputArrayOfArrays contours, OutputArray hierarchy, int mode, int method, Point offset=Point())
You'll see that there's an optional hierarchy output vector which should be handy for your problem.
hierarchy – Optional output vector, containing information about the image topology. It has as many elements as the number of contours.
For each i-th contour contours[i], the elements hierarchy[i][0], hierarchy[i][1], hierarchy[i][2], and hierarchy[i][3] are set to 0-based indices in contours of the next and previous contours at the same hierarchical level, the first child contour and the parent contour, respectively. If for the contour i there are no next, previous, parent, or nested contours, the corresponding elements of hierarchy[i] will be negative.
When calling findContours with CV_RETR_TREE, you'll get the full hierarchy of each contour that was found.
This doc explains the hierarchy format pretty well.
You are already searching for circles of a certain size
//Skip small or non-convex objects
if(fabs(cv::contourArea(contours[i])) < 100 || !cv::isContourConvex(approx))
continue;
So you can use that to look for circles smaller than the one you've got: instead of only rejecting contours with an area < 100, compare each contour's area against the circle you have already found.
I imagine you can do the same kind of check for color, too...
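For example, a rough sketch (not the original poster's code; countChildren and the area threshold are hypothetical names) of how the hierarchy output could be used to count nested contours inside a candidate circle:
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Count the direct children of contour `idx` that are larger than `minArea`.
// Assumes `contours` and `hierarchy` come from findContours(..., CV_RETR_TREE, ...).
static int countChildren(const std::vector<std::vector<cv::Point> >& contours,
                         const std::vector<cv::Vec4i>& hierarchy,
                         int idx, double minArea)
{
    int count = 0;
    // hierarchy[idx][2] is the first child; hierarchy[child][0] is the next sibling.
    for (int child = hierarchy[idx][2]; child >= 0; child = hierarchy[child][0])
    {
        if (std::fabs(cv::contourArea(contours[child])) >= minArea)
            ++count;
    }
    return count;
}

// Usage inside the contour loop, once contour i has been accepted as a circle:
//   if (countChildren(contours, hierarchy, i, 100.0) >= 3) { /* likely the marker */ }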
I am working in C++ and OpenCV.
I am detecting the big contour in an image because I have a black area in it.
In this case, the area is only horizontal, but it can be in any place.
Mat resultGray;
cvtColor(result,resultGray, COLOR_BGR2GRAY);
medianBlur(resultGray,resultGray,3);
Mat resultTh;
Mat canny_output;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
Canny( resultGray, canny_output, 100, 100*2, 3 );
findContours( canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
vector<Point> best = contours[0];
double max_area = -1;
for( int i = 0; i < contours.size(); i++ ) {
Scalar color = Scalar( 0, 0, 0 );
if(contourArea(contours[i])> max_area)
{
max_area=contourArea(contours[i]);
best=contours[i];
}
}
Mat approxCurve;
approxPolyDP(Mat(best),approxCurve,0.01*arcLength(Mat(best),true),true);
With this, I have the big contour and its approximation (in approxCurve). Now I want to obtain the corners of this approximation and get the image inside this contour, but I don't know how I can do it.
I am using this: How to remove black part from the image?
But I don't understand the last part very well.
Does anyone know how I can obtain the corners? Is there another, simpler way than this?
Thanks for your time,
One much simpler way you could do that is to check the image pixels and find the minimum/maximum coordinates of non-black pixels.
Something like this:
#include <opencv2/opencv.hpp>
#include <limits>

int maxx, maxy, minx, miny;
maxx = maxy = std::numeric_limits<int>::min(); // start low so any non-black pixel raises it
minx = miny = std::numeric_limits<int>::max(); // start high so any non-black pixel lowers it
for(int y=0; y<img.rows; ++y)
{
for(int x=0; x<img.cols; ++x)
{
const cv::Vec3b &px = img.at<cv::Vec3b>(y,x);
if(px(0)==0 && px(1)==0 && px(2)==0)
continue;
if(x<minx) minx=x;
if(x>maxx) maxx=x;
if(y<miny) miny=y;
if(y>maxy) maxy=y;
}
}
cv::Mat subimg;
img(cv::Rect(cv::Point(minx,miny),cv::Point(maxx+1,maxy+1))).copyTo(subimg); // +1: a Rect built from two points excludes the second point, so include the last row/column
In my opinion, this approach is more reliable since you don't have to detect any contour, which could lead to false detections depending on the input image.
As a very efficient alternative, you can sample the original image until you find a pixel that is on, and from there move along its row and along its column to find the first (0,0,0) pixel. This will work unless the good part of the image can also contain (0,0,0) pixels. If that is the case (e.g. a dead pixel), you can add a double check on the neighbourhood of this (0,0,0) pixel (it should contain other (0,0,0) pixels).
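A minimal sketch of that scan, assuming a BGR image whose centre pixel lies in the good region (the helper name and the centre-pixel assumption are mine, not part of the description above):
#include <opencv2/opencv.hpp>

static bool isBlack(const cv::Vec3b& px)
{
    return px[0] == 0 && px[1] == 0 && px[2] == 0;
}

// Hypothetical helper: starting from the centre pixel, walk along its row and
// column until the first pure-black pixel in each direction, and return the
// rectangle of the good region found this way.
cv::Rect scanFromCentre(const cv::Mat& img)
{
    int cy = img.rows / 2;
    int cx = img.cols / 2;

    int left = 0, right = img.cols - 1, top = 0, bottom = img.rows - 1;
    for (int x = cx; x >= 0; --x)
        if (isBlack(img.at<cv::Vec3b>(cy, x))) { left = x + 1; break; }
    for (int x = cx; x < img.cols; ++x)
        if (isBlack(img.at<cv::Vec3b>(cy, x))) { right = x - 1; break; }
    for (int y = cy; y >= 0; --y)
        if (isBlack(img.at<cv::Vec3b>(y, cx))) { top = y + 1; break; }
    for (int y = cy; y < img.rows; ++y)
        if (isBlack(img.at<cv::Vec3b>(y, cx))) { bottom = y - 1; break; }

    return cv::Rect(cv::Point(left, top), cv::Point(right + 1, bottom + 1));
}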