I want to detect circles in an image using OpenCV and C++. I was able to get started by referring to the official documentation and adjusting the parameters of the sample code written by the OpenCV team.
So, the code I'm working with is as follows (parameters already adjusted):
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
using namespace cv;
int main(int, char** argv)
{
Mat src, src_gray;
/// Read the image
src = imread( argv[1], 1 );
if( !src.data )
{ return -1; }
/// Convert it to gray
cvtColor( src, src_gray, CV_BGR2GRAY );
/// Reduce the noise so we avoid false circle detection
GaussianBlur( src_gray, src_gray, Size(9, 9), 2, 2 );
vector<Vec3f> circles;
/// Apply the Hough Transform to find the circles
HoughCircles( src_gray, circles, CV_HOUGH_GRADIENT, 6.0, 5, 110, 70, 3, 20 );
/// Draw the circles detected
for( size_t i = 0; i < circles.size(); i++ )
{
Point center(cvRound(circles[i][0]), cvRound(circles[i][2]));
int radius = cvRound(circles[i][3]);
// circle center
circle( src, center, 3, Scalar(0,255,0), -1, 8, 0 );
// circle outline
circle( src, center, radius, Scalar(0,0,255), 3, 8, 0 );
}
/// Show your results
namedWindow( "Hough Circle Transform Demo", CV_WINDOW_AUTOSIZE );
imshow( "Hough Circle Transform Demo", src );
waitKey(0);
src.release();
src_gray.release();
return 0;
}
And the image whose circles I want to detect is the following: Test image
These are actually the contours of two blobs that I obtained using cvBlobsLib and redrew as a new image.
The algorithm is able to identify the center of each circle, but when I hit any key to close the program, it crashes... :( and I have to force-close it.
I need to adapt this algorithm to run on a camera feed, so I cannot proceed with the implementation while it crashes like that.
So, does anyone know what could be causing this problem?
I'm doing the development on Visual Studio 2012 and OpenCV version 2.4.2.
If someone could give me a suggestion of what it could be or maybe try running the algorithm, I would be very grateful!
I have four pieces of advice for you.
First: to check whether a Mat is empty, use
if( src.empty() ) // instead of !src.data
Chances are that src.data holds a random (stale) value for an empty Mat.
Second: correct the indices like this:
Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
int radius = cvRound(circles[i][2]);
(actually you don't need cvRound, but whatever).
Third: it is worth checking whether imread understood that you wanted to load the image in color mode, by checking the number of channels:
src.channels()==3
//or
src.type()==CV_8UC3; // that is what you are counting on, really.
Otherwise a conversion like CV_BGR2GRAY applied to an already-gray image could cause weird behaviour.
Fourth: you don't need to release Mats. That's the reason the Mat class was created in the first place: it takes care of releasing its data automatically.
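Putting those four points together, here's a minimal sketch of the corrected loading and validation code (a sketch only, keeping your argv[1] input; the error messages are placeholders):

#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>

using namespace cv;

int main(int, char** argv)
{
    // imread with flag 1 asks for a 3-channel BGR image
    Mat src = imread(argv[1], 1);

    // empty() is the reliable emptiness test, not !src.data
    if (src.empty())
    {
        std::cerr << "Could not load image" << std::endl;
        return -1;
    }

    // sanity-check that we really got the 3-channel image we asked for
    if (src.type() != CV_8UC3)
    {
        std::cerr << "Unexpected image type" << std::endl;
        return -1;
    }

    Mat src_gray;
    cvtColor(src, src_gray, CV_BGR2GRAY);

    // ... GaussianBlur / HoughCircles as before, with indices 0, 1, 2 ...
    // no release() calls needed: Mat frees its data when it goes out of scope
    return 0;
}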
I don't see anything obvious except that you are overrunning the Vec3f bounds:
Point center(cvRound(circles[i][0]), cvRound(circles[i][2]));
int radius = cvRound(circles[i][3]);
Instead of index 2 and 3, I think you meant 1 and 2.
That wouldn't necessarily be causing the crash (by corrupting the stack or heap), but then again it is undefined behaviour...
The other thing I suggest is removing the two lines that follow the waitKey call:
src.release();
src_gray.release();
These are handled automatically by the object's destructor, so I don't see why you need to do it manually. That might not change a thing, of course.
From there, if you are still getting crashes you should start omitting sections of your code until you can isolate the one that crashes it.
I started feeling suspicious about the environment, so I got a friend who had OpenCV all set up to try out my code and he could run it with no problem...
So I reinstalled everything, but this time I chose Microsoft Visual Studio 2010 SP1 and OpenCV 2.4.3, and it worked correctly.
If someone is having the same problem, I recommend downgrading to VS2010. Also, this video installation guide was really helpful when I was setting up the environment!
Thank you :)
I was having the same problem. Please ensure that when running your application in Release mode, you are using the OpenCV release DLLs. Doing this solved my problem.
Reference:
https://code.ros.org/trac/opencv/ticket/953
Thought I'd try my hand at a little (auto)correlation/convolution today in OpenCV and make my own 2D filter kernel.
Following OpenCV's 2D Filter Tutorial, I discovered that making your own kernels for OpenCV's filter2D might not be that hard. However, I'm getting unhandled exceptions when I try to use one.
Code with comments relating to the issue here:
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char** argv) {
//Loading the source image
Mat src;
src = imread( "1.png" );
//Output image of the same size and the same number of channels as src.
Mat dst;
//Mat dst = src.clone(); //didn't help...
//desired depth of the destination image
//negative so dst will be the same as src.depth()
int ddepth = -1;
//the convolution kernel, a single-channel floating point matrix:
Mat kernel = imread( "kernel.png" );
kernel.convertTo(kernel, CV_32F); //<<not working
//normalize(kernel, kernel, 1.0, 0.0, 4, -1, noArray()); //doesn't help
//cout << kernel.size() << endl; // ... gives 11, 11
//however, the example from tutorial that does work:
//kernel = Mat::ones( 11, 11, CV_32F )/ (float)(11*11);
//default value (-1,-1) here means that the anchor is at the kernel center.
Point anchor = Point(-1,-1);
//value added to the filtered pixels before storing them in dst.
double delta = 0;
//alright, let's do this...
filter2D(src, dst, ddepth , kernel, anchor, delta, BORDER_DEFAULT );
imshow("Source", src); //<< unhandled exception here
imshow("Kernel", kernel);
imshow("Destination", dst);
waitKey(1000000);
return 0;
}
As you can see, using the tutorial's kernel works fine, but my image crashes the program. I've tried changing the bit depth, normalizing, checking sizes, and commenting out lots of blocks to see where it fails, but I haven't cracked it yet.
The image is '1.png':
And the kernel I want, 'kernel.png':
I'm trying to see if I can get a hotspot in dst at the point where the eye catchlight is (the kernel I've chosen is the catchlight). I know there are other ways to do this, but I'm interested to see how effective convolving the catchlight over itself is. (I think that's called autocorrelation?)
Direct questions:
why the crash?
is the crash indicating a fundamental conceptual mistake?
or (hopefully) is it just some (silly) fault in the code?
Thanks in advance for any help :)
You should post the assertion error; it would help people answer you rather than guess at why it crashes. Anyway, I have posted below the likely error and the solution for the filter2D convolution.
Error 1:
OpenCV Error: Assertion failed (src.channels() == 1 && func != 0) in cv::countNonZero,
file C:\builds\2_4_PackSlave-win32-vc12-shared\opencv\modules\core\src\stat.cpp, line 549
Solution: your input image and the kernel should be grayscale; in particular, filter2D expects a single-channel kernel. You can use the flag 0 in imread (e.g. cv::imread("kernel.png", 0) reads the image as grayscale). If you want to apply different kernels to different channels, split the image into separate color planes using split() and process them individually.
I don't see anything other than the above error that may crash. Kernel dimensions should be odd numbers, and your kernel image is 11x11, which is fine. If it still crashes, kindly provide more information to help track it down.
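For reference, here is a minimal sketch of that fix (same hypothetical file names as in the question; the normalization factor assumes the kernel is not all zeros):

#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"

using namespace cv;

int main()
{
    // load both the image and the kernel as single-channel grayscale (flag 0)
    Mat src = imread("1.png", 0);
    Mat kernel = imread("kernel.png", 0);

    // convert the kernel to float and scale it so its entries sum to 1,
    // otherwise the filtered output saturates to white
    kernel.convertTo(kernel, CV_32F, 1.0 / sum(kernel)[0]);

    Mat dst;
    filter2D(src, dst, -1, kernel);

    imshow("Destination", dst);
    waitKey(0);
    return 0;
}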
I'm trying to make a program that detects an object of any shape using a video camera/webcam, based on a Canny filter and the contour-finding function. Here is my program:
int main( int argc, char** argv )
{
    CvCapture *cam;
    CvMoments moments;
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq* contours = NULL;
    CvSeq* contours2 = NULL;
    CvPoint2D32f center;
    int i;

    cam = cvCaptureFromCAM(0);
    if(cam==NULL){
        fprintf(stderr,"Cannot find any camera. \n");
        return -1;
    }
    while(1){
        IplImage *img = cvQueryFrame(cam);
        if(img==NULL){ return -1; }

        IplImage *src_gray = cvCreateImage( cvSize(img->width,img->height), 8, 1);
        cvCvtColor( img, src_gray, CV_BGR2GRAY );
        cvSmooth( src_gray, src_gray, CV_GAUSSIAN, 5, 11);
        cvCanny(src_gray, src_gray, 70, 200, 3);
        cvFindContours( src_gray, storage, &contours, sizeof(CvContour), CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, cvPoint(0,0));
        if(contours==NULL){ contours = contours2; }
        contours2 = contours;

        cvMoments(contours, &moments, 1);
        double m_00 = cvGetSpatialMoment( &moments, 0, 0 );
        double m_10 = cvGetSpatialMoment( &moments, 1, 0 );
        double m_01 = cvGetSpatialMoment( &moments, 0, 1 );
        float gravityX = (m_10 / m_00) - 150;
        float gravityY = (m_01 / m_00) - 150;
        if(gravityY >= 0 && gravityX >= 0){
            printf("center point=(%.f, %.f) \n", gravityX, gravityY);
        }

        for (; contours != 0; contours = contours->h_next){
            CvScalar color = CV_RGB(250,0,0);
            cvDrawContours(img, contours, color, color, -1, -1, 8, cvPoint(0,0));
        }
        cvShowImage( "Input", img );
        cvShowImage( "Contours", src_gray );
        cvReleaseImage(&src_gray);   // avoid leaking one image per frame
        cvClearMemStorage(storage);
        if(cvWaitKey(33) >= 0) break;
    }
    cvDestroyWindow("Contours");
    cvDestroyWindow("Input");   // the window was created as "Input", not "Source"
    cvReleaseCapture(&cam);
    return 0;
}
This program detects all contours captured by the camera and prints the average coordinate of the contours. My question is how to filter out only one object/contour so I can get a more precise (x,y) position of the object. If possible, can anyone show me how to mark the center of the object using its (x,y) coordinates?
Thanks in advance. Cheers
P.S.: Sorry, I couldn't upload a screenshot yet, but if it helps, here's the link.
Edit: To make my question clearer:
For example, if I only want to filter out the square from my screenshot above, what should I do?
The object I want to filter out has the biggest contour area and, most importantly, has a shape (any shape), not a straight or curved line.
I'm still experimenting with the smooth and Canny values, so if anybody has trouble detecting the contours with my program, please alter the values.
I think this can be solved fairly easily. I would suggest some morphological operations before contour detection. I would also suggest filtering out smaller elements and keeping the biggest element as the only one left in the image.
I suggest:
for filtering out lines (straight or curved): you have to decide what you yourself consider the border between a "line" and a "shape". Let's say you consider all objects with a thickness of 5 pixels or more to be shapes, while those less than 5 pixels across are lines. A morphological opening that uses a 5x5 square or a 3-pixel-sized diamond as a structuring element would take care of this.
for filtering out small objects in general: if the objects are of arbitrary shape, a purely morphological opening won't do; you need an algebraic opening. A special type of algebraic opening is the area opening: an operation that removes all connected components in the image whose (pixel) area is smaller than a given threshold. If you have an upper bound on the size of uninteresting objects, or a lower bound on the size of interesting ones, use that value as the threshold. You can probably get a similar effect with a larger morphological opening, but it will not be as flexible.
for filtering out all the objects except the largest: labeling the connected components should work (a C++ sketch follows below). On a binary (black & white) image, this transformation produces a greyscale image in which the background is labeled 0 (black) and each component gets a different, increasing grey value, so the pixels of each object are marked by a distinct value. You can then look at the grey-level histogram, find the grey value with the most pixels, and set all the other grey levels to 0 (black); the only object left in the image is the biggest one.
The suggestions are ordered from the simplest to the most complex. Still, I think OpenCV can help with any of these: morphological erosion, dilation, opening, and closing are all implemented in OpenCV. You might need to construct an algebraic opening operator on your own (or combine OpenCV's basic morphology), but I'm sure OpenCV can help you with both labeling the connected components and examining the histogram of the resulting greyscale image.
In the end, when only pixels from one object are left, you do the Canny contour detection.
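If the C++ API is an option, here is a rough sketch of the "keep only the largest component" idea using contours instead of explicit labeling (largestContourCenter is a made-up helper name; bin is assumed to be your thresholded/Canny output):

#include "opencv2/imgproc/imgproc.hpp"
#include <vector>

using namespace cv;

// Find the center of the largest contour in a binary image, or (-1,-1)
// if nothing is found. findContours modifies its input, so pass a copy.
Point largestContourCenter(Mat bin)
{
    std::vector<std::vector<Point> > contours;
    findContours(bin, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    double bestArea = 0;
    int bestIdx = -1;
    for (size_t i = 0; i < contours.size(); i++)
    {
        double area = contourArea(contours[i]);
        if (area > bestArea) { bestArea = area; bestIdx = (int)i; }
    }
    if (bestIdx < 0)
        return Point(-1, -1);

    // center of mass from the spatial moments, as in the question's code
    Moments m = moments(contours[bestIdx]);
    return Point(cvRound(m.m10 / m.m00), cvRound(m.m01 / m.m00));
}

Since straight and thin curved lines have near-zero area, this also discards them, which matches the filtering requirement in the question; marking the center is then just circle(img, c, 3, Scalar(0,255,0), -1).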
This is a blob-processing problem that cannot be solved (easily) by OpenCV itself. Have a look at cvBlobsLib; this library extends OpenCV with functions/classes for connected-component labeling.
http://opencv.willowgarage.com/wiki/cvBlobsLib
I have performed the closing morphological operation, and I am getting different results with the C and C++ APIs (OpenCV 2.4.2).
Input:
With OpenCV 'C':
//Set ROI
//Perform Gaussian smoothing
//Perform Canny edge analysis
cvMorphologyEx( src, dst, temp, NULL, CV_MOP_CLOSE, 5 ); // NULL element = default structuring element
RESULT:
http://i47.tinypic.com/33e0yfb.png
With OpenCV 'C++':
//Set ROI
//Perform Gaussian smoothing
//Perform Canny edge analysis
cv::morphologyEx( src, dst, cv::MORPH_CLOSE, cv::Mat(), cv::Point(-1,-1), 5 );
RESULT:
http://i50.tinypic.com/i5vxjo.png
As you can see, the C++ API yields an output with a white/gray border color, so the results from the two APIs differ.
I have tried different borderType values with the C++ API, but it always yields the same result.
How can I get the same output as the C API in C++? I need it because it affects the detected contours.
Thanks in advance
Thank you everybody for answering this question. I have found my error; I will describe it briefly below. I hope it helps others facing this problem.
1) I had executed the C and C++ commands on a ROI image. Apparently, the OpenCV 'C' and 'C++' APIs treat ROIs differently.
2) In 'C', a ROI is treated as a completely separate image. Hence, when you execute functions such as cvSmooth or cvDilate, where one must specify a border-pixel extrapolation method, the 'C' API does not refer back to the original image for pixels beyond the leftmost/rightmost/topmost/bottommost pixel; it actually extrapolates the pixel values according to the method you specified.
3) But in 'C++', I have found that it always refers back to the original image for pixels beyond those borders. Hence, the border-pixel extrapolation method you specify doesn't affect your output if there are pixels in the original image around your ROI.
I think the C++ API applies the border-pixel extrapolation method to the original image instead of the ROI, unlike the 'C' API. I don't know whether this is a bug; I haven't completely read the OpenCV 2.4.2 C++ API documentation. (Please correct me if I am wrong.)
To support my claim, I have posted the input/output images below:
Output for 'C' and C++ API:
INPUT:
[image: input]
OpenCV 'C' API:
IplImage *src = cvLoadImage("input.png", 0);
cvSetImageROI( src, cvRect(33,19,250,110));
cvSaveImage( "before_gauss.png", src );
cvSmooth( src, src, CV_GAUSSIAN );
cvSaveImage("after_gauss.png", src);
IplConvKernel *element = cvCreateStructuringElementEx(3,3,1,1,CV_SHAPE_RECT);
cvCanny( src, src, 140, 40 );
cvSaveImage("after_canny.png", src);
cvDilate( src, src, element, 5);
cvSaveImage("dilate.png", src);
OUTPUT:
[image: before_gauss]
[image: after_gauss]
[image: after_canny]
[image: dilate]
OpenCV 'C++' API:
cv::Mat src = cv::imread("input.png", 0);
cv::Mat src_ROI = src( cv::Rect(33,19,250,110));
cv::imwrite( "before_gauss.png", src_ROI );
cv::GaussianBlur( src_ROI, src_ROI, cv::Size(3,3),0 );
cv::imwrite( "after_gauss.png", src_ROI );
cv::Mat element = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(3, 3), cv::Point(1,1));
cv::Canny( src_ROI, src_ROI, 140, 40);
cv::imwrite( "after_canny.png", src_ROI );
cv::dilate( src_ROI, src_ROI, element, cv::Point(1,1), 5);
cv::imwrite( "dilate.png", src_ROI );
OUTPUT:
[image: before_gauss]
[image: after_gauss] (note: the borders are no longer completely black; they are grayish)
[image: after_canny]
[image: dilate]
SOLUTION:
Create a separate copy of the ROI and use it for further analysis:
cv::Mat new_src_ROI;
src_ROI.copyTo( new_src_ROI );
Use new_src_ROI for further analysis.
If anyone has a better solution, please post it below.
The defaults are not the same between C and C++ - especially the structuring element.
In C: the default structuring element is:
cvCreateStructuringElementEx(3, 3, 1, 1, CV_SHAPE_RECT)
whereas in C++, the default structuring element is:
getStructuringElement(MORPH_RECT, Size(1+iterations*2,1+iterations*2));
You should specify all fields (including the anchor) if you want the same results.
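For example, a sketch of making the two calls use the same explicit element (variable names are placeholders; the ROI border issue described in the answer above still applies on top of this):

// C API: 3x3 rectangle, anchor at (1,1), 5 iterations
IplConvKernel* element = cvCreateStructuringElementEx(3, 3, 1, 1, CV_SHAPE_RECT);
cvMorphologyEx(src, dst, temp, element, CV_MOP_CLOSE, 5);
cvReleaseStructuringElement(&element);

// C++ API: pass the same explicit element instead of cv::Mat()
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3), cv::Point(1, 1));
cv::morphologyEx(srcMat, dstMat, cv::MORPH_CLOSE, kernel, cv::Point(1, 1), 5);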
Check out this sample code from the OpenCV v2.4.2 documentation. You might also want to check this code for using the Canny edge detector. These will hopefully help you track down the error :)
Also note that morphological closing is an idempotent operator, so it can be applied multiple times without changing the result beyond the initial application.
I wanted to try my hand at text recognition, so I used OpenCV to trace out the edges and C++ to find slopes, curves, etc. The edge algorithm works well on big, uncluttered sets of characters, but against small printed text, or text with a lot of background noise such as that embedded in a CAPTCHA, it struggles and the output looks incomplete. My guess was that I hadn't set the threshold values correctly, but trying different values brought no success.
Here is my code:
#include "cv.h"
#include "highgui.h"
using namespace cv;
const int low_threshold = 50;
const int high_threshold = 150;
int main()
{
IplImage* newImg;
IplImage* grayImg;
IplImage* cannyImg;
newImg = cvLoadImage("ocv.bmp",1);
grayImg = cvCreateImage( cvSize(newImg->width, newImg->height), IPL_DEPTH_8U, 1 );
cvCvtColor( newImg, grayImg, CV_BGR2GRAY );
cannyImg = cvCreateImage(cvGetSize(newImg), IPL_DEPTH_8U, 1);
cvCanny(grayImg, cannyImg, low_threshold, high_threshold, 3);
cvNamedWindow ("Source", 1);
cvNamedWindow ("Destination",1);
cvShowImage ("Source", newImg );
cvShowImage ("Destination", cannyImg );
cvWaitKey(0);
cvDestroyWindow ("Source" );
cvDestroyWindow ("Destination" );
cvReleaseImage (&newImg );
cvReleaseImage (&grayImg );
cvReleaseImage (&cannyImg );
return 0;
}
I've looked around the net and have seen some complicated thresholding conditions, like in this code from this site:
% Set direction to either 0, 45, -45 or 90 depending on angle.
[x,y]=size(f1);
for i=1:x-1,
for j=1:y-1,
if ((gradAngle(i,j)>67.5 && gradAngle(i,j)<=90) || (gradAngle(i,j)>=-90 && gradAngle(i,j)<=-67.5))
gradDirection(i,j)=0;
elseif ((gradAngle(i,j)>22.5 && gradAngle(i,j)<=67.5))
gradDirection(i,j)=45;
elseif ((gradAngle(i,j)>-22.5 && gradAngle(i,j)<=22.5))
gradDirection(i,j)=90;
elseif ((gradAngle(i,j)>-67.5 && gradAngle(i,j)<=-22.5))
gradDirection(i,j)=-45;
end
end
end
If this is the solution, can somebody provide me with the C++ equivalent of this algorithm? And if it's not, what else can I do?
The Canny edge detector is a multi-step detector using hysteresis thresholding (it uses two thresholds instead of one) and edge tracking (your last snippet is part of this step). I suggest reading the Wikipedia entry first. One possible solution is to choose the high threshold so that, say, 70% of the image pixels would be classified as edge (initially; you could do this quickly using a histogram), then choose the low threshold as, say, 40% of the high threshold. It might also be a good idea to perform edge detection on image blocks rather than on the whole image, so your algorithm can calculate different thresholds for different areas.
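A rough sketch of that percentile idea, under the assumption that an intensity histogram is a good enough stand-in for the gradient statistics (autoCanny is a made-up name; 70%/40% are the example figures above, not tuned values):

#include "opencv2/imgproc/imgproc.hpp"

using namespace cv;

// Pick the high Canny threshold as the gray level that the darkest
// (1 - fraction) of the pixels fall below, and derive the low one from it.
void autoCanny(const Mat& gray, Mat& edges, double fraction = 0.7)
{
    // 256-bin histogram of the 8-bit grayscale input
    int histSize = 256;
    float range[] = { 0, 256 };
    const float* histRange = range;
    Mat hist;
    calcHist(&gray, 1, 0, Mat(), hist, 1, &histSize, &histRange);

    // walk the cumulative histogram until (1 - fraction) of the pixels are covered
    double total = gray.rows * gray.cols, cum = 0;
    int high = 255;
    for (int i = 0; i < histSize; i++)
    {
        cum += hist.at<float>(i);
        if (cum / total >= 1.0 - fraction) { high = i; break; }
    }

    double low = 0.4 * high;  // low threshold as 40% of the high one
    Canny(gray, edges, low, high);
}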
Note that CAPTCHAs are designed to be hard to segment, and adding noise that breaks edge detection is one technique to achieve this (you might need to smooth the image first).
I have to code an object detector (in this case, a ball) using OpenCV. The problem is that every single search on Google returns me something with FACE DETECTION in it. So I need help on where to start, what to use, etc.
Some info:
The ball doesn't have a fixed color; it will probably be white, but that might change.
I HAVE to use machine learning; it doesn't have to be complex and reliable, and the suggestion is KNN (which is WAY simpler and easier).
After all my searching, I found that calculating the histogram of sample ball-only images and teaching it to the ML could be useful, but my main concern here is that the ball size can and will change (closer to and further from the camera), and I have no idea what to pass to the ML to classify for me. I mean, I can't (or can I?) just test every pixel of the image at every possible size (from, let's say, 5x5 to WxH) and hope to find a positive result.
There might be a non-uniform background, like people or cloth behind the ball, etc.
As I said, I have to use an ML algorithm, which means no Haar or Viola-Jones.
Also, I thought of using contours to find circles in a Canny'ed image; I just have to find a way to transform a contour into a row of data to teach the KNN.
So... suggestions?
Thanks in advance.
;)
Well, basically you need to detect circles. Have you seen cvHoughCircles()? Are you allowed to use it?
This page has good info on detecting things with OpenCV. You might be most interested in section 2.5.
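If the ML requirement is firm, one possible way (my own suggestion, not something from that page) to get the fixed-length "row of data" the question asks about is to use the Hu moments of each contour: seven values that are invariant to scale, so the ball's distance from the camera matters less. A sketch using the OpenCV 2.4 ML module (contourToFeatureRow is a made-up helper; samples/labels are placeholders you would fill from ball and non-ball contours):

#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/ml/ml.hpp"
#include <vector>

using namespace cv;

// Turn one contour into a 7-element feature row of Hu moments.
Mat contourToFeatureRow(const std::vector<Point>& contour)
{
    Moments m = moments(contour);
    double hu[7];
    HuMoments(m, hu);

    Mat row(1, 7, CV_32F);
    for (int i = 0; i < 7; i++)
        row.at<float>(0, i) = (float)hu[i];
    return row;
}

// Train KNN on stacked feature rows (one row per labeled sample contour),
// then classify new contours with find_nearest().
void trainAndClassify(const Mat& samples, const Mat& labels, const Mat& newRow)
{
    CvKNearest knn;
    knn.train(samples, labels);
    float predicted = knn.find_nearest(newRow, 3);  // k = 3 neighbors
    (void)predicted;  // whatever class labels you trained with
}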
This is a small demo I just wrote to detect coins in this picture. Hopefully you can use some part of the code to your advantage.
Input:
Outputs:
// compiled with: g++ circles.cpp -o circles `pkg-config --cflags --libs opencv`
#include <stdio.h>
#include <cv.h>
#include <highgui.h>
#include <math.h>

int main(int argc, char** argv)
{
    IplImage* img = NULL;
    if ((img = cvLoadImage(argv[1])) == 0)
    {
        printf("cvLoadImage failed\n");
        return -1;  // don't continue with a NULL image
    }

    IplImage* gray = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
    CvMemStorage* storage = cvCreateMemStorage(0);

    cvCvtColor(img, gray, CV_BGR2GRAY);

    // This is done so as to prevent a lot of false circles from being detected
    cvSmooth(gray, gray, CV_GAUSSIAN, 7, 7);

    IplImage* canny = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
    IplImage* rgbcanny = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 3);
    cvCanny(gray, canny, 50, 100, 3);

    CvSeq* circles = cvHoughCircles(gray, storage, CV_HOUGH_GRADIENT, 1, gray->height/3, 250, 100);
    cvCvtColor(canny, rgbcanny, CV_GRAY2BGR);

    for (int i = 0; i < circles->total; i++)
    {
        // round the floats to an int
        float* p = (float*)cvGetSeqElem(circles, i);
        cv::Point center(cvRound(p[0]), cvRound(p[1]));
        int radius = cvRound(p[2]);

        // draw the circle center
        cvCircle(rgbcanny, center, 3, CV_RGB(0,255,0), -1, 8, 0);

        // draw the circle outline
        cvCircle(rgbcanny, center, radius+1, CV_RGB(0,0,255), 2, 8, 0);

        printf("x: %d y: %d r: %d\n", center.x, center.y, radius);
    }

    cvNamedWindow("circles", 1);
    cvShowImage("circles", rgbcanny);
    cvSaveImage("out.png", rgbcanny);
    cvWaitKey(0);

    return 0;
}
The detection of the circles depends a lot on the parameters of cvHoughCircles(). Note that in this demo I used Canny as well.
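For anyone mapping the demo's call to the C++ API, the parameters line up like this (values are just the ones from the demo above, not recommendations; grayMat is a placeholder for the cv::Mat grayscale input):

// C++ equivalent of the cvHoughCircles() call in the demo
std::vector<cv::Vec3f> circles;
cv::HoughCircles(grayMat, circles, CV_HOUGH_GRADIENT,
                 1,                 // dp: inverse accumulator resolution (1 = image resolution)
                 grayMat.rows / 3,  // minDist: minimum distance between detected centers
                 250,               // param1: high threshold for the internal Canny stage
                 100,               // param2: accumulator threshold (lower finds more circles)
                 0, 0);             // minRadius, maxRadius (0 = no restriction)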