OpenCV: Prevent HoughCircles method from using Canny Detection - c++

I am using HoughCircles to detect a ball in real time, but running Canny on my grayscale image stream isn't producing all of the edges it should. To remedy that, I am splitting the RGB image into its separate channels, performing Canny on each one, and then using bitwise OR to merge the edges together. This works quite well, but if I feed that edge image to HoughCircles, it will perform Canny again on the edge image. Is there a way to prevent this, or to forgo the per-channel Canny detection I am performing while still catching all of the edges?

Indeed! Canny is executed internally by HoughCircles and there's no way to call cv::HoughCircles() and prevent it from invoking Canny.
However, if you would like to stick with your current approach, one alternative is to copy the implementation of cv::HoughCircles() available in OpenCV's source code and modify it to suit your needs. This will allow you to write your own version of cv::HoughCircles().
If you follow this path, it's important to realize that the C++ API of OpenCV is built upon the C API. That means cv::HoughCircles() is just a wrapper around cvHoughCircles(), which is implemented in opencv-2.4.7/modules/imgproc/src/hough.cpp after line 1006.
Take a look at this function (line 1006) and notice the call to icvHoughCirclesGradient() at line 1064. That is the function responsible for invoking cvCanny(), which happens at line 817.
Another approach, if the ball is single-colored, is to use cv::inRange() to isolate a specific color, which provides much faster detection. Also, this subject has been extensively discussed on this forum. One very interesting thread is:
Writing robust (color and size invariant) circle detection with OpenCV (based on Hough Transform or other features)
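For illustration, here is a minimal sketch of the color-isolation idea, assuming a recent OpenCV (3+); the HSV bounds are made-up placeholder values that would need to be tuned to the actual ball color:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat frame = cv::imread("frame.png");   // placeholder input frame

    // HSV tends to separate color from illumination better than BGR
    cv::Mat hsv;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);

    // Keep only pixels whose hue/saturation/value fall inside the ball's color range
    cv::Mat mask;
    cv::inRange(hsv, cv::Scalar(20, 100, 100), cv::Scalar(35, 255, 255), mask);

    // Clean up the mask before looking for circular blobs
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN, cv::Mat());
    cv::imwrite("mask.png", mask);
    return 0;
}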

For people who are looking to use custom edge detection with circle detection in Python, you can use OpenCV's Canny edge detection function and pass the result to scikit-image's (skimage) hough_circle function (http://scikit-image.org/docs/dev/api/skimage.transform.html#skimage.transform.hough_circle).
Skimage's hough_circle function doesn't internally perform Canny edge detection, which gives you the chance to plug in your own. Below is an example:
hough_results = hough_circle(cv2.Canny(IMAGE, LOWER_THRESHOLD, UPPER_THRESHOLD), np.arange(MIN_RADIUS, MAX_RADIUS, 1))

Related

circle detection: parameters for HoughCircles

I want to detect an object and I tried using the HoughCircles function from OpenCV, but I could not find parameters that work well for all the images. By thresholding, however, I could filter out the circle. The code I have used is:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // Load the image as grayscale; Canny and HoughCircles expect a single-channel input
    Mat src = imread("occupant/cam_000569.png", 0);
    Mat binary, edges;
    std::vector<Vec3f> circles;
    threshold(src, binary, 52, 255, THRESH_BINARY);
    imwrite("binary.png", binary);
    Canny(src, edges, 50, 200, 3);
    HoughCircles(edges, circles, CV_HOUGH_GRADIENT, 1, src.rows / 8, 7, 24, 28);
    return 0;
}
After thresholding I get the image below. Even though there is some disturbance, with a threshold value of 52 I see the same behaviour for all the other images where the object is clear.
After using the Canny and HoughCircles functions with the parameters mentioned in the code, I could detect the required object.
But the problem is that for the next images the same threshold value still works, yet with the same parameters for Canny and HoughCircles I am unable to detect the object.
So my question is: how do I choose the parameters for HoughCircles, or is it possible to detect the object with a different OpenCV function?
I think the main problem here is lighting. Try histogram equalization followed by some smoothing before you apply the Canny edge detector. You will have to take a number of images and estimate the Canny and Hough parameters that work well for most of them. It is impossible to find values that result in a 100% detection rate.
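As a rough sketch of that preprocessing order, assuming a recent OpenCV; the blur kernel and Canny thresholds are placeholder values that need tuning:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat gray = cv::imread("occupant/cam_000569.png", cv::IMREAD_GRAYSCALE);

    // Normalize contrast so one parameter set works across lighting changes
    cv::Mat equalized;
    cv::equalizeHist(gray, equalized);

    // Smooth to suppress noise that would otherwise create spurious edges
    cv::Mat smoothed;
    cv::GaussianBlur(equalized, smoothed, cv::Size(5, 5), 0);

    // Edge detection on the preprocessed image; thresholds must be estimated over many images
    cv::Mat edges;
    cv::Canny(smoothed, edges, 50, 200);
    cv::imwrite("edges.png", edges);
    return 0;
}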
Another option is to train an object detector for the object that you want to recognize, using Haar or LBP features. This just seems kind of overkill if the object is a circle.
A better solution for this detection is to use blob detection in C++ (or regionprops in MATLAB) and filter the results based on area and circularity.
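A hedged sketch of that blob-detection idea using OpenCV's SimpleBlobDetector (the OpenCV 3+ factory API is assumed; the area and circularity limits are placeholders):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat binary = cv::imread("binary.png", cv::IMREAD_GRAYSCALE);

    // Keep only blobs of a plausible size and roundness
    cv::SimpleBlobDetector::Params params;
    params.filterByColor = false;      // accept both bright and dark blobs
    params.filterByArea = true;
    params.minArea = 200;              // placeholder, depends on object size in pixels
    params.maxArea = 5000;             // placeholder
    params.filterByCircularity = true;
    params.minCircularity = 0.7f;      // 1.0 would be a perfect circle

    cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(params);

    std::vector<cv::KeyPoint> keypoints;
    detector->detect(binary, keypoints);   // each keypoint is a candidate circular object

    cv::Mat out;
    cv::drawKeypoints(binary, keypoints, out, cv::Scalar(0, 0, 255),
                      cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
    cv::imwrite("blobs.png", out);
    return 0;
}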

How can I segment an object using Kinect V1?

I'm working with SDK 1.8 and I'm getting the depth stream from the Kinect. Now, I want to hold an A4-sized sheet of paper in front of the camera and get the coordinates of the corners of that paper so I can project an image onto it.
How can I detect the corners of the paper and get the co-ordinates? Does Kinect SDK 1.8 provide that option?
Thanks
Kinect SDK 1.8 does not provide this feature itself (to my knowledge). Depending on the language you use for coding, there most certainly are libraries that allow such an operation if you split it into steps.
OpenCV, for example, is quite useful for image processing. When I once worked with the Kinect for object recognition, I used AForge with C#.
I recommend tackling the challenge as follows:
Edge Detection:
Apply an edge detection algorithm such as the Canny filter to the image. Depending on the library, you will probably first have to transform your depth picture into a grayscale picture. The resulting image will be grayscale as well, and the intensity of a pixel correlates with the probability of it belonging to an edge. Using a threshold, you then binarize this picture to black/white.
Hough Transformation: used to get the position and parameters of a line within an image, which allows further calculation. The Hough transformation is VERY sensitive to its parameters, and you will spend a lot of time tuning them to get good results.
Calculation of corner points: Assuming that your Hough transformation was successful, you can now calculate all intersections of the given lines, which will yield the points that you are looking for (a rough sketch combining these steps follows below).
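For illustration, here is a minimal sketch combining these three steps with OpenCV in C++; the Canny thresholds and Hough parameters are placeholders and will need the tuning mentioned above:

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Intersection of two segments treated as infinite lines; returns false if parallel
static bool intersection(cv::Vec4i a, cv::Vec4i b, cv::Point2f &out)
{
    cv::Point2f p(a[0], a[1]), r(a[2] - a[0], a[3] - a[1]);
    cv::Point2f q(b[0], b[1]), s(b[2] - b[0], b[3] - b[1]);
    float denom = r.x * s.y - r.y * s.x;
    if (std::abs(denom) < 1e-6f) return false;   // parallel lines
    float t = ((q.x - p.x) * s.y - (q.y - p.y) * s.x) / denom;
    out = p + t * r;
    return true;
}

int main()
{
    // "depth_gray.png" is a placeholder for the depth frame converted to grayscale
    cv::Mat gray = cv::imread("depth_gray.png", cv::IMREAD_GRAYSCALE);

    // Step 1: edge detection (thresholds need tuning)
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);

    // Step 2: probabilistic Hough transform to get line segments (parameters need tuning)
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 100, 10);

    // Step 3: intersect every pair of lines; the intersections are candidate paper corners
    std::vector<cv::Point2f> corners;
    for (size_t i = 0; i < lines.size(); ++i)
        for (size_t j = i + 1; j < lines.size(); ++j)
        {
            cv::Point2f pt;
            if (intersection(lines[i], lines[j], pt))
                corners.push_back(pt);
        }
    return 0;
}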
All of these steps (especially Edge Detection and Hough Transformation) have been asked/answered/discussed in this forum.
If you provide code, intermediate results, or further questions, you can get a more detailed answer.
P.S.
I remember that the Kinect was not that accurate and that noise was an issue. Therefore you might consider applying a filter before doing these operations.

Noise Removal From Image Using OpenCV

I have performed a thinning operation on a binary image with the code provided here. The source image I used was this one.
And the result I obtained after applying the thinning operation to the source image was this one.
The problem I am facing is how to remove the noise in the image, which is visible around the thinned white lines.
In this particular case, the easiest and safest solution is to label the connected components (union-find algorithm) and delete those whose area is lower than one or two pixels.
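A minimal sketch of that idea, assuming OpenCV 3+ (which provides cv::connectedComponentsWithStats); the area threshold and file names are placeholders:

#include <opencv2/opencv.hpp>

int main()
{
    // Thinned binary image: white lines on a black background (placeholder file name)
    cv::Mat thinned = cv::imread("thinned.png", cv::IMREAD_GRAYSCALE);

    // Label connected components and collect per-component statistics (area, bounding box, ...)
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(thinned, labels, stats, centroids, 8);

    // Keep only components larger than a couple of pixels; label 0 is the background
    cv::Mat cleaned = cv::Mat::zeros(thinned.size(), CV_8U);
    const int minArea = 3;  // placeholder threshold
    for (int i = 1; i < n; ++i)
        if (stats.at<int>(i, cv::CC_STAT_AREA) >= minArea)
            cleaned.setTo(255, labels == i);

    cv::imwrite("cleaned.png", cleaned);
    return 0;
}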
FiReTiTi and kcc__ have already provided good answers, but I thought I'd provide another perspective. Having looked through some of your previous posts, it appears that you're trying to build software that uses vascular patterns on the hand to identify people. So at some point, you will need to build some kind of classification algorithm.
I bring this up because many such algorithms are quite robust in the presence of this kind of noise. For example, if you intend to use supervised learning to train a convolutional neural net (which would be a reasonable approach assuming you can collect a decent amount of training samples), you may find that extensive pre-processing of this sort is unnecessary, and may even degrade the performance.
Just some thoughts to consider. Cheers!
Another simple, though perhaps not so robust, approach is to use contour area to remove small connected regions, then apply erode/dilate before the thinning process.
However, you can also process your thinned image directly by using cv::findContours() and masking out contours with a small area. This is similar to what FiReTiTi answered.
You can use the findContours example from OpenCV to build contour detection using an edge detector such as Canny. The example can be ported directly to your requirement.
Once you have the contours in vector<vector<Point> > contours; you can iterate over each contour and use cv::contourArea to find the area of each region. Using a pre-defined threshold, you can remove unwanted areas.
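A hedged sketch of that contour-area filtering, assuming OpenCV 3+ and a white-on-black thinned image; the thresholds are placeholders (note that contours of one-pixel-wide lines can report a near-zero area, so the point count is checked as well):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat thinned = cv::imread("thinned.png", cv::IMREAD_GRAYSCALE);

    // findContours may modify its input, so work on a copy
    std::vector<std::vector<cv::Point> > contours;
    cv::Mat work = thinned.clone();
    cv::findContours(work, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    // Paint small contours black to erase the speckles around the thinned lines
    const double minArea = 5.0;    // placeholder threshold
    const size_t minPoints = 10;   // placeholder threshold
    for (size_t i = 0; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) < minArea && contours[i].size() < minPoints)
            cv::drawContours(thinned, contours, static_cast<int>(i), cv::Scalar(0), cv::FILLED);

    cv::imwrite("despeckled.png", thinned);
    return 0;
}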
In my opinion, why don't you use a distance transform on the first image and then apply a size filter to the resulting image to de-speckle it?

OpenCV Shape Detection

I am using OpenCV to detect the shape and size of material (like discs, washers, nuts and bolts of different sizes) that will be held on a running belt. What function would be best to distinguish between them?
I am planning to use cvFindContours (to find the shapes) and cvArcLength & cvContourArea to get their area.
Any better approach?
This is a simple approach to shape matching:
Convert to grayscale.
Smooth the image.
Apply some morphological operations (if necessary).
Edge detect.
Find contours (the same as you mentioned). The contour function is hierarchical, so segmenting the required (outer, in most cases) contour(s) should be easy. Discs and washers can be distinguished by the hole in the contour hierarchy.
Use approxPolyDP to reduce your contour to a rough regular shape. You might be able to distinguish the shapes based on the vertex count of the contour.
Use moments to distinguish the shapes if approxPolyDP is not sufficient.
It works for most cases. Always provide sample images to help us assess the complexity of the problem :D.
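A condensed sketch of this pipeline in C++ (the blur size, Canny thresholds and approximation epsilon are placeholder values, and the modern C++ API is used instead of the old C functions mentioned in the question):

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main()
{
    cv::Mat src = cv::imread("belt.png");          // placeholder input image

    // Steps 1-4: grayscale, smooth, (optional morphology), edge detect
    cv::Mat gray, edges;
    cv::cvtColor(src, gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);
    cv::Canny(gray, edges, 50, 150);

    // Step 5: find contours with hierarchy so holes (washers, nuts) can be told apart from discs
    std::vector<std::vector<cv::Point> > contours;
    std::vector<cv::Vec4i> hierarchy;
    cv::findContours(edges, contours, hierarchy, cv::RETR_CCOMP, cv::CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); ++i)
    {
        if (hierarchy[i][3] != -1) continue;       // skip inner contours (the holes themselves)

        double area = cv::contourArea(contours[i]);
        double perimeter = cv::arcLength(contours[i], true);

        // Step 6: approximate the contour; the vertex count hints at the shape
        // (roughly 6 for a hexagonal nut, many vertices for a disc or washer)
        std::vector<cv::Point> approx;
        cv::approxPolyDP(contours[i], approx, 0.02 * perimeter, true);

        bool hasHole = hierarchy[i][2] != -1;      // a child contour means a hole (washer/nut)
        std::printf("contour %zu: area=%.0f vertices=%zu hole=%d\n",
                    i, area, approx.size(), static_cast<int>(hasHole));
    }
    return 0;
}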
Check out the Haar cascade object detection technique in OpenCV.
Here are some links:
http://coding-robin.de/2013/07/22/train-your-own-opencv-haar-classifier.html
http://www.technolabsz.com/2011/08/how-to-do-opencv-haar-training.html
For working with Haar cascades you need the Haar kit for training purposes:
http://kineme.net/files/haar.zip

Dynamic background separation and reliable circle detection with OpenCV

I am attempting to detect coloured tennis balls on a similarly coloured background. I am using OpenCV and C++.
This is the test image I am working with:
http://i.stack.imgur.com/yXmO4.jpg
I have tried using multiple edge detectors: Sobel, Laplace, and Canny. All three detect the white line, but when the threshold is at a value where the edge of the tennis ball can be detected, there is too much noise in the output.
I have also tried the Hough circle transform, but as it is based on Canny, it isn't effective.
I cannot use background subtraction because the background can move. I also cannot modify the threshold values, as lighting conditions may create gradients within the tennis ball.
I feel my only option is to template match or detect the white line, but I would like to avoid this if possible.
Do you have any suggestions?
I had to tilt my screen to spot the tennis ball myself. It's a hard image.
That said, the default OpenCV implementation of the Hough transform uses the Canny edge detector, but it's not the only possible implementation. For these harder cases, you might need to reimplement it yourself.
You can certainly run the Hough algorithm repeatedly with different settings for the edge detection to generate multiple candidates. Besides comparing candidates directly, you can also check that each candidate has a dominant texture (after local shading corrections) and possibly a stripe. But that might be very tricky if those tennis balls are actually captured in flight, i.e. moving.
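A hedged sketch of that repeated-run idea: sweep HoughCircles' internal Canny threshold (param1) and accumulator threshold (param2), collect all candidates, and score them later. The sweep ranges and radius limits are placeholders, and OpenCV 3+ is assumed:

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main()
{
    cv::Mat gray = cv::imread("yXmO4.jpg", cv::IMREAD_GRAYSCALE);   // the test image from the question
    cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2);

    // Collect circle candidates over a sweep of the detector's two thresholds
    std::vector<cv::Vec3f> candidates;
    for (double cannyHigh = 60; cannyHigh <= 180; cannyHigh += 40)          // param1 sweep (placeholder range)
        for (double accumThresh = 20; accumThresh <= 60; accumThresh += 20) // param2 sweep (placeholder range)
        {
            std::vector<cv::Vec3f> circles;
            cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT, 1, gray.rows / 8,
                             cannyHigh, accumThresh, 10, 80);               // min/max radius are placeholders
            candidates.insert(candidates.end(), circles.begin(), circles.end());
        }

    // Each candidate (x, y, r) would then be scored, e.g. by checking texture or color inside the circle
    std::printf("collected %zu candidates\n", candidates.size());
    return 0;
}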
What are you doing to the color image BEFORE the edge detection? Simply converting it to gray?
In my experience, colorful balls pop out best when you use the HSV color space. You would then have to decide which channel gives the best results.
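For illustration, a minimal sketch of inspecting the HSV channels separately before deciding which one to feed into the edge detector (file names are placeholders):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat bgr = cv::imread("yXmO4.jpg");   // the test image from the question

    // Convert to HSV and split, so hue/saturation/value can be inspected independently
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    std::vector<cv::Mat> channels;
    cv::split(hsv, channels);

    // Save each channel; the one where the ball contrasts most with the background
    // is the best candidate input for Canny / HoughCircles
    cv::imwrite("hue.png", channels[0]);
    cv::imwrite("saturation.png", channels[1]);
    cv::imwrite("value.png", channels[2]);
    return 0;
}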
Perhaps transforming the image to a different feature space might work better than relying on color alone. Maybe try LBP, which responds to texture, then do PCA on the result to reduce the feature space to a single-channel image and try the Hough transform on that.