How to calculate the gradients of an image? - c++

The following code calculates the gradients of an image using the Sobel operators that are available in OpenCV:
IplImage* grad_x = cvCreateImage(cvGetSize(image), IPL_DEPTH_32F, 1);  // x-gradient, 32-bit float to preserve sign
IplImage* grad_y = cvCreateImage(cvGetSize(image), IPL_DEPTH_32F, 1);  // y-gradient
cvSobel(image, grad_x, 1, 0, 3);  // first derivative in x, 3x3 aperture
cvSobel(image, grad_y, 0, 1, 3);  // first derivative in y, 3x3 aperture
How are the edges of the image handled by the cvSobel() function? What is the difference between this function and this one? I have read about the available borderType options, but I do not know which of them is best to use.
Moreover, what advantage does applying a smoothing filter before calculating the gradients of an image provide?
Finally, after calculating the gradients of an image, how do I calculate the corresponding angles and magnitudes?

If you are interested in edge maps, you should consider the Canny method in OpenCV. Sobel returns a gradient image, from which you can then derive an edge map via simple thresholding using threshold, or via something more complex like the Canny method.
Smoothing the image first (e.g., via GaussianBlur or blur) reduces the noise levels that the Sobel operation would otherwise magnify.
I have a very similar answer located here, which shows how to determine the magnitude and phase of the gradients.
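In short, a minimal sketch with the modern C++ API might look like this (the input path, blur size, and aperture below are placeholder values, not taken from the question):

#include <opencv2/opencv.hpp>

int main()
{
    // Load the input as an 8-bit grayscale image (the path is a placeholder)
    cv::Mat image = cv::imread("input.png", cv::IMREAD_GRAYSCALE);

    // Optional pre-smoothing to suppress the noise that differentiation amplifies
    cv::GaussianBlur(image, image, cv::Size(3, 3), 0);

    // Compute the x and y gradients in floating point to preserve their sign
    cv::Mat grad_x, grad_y;
    cv::Sobel(image, grad_x, CV_32F, 1, 0, 3);
    cv::Sobel(image, grad_y, CV_32F, 0, 1, 3);

    // Convert the gradient pair to per-pixel magnitude and angle (in degrees)
    cv::Mat magnitude, angle;
    cv::cartToPolar(grad_x, grad_y, magnitude, angle, true);
    return 0;
}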
Hope that helps!

Related

How to efficiently represent blobs in a binary image with ellipses?

I want to represent blobs with oriented ellipses.
I have achieved this with findContours() in OpenCV, but for a real-time application I am wondering which will perform better: contours or a blob detector.
GaussianBlur(im, tempIm, Size(9, 9), 1, 1, BORDER_REFLECT);  // smooth before extracting contours
findContours(tempIm, contours, hierarchy, RETR_TREE, CHAIN_APPROX_TC89_KCOS);
Also, the blur extends the boundary of the blob, which is not desired.
[Input and output images omitted]
Contours are the outlines of objects, and the blob detector is an algorithm built on top of findContours. The blob detector not only finds the boundaries, but also calculates the centres and checks whether each blob matches the shapes and sizes that you define. It is heavier than findContours, so you shouldn't use it if you care about performance and don't need those features.
You can use a median blur instead of a Gaussian if you want to keep the image binary and avoid extending the contours, as in the sketch below. You can also use dilate/erode/open/close operations and their combinations to denoise the binary image. These approaches perform comparably on binary images, so you can test both and choose the one that gives the best performance and result for your task.
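A minimal sketch of the median-blur-plus-morphology idea (bin, the kernel shape, and the filter sizes are example assumptions, not from the original answer):

// Denoise the binary image without growing the blobs, then remove small speckles
cv::Mat denoised, cleaned;
cv::medianBlur(bin, denoised, 5);
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));
cv::morphologyEx(denoised, cleaned, cv::MORPH_OPEN, kernel);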
The tutorial code here describes how to represent the found contours with ellipses: cv::fitEllipse creates an ellipse for a contour, which you can then display with the cv::ellipse drawing function.
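A sketch of that approach (the variable names bin and display are assumptions):

// Fit and draw one ellipse per contour; cv::fitEllipse needs at least 5 points
std::vector<std::vector<cv::Point>> contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(bin, contours, hierarchy, cv::RETR_TREE, cv::CHAIN_APPROX_TC89_KCOS);
for (const auto& c : contours) {
    if (c.size() >= 5) {
        cv::RotatedRect e = cv::fitEllipse(c);
        cv::ellipse(display, e, cv::Scalar(0, 255, 0), 2);
    }
}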

Preserve Color with Gaussian Blurring and Sobel Operators

I want to apply a Gaussian blur to an image as a preprocessing step before using a Sobel edge detection filter.
I have previously implemented the Sobel and Gaussian blur operators effectively on greyscale images; however, I have never attempted to use them on color images.
Previously, I have just been taking the red component of my pixel data, as the R, G, and B values are all equal in greyscale.
Do I need to first convert my RGB images to greyscale?
If not, how can I use all 3 color channels with the kernels for each operator?
You can apply linear filters to each color channel independently; Gaussian blur and the Sobel operator are both linear filters. Typically one separates the three channels into three grey-value images, filters each separately, and then composes the three results into a new RGB image. But if your software can apply a convolution to a multi-channel image directly, you won't need to do this manually.
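A sketch of the split/filter/merge route in OpenCV (the variable rgb is an assumption; note that OpenCV's GaussianBlur and Sobel already accept multi-channel inputs directly, so the manual split is only needed when your own filter code works on single-channel data):

// Filter each color channel independently, then recombine
std::vector<cv::Mat> channels;
cv::split(rgb, channels);                    // three single-channel images
for (auto& ch : channels)
    cv::GaussianBlur(ch, ch, cv::Size(5, 5), 1.0);
cv::Mat result;
cv::merge(channels, result);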
Do note that applying a Gaussian blur and then the Sobel operator to determine the gradient is sub-optimal. You are better off computing the gradient directly using Gaussian derivatives. Here is an old blog post of mine that describes them. In short, convolving with the derivative of a Gaussian (which you can determine analytically and then sample) yields the exact derivative of the smoothed image. Note that the Gaussian is separable (and so is its derivative), meaning that the 2D convolution can be applied as two 1D convolutions, saving a lot of computation.
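A sketch of such a Gaussian-derivative filter in OpenCV (the helper function, the 3-sigma sampling radius, and the omission of the derivative kernel's constant normalisation factor are my own choices, not from the post linked above):

// Sample a 1D Gaussian, or its first derivative, from the analytic formula.
// The derivative kernel's sign is chosen to match OpenCV's correlation-style
// filtering, so the result has the same sign convention as Sobel.
cv::Mat gaussianKernel1D(double sigma, bool derivative)
{
    int radius = static_cast<int>(std::ceil(3 * sigma));
    cv::Mat k(2 * radius + 1, 1, CV_32F);
    for (int i = -radius; i <= radius; ++i) {
        float g = static_cast<float>(std::exp(-(i * i) / (2 * sigma * sigma)));
        k.at<float>(i + radius) = derivative ? i * g / static_cast<float>(sigma * sigma) : g;
    }
    if (!derivative)
        k = k / cv::sum(k)[0];  // normalise the smoothing kernel to unit sum
    return k;
}

// dI/dx: derivative along x (rows), plain Gaussian smoothing along y (columns)
cv::Mat dx;
cv::sepFilter2D(image, dx, CV_32F,
                gaussianKernel1D(sigma, true),    // row (x) kernel
                gaussianKernel1D(sigma, false));  // column (y) kernel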

Circle detection: parameters for HoughCircles

I want to detect an object, and I tried using the HoughCircles function from OpenCV, but I could not find parameters that work well for all the images. By thresholding, however, I could filter out the circle. The code I have used is:
int main()
{
    // Load the image and convert it to grayscale for thresholding and edge detection
    cv::Mat src = cv::imread("occupant/cam_000569.png");
    cv::Mat gray, binary, edges;
    cv::cvtColor(src, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, binary, 52, 255, cv::THRESH_BINARY);
    cv::imwrite("binary.png", binary);
    cv::Canny(gray, edges, 50, 200, 3);
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(edges, circles, cv::HOUGH_GRADIENT, 1, gray.rows / 8, 7, 24, 28);
}
After thresholding, I get the image below; even though some disturbance is included, with a threshold value of 52 I see the same result for all the other images where the object is clear.
After using the Canny and HoughCircles functions with the parameters mentioned in the code, I could detect the required object.
But the problem is that for the next images the same threshold value still works, yet with the same Canny and HoughCircles parameters I am unable to detect the object.
So my question is: how do I choose the parameters for HoughCircles, or is it possible to detect the object with a different OpenCV function?
I think the main problem here is lighting. Try histogram equalization followed by some smoothing before you apply the Canny edge detector. You will have to take a number of images and estimate the Canny and Hough parameters that work well for most of them; it is impossible to find values that give a 100% detection rate.
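A sketch of that preprocessing (the blur size and sigma are example values):

// Normalise lighting, then smooth, before edge detection
cv::Mat eq, smoothed, edges;
cv::equalizeHist(gray, eq);                          // assumes an 8-bit grayscale input
cv::GaussianBlur(eq, smoothed, cv::Size(5, 5), 1.5);
cv::Canny(smoothed, edges, 50, 200);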
Another option is to train an object detector for the object that you want to recognize, using Haar or LBP features. This just seems kind of overkill if the object is a circle.
A better solution for this detection is to use blob detection in C++ (or regionprops in MATLAB) and filter the results based on area and circularity.
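For example, a sketch with OpenCV's SimpleBlobDetector (the area and circularity limits are placeholders to tune on your images):

// Keep only blobs of a plausible size and a roughly circular shape
cv::SimpleBlobDetector::Params params;
params.filterByArea = true;
params.minArea = 500.0f;
params.filterByCircularity = true;
params.minCircularity = 0.8f;
cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(params);
std::vector<cv::KeyPoint> keypoints;
detector->detect(binary, keypoints);   // binary is the thresholded image from above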

OpenCV Adaptive Thresholding on an HSV image

We (my group and I) want to be able to track a hand (well, mostly the index fingertip). The hand is basically the same colour as the face in the picture, but as you can see, so is a lot of the noise we get. It works very well with a black "screen" behind the hand.
Now the problem is that adaptive thresholding is useful only on grayscale images, and as such it would not detect the hand very well.
I've tried googling HSV adaptive thresholding with no luck, so I figured Stack Overflow would have some great ideas.
EDIT: The current HSV -> Binary threshold:
inRange(hsvx, Scalar(0, 50, 0), Scalar(20, 150, 255), bina);  // keep pixels with H in [0, 20] and S in [50, 150]; V is unconstrained
I suggest you use color histogramming for your tracking; CamShift, for example, uses it to good effect.
There is camshift sample code in OpenCV.
See http://docs.opencv.org/master/db/df8/tutorial_py_meanshift.html (very brief explanation)
or https://github.com/Itseez/opencv/blob/master/samples/cpp/camshiftdemo.cpp (code sample)
If you want to stay with your thresholding, you are already right not to threshold the V channel. I would still suggest doing separate adaptive thresholding on the H and S channels, as in the sketch below.
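A sketch of that idea (the block size and constant are example values; hsvx is the HSV image from the question):

// Threshold H and S adaptively, then combine the two masks
std::vector<cv::Mat> hsv_channels;
cv::split(hsvx, hsv_channels);
cv::Mat h_mask, s_mask, mask;
cv::adaptiveThreshold(hsv_channels[0], h_mask, 255,
                      cv::ADAPTIVE_THRESH_GAUSSIAN_C, cv::THRESH_BINARY, 11, 2);
cv::adaptiveThreshold(hsv_channels[1], s_mask, 255,
                      cv::ADAPTIVE_THRESH_GAUSSIAN_C, cv::THRESH_BINARY, 11, 2);
cv::bitwise_and(h_mask, s_mask, mask);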
I would suggest using a histogram back-projection algorithm.
Back-projection is a way of recording how well the pixels of a given image fit the distribution of pixels in a histogram model. You can build the histogram model from a manually selected set of hand pixels.
The algorithm outputs an image where each pixel holds the likelihood that its colour is a skin colour (i.e., is similar to the skin). You can then choose a likelihood threshold to adjust the performance.
It will let you find the skin-coloured areas in the image.
For details see:
http://docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/back_projection/back_projection.html
http://docs.opencv.org/master/dc/df6/tutorial_py_histogram_backprojection.html#gsc.tab=0
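A sketch of hue-based back-projection (hsvx is the HSV frame from the question; hsv_roi, the bin count, and the likelihood threshold are assumptions):

// Build a hue histogram from manually selected hand pixels, then back-project it
cv::Mat hsv_roi;  // an HSV patch cropped from the hand region
cv::Mat hist;
int histSize = 30;
float hue_range[] = {0, 180};
const float* ranges[] = {hue_range};
int channels[] = {0};
cv::calcHist(&hsv_roi, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);
cv::normalize(hist, hist, 0, 255, cv::NORM_MINMAX);

cv::Mat backproj;
cv::calcBackProject(&hsvx, 1, channels, hist, backproj, ranges);
cv::threshold(backproj, backproj, 50, 255, cv::THRESH_BINARY);  // likelihood cutoff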

Sobel Filter implementation for Harris Corner Detector

I have to implement a Harris detector and am not quite sure about the following detail regarding the Sobel filter used to obtain the image derivatives.
When applying the Sobel filter to a grayscale image, I might get negative intensity values. Do I need to convert the image back to a matrix of only positive values before I calculate the Harris matrix for each pixel, or am I supposed to use the values as they are?
I don't think you need to restrict it to only positive values. The Harris matrix is built from products of the derivatives (Ix², Iy², Ix·Iy), so the signs are handled naturally; just keep the derivatives in a signed (e.g., floating-point) type.
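A sketch of how the signed derivatives feed into the Harris response (the window size, sigma, and k = 0.04 are typical example values):

// Signed derivatives, kept in floating point
cv::Mat Ix, Iy;
cv::Sobel(gray, Ix, CV_32F, 1, 0, 3);
cv::Sobel(gray, Iy, CV_32F, 0, 1, 3);
// Structure-tensor entries: the products make the signs irrelevant
cv::Mat Ixx = Ix.mul(Ix), Iyy = Iy.mul(Iy), Ixy = Ix.mul(Iy);
cv::GaussianBlur(Ixx, Ixx, cv::Size(5, 5), 1.0);
cv::GaussianBlur(Iyy, Iyy, cv::Size(5, 5), 1.0);
cv::GaussianBlur(Ixy, Ixy, cv::Size(5, 5), 1.0);
// Harris response: det(M) - k * trace(M)^2
cv::Mat trace = Ixx + Iyy;
cv::Mat response = Ixx.mul(Iyy) - Ixy.mul(Ixy) - 0.04 * trace.mul(trace);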
You can look at VLFeat's Harris corner detection implementation (Matlab/C source included). It's in the toolbox directory: vl_harris.m