I have been working on a basic hand/finger tracking program using OpenCV and the convex hull and convexity defects method.
Basically I am able to create a contour of the hand. I now need to be able to count the number of fingers. I know that the start and end points of the convex hull are the fingertips, but I am unsure how to count them and also how to highlight them by drawing circles on them or something.
I want my code to perform something like this.
This is a sample part of my code so far:
cvFindContours( hsv_mask, storage, &contours, sizeof(CvContour), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0) );
// Pick the largest contour as the hand (contours2 was left NULL in the original snippet)
CvSeq* contours2 = NULL;
double maxArea = 0;
for( CvSeq* c = contours; c != NULL; c = c->h_next ) {
    double a = fabs(cvContourArea(c, CV_WHOLE_SEQ));
    if( a > maxArea ) { maxArea = a; contours2 = c; }
}
CvRect rect = cvBoundingRect( contours2, 0 );
cvRectangle( bitImage, cvPoint(rect.x, rect.y + rect.height), cvPoint(rect.x + rect.width, rect.y), CV_RGB(200, 0, 200), 1, 8, 0 );
CvSeq* hull = cvConvexHull2( contours2, 0, CV_CLOCKWISE, 0 );
CvSeq* defect = cvConvexityDefects( contours2, hull, dftStorage );
CvBox2D box = cvMinAreaRect2( contours2, minStorage );
cvDrawContours( bg, contours2, CV_RGB( 0, 200, 0), CV_RGB( 0, 100, 0), 1, 1, 8, cvPoint(0,0));
I have played around with it and I can now draw the fingertip points using this code
CvConvexityDefect* defectArray;
int i, j = 0;
for(;defect;defect = defect->h_next)
{
    int nomdef = defect->total;
    if(nomdef == 0)
        continue;
    defectArray = (CvConvexityDefect*)malloc(sizeof(CvConvexityDefect)*nomdef);
    cvCvtSeqToArray(defect, defectArray, CV_WHOLE_SEQ);
    for(i = 0; i < nomdef; i++)
    {
        cvCircle( bg, *(defectArray[i].end), 5, CV_RGB(255,0,0), -1, 8, 0);           // defect end point (red)
        cvCircle( bg, *(defectArray[i].start), 5, CV_RGB(0,0,255), -1, 8, 0);         // defect start point (blue)
        cvCircle( bg, *(defectArray[i].depth_point), 5, CV_RGB(0,255,255), -1, 8, 0); // deepest point of the defect (cyan)
    }
    j++;
    free(defectArray);
}
However, I am still getting a lot of false positives. Also, if anyone could suggest any methods to count the fingers from here, that would be wonderful.
One of the possibilities you have is to count the number of defects. If you have done it right, each defect should be located in the valley between two fingers: http://img27.imageshack.us/img27/6532/herpz.jpg
To make sure you don't get any "unwanted" defects, you can use the 'depth' field of the CvConvexityDefect structure to filter out shallow defects. A better description of the "depth" field can be found here:
opencv.itseez.com defect description
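To illustrate the counting step, here is a minimal sketch using the C++ API (not the asker's original C-API code): it filters the convexity defects by their depth and treats N deep valleys as roughly N + 1 extended fingers. The 20 px depth threshold and the function name countFingers are assumptions to tune and rename for your setup.
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Count fingers by keeping only deep convexity defects (valleys between fingers).
int countFingers(const std::vector<cv::Point>& handContour, cv::Mat& display)
{
    std::vector<int> hullIdx;
    cv::convexHull(handContour, hullIdx, false, false);      // hull as point indices

    std::vector<cv::Vec4i> defects;
    if (hullIdx.size() > 3)
        cv::convexityDefects(handContour, hullIdx, defects);

    int valleys = 0;
    for (const cv::Vec4i& d : defects)
    {
        float depth = d[3] / 256.0f;                          // fixed-point depth -> pixels
        if (depth > 20.0f)                                    // assumed threshold: drop shallow defects
        {
            ++valleys;
            cv::circle(display, handContour[d[0]], 5, cv::Scalar(0, 0, 255), -1);   // defect start point
            cv::circle(display, handContour[d[2]], 5, cv::Scalar(0, 255, 255), -1); // deepest point
        }
    }
    // N valleys between fingers correspond to roughly N + 1 extended fingers.
    return valleys > 0 ? valleys + 1 : 0;
}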
Related
I am looking into detecting slightly bright areas (roe deer fawns) in thermal images with OpenCV.
So far I managed to get some code that works somehow, but with too many false negatives and false positives.
I basically know my way around OpenCV, but from the algorithmic side I am not sure what the best approach is to get the most reliable detection.
So far I use a cascade of something like this:
Gaussian blur
some sort of hysteresis thresholding
blob detection
Code snippet:
cv::GaussianBlur(gray, gray, cv::Size(gauss_size, gauss_size), 0);
Mat threshUpper, threshLower;
threshold(gray, threshUpper, mask_min, mask_max, cv::THRESH_BINARY);
threshold(gray, threshLower, mask_min-mask_thresh, mask_max, cv::THRESH_BINARY);
imshow("threshUpper", threshUpper);
imshow("threshLower", threshLower);
vector<vector<Point>> contoursUpper;
cv::findContours(threshUpper, contoursUpper, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
// Hysteresis step: flood-fill the lower-threshold mask from a point of each strong contour
for(const auto& cnt : contoursUpper){
    cv::floodFill(threshLower, cnt[0], 255, 0, 2, 2, cv::FLOODFILL_FIXED_RANGE);
}
threshold(threshLower, out, 200, 255, cv::THRESH_BINARY);
// Keep only blobs whose area falls inside the expected size range
vector<vector<Point>> contours2clean;
cv::findContours(out, contours2clean, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
for(const auto& cnt : contours2clean) {
double area = cv::contourArea(cnt);
if ( area > cut_max_size || area < cut_min_size) {
cv::floodFill(out, cnt[0], 0, 0, 2, 2, cv::FLOODFILL_FIXED_RANGE);
}
else {
cv::floodFill(out, cnt[0], 255, 0, 2, 2, cv::FLOODFILL_FIXED_RANGE);
}
}
std::vector<cv::KeyPoint> points;
detector_->detect(out, points);
cv::drawKeypoints(out, points, out, cv::Scalar(0, 0, 255), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
I am looking for some advice for better approaches. Two images (raw and marked) are here:
Thanks!
1. Some information: I would like to develop a kind of circle recognition with the help of OpenCV. I successfully set up a bridge between Swift and Objective-C++, but strangely I have some problems with the circle recognition algorithm: not all of the circles in my image get detected!
2. Have a look at my code:
+(UIImage *)ConvertImage:(UIImage *)image {
cv::Mat matImage;
UIImageToMat(image, matImage);
cv::Mat modImage;
cv::medianBlur(matImage, matImage, 5);
cv::cvtColor(matImage, modImage, CV_RGB2GRAY);
cv::GaussianBlur(modImage, modImage, cv::Size(9, 9), 2, 2);
vector<Vec3f> circles;
cv::HoughCircles(modImage, circles, CV_HOUGH_GRADIENT, 1, 1, 100, 50, 0, 0);
for (auto i = circles.begin(); i != circles.end(); ++i)
std::cout << *i << ' ';
for( size_t i = 0; i < circles.size(); i++ )
{
cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
int radius = cvRound(circles[i][2]);
circle( matImage, center, 3, Scalar(0,255,0), -1, 8, 0 );
circle( matImage, center, radius, Scalar(0,0,255), 3, 8, 0 );
}
UIImage *binImg = MatToUIImage(matImage);
return binImg;
}
As you can see in the image [click], this issue appears:
Only 3 of 7 circles get detected!
So in the docs I found the parameter explanations for this line:
cv::HoughCircles(modImage, circles, CV_HOUGH_GRADIENT, 1, 1, 100, 50, 0, 0);
dp = 1: The inverse ratio of resolution.
min_dist = modImage.rows/8: Minimum distance between detected centers.
param_1 = 200: Upper threshold for the internal Canny edge detector.
param_2 = 100: Threshold for center detection.
min_radius = 0: Minimum radius to be detected. If unknown, put zero as default.
max_radius = 0: Maximum radius to be detected. If unknown, put zero as default.
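For comparison, here is a sketch of the same call using the parameter choices described above rather than the values in the posted code (which passes min_dist = 1 and therefore allows many overlapping detections); the concrete thresholds still need tuning for your image:
// Sketch only: HoughCircles with the documented parameter meanings, inside the ConvertImage method above.
cv::HoughCircles(modImage, circles, CV_HOUGH_GRADIENT,
                 1,                  // dp: inverse ratio of the accumulator resolution
                 modImage.rows / 8,  // min_dist between detected centers
                 100,                // param_1: upper threshold of the internal Canny detector
                 50,                 // param_2: accumulator threshold for center detection
                 0, 0);              // min_radius, max_radius (0 = unconstrained)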
3. My question
How to get rid of the issue mentioned above?
Any help would be very appreciated :)
For issue number 2: The outline should be colored, not white!
What color should it be? At any rate, you draw that circle in your code with this line:
circle( matImage, center, radius, Scalar(0,0,255), 3, 8, 0 );
If you want to change the color, you can change the values you have declared in Scalar(0,0,255).
If you don't want the circle there at all, you can remove that line of code.
Your images seem to be noise-free. If the image always contains a circle, you can extract the contours and fit circles using least squares.
You can get the circle fit equations here. It is a straightforward implementation: create a structure for the circle parameters (center and radius), fit the circle, store the parameters in the structure, and use it to draw the circle with OpenCV.
You can also generate points on the circle using the "ellipse2poly" function.
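For illustration, a minimal sketch of an algebraic (Kasa-style) least-squares circle fit over contour points; the struct name CircleFit and the use of cv::solve are my own choices, not taken from the linked page:
#include <opencv2/core/core.hpp>
#include <cmath>
#include <vector>

struct CircleFit { cv::Point2f center; float radius; };

// Fit x^2 + y^2 + A*x + B*y + C = 0 in the least-squares sense, then recover center and radius.
CircleFit fitCircle(const std::vector<cv::Point>& pts)
{
    cv::Mat M((int)pts.size(), 3, CV_64F), rhs((int)pts.size(), 1, CV_64F);
    for (int i = 0; i < (int)pts.size(); ++i)
    {
        double x = pts[i].x, y = pts[i].y;
        M.at<double>(i, 0) = x;
        M.at<double>(i, 1) = y;
        M.at<double>(i, 2) = 1.0;
        rhs.at<double>(i, 0) = -(x * x + y * y);
    }
    cv::Mat sol;
    cv::solve(M, rhs, sol, cv::DECOMP_SVD);                   // least-squares solution [A B C]
    double A = sol.at<double>(0), B = sol.at<double>(1), C = sol.at<double>(2);
    CircleFit f;
    f.center = cv::Point2f((float)(-A / 2.0), (float)(-B / 2.0));
    f.radius = (float)std::sqrt(A * A / 4.0 + B * B / 4.0 - C);
    return f;
}
The fitted circle can then be drawn with cv::circle(img, f.center, cvRound(f.radius), ...).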
I am a newbie to OpenCV, but I want to create an iris recognition program. Although the system with a webcam can detect the eyes, it cannot detect the circular iris. I am using the Hough circle transform, but when the iris in an image is not circular enough, the system can't detect it. Any solution for this?
The algorithm used is the Hough circle transform.
IplImage *capturedImg = cvLoadImage("circle.jpg",1);
IplImage *grayscaleImg = cvCreateImage(cvGetSize(capturedImg), 8, 1);
cvCvtColor(capturedImg, grayscaleImg, CV_BGR2GRAY);
// Gaussian filter for less noise
cvSmooth(grayscaleImg, grayscaleImg, CV_GAUSSIAN,9, 9 );
//Detect the circles in the image
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* circles = cvHoughCircles(grayscaleImg,
storage,
CV_HOUGH_GRADIENT,
2,
grayscaleImg->height/4,
200,
100 );
for (int i = 0; i < circles->total; i++)
{
float* p = (float*)cvGetSeqElem( circles, i );
cvCircle( capturedImg, cvPoint(cvRound(p[0]),cvRound(p[1])),
3, CV_RGB(0,255,0), -1, 8, 0 );
cvCircle( capturedImg, cvPoint(cvRound(p[0]),cvRound(p[1])),
cvRound(p[2]), CV_RGB(0,0,255), 3, 8, 0 );
}
// cvCircle( img,cvPoint( r->x, r->y ),67, CV_RGB(255,0,0), 3, 8, 0 );
cvNamedWindow( "circles", 1 );
cvShowImage( "circles", capturedImg );
Add a call to cvCanny() between cvSmooth() and cvHoughCircles(). This will execute an edge detection algorithm which is going to provide a better input image for cvHoughCircles() and will probably improve your results.
There are a lot of similar questions on Stack Overflow; I suggest you use the search box.
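For illustration, the suggested change applied to the posted snippet might look roughly like this; the 50/150 Canny thresholds are placeholder values that need tuning for iris images:
cvSmooth(grayscaleImg, grayscaleImg, CV_GAUSSIAN, 9, 9);
// New step: explicit edge detection so cvHoughCircles receives a clean edge map
cvCanny(grayscaleImg, grayscaleImg, 50, 150, 3);
CvSeq* circles = cvHoughCircles(grayscaleImg, storage, CV_HOUGH_GRADIENT,
                                2, grayscaleImg->height/4, 200, 100);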
I'm trying to develop a program that counts contour areas as a function of size and displays them to the user.
I was able to use drawContours on all the areas, but I would like to add a text label under each contour area and display their respective sizes.
This should get you started. To go through all the contours you have to use the for loop with h_next below. If you want to find out more, I really recommend Gary Bradski's book Learning OpenCV. There are some great examples of contour finding in the book.
CvMemStorage* contour_storage = cvCreateMemStorage(0);
CvSeq* contours;
CvFont font;
char label[32];
cvInitFont(&font, CV_FONT_HERSHEY_SIMPLEX, 0.6f, 0.6f, 0, 2);
cvFindContours(sourceImage, contour_storage, &contours, sizeof (CvContour), CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
for (CvSeq* d = contours; d != NULL; d = d->h_next) {
    CvRect iconBox = cvBoundingRect(d, 0);
    CvPoint center = cvPoint(iconBox.x + (iconBox.width / 2), iconBox.y + (iconBox.height / 2));
    int area = (int)fabs(cvContourArea(d, CV_WHOLE_SEQ));
    sprintf(label, "%d", area);  // format the contour's area as the label text
    cvPutText(sourceImage, label, center, &font, CV_RGB(255, 255, 255));
}
You can use the OpenCV function putText.
I guess that you know how to retrieve the position of your contour center, don't you?
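If you are using the C++ API instead of the C API shown above, a rough equivalent with cv::putText could look like this; the function and variable names are placeholders of my own:
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Label each contour with its area, placed at the center of its bounding box.
void labelContourAreas(cv::Mat& img, const std::vector<std::vector<cv::Point> >& contours)
{
    for (size_t i = 0; i < contours.size(); ++i)
    {
        double area = cv::contourArea(contours[i]);
        cv::Rect box = cv::boundingRect(contours[i]);
        cv::Point center(box.x + box.width / 2, box.y + box.height / 2);
        cv::putText(img, cv::format("%.0f", area), center,
                    cv::FONT_HERSHEY_SIMPLEX, 0.6, cv::Scalar(255, 255, 255), 2);
    }
}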
I need to detect the Sun from the space sky.
These are examples of the input images:
I've got the following results after morphological filtering (the open operation applied twice).
Here's the code for this processing:
// Color to Gray
cvCvtColor(image, gray, CV_RGB2GRAY);
// color threshold
cvThreshold(gray,gray,150,255,CV_THRESH_BINARY);
// Morphological open, 2 iterations (a NULL structuring element means the default 3x3 rectangle)
cvMorphologyEx( gray, dst, NULL, NULL, CV_MOP_OPEN, 2);
Isn't this too heavy a processing chain for such a simple task? And how do I find the center of the Sun? If I just look for white points, I'll also find the white points of the big Earth (top left corner in the first example image).
Please advise me on my next steps to detect the Sun.
UPDATE 1:
Trying the algorithm of getting the centroid by the formula: {x, y} = {M10/M00, M01/M00}
CvMoments moments;
cvMoments(dst, &moments, 1);
double m00, m10, m01;
m00 = cvGetSpatialMoment(&moments, 0,0);
m10 = cvGetSpatialMoment(&moments, 1,0);
m01 = cvGetSpatialMoment(&moments, 0,1);
// calculating centroid
float centroid_x = m10/m00;
float centroid_y = m01/m00;
cvCircle( image,
cvPoint(cvRound(centroid_x), cvRound(centroid_y)),
50, CV_RGB(125,125,0), 4, 8,0);
And when the Earth is in the photo, I got this result:
So the centroid is on the Earth. :(
UPDATE 2:
Trying cvHoughCircles:
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* circles = cvHoughCircles(dst, storage, CV_HOUGH_GRADIENT, 12,
dst->width/2, 255, 100, 0, 35);
if ( circles->total > 0 ) {
// getting first found circle
float* circle = (float*)cvGetSeqElem( circles, 0 );
// Drawing:
// green center dot
cvCircle( image, cvPoint(cvRound(circle[0]),cvRound(circle[1])),
3, CV_RGB(0,255,0), -1, 8, 0 );
// wrapping red circle
cvCircle( image, cvPoint(cvRound(circle[0]),cvRound(circle[1])),
cvRound(circle[2]), CV_RGB(255,0,0), 3, 8, 0 );
}
First example: bingo, but the second - no ;(
I've tried different configurations of cvHoughCircles(), but couldn't find one that fits every example photo.
UPDATE 3:
The matchTemplate approach worked for me (mevatron's response). It worked across a large number of tests.
How about trying a simple matchTemplate approach? I used this template image:
And it detected the Sun in 3 out of 3 of the images I tried:
This should work due to the fact that circles (in your case the sun) are rotationally invariant, and since you are so far away from the sun it should be roughly scale invariant as well. So, template matching will work quite nicely here.
Finally, here is the code that I used to do this:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
/// Load image and template
string inputName = "sun2.png";
string outputName = "sun2_detect.png";
Mat img = imread( inputName, 1 );
Mat templ = imread( "sun_templ.png", 1 );
/// Create the result matrix
int result_cols = img.cols - templ.cols + 1;
int result_rows = img.rows - templ.rows + 1;
Mat result( result_rows, result_cols, CV_32FC1 );  // Mat takes (rows, cols, type)
/// Do the Matching and Normalize
matchTemplate(img, templ, result, CV_TM_CCOEFF);
normalize(result, result, 0, 1, NORM_MINMAX, -1, Mat());
Point maxLoc;
minMaxLoc(result, NULL, NULL, NULL, &maxLoc);
rectangle(img, maxLoc, Point( maxLoc.x + templ.cols , maxLoc.y + templ.rows ), Scalar(0, 255, 0), 2);
rectangle(result, maxLoc, Point( maxLoc.x + templ.cols , maxLoc.y + templ.rows ), Scalar(0, 255, 0), 2);
imshow("img", img);
imshow("result", result);
imwrite(outputName, img);
waitKey(0);
return 0;
}
Hope you find that helpful!
Color Segmentation Approach
Do a color segmentation on the images to identify objects on the black background. You may identify the Sun according to its area (provided this uniquely identifies it and does not vary largely across images).
A more sophisticated approach could compute image moments, e.g. Hu moments, of the objects. See this page for these features.
Use a classification algorithm of your choice to do the actual classification of the objects found. The simplest approach is to manually specify thresholds or value ranges that turn out to work for all (or most) of your object/image combinations.
You may compute the actual position from the raw moments, as for the circular Sun the position is equal to the center of mass:
Centroid: {x, y} = {M10/M00, M01/M00}
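A rough sketch of that pipeline (threshold, contours, area filter, centroid from moments); the 150 threshold and the area range are placeholder values, and findSunCentroid is only an illustrative name:
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Segment bright objects, pick a candidate by area, and return its centroid from raw moments.
bool findSunCentroid(const cv::Mat& gray, cv::Point2f& centroid)
{
    cv::Mat bin;
    cv::threshold(gray, bin, 150, 255, cv::THRESH_BINARY);

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); ++i)
    {
        double area = cv::contourArea(contours[i]);
        if (area < 100 || area > 5000)               // placeholder range: reject Earth and noise
            continue;
        cv::Moments m = cv::moments(contours[i]);
        centroid = cv::Point2f((float)(m.m10 / m.m00),
                               (float)(m.m01 / m.m00));  // {M10/M00, M01/M00}
        return true;
    }
    return false;
}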
Edge Map Approach
Another option would be a circle Hough transform on the edge map; this will hopefully return some candidate circles (by position and radius). You may select the Sun's circle according to the radius you expect (if you are lucky, there is at most one).
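For example, constraining the radius range in the Hough call prunes most unrelated candidates; the 20-35 px range below is only a guess based on the max_radius of 35 already used in UPDATE 2:
// Sketch: restrict cvHoughCircles to the radius range expected for the Sun.
CvSeq* circles = cvHoughCircles(dst, storage, CV_HOUGH_GRADIENT,
                                2, dst->width/2,   // dp, min_dist
                                255, 100,          // Canny / accumulator thresholds
                                20, 35);           // min_radius, max_radius (assumed range)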
A simple addition to your code is to filter out objects based on their size. If you always expect the Earth to be much bigger than the Sun, or the Sun to have almost the same area in each picture, you can filter it by area.
Try a blob detector for this task.
And note that it may be good to apply a morphological opening/closing instead of a simple erode or dilate, so your Sun will have almost the same area before and after processing.
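A minimal sketch of that suggestion, assuming the OpenCV 3+ SimpleBlobDetector API; the threshold and the area limits are placeholders to tune:
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

// Threshold, open morphologically, then detect bright blobs filtered by area.
std::vector<cv::KeyPoint> detectSunBlobs(const cv::Mat& gray)
{
    cv::Mat bin;
    cv::threshold(gray, bin, 150, 255, cv::THRESH_BINARY);

    // Opening keeps a blob's area roughly unchanged, unlike a plain erode.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::morphologyEx(bin, bin, cv::MORPH_OPEN, kernel, cv::Point(-1, -1), 2);

    cv::SimpleBlobDetector::Params params;
    params.filterByColor = true;
    params.blobColor = 255;        // look for bright blobs
    params.filterByArea = true;
    params.minArea = 100;          // placeholder limits for the Sun's expected size
    params.maxArea = 5000;

    cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(params);
    std::vector<cv::KeyPoint> keypoints;
    detector->detect(bin, keypoints);
    return keypoints;
}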