I'm trying to implement optical flow on iOS with OpenCV 3.1.
I built the basic pipeline shown in the code below, and I do get feature points from goodFeaturesToTrack, but no point is ever tracked: the status results are always zero (not successfully tracked).
cv::Mat gray; // current gray-level image
cv::Mat gray_prev;
std::vector<cv::Point2f> features; // detected features
std::vector<cv::Point2f> newFeatures;
std::vector<uchar> status; // status of tracked features
std::vector<float> err; // error in tracking
cv::TermCriteria _termcrit = cv::TermCriteria(cv::TermCriteria::COUNT|cv::TermCriteria::EPS,20,0.03);
-(void)processImage:(cv::Mat&)image {
    //-------------------- Optical Flow ---------------------
    cv::cvtColor(image, gray, CV_BGR2GRAY);
    if (gray_prev.empty()) {
        gray.copyTo(gray_prev);
    }
    cv::goodFeaturesToTrack(gray, features, 20, 0.01, 10);
    cv::calcOpticalFlowPyrLK(gray_prev, gray, features, newFeatures, status, err,
                             cv::Size(10, 10), 3, _termcrit, 0, 0.001);
    // draw circles for feature points
    for (size_t i = 0; i < features.size(); i++) {
        circle(image, features[i], 10, cv::Scalar(250, 250, 250));
    }
    for (size_t y = 0; y < status.size(); y++) {
        NSLog(@"Status: %d", status[y]); // always zero
    }
    std::swap(newFeatures, features);
    cv::swap(gray_prev, gray);
}
You should call
cv::goodFeaturesToTrack(gray, features, 20, 0.01, 10);
only once, for the first initialisation of the features list, and not on every cycle.
What you're doing is resetting the features list to match the current frame on every cycle, so there is no displacement between the features.
Also, if you want just the displacement between two frames, you should call
cv::goodFeaturesToTrack(gray_prev, features, ...);
i.e. detect the features on the previous frame.
The status is set to 0 in two cases:
the feature is outside the image ROI
the minimal eigenvalue of the gradient matrix of the Lucas-Kanade window is below minEigThreshold, i.e. there is not enough texture inside the window.
However, in my experience the status flag is more of a good guess than a reliable indicator of whether the feature has been tracked successfully.
Ignore the status and draw or print your features and newFeatures vectors. If they are the same, check whether the gray_prev and gray images are actually different from each other.
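For reference, here is a minimal sketch of the corrected per-frame logic (my own untested C++ rewrite, reusing the variables declared in the question; the re-detection threshold of 5 is an arbitrary choice):
void processFrame(cv::Mat &image) {
    cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);

    // First frame: detect the initial feature set once and return.
    if (gray_prev.empty()) {
        cv::goodFeaturesToTrack(gray, features, 20, 0.01, 10);
        gray.copyTo(gray_prev);
        return;
    }

    cv::calcOpticalFlowPyrLK(gray_prev, gray, features, newFeatures,
                             status, err, cv::Size(10, 10), 3, _termcrit, 0, 0.001);

    // Keep only the points that were tracked successfully.
    std::vector<cv::Point2f> tracked;
    for (size_t i = 0; i < status.size(); i++)
        if (status[i])
            tracked.push_back(newFeatures[i]);

    // Re-detect only when too few features survive.
    if (tracked.size() < 5)
        cv::goodFeaturesToTrack(gray, tracked, 20, 0.01, 10);

    features = tracked;
    cv::swap(gray_prev, gray);
}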
I'm trying to use OpenCV's FAST corner detection algorithm to get an outline of an image of a ball (not my final project; I'm using it as a simple example). For some reason, it only works on a third of the input Mat, and stretches the keypoints across the image. I'm not sure what could be going wrong here to make the FAST algorithm not apply to the entire Mat.
Code:
void featureDetection(const Mat& imgIn, std::vector<KeyPoint>& pointsOut) {
    int fast_threshold = 20;
    bool nonmaxSuppression = true;
    FAST(imgIn, pointsOut, fast_threshold, nonmaxSuppression);
}
int main(int argc, char** argv) {
    Mat out = imread("ball.jpg", IMREAD_COLOR);

    // Detect features
    std::vector<KeyPoint> keypoints;
    featureDetection(out.clone(), keypoints);
    Mat out2 = out.clone();

    // Draw features (Normal, missing right side)
    for (KeyPoint p : keypoints) {
        drawMarker(out, Point(p.pt.x / 3, p.pt.y), Scalar(0, 255, 0));
    }
    imwrite("out.jpg", out, std::vector<int>(0));

    // Draw features (Stretched)
    for (KeyPoint p : keypoints) {
        drawMarker(out2, Point(p.pt.x, p.pt.y), Scalar(127, 0, 255));
    }
    imwrite("out2.jpg", out2, std::vector<int>(0));
}
Input image
Output 1 (keypoint.x multiplied by a factor of 1/3, but missing right side)
Output 2 (Coordinates untouched)
I'm using OpenCV 4.5.4 on MinGW.
Most keypoint detectors expect a grayscale image as input.
If you interpret the memory of a BGR image as grayscale, you get 3 times the number of pixels per row. The y axis still looks OK because the algorithm uses the width offset per row, as most algorithms do (this is useful when subimaging or padding is used).
I don't know whether it is a bug or a feature that FAST doesn't check the number of channels and doesn't throw an exception when the wrong number of channels is given.
You can convert the image to grayscale with cv::cvtColor using the flag cv::COLOR_BGR2GRAY.
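A minimal fix on top of the code in the question might look like this (untested sketch, same variable names as above):
Mat gray;
cvtColor(out, gray, COLOR_BGR2GRAY); // FAST expects a single-channel 8-bit image
featureDetection(gray, keypoints);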
First, I integrated the OpenCV framework into Xcode. All the OpenCV code is in Objective-C, and I use it from Swift via a bridging header. I am new to the OpenCV framework and am trying to count the vertical lines in an image.
Here is my code:
First, I am converting the image to grayscale:
+ (UIImage *)convertToGrayscale:(UIImage *)image {
    cv::Mat mat;
    UIImageToMat(image, mat);
    cv::Mat gray;
    cv::cvtColor(mat, gray, CV_RGB2GRAY);
    UIImage *grayscale = MatToUIImage(gray);
    return grayscale;
}
Then I am detecting edges, so I can find the gray-colored lines:
+ (UIImage *)detectEdgesInRGBImage:(UIImage *)image {
    cv::Mat mat;
    UIImageToMat(image, mat);

    // Prepare the image for findContours
    cv::threshold(mat, mat, 128, 255, CV_THRESH_BINARY);

    // Find the contours. Use the contourOutput Mat so the original image doesn't get overwritten
    std::vector<std::vector<cv::Point> > contours;
    cv::Mat contourOutput = mat.clone();
    cv::findContours(contourOutput, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    NSLog(@"Count =>%lu", contours.size());

    // For blur:
    /* cv::GaussianBlur(mat, gray, cv::Size(11, 11), 0); */

    UIImage *grayscale = MatToUIImage(mat);
    return grayscale;
}
Both functions are written in Objective-C.
Here I am calling both functions from Swift:
override func viewDidLoad() {
    super.viewDidLoad()
    let img = UIImage(named: "imagenamed")
    let img1 = Wrapper.convert(toGrayscale: img)
    self.capturedImageView.image = Wrapper.detectEdges(inRGBImage: img1)
}
I worked on this for some days and found some useful documents (reference links):
OpenCV - how to count objects in photo?
How to count number of lines (Hough Trasnform) in OpenCV
OPENCV Documents
https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?#findcontours
Basically, I understand that first we need to convert the image to black and white; then, using cvtColor, threshold and findContours, we can find the colors or lines.
I am attaching the image whose vertical lines I want to count.
Original Image
Output Image that I am getting
I got a line count of 10, but I am not able to get an accurate count here.
Please guide me on this. Thank you!
Since you want to detect the number of vertical lines, there is a very simple approach I can suggest. You have already got a clear output, and I used that output in my code. Here are the steps before the code:
Preprocess the input image so the lines show up clearly
Scan each row until you reach a pixel whose value is higher than 100 (the threshold value I chose)
Then increase the line counter for that row
Continue along the row until you reach a pixel whose value is lower than 100
Repeat from step 3 until the end of the row, and do this for each row
At the end, find the most repeated element in the array to which you assigned the line count of each row. This number will be the number of vertical lines.
Note: If the steps are difficult to understand, think of it this way:
"I am checking the first row. I found a pixel which is higher than
100; this is the start of a line edge, so I increase the counter for this
row. I search along this row until I get a pixel smaller than 100, and then
search again for a pixel bigger than 100. When the row is finished, I assign the
line count for this row to a big array. I do this for the whole image. At the
end, since some lines look like two lines at the top and some
noise can occur, you should take the most repeated element in the big
array as the number of lines."
Here is the code part in C++:
#include <vector>
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
int main()
{
    cv::Mat img = cv::imread("/ur/img/dir/img.jpg", cv::IMREAD_GRAYSCALE);
    std::vector<int> numberOfVerticalLinesForEachRow;

    // Keep the top 200 rows and drop the rightmost 10 columns
    cv::Rect r(0, 0, img.cols - 10, 200);
    img = img(r);

    bool blackCheck = true;
    for (int i = 0; i < img.rows; i++)
    {
        int numberOfLines = 0;
        for (int j = 0; j < img.cols; j++)
        {
            // Rising edge: a dark-to-bright transition starts a new line
            if ((int)img.at<uchar>(cv::Point(j, i)) > 100 && blackCheck)
            {
                numberOfLines++;
                blackCheck = false;
            }
            // Back below the threshold: ready for the next line
            if ((int)img.at<uchar>(cv::Point(j, i)) < 100)
                blackCheck = true;
        }
        numberOfVerticalLinesForEachRow.push_back(numberOfLines);
    }

    // In this part you need a simple algorithm to find the most repeated element
    for (int k : numberOfVerticalLinesForEachRow)
        std::cout << k << std::endl;

    cv::namedWindow("WinWin", 0);
    cv::imshow("WinWin", img);
    cv::waitKey(0);
}
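For the "most repeated element" part marked in the code above, a minimal sketch of a helper (my own addition, untested) could be:
#include <map>
#include <vector>

// Return the value that occurs most often in the vector (its mode).
int mostRepeatedElement(const std::vector<int> &values)
{
    std::map<int, int> counts;
    for (int v : values)
        counts[v]++;

    int mode = 0, bestCount = 0;
    for (const auto &kv : counts)
    {
        if (kv.second > bestCount)
        {
            bestCount = kv.second;
            mode = kv.first;
        }
    }
    return mode;
}
Calling mostRepeatedElement(numberOfVerticalLinesForEachRow) then gives the estimated number of vertical lines.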
Here's another possible approach. It relies mainly on the cv::thinning function from the extended image processing module to reduce the lines to a width of 1 pixel. We can then crop a ROI from this image and count the number of transitions from 255 (white) to 0 (black). These are the steps:
Threshold the image using Otsu's method
Apply some morphology to clean up the binary image
Get the skeleton of the image
Crop a ROI from the center of the image
Count the number of jumps from 255 to 0
This is the code; be sure to include the extended image processing module (ximgproc) and link against it when compiling:
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/ximgproc.hpp> // The extended image processing module
// Read Image:
std::string imagePath = "D://opencvImages//";
cv::Mat inputImage = cv::imread( imagePath+"IN2Xh.png" );
// Convert BGR to Grayscale:
cv::cvtColor( inputImage, inputImage, cv::COLOR_BGR2GRAY );
// Get binary image via Otsu:
cv::threshold( inputImage, inputImage, 0, 255, cv::THRESH_OTSU );
The above snippet produces the following image:
Note that there's a little bit of noise due to the thresholding. Let's try to remove those isolated blobs of white pixels by applying some morphology, maybe an opening, which is an erosion followed by a dilation. The structuring elements and iterations, though, are not the same; these were found by experimentation. I wanted to remove the majority of the isolated blobs without modifying the original image too much:
// Apply Morphology. Erosion + Dilation:
// Set rectangular structuring element of size 3 x 3:
cv::Mat SE = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(3, 3) );
// Set the iterations:
int morphoIterations = 1;
cv::morphologyEx( inputImage, inputImage, cv::MORPH_ERODE, SE, cv::Point(-1,-1), morphoIterations);
// Set rectangular structuring element of size 5 x 5:
SE = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(5, 5) );
// Set the iterations:
morphoIterations = 2;
cv::morphologyEx( inputImage, inputImage, cv::MORPH_DILATE, SE, cv::Point(-1,-1), morphoIterations);
This combination of structuring elements and iterations yield the following filtered image:
It's looking alright. Now comes the main idea of the algorithm. If we compute the skeleton of this image, we "normalize" all the lines to a width of 1 pixel, which is very handy, because we could then reduce the image to a single row and count the number of jumps. Since the lines are "normalized", we get rid of possible overlaps between lines. Now, skeletonized images sometimes produce artifacts near the borders of the image. These artifacts resemble thickened anchors at the first and last rows of the image. To prevent them we can extend the borders prior to computing the skeleton:
// Extend borders to avoid skeleton artifacts, extend 5 pixels in all directions:
cv::copyMakeBorder( inputImage, inputImage, 5, 5, 5, 5, cv::BORDER_CONSTANT, 0 );
// Get the skeleton:
cv::Mat imageSkelton;
cv::ximgproc::thinning( inputImage, imageSkelton );
This is the skeleton obtained:
Nice. Before we count jumps, though, we must observe that the lines are skewed. If we reduced this image directly to one row, some overlapping could indeed happen between two lines that are too skewed. To prevent this, I crop a middle section of the skeleton image and count transitions there. Let's crop the image:
// Crop middle ROI:
cv::Rect linesRoi;
linesRoi.x = 0;
linesRoi.y = 0.5 * imageSkelton.rows;
linesRoi.width = imageSkelton.cols;
linesRoi.height = 1;
cv::Mat imageROI = imageSkelton( linesRoi );
This would be the new ROI, which is just the middle row of the skeleton image:
Let me prepare a BGR copy of this just to draw some results:
// BGR version of the Grayscale ROI:
cv::Mat colorROI;
cv::cvtColor( imageROI, colorROI, cv::COLOR_GRAY2BGR );
Ok, let's loop through the image and count the transitions from 255 to 0. That happens when the value of the current pixel is 0 and the value obtained an iteration earlier was 255. There's more than one way to loop through a cv::Mat in C++; I prefer cv::MatIterator_ and pointer arithmetic:
// Set the loop variables:
cv::MatIterator_<uchar> it, end;
uchar pastPixel = 0;
int jumpsCounter = 0;
int i = 0;

// Loop through the (single-channel) image ROI and count 255-0 jumps:
for (it = imageROI.begin<uchar>(), end = imageROI.end<uchar>(); it != end; ++it) {
    // Get the current pixel:
    uchar currentPixel = *it;

    // Compare it with the past pixel:
    if ((currentPixel == 0) && (pastPixel == 255)) {
        // We have a jump:
        jumpsCounter++;

        // Draw the point on the BGR version of the image:
        cv::line(colorROI, cv::Point(i, 0), cv::Point(i, 0), cv::Scalar(0, 0, 255), 1);
    }

    // The current pixel is now the past pixel:
    pastPixel = currentPixel;
    i++;
}
// Show image and print number of jumps found:
cv::namedWindow( "Jumps Found", CV_WINDOW_NORMAL );
cv::imshow( "Jumps Found", colorROI );
cv::waitKey( 0 );
std::cout<<"Jumps Found: "<<jumpsCounter<<std::endl;
The points where the jumps were found are drawn in red, and the number of total jumps printed is:
Jumps Found: 9
I'm fairly new to OpenCV, and very excited to learn more. I've been toying with the idea of outlining edges and shapes.
I've come across this code (running on an iOS device), which uses Canny. I'd like to be able to render this in color, and circle each shape. Can someone point me in the right direction?
Thanks!
IplImage *grayImage = cvCreateImage(cvGetSize(iplImage), IPL_DEPTH_8U, 1);
cvCvtColor(iplImage, grayImage, CV_BGRA2GRAY);
cvReleaseImage(&iplImage);
IplImage* img_blur = cvCreateImage( cvGetSize( grayImage ), grayImage->depth, 1);
cvSmooth(grayImage, img_blur, CV_BLUR, 3, 0, 0, 0);
cvReleaseImage(&grayImage);
IplImage* img_canny = cvCreateImage( cvGetSize( img_blur ), img_blur->depth, 1);
cvCanny( img_blur, img_canny, 10, 100, 3 );
cvReleaseImage(&img_blur);
cvNot(img_canny, img_canny);
An example might be these burger patties. OpenCV would detect the patty and outline it.
Original Image:
Color information is often handled by converting to the HSV color space, which represents "color" (hue) directly instead of splitting it into R/G/B components; that makes it easier to handle the same color at different brightness levels, etc.
If you convert your image to HSV you'll get this:
cv::Mat hsv;
cv::cvtColor(input,hsv,CV_BGR2HSV);
std::vector<cv::Mat> channels;
cv::split(hsv, channels);
cv::Mat H = channels[0];
cv::Mat S = channels[1];
cv::Mat V = channels[2];
Hue channel:
Saturation channel:
Value channel:
Typically, the hue channel is the first one to look at if you are interested in segmenting "color" (e.g. all red objects). One problem is that hue is a circular/angular value, which means that the highest values are very similar to the lowest values; this results in the bright artifacts at the border of the patties. To overcome this for a particular value, you can shift the whole hue space. Shifted by 50°, you'll get something like this instead:
cv::Mat shiftedH = H.clone();
int shift = 25; // in OpenCV, hue goes from 0 to 180 (values are halved to fit the 0..255 byte range, so 25 here means 50°)
for (int j = 0; j < shiftedH.rows; ++j)
    for (int i = 0; i < shiftedH.cols; ++i)
    {
        shiftedH.at<unsigned char>(j, i) = (shiftedH.at<unsigned char>(j, i) + shift) % 180;
    }
Now you can use simple Canny edge detection to find edges in the hue channel:
cv::Mat cannyH;
cv::Canny(shiftedH, cannyH, 100, 50);
You can see that the regions are a little bigger than the real patties, that might be because of the tiny reflections on the ground around the patties, but I'm not sure about that. Maybe it's just because of jpeg compression artifacts ;)
If you instead use the saturation channel to extract edges, you'll end up with something like this:
cv::Mat cannyS;
cv::Canny(S, cannyS, 200, 100);
where the contours aren't completely closed. Maybe you can combine hue and saturation within preprocessing to extract edges in the hue channel but only where saturation is high enough.
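One way to sketch that combination (untested; the saturation threshold of 60 is a guess you would have to tune):
// Keep hue edges only where the saturation channel is high enough:
cv::Mat satMask;
cv::threshold(S, satMask, 60, 255, cv::THRESH_BINARY);
cv::Mat cannyCombined;
cv::bitwise_and(cannyH, satMask, cannyCombined);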
At this stage you have edges. Note that edges aren't contours yet. If you extract contours directly from edges, they might not be closed, might be separated, etc.:
// extract contours of the canny image:
std::vector<std::vector<cv::Point> > contoursH;
std::vector<cv::Vec4i> hierarchyH;
cv::findContours(cannyH, contoursH, hierarchyH, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);

// draw the contours on a copy of the input image:
cv::Mat outputH = input.clone();
for (int i = 0; i < (int)contoursH.size(); i++)
{
    cv::drawContours(outputH, contoursH, i, cv::Scalar(0, 0, 255), 2, 8, hierarchyH, 0);
}
You can remove those small contours by checking cv::contourArea(contoursH[i]) > someThreshold before drawing. But do you see how the two patties on the left are connected? Here comes the hardest part... use some heuristics to "improve" your result.
cv::dilate(cannyH, cannyH, cv::Mat());
cv::dilate(cannyH, cannyH, cv::Mat());
cv::dilate(cannyH, cannyH, cv::Mat());
Dilation before contour extraction will "close" the gaps between different objects but increase the object size too.
If you extract contours from that, it will look like this:
If you instead choose only the "inner" contours, it is exactly what you want:
cv::Mat outputH = input.clone();
for (int i = 0; i < (int)contoursH.size(); i++)
{
    if (cv::contourArea(contoursH[i]) < 20) continue; // ignore contours that are too small to be a patty
    if (hierarchyH[i][3] < 0) continue;               // ignore "outer" contours
    cv::drawContours(outputH, contoursH, i, cv::Scalar(0, 0, 255), 2, 8, hierarchyH, 0);
}
Mind that the dilation and inner-contour steps are a little fuzzy, so they might not work for different images. If the initial edges are placed better around the object border, it might (1) not be necessary to do the dilate and inner-contour steps at all, and (2) if they are still necessary, the dilation will make the objects smaller in this scenario (which luckily is fine for the given sample image).
EDIT: Some important information about HSV: The hue channel will give every pixel a color of the spectrum, even if the saturation is very low (= gray/white) or the value is very low (= dark), so it is often desirable to threshold the saturation and value channels to find a specific color! This might be much easier and much more stable to handle than the dilation I've used in my code.
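As a sketch of that idea (the ranges below are placeholders you would have to tune for the patty color):
// Keep only pixels whose hue, saturation and value fall within a given range:
cv::Mat colorMask;
cv::inRange(hsv, cv::Scalar(0, 60, 60), cv::Scalar(50, 255, 255), colorMask);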
I want to find out how to get the difference between two similar grayscale images, to implement in a system for security purposes. I want to check whether any difference has occurred between them. For object tracking I have implemented Canny detection in the program below, and I easily get the outline of structured objects, which can later be subtracted to give only the outline of the difference in the delta image. But what if there is a non-structural difference, such as smoke or fire, in the second image? I have increased the contrast for clearer edge detection and modified the threshold values in the Canny parameters, yet got no suitable results.
Canny also detects the edges of shadows. If my two similar images were taken at different times of the day, the shadows will vary, so the edges will vary and give an undesirable false alarm.
How should I work around this? Can anyone help? Thanks!
I am using the C-language API of OpenCV 2.4 in Visual Studio 2010.
#include "stdafx.h"
#include "cv.h"
#include "highgui.h"
#include "cxcore.h"
#include <math.h>
#include <iostream>
#include <stdio.h>
using namespace cv;
using namespace std;
int main()
{
    IplImage* img1 = NULL;
    if ((img1 = cvLoadImage("libertyH1.jpg")) == 0)
    {
        printf("cvLoadImage failed\n");
    }

    IplImage* gray1 = cvCreateImage(cvGetSize(img1), IPL_DEPTH_8U, 1); // holds the grayscale image
    CvMemStorage* storage1 = cvCreateMemStorage(0); // storage for the Hough results

    cvCvtColor(img1, gray1, CV_BGR2GRAY); // convert to grayscale
    cvSmooth(gray1, gray1, CV_GAUSSIAN, 7, 7); // done to prevent a lot of false circles from being detected

    IplImage* canny1 = cvCreateImage(cvGetSize(gray1), IPL_DEPTH_8U, 1);
    IplImage* rgbcanny1 = cvCreateImage(cvGetSize(gray1), IPL_DEPTH_8U, 3);
    // cvCanny(image, edges, threshold1, threshold2, aperture_size = 3)
    cvCanny(gray1, canny1, 50, 100, 3);

    cvNamedWindow("Canny before hough");
    cvShowImage("Canny before hough", canny1);

    CvSeq* circles1 = cvHoughCircles(gray1, storage1, CV_HOUGH_GRADIENT, 1, gray1->height/3, 250, 100);
    cvCvtColor(canny1, rgbcanny1, CV_GRAY2BGR);

    cvNamedWindow("Canny after hough");
    cvShowImage("Canny after hough", rgbcanny1);

    for (int i = 0; i < circles1->total; i++)
    {
        // round the floats to an int
        float* p = (float*)cvGetSeqElem(circles1, i);
        cv::Point center(cvRound(p[0]), cvRound(p[1]));
        int radius = cvRound(p[2]);

        // draw the circle center
        cvCircle(rgbcanny1, center, 3, CV_RGB(0,255,0), -1, 8, 0);
        // draw the circle outline
        cvCircle(rgbcanny1, center, radius + 1, CV_RGB(0,0,255), 2, 8, 0);

        printf("x: %d y: %d r: %d\n", center.x, center.y, radius);
    }

    //////////////////////////////////////////////////////////////////////////

    IplImage* img2 = NULL;
    if ((img2 = cvLoadImage("liberty_wth_obj.jpg")) == 0)
    {
        printf("cvLoadImage failed\n");
    }

    IplImage* gray2 = cvCreateImage(cvGetSize(img2), IPL_DEPTH_8U, 1);
    CvMemStorage* storage = cvCreateMemStorage(0);

    cvCvtColor(img2, gray2, CV_BGR2GRAY);
    // This is done so as to prevent a lot of false circles from being detected
    cvSmooth(gray2, gray2, CV_GAUSSIAN, 7, 7);

    IplImage* canny2 = cvCreateImage(cvGetSize(img2), IPL_DEPTH_8U, 1);
    IplImage* rgbcanny2 = cvCreateImage(cvGetSize(img2), IPL_DEPTH_8U, 3);
    cvCanny(gray2, canny2, 50, 100, 3);

    CvSeq* circles2 = cvHoughCircles(gray2, storage, CV_HOUGH_GRADIENT, 1, gray2->height/3, 250, 100);
    cvCvtColor(canny2, rgbcanny2, CV_GRAY2BGR);

    for (int i = 0; i < circles2->total; i++)
    {
        // round the floats to an int
        float* p = (float*)cvGetSeqElem(circles2, i);
        cv::Point center(cvRound(p[0]), cvRound(p[1]));
        int radius = cvRound(p[2]);

        // draw the circle center
        cvCircle(rgbcanny2, center, 3, CV_RGB(0,255,0), -1, 8, 0);
        // draw the circle outline
        cvCircle(rgbcanny2, center, radius + 1, CV_RGB(0,0,255), 2, 8, 0);

        printf("x: %d y: %d r: %d\n", center.x, center.y, radius);
    }

    cvWaitKey(0);
    return 0;
}
You want code help here? This is not an easy task. There are few algorithms available on the internet, or you can try to invent a new one; a lot of research is going on in this area. I have some idea of a process: you can find the edges via the Y channel of the YCbCr color system. Subtract this Y value from the blurred image's Y value, and you will get the edges. Then make an array representation: divide the image into blocks and compare them block by block. A block may be shifted, rotated, twisted, etc., so compare with array matching. Object tracking is difficult due to the background; take care to omit unnecessary objects.
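A rough sketch of the Y-channel edge idea using the C++ API (untested; the kernel size and file name are my own choices):
// Edge map from "Y minus blurred Y", as described above:
cv::Mat img = cv::imread("libertyH1.jpg");
cv::Mat ycrcb;
cv::cvtColor(img, ycrcb, cv::COLOR_BGR2YCrCb);

std::vector<cv::Mat> channels;
cv::split(ycrcb, channels); // channels[0] is the Y (luma) plane

cv::Mat blurredY, edges;
cv::GaussianBlur(channels[0], blurredY, cv::Size(7, 7), 0);
cv::absdiff(channels[0], blurredY, edges); // large values near edges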
I think the way to go could be background subtraction. It lets you cope with changes in lighting conditions.
See the Wikipedia entry for an intro. The basic idea is that you have to build a model of the scene background; then all differences are computed relative to that background.
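A minimal sketch with OpenCV's built-in subtractor (C++ API with 3.x naming; in 2.4 the cv::BackgroundSubtractorMOG2 class is used directly. Feeding just two frames is only for illustration; normally you would feed a video stream):
#include <opencv2/opencv.hpp>

int main()
{
    // Hypothetical file names; substitute your own frames.
    cv::Mat background = cv::imread("libertyH1.jpg");
    cv::Mat current = cv::imread("liberty_wth_obj.jpg");

    // MOG2 keeps a per-pixel Gaussian-mixture model of the background.
    cv::Ptr<cv::BackgroundSubtractor> subtractor = cv::createBackgroundSubtractorMOG2();

    cv::Mat fgMask;
    subtractor->apply(background, fgMask); // learn the background frame
    subtractor->apply(current, fgMask);    // differences show up as foreground

    cv::imshow("Foreground mask", fgMask);
    cv::waitKey(0);
    return 0;
}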
I have done some analysis on image differencing, but the code was written for Java. Kindly look into the link below; it may help:
How to find rectangle of difference between two images
Cheers!
I have to code an object detector (in this case, a ball) using OpenCV. The problem is that every single Google search returns something about FACE detection. So I need help on where to start, what to use, etc.
Some info:
The ball doesn't have a fixed color; it will probably be white, but it might change.
I HAVE to use machine learning; it doesn't have to be complex or reliable, and the suggestion is KNN (which is WAY simpler and easier).
After all my searching, I found that calculating the histogram of ball-only sample images and teaching it to the ML algorithm could be useful, but my main concern is that the ball size can and will change (closer to and further from the camera), and I have no idea what to pass to the ML to classify for me. I mean, I can't (or can I?) just test every pixel of the image at every possible size (from, let's say, 5x5 to WxH) and hope to find a positive result.
There might be a non-uniform background, like people or cloth behind the ball, etc.
As I said, I have to use an ML algorithm, which means no Haar or Viola-Jones.
Also, I thought of using contours to find circles in a Canny'ed image; I just have to find a way to transform a contour into a row of data to teach the KNN.
So... suggestions?
Thanks in advance.
;)
Well, basically you need to detect circles. Have you seen cvHoughCircles()? Are you allowed to use it?
This page has good info on detecting things with OpenCV. You might be most interested in section 2.5.
This is a small demo I just wrote to detect coins in this picture. Hopefully you can use some part of the code to your advantage.
Input:
Outputs:
// compiled with: g++ circles.cpp -o circles `pkg-config --cflags --libs opencv`
#include <stdio.h>
#include <cv.h>
#include <highgui.h>
#include <math.h>
int main(int argc, char** argv)
{
    IplImage* img = NULL;
    if ((img = cvLoadImage(argv[1])) == 0)
    {
        printf("cvLoadImage failed\n");
    }

    IplImage* gray = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
    CvMemStorage* storage = cvCreateMemStorage(0);

    cvCvtColor(img, gray, CV_BGR2GRAY);
    // This is done so as to prevent a lot of false circles from being detected
    cvSmooth(gray, gray, CV_GAUSSIAN, 7, 7);

    IplImage* canny = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
    IplImage* rgbcanny = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 3);
    cvCanny(gray, canny, 50, 100, 3);

    CvSeq* circles = cvHoughCircles(gray, storage, CV_HOUGH_GRADIENT, 1, gray->height/3, 250, 100);
    cvCvtColor(canny, rgbcanny, CV_GRAY2BGR);

    for (int i = 0; i < circles->total; i++)
    {
        // round the floats to an int
        float* p = (float*)cvGetSeqElem(circles, i);
        cv::Point center(cvRound(p[0]), cvRound(p[1]));
        int radius = cvRound(p[2]);

        // draw the circle center
        cvCircle(rgbcanny, center, 3, CV_RGB(0,255,0), -1, 8, 0);
        // draw the circle outline
        cvCircle(rgbcanny, center, radius + 1, CV_RGB(0,0,255), 2, 8, 0);

        printf("x: %d y: %d r: %d\n", center.x, center.y, radius);
    }

    cvNamedWindow("circles", 1);
    cvShowImage("circles", rgbcanny);
    cvSaveImage("out.png", rgbcanny);
    cvWaitKey(0);

    return 0;
}
The detection of the circles depends a lot on the parameters of cvHoughCircles(). Note that in this demo I used Canny as well.
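As for the question's point about turning a contour into a row of data for the KNN: one common trick (a sketch on my part, not tested) is to use Hu moments, which give a fixed-length descriptor that is invariant to scale and rotation, so the changing ball size is less of a problem:
// Convert a contour into a fixed-length feature row for a KNN classifier.
cv::Mat contourToFeatureRow(const std::vector<cv::Point> &contour)
{
    cv::Moments m = cv::moments(contour);
    double hu[7];
    cv::HuMoments(m, hu);

    cv::Mat row(1, 7, CV_32F);
    for (int i = 0; i < 7; i++)
        row.at<float>(0, i) = (float)hu[i];
    return row; // stack these rows to build the KNN training matrix
}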