HOG features for object detection using GPU in OpenCV - c++

In my project I am calculating HOG features on GPU for different levels in the same image. My aim is to detect the following objects.
1. Truck
2. Car
3. Person
The most important question is the selection of the window size in the case of a multi-class object detector. This post provides a very good base, but it does not answer how to select the window size in the multi-class case.
To solve this problem I calculated the HOG features of each positive image at different levels/resolutions, keeping the window size (48x96) the same, but the resulting file for each image is around 600 MB, which is far too large.
Please let me know how to select the window size, block size and cell size for multi-class object detection. Here is the code I used to calculate the HOG features.
void App::run()
{
    unsigned int count = 1;
    FileStorage fs;
    running = true;

    //int width;
    //int height;
    Size win_size(args.win_width, args.win_width * 2);
    Size win_stride(args.win_stride_width, args.win_stride_height);

    cv::gpu::HOGDescriptor gpu_hog(win_size, Size(16, 16), Size(8, 8), Size(8, 8), 9,
                                   cv::gpu::HOGDescriptor::DEFAULT_WIN_SIGMA, 0.2, gamma_corr,
                                   cv::gpu::HOGDescriptor::DEFAULT_NLEVELS);

    VideoCapture vc("/home/ubuntu/Desktop/getdescriptor/images/image%d.jpg");
    Mat frame;
    Mat Left;
    Mat img_aux, img, img_to_show, img_new;
    cv::Mat temp;
    gpu::GpuMat gpu_img, descriptors, new_img;
    char cbuff[20];

    while (running)
    {
        vc.read(frame);
        if (!frame.empty())
        {
            workBegin();
            width = frame.rows;
            height = frame.cols;
            sprintf(cbuff, "%04d", count);

            // Change format of the image
            if (make_gray) cvtColor(frame, img_aux, CV_BGR2GRAY);
            else if (use_gpu) cvtColor(frame, img_aux, CV_BGR2BGRA);
            else Left.copyTo(img_aux);

            // Resize image
            if (args.resize_src) resize(img_aux, img, Size(args.width, args.height));
            else img = img_aux;
            img_to_show = img;

            gpu_hog.nlevels = nlevels;
            hogWorkBegin();

            if (use_gpu)
            {
                gpu_img.upload(img);
                new_img.upload(img_new);
                fs.open(cbuff, FileStorage::WRITE);

                for (int levels = 0; levels < nlevels; levels++)
                {
                    gpu_hog.getDescriptors(gpu_img, win_stride, descriptors, cv::gpu::HOGDescriptor::DESCR_FORMAT_ROW_BY_ROW);
                    descriptors.download(temp);
                    //printf("size %d %d\n", temp.rows, temp.cols);
                    fs << "level" << levels;
                    fs << "features" << temp;
                    cout << "(" << width << "," << height << ")" << endl;
                    width = round(width / scale);
                    height = round(height / scale);
                    if (width < win_size.width || height < win_size.height)
                        break;
                    cout << "Levels " << levels << endl;
                    resize(img, img_new, Size(width, height));
                    scale *= scale;
                }
                cout << count << " Image feature calculated !" << endl;
                count++;
                //width = 640; height = 480;
                scale = 1.05;
            }
            hogWorkEnd();
            fs.release();
        }
        else running = false;
    }
}

The window size should be chosen such that the object(s) you want to detect fit into the window. If you want to have different window sizes for different object types, this can become tricky.
Usually what you do is the following:
Take training data for each type of object and train [number of object types] models, using the features extracted at the known positions of the objects.
Then take each test image and use a sliding-window approach to extract features at each location. These features are then compared to each model. If one of the models yields a score higher than a certain threshold, you have found that object. If more than one model scores above the threshold, simply take the one with the highest score.
If you want to use differently sized detection windows you will get feature vectors of different sizes (by the nature of the HoG features). The tricky thing is that in the testing phase you have to run as many sliding windows as there are object types. This would definitely work, but you have to process each test image several times, leading to higher processing time.
To answer your question about the sizes: there is no value I can give you; it always depends on your images. Using an image pyramid, as you mentioned above, is a good way to deal with differently scaled objects.
window size: the whole object should fit in; it has to be divisible by the block size
block size: has to be divisible by the cell size
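For illustration, here is a minimal sketch of the multi-window setup described above, using one cv::gpu::HOGDescriptor per class (as in the question's code). The window sizes below are placeholders chosen only to show the idea, not recommended values; each one respects the divisibility rules just mentioned.
cv::Size win_person(48, 96);    // tall, narrow objects
cv::Size win_car(96, 64);       // wider than tall
cv::Size win_truck(128, 96);    // largest class

cv::gpu::HOGDescriptor hog_person(win_person, cv::Size(16, 16), cv::Size(8, 8), cv::Size(8, 8), 9);
cv::gpu::HOGDescriptor hog_car   (win_car,    cv::Size(16, 16), cv::Size(8, 8), cv::Size(8, 8), 9);
cv::gpu::HOGDescriptor hog_truck (win_truck,  cv::Size(16, 16), cv::Size(8, 8), cv::Size(8, 8), 9);

// At test time each descriptor is run over the same image (or pyramid level),
// producing three differently sized feature vectors that are scored by three
// separately trained models.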
Sample code for visualization of HoG features can be found here. It also helps to understand what the feature vectors look like.
EDIT: Found out the hard way that only cv::Size(8,8) is allowed for the cell size. See the documentation.

Related

OCR & OpenCV: Difference between two frames on high resolution images

According to this post OCR: Difference between two frames, I now know how to find pixel differences between two images with OpenCV.
I would like to improve this solution and use it with high-resolution images (from a video) with rich content. The example above is not applicable to big images because the process is too slow (too many differences are found; the findContours step fills the container with 250k elements, which takes a huge amount of time to process).
My application uses an RLE decoder to decode the compressed frames of the video. Once a frame is decoded, I would like to compare the current frame with the previous one in order to store the differences between the two frames, in a "Mat" for example.
The goal of all of this is to be able to perform an analysis on the differing pixels and to check whether there is any Latin character. This lets me reduce the number of pixels to analyze and save precious time.
If anyone has other ideas for performing such operations, feel free to propose them.
Thank you for your help.
EDIT 1:
Example of two high-resolution images of a computer screen. These are, for the moment, the perfect example of what I'm trying to analyse. As we can see, the only difference between the two big images is a window, and I would like to analyze just the new "Challenge" window for any characters.
EDIT 2:
I'm trying to tune the algorithm depending on the data analyzed. Typically, on the two following pictures I only get the green lines as differences and no text at all (which is what is most interesting). I'm trying to understand better how this works.
1st image:
2nd image:
3rd image:
As you can see I only get those green lines and never the text (at best I can get just ONE letter when decreasing contours[i].size()).
In addition to the post you mentioned, you need to:
When you binarize the mask, use a threshold higher than 0 to remove small differences.
Remove some noise. You can find all connected components, and remove smaller ones.
Find the area of the bigger connected components. You can use convexHull and fillConvexPoly to get the mask of the changed objects on screen.
Copy the second image to a new image, using the given mask.
The result will look like:
Code:
#include <opencv2/opencv.hpp>
#include <vector>

using namespace std;
using namespace cv;

int main()
{
    Mat3b img1 = imread("path_to_image_1");
    Mat3b img2 = imread("path_to_image_2");

    Mat3b diff;
    absdiff(img1, img2, diff);

    // Split each channel
    vector<Mat1b> masks;
    split(diff, masks);

    // Create a black mask
    Mat1b mask(diff.rows, diff.cols, uchar(0));

    // OR with each channel of the N channels mask
    for (int i = 0; i < masks.size(); ++i)
    {
        mask |= masks[i];
    }

    // Binarize mask
    mask = mask > 100;

    // Results images
    vector<Mat3b> difference_images;

    // Remove small blobs
    //Mat kernel = getStructuringElement(MORPH_RECT, Size(5,5));
    //morphologyEx(mask, mask, MORPH_OPEN, kernel);

    // Find connected components
    vector<vector<Point>> contours;
    findContours(mask.clone(), contours, CV_RETR_EXTERNAL, CHAIN_APPROX_NONE);

    for (int i = 0; i < contours.size(); ++i)
    {
        if (contours[i].size() > 1000)
        {
            Mat1b mm(mask.rows, mask.cols, uchar(0));
            vector<Point> hull;
            convexHull(contours[i], hull);
            fillConvexPoly(mm, hull, Scalar(255));

            Mat3b difference_img(img2.rows, img2.cols, Vec3b(0,0,0));
            img2.copyTo(difference_img, mm);
            difference_images.push_back(difference_img.clone());
        }
    }
    return 0;
}

Template Matching for Coins with OpenCV

I am undertaking a project that will automatically count the values of coins from an input image. So far I have segmented the coins using some pre-processing with edge detection and the Hough transform.
My question is how to proceed from here. I need to do some template matching on the segmented images based on some previously stored features. How can I go about doing this?
I have also read about something called K-Nearest Neighbours and I feel it is something I should be using. But I am not too sure how to go about using it.
Research articles I have followed:
Coin Detector
Coin Recognition
One way of doing pattern matching is using cv::matchTemplate.
This takes an input image and a smaller image which acts as a template. It compares the template against overlapped image regions, computing the similarity of the template with each overlapped region. Several methods for computing the comparison are available.
This method does not directly support scale or orientation invariance. But it is possible to overcome that by scaling candidates to a reference size and by testing against several rotated templates.
A detailed example of this technique is shown below to detect the presence and location of 50c coins. The same procedure can be applied to the other coins.
Two programs will be built: one to create templates from the big template image of the 50c coin, and another one which takes those templates and the image with coins as input, and outputs an image where the 50c coin(s) are labelled.
Template Maker
#define TEMPLATE_IMG "50c.jpg"
#define ANGLE_STEP 30

int main()
{
    cv::Mat image = loadImage(TEMPLATE_IMG);
    cv::Mat mask = createMask( image );
    cv::Mat loc = locate( mask );
    cv::Mat imageCS;
    cv::Mat maskCS;
    centerAndScale( image, mask, loc, imageCS, maskCS);
    saveRotatedTemplates( imageCS, maskCS, ANGLE_STEP );
    return 0;
}
Here we load the image which will be used to construct our templates.
Segment it to create a mask.
Locate the center of mass of said mask.
And we rescale and copy that mask and the coin so that they occupy a square of fixed size where the edges of the square are touching the circumference of mask and coin. That is, the side of the square has the same length in pixels as the diameter of the scaled mask or coin image.
Finally we save that scaled and centered image of the coin. And we save further copies of it rotated in fixed angle increments.
cv::Mat loadImage(const char* name)
{
    cv::Mat image;
    image = cv::imread(name);
    if ( image.data==NULL || image.channels()!=3 )
    {
        std::cout << name << " could not be read or is not correct." << std::endl;
        exit(1);
    }
    return image;
}
loadImage uses cv::imread to read the image. It verifies that data has been read and that the image has three channels, then returns the read image.
#define THRESHOLD_BLUE       130
#define THRESHOLD_TYPE_BLUE  cv::THRESH_BINARY_INV
#define THRESHOLD_GREEN      230
#define THRESHOLD_TYPE_GREEN cv::THRESH_BINARY_INV
#define THRESHOLD_RED        140
#define THRESHOLD_TYPE_RED   cv::THRESH_BINARY
#define CLOSE_ITERATIONS     5

cv::Mat createMask(const cv::Mat& image)
{
    cv::Mat channels[3];
    cv::split( image, channels);

    cv::Mat mask[3];
    cv::threshold( channels[0], mask[0], THRESHOLD_BLUE , 255, THRESHOLD_TYPE_BLUE );
    cv::threshold( channels[1], mask[1], THRESHOLD_GREEN, 255, THRESHOLD_TYPE_GREEN );
    cv::threshold( channels[2], mask[2], THRESHOLD_RED  , 255, THRESHOLD_TYPE_RED );

    cv::Mat compositeMask;
    cv::bitwise_and( mask[0], mask[1], compositeMask);
    cv::bitwise_and( compositeMask, mask[2], compositeMask);

    cv::morphologyEx(compositeMask, compositeMask, cv::MORPH_CLOSE,
                     cv::Mat(), cv::Point(-1, -1), CLOSE_ITERATIONS );

    /// Next three lines only for debugging, may be removed
    cv::Mat filtered;
    image.copyTo( filtered, compositeMask );
    cv::imwrite( "filtered.jpg", filtered);

    return compositeMask;
}
createMask does the segmentation of the template. It binarizes each of the BGR channels, ANDs those three binarized images, and performs a CLOSE morphological operation to produce the mask.
The three debug lines copy the original image into a black one using the computed mask as a mask for the copy operation. This helped in choosing the proper values for the thresholds.
Here we can see the 50c image filtered by the mask created in createMask.
cv::Mat locate( const cv::Mat& mask )
{
    // Compute center and radius.
    cv::Moments moments = cv::moments( mask, true);
    float area = moments.m00;
    float radius = sqrt( area/M_PI );
    float xCentroid = moments.m10/moments.m00;
    float yCentroid = moments.m01/moments.m00;
    float m[1][3] = {{ xCentroid, yCentroid, radius}};
    // clone so the returned Mat owns its data (m is a local array)
    return cv::Mat(1, 3, CV_32F, m).clone();
}
locate computes the center of mass of the mask and its radius, returning those 3 values in a single-row Mat in the form { x, y, radius }.
It uses cv::moments, which calculates all of the moments up to the third order of a polygon or rasterized shape (a rasterized shape in our case). We are not interested in all of those moments, but three of them are useful here: m00 is the area of the mask, and the centroid can be calculated from m00, m10 and m01.
void centerAndScale(const cv::Mat& image, const cv::Mat& mask,
                    const cv::Mat& characteristics,
                    cv::Mat& imageCS, cv::Mat& maskCS)
{
    float radius = characteristics.at<float>(0,2);
    float xCenter = characteristics.at<float>(0,0);
    float yCenter = characteristics.at<float>(0,1);
    int diameter = round(radius*2);
    int xOrg = round(xCenter-radius);
    int yOrg = round(yCenter-radius);

    cv::Rect roiOrg = cv::Rect( xOrg, yOrg, diameter, diameter );
    cv::Mat roiImg = image(roiOrg);
    cv::Mat roiMask = mask(roiOrg);

    cv::Mat centered = cv::Mat::zeros( diameter, diameter, CV_8UC3);
    roiImg.copyTo( centered, roiMask);
    cv::imwrite( "centered.bmp", centered); // debug

    imageCS.create( TEMPLATE_SIZE, TEMPLATE_SIZE, CV_8UC3);
    cv::resize( centered, imageCS, cv::Size(TEMPLATE_SIZE,TEMPLATE_SIZE), 0, 0 );
    cv::imwrite( "scaled.bmp", imageCS); // debug

    roiMask.copyTo(centered);
    cv::resize( centered, maskCS, cv::Size(TEMPLATE_SIZE,TEMPLATE_SIZE), 0, 0 );
}
centerAndScale uses the centroid and radius computed by locate to take a region of interest of the input image and a region of interest of the mask, such that the center of those regions is also the center of the coin and mask, and the side length of the regions is equal to the diameter of the coin/mask.
These regions are later scaled to a fixed TEMPLATE_SIZE. This scaled region will be our reference template. When, later on in the matching program, we want to check whether a detected candidate coin is this coin, we will also take a region of the candidate coin and center and scale it in the same way before performing template matching. This way we achieve scale invariance.
void saveRotatedTemplates( const cv::Mat& image, const cv::Mat& mask, int stepAngle )
{
    char name[1000];
    cv::Mat rotated( TEMPLATE_SIZE, TEMPLATE_SIZE, CV_8UC3 );
    for ( int angle=0; angle<360; angle+=stepAngle )
    {
        cv::Point2f center( TEMPLATE_SIZE/2, TEMPLATE_SIZE/2);
        cv::Mat r = cv::getRotationMatrix2D(center, angle, 1.0);

        cv::warpAffine(image, rotated, r, cv::Size(TEMPLATE_SIZE, TEMPLATE_SIZE));
        sprintf( name, "template-%03d.bmp", angle);
        cv::imwrite( name, rotated );

        cv::warpAffine(mask, rotated, r, cv::Size(TEMPLATE_SIZE, TEMPLATE_SIZE));
        sprintf( name, "templateMask-%03d.bmp", angle);
        cv::imwrite( name, rotated );
    }
}
saveRotatedTemplates saves the previously computed template.
It saves several copies of it, each one rotated by an angle defined by ANGLE_STEP. The goal of this is to provide orientation invariance. The lower we set stepAngle, the better the orientation invariance we get, but it also implies a higher computational cost.
You may download the whole template maker program here.
When run with ANGLE_STEP as 30 I get the following 12 templates :
Template Matching.
#define INPUT_IMAGE "coins.jpg"
#define LABELED_IMAGE "coins_with50cLabeled.bmp"
#define LABEL "50c"
#define MATCH_THRESHOLD 0.065
#define ANGLE_STEP 30

int main()
{
    vector<cv::Mat> templates;
    loadTemplates( templates, ANGLE_STEP );

    cv::Mat image = loadImage( INPUT_IMAGE );
    cv::Mat mask = createMask( image );

    vector<Candidate> candidates;
    getCandidates( image, mask, candidates );
    saveCandidates( candidates ); // debug
    matchCandidates( templates, candidates );

    for (int n = 0; n < candidates.size( ); ++n)
        std::cout << candidates[n].score << std::endl;

    cv::Mat labeledImg = labelCoins( image, candidates, MATCH_THRESHOLD, false, LABEL );
    cv::imwrite( LABELED_IMAGE, labeledImg );
    return 0;
}
The goal here is to read the templates and the image to be examined and determine the location of coins which match our template.
First we read into a vector of images all the template images we produced in the previous program.
Then we read the image to be examined.
Then we binarize the image to be examined using exactly the same function as in the template maker.
getCandidates locates groups of points which together form a polygon. Each of these polygons is a candidate coin. All of them are rescaled and centered in a square of size equal to that of our templates, so that we can perform matching in a way that is invariant to scale.
We save the candidate images obtained for debugging and tuning purposes.
matchCandidates matches each candidate with all the templates storing for each the result of the best match. Since we have templates for several orientations this provides invariance to orientation.
Scores of each candidate are printed so we can decide on a threshold to separate 50c coins from non 50c coins.
labelCoins copies the original image and draws a label over the candidates whose score is greater than (or less than, for some methods) the threshold defined in MATCH_THRESHOLD.
And finally we save the result as a .BMP file.
void loadTemplates(vector<cv::Mat>& templates, int angleStep)
{
    templates.clear( );
    for (int angle = 0; angle < 360; angle += angleStep)
    {
        char name[1000];
        sprintf( name, "template-%03d.bmp", angle );
        cv::Mat templateImg = cv::imread( name );
        if (templateImg.data == NULL)
        {
            std::cout << "Could not read " << name << std::endl;
            exit( 1 );
        }
        templates.push_back( templateImg );
    }
}
loadTemplates is similar to loadImage. But it loads several images instead of just one and stores them in a std::vector.
loadImage is exactly the same as in the template maker.
createMask is also exactly the same as in the template maker. This time we apply it to the image with several coins. It should be noted that the binarization thresholds were chosen to binarize the 50c, and they will not work properly to binarize all the coins in the image. But that is of no consequence, since the program's objective is only to identify 50c coins. As long as those are properly segmented we are fine. It actually works in our favour if some coins are lost in this segmentation, since we save time evaluating them (as long as we only lose coins which are not 50c).
typedef struct Candidate
{
    cv::Mat image;
    float x;
    float y;
    float radius;
    float score;
} Candidate;

void getCandidates(const cv::Mat& image, const cv::Mat& mask,
                   vector<Candidate>& candidates)
{
    vector<vector<cv::Point> > contours;
    vector<cv::Vec4i> hierarchy;

    /// Find contours
    cv::Mat maskCopy;
    mask.copyTo( maskCopy );
    cv::findContours( maskCopy, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cv::Point( 0, 0 ) );

    cv::Mat maskCS;
    cv::Mat imageCS;
    cv::Scalar white = cv::Scalar( 255 );

    for (int nContour = 0; nContour < contours.size( ); ++nContour)
    {
        /// Draw contour
        cv::Mat drawing = cv::Mat::zeros( mask.size( ), CV_8UC1 );
        cv::drawContours( drawing, contours, nContour, white, -1, 8, hierarchy, 0, cv::Point( ) );

        // Compute center and radius and area.
        // Discard small areas.
        cv::Moments moments = cv::moments( drawing, true );
        float area = moments.m00;
        if (area < CANDIDATES_MIN_AREA)
            continue;

        Candidate candidate;
        candidate.radius = sqrt( area / M_PI );
        candidate.x = moments.m10 / moments.m00;
        candidate.y = moments.m01 / moments.m00;
        float m[1][3] = {
            { candidate.x, candidate.y, candidate.radius}
        };
        cv::Mat characteristics( 1, 3, CV_32F, m );
        centerAndScale( image, drawing, characteristics, imageCS, maskCS );
        imageCS.copyTo( candidate.image );
        candidates.push_back( candidate );
    }
}
The heart of getCandidates is cv::findContours, which finds the contours of the areas present in its input image, which here is the mask previously computed.
findContours returns a vector of contours, each contour itself being a vector of points which form the outer line of the detected polygon.
Each polygon delimits the region of a candidate coin.
For each contour we use cv::drawContours to draw the filled polygon over a black image.
With this drawn image we use the same procedure explained earlier to compute the centroid and radius of the polygon.
And we use centerAndScale, the same function used in the template maker, to center and scale the image contained in that polygon into an image which will have the same size as our templates. This way we will later be able to perform a proper matching even for coins from photos of different scales.
Each of these candidate coins is copied in a Candidate structure which contains :
Candidate image
x and y for centroid
radius
score
getCandidates computes all these values except for score.
After composing the candidate, it is put into a vector of candidates, which is the result we get from getCandidates.
These are the 4 candidates obtained :
void saveCandidates(const vector<Candidate>& candidates)
{
    for (int n = 0; n < candidates.size( ); ++n)
    {
        char name[1000];
        sprintf( name, "Candidate-%03d.bmp", n );
        cv::imwrite( name, candidates[n].image );
    }
}
saveCandidates saves the computed candidates for debugging purposes, and also so that I can post those images here.
void matchCandidates(const vector<cv::Mat>& templates,
                     vector<Candidate>& candidates)
{
    for (auto it = candidates.begin( ); it != candidates.end( ); ++it)
        matchCandidate( templates, *it );
}
matchCandidates just calls matchCandidate for each candidate. After completion we will have the score for all candidates computed.
void matchCandidate(const vector<cv::Mat>& templates, Candidate& candidate)
{
    /// For SQDIFF and SQDIFF_NORMED, the best matches are lower values. For all the other methods, the higher the better
    if (MATCH_METHOD == CV_TM_SQDIFF || MATCH_METHOD == CV_TM_SQDIFF_NORMED)
        candidate.score = FLT_MAX;
    else
        candidate.score = 0;

    for (auto it = templates.begin( ); it != templates.end( ); ++it)
    {
        float score = singleTemplateMatch( *it, candidate.image );
        if (MATCH_METHOD == CV_TM_SQDIFF || MATCH_METHOD == CV_TM_SQDIFF_NORMED)
        {
            if (score < candidate.score)
                candidate.score = score;
        }
        else
        {
            if (score > candidate.score)
                candidate.score = score;
        }
    }
}
matchCandidate takes as input a single candidate and all the templates. Its goal is to match each template against the candidate. That work is delegated to singleTemplateMatch.
We store the best score obtained, which for CV_TM_SQDIFF and CV_TM_SQDIFF_NORMED is the smallest one and for the other matching methods is the biggest one.
float singleTemplateMatch(const cv::Mat& templateImg, const cv::Mat& candidateImg)
{
    cv::Mat result( 1, 1, CV_32FC1 ); // matchTemplate produces a CV_32FC1 result
    cv::matchTemplate( candidateImg, templateImg, result, MATCH_METHOD );
    return result.at<float>( 0, 0 );
}
singleTemplateMatch performs the matching.
cv::matchTemplate takes two input images, the second smaller than or equal in size to the first.
The common use case is for a small template (2nd parameter) to be matched against a larger image (1st parameter); the result is a two-dimensional Mat of floats containing the matching score of the template across the image. By locating the maximum (or minimum, depending on the method) of this Mat of floats we get the best candidate position for our template in the image of the 1st parameter.
But we are not interested in locating our template in the image; we already have the coordinates of our candidates.
What we want is a measure of similarity between our candidate and the template. That is why we use cv::matchTemplate in a less usual way: with a 1st parameter image of size equal to the 2nd parameter template. In this situation the result is a Mat of size 1x1, and the single value in that Mat is our similarity (or dissimilarity) score.
for (int n = 0; n < candidates.size( ); ++n)
    std::cout << candidates[n].score << std::endl;
We print the scores obtained for each of our candidates.
In this table we can see the scores for each of the methods available for cv::matchTemplate. The best score is in green.
CCORR and CCOEFF give a wrong result, so those two are discarded. Of the remaining four methods, the two SQDIFF methods are the ones with the highest relative difference between the best match (which is a 50c) and the 2nd best (which is not a 50c). That is why I have chosen them.
I have chosen SQDIFF_NORMED, but there is no strong reason for that. In order to really choose a method we should test with a larger number of samples, not just one.
For this method a working threshold could be 0.065. Selection of a proper threshold also requires many samples.
bool selected(const Candidate& candidate, float threshold)
{
    /// For SQDIFF and SQDIFF_NORMED, the best matches are lower values. For all the other methods, the higher the better
    if (MATCH_METHOD == CV_TM_SQDIFF || MATCH_METHOD == CV_TM_SQDIFF_NORMED)
        return candidate.score <= threshold;
    else
        return candidate.score > threshold;
}

void drawLabel(const Candidate& candidate, const char* label, cv::Mat image)
{
    int x = candidate.x - candidate.radius;
    int y = candidate.y;
    cv::Point point( x, y );
    cv::Scalar blue( 255, 128, 128 );
    cv::putText( image, label, point, CV_FONT_HERSHEY_SIMPLEX, 1.5f, blue, 2 );
}

cv::Mat labelCoins(const cv::Mat& image, const vector<Candidate>& candidates,
                   float threshold, bool inverseThreshold, const char* label)
{
    cv::Mat imageLabeled;
    image.copyTo( imageLabeled );

    for (auto it = candidates.begin( ); it != candidates.end( ); ++it)
    {
        if (selected( *it, threshold ))
            drawLabel( *it, label, imageLabeled );
    }
    return imageLabeled;
}
labelCoins draws a label string at the location of candidates with a score greater than (or less than, depending on the method) the threshold.
And finally the result of labelCoins is saved with
cv::imwrite( LABELED_IMAGE, labeledImg );
The result being :
The whole code for the coin matcher can be downloaded here.
Is this a good method?
That is hard to tell.
The method is consistent. It correctly detects the 50c coin for the sample and input image provided.
But we have no idea whether the method is robust, because it has not been tested with a proper sample size. Even more important is to test it against samples which were not available when the program was being coded; that is the true measure of robustness, when done with a large enough sample size.
I am rather confident the method will not give false positives for silver coins. But I am not so sure about other copper coins like the 20c. As we can see from the scores obtained, the 20c coin gets a score very similar to the 50c.
It is also quite possible that false negatives will happen under varying lighting conditions. This is something that can and should be avoided if we have control over lighting conditions, such as when we are designing a machine to take photos of coins and count them.
If the method works, the same procedure can be repeated for each type of coin, leading to full detection of all coins.
Code in this answer is also available under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
If you detect all coins correctly, it is better to use size (radius) and RGB features to recognize their value. It is not a good idea to concatenate these features, because their numbers are not equal (size is one number, while the number of RGB features is much larger than one). I recommend using two classifiers for this purpose: one for size and another for the RGB features.
You have to classify all coins into, for example, 3 size classes (it depends on the types of your coins). You can do this with a simple 1-NN classifier: just calculate the radius of the test coin and assign it to the nearest predefined radius.
Then you should have some templates for each size class and use template matching to recognize the value (all templates and detected coins should be resized to a particular size, e.g. (100,100)). For template matching you can use the matchTemplate function. I think the CV_TM_CCOEFF method may be the best one, but you can test all methods to get a good result. (Note that you don't need to search the image for the coin, because you have already detected the coin, as you mentioned in your question. You just need to use this function to get one number as a similarity/difference between two images and classify the test coin to the class for which the similarity is maximized or the difference is minimized.) A rough sketch of these two stages is given after the edits below.
EDIT 1: You should have all rotations in your templates for each class to compensate for the rotation of the test coin.
EDIT 2: If all coins have different sizes, the first step is enough. Otherwise you should merge similar sizes into one class and classify the test coin using the second step (RGB features).
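A rough sketch of the two stages (not the answerer's code; the 100x100 template size and the matching method are illustrative assumptions, and the class radii would come from your own coins):
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Stage 1: 1-NN on the measured radius against predefined class radii.
int nearestSizeClass(float radius, const std::vector<float>& classRadii)
{
    int best = 0;
    float bestDist = std::abs(radius - classRadii[0]);
    for (size_t i = 1; i < classRadii.size(); ++i)
    {
        float d = std::abs(radius - classRadii[i]);
        if (d < bestDist) { bestDist = d; best = (int)i; }
    }
    return best;
}

// Stage 2: template matching within the chosen size class; both images are
// assumed to have been resized to the same size (e.g. 100x100) beforehand.
float matchScore(const cv::Mat& coin100, const cv::Mat& template100)
{
    cv::Mat result;
    cv::matchTemplate(coin100, template100, result, CV_TM_CCOEFF_NORMED);
    return result.at<float>(0, 0);   // 1x1 result because the sizes are equal
}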
(1) Find the coin edges, using the Hough Transform algorithm.
(2) Determine the center point of each coin. I don't know how you'll do this.
(3) You can use k from the KNN algorithm for comparing the diameters of the coins. Don't forget to set the bias value.
You could try to set up a training set of coin images and generate SIFT/SURF etc. descriptors from it. (EDIT: OpenCV feature detectors)
Using these data you could set up a kNN classifier, using the coin values as training labels.
Once you perform kNN classification on your segmented coin images, the classification result would yield the coin's value.
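A compact, hedged sketch of only the kNN step (OpenCV 2.4-style CvKNearest; it assumes each coin image has already been reduced to a fixed-length feature row, for example a bag-of-words histogram of SIFT/SURF descriptors):
#include <opencv2/opencv.hpp>

float classifyCoin(const cv::Mat& trainFeatures,  // CV_32FC1, one row per training coin
                   const cv::Mat& trainLabels,    // CV_32FC1, coin value for each row
                   const cv::Mat& testFeature)    // CV_32FC1, single row, same length
{
    CvKNearest knn;
    knn.train(trainFeatures, trainLabels);
    return knn.find_nearest(testFeature, 3);      // k = 3; returns the predicted coin value
}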

How do I update this Neural Net to use image pixel data

I'm learning Neural Networks from this bytefish machine learning guide and code. I understand it well but I would like to update the code at the previous link to use image pixel data instead of random values as the input data. In this section of the aforementioned code:
cv::randu(trainingData,0,1);
cv::randu(testData,0,1);
the training and test matrices are filled with random data. Then label data is added to the classes matrices here:
cv::Mat trainingClasses = labelData(trainingData, eq);
cv::Mat testClasses = labelData(testData, eq);
using this function:
// label data with equation
cv::Mat labelData(cv::Mat points, int equation) {
    cv::Mat labels(points.rows, 1, CV_32FC1);
    for(int i = 0; i < points.rows; i++) {
        float x = points.at<float>(i,0);
        float y = points.at<float>(i,1);
        labels.at<float>(i, 0) = f(x, y, equation);
        // the f() function used above
        // is only a case statement with 5
        // switches in it, e.g. one of the switches is:
        // case 0:
        //   return y > sin(x*10) ? -1 : 1;
        //   break;
    }
    return labels;
}
Then points are plotted in a window here:
plot_binary(trainingData, trainingClasses, "Training Data");
plot_binary(testData, testClasses, "Test Data");
with this function:
// Plot Data and Class function
void plot_binary(cv::Mat& data, cv::Mat& classes, string name) {
    cv::Mat plot(size, size, CV_8UC3);
    plot.setTo(cv::Scalar(255.0,255.0,255.0));
    for(int i = 0; i < data.rows; i++) {
        float x = data.at<float>(i,0) * size;
        float y = data.at<float>(i,1) * size;
        if(classes.at<float>(i, 0) > 0) {
            cv::circle(plot, Point(x,y), 2, CV_RGB(255,0,0),1);
        } else {
            cv::circle(plot, Point(x,y), 2, CV_RGB(0,255,0),1);
        }
    }
    imshow(name, plot);
}
The plotted points, as I understand it, represent the input data run through the equations in the f() function and are used by the predict functions to predict which point to plot in the mlp, knn, svm, etc. functions. How do I update what is going on here to do something with image pixel data? Any advice to get me further would be appreciated.
"How do I update what is going on here to do something with Image pixel data" is a broad and generic question. May I ask in exchange: what do you want to do with "Image pixel data"?
Do you want an answer to: what can be done with "Image pixel data" on machine learning algorithms like ANN, SVM etc. ?
The answer is a loooong list of things encompassing thousands of research papers and hundreds of PhD theses. Some examples include: supervised and/or un-supervised classification of images into labels/tags/categories based on features like image content, objects in image, patterns in image etc. The possibilities are endless. You may perhaps want to take a look at this: http://stuff.mit.edu/afs/athena/course/urop/profit/PDFS/EdwardTolson.pdf
Now, coming back to your original objective: "I would like to update the code at the previous link to use image pixel data instead of random values as the input data"...
The implementation technique would depend largely on what you want to do. I can cite one/two easy techniques for extracting feature vectors from image, which can be fed into any machine learning algorithm of your choice...
Example 1:
You may start with using pixel intensity data as a feature vector. Here's how you may go ahead with it:
Load image using
Mat image = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
Resize image into a smaller area using resize. You may want to begin with small image sizes, like 8x8 or 10x10 pixels.
Loop through the image matrix, somewhat like this:
for(int row = 0; row < img.rows; ++row)
{
    uchar* p = img.ptr(row);
    for(int col = 0; col < img.cols; ++col)
    {
        *p++; // points to each pixel value in turn, assuming a CV_8UC1 greyscale image
    }
}
A collection of all the pixel values will give you a feature vector for that image.
Now suppose you have two classes of image. For each set of feature vector you generate, you'll have to prepare (for supervised classification) a corresponding label Mat (somewhat like the example you've mentioned). It needs to contain the class label (say, 0 and 1) for all the feature vectors present in your feature Mat.
Now feed the feature vectors and label Mat to your machine learning code and see what happens.
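As a hedged illustration of that last step, the sketch below assembles 8x8 grayscale pixel intensities into a training Mat of the shape the bytefish example expects. The file names, the 8x8 size and the two-class labels are made-up placeholders, not part of the original answer.
#include <opencv2/opencv.hpp>

int main()
{
    const char* files[] = { "classA_1.png", "classA_2.png", "classB_1.png" }; // placeholders
    float labels[]      = { 0.f, 0.f, 1.f };
    const int numFiles  = 3;

    cv::Mat trainingData(0, 64, CV_32FC1);      // one 64-element row per 8x8 image
    cv::Mat trainingClasses(0, 1, CV_32FC1);

    for (int i = 0; i < numFiles; ++i)
    {
        cv::Mat img = cv::imread(files[i], CV_LOAD_IMAGE_GRAYSCALE);
        if (img.empty()) continue;

        cv::resize(img, img, cv::Size(8, 8));
        img.convertTo(img, CV_32F, 1.0 / 255.0); // scale intensities to [0,1]

        trainingData.push_back(img.reshape(1, 1));   // flatten to a single row
        trainingClasses.push_back(labels[i]);
    }

    // trainingData / trainingClasses can now replace the cv::randu(...) matrices
    // and the labelData(...) output in the bytefish example.
    return 0;
}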
However, the ability of image classification based on image pixel data alone is quite limited. There are thousands of techniques for extracting image features, most of which are dependent on the application area.
Example 2:
I'll finish off with one more example for extracting feature vectors, which, in some cases, will prove to be more effective than simple image pixel values.
You may use the Histogram of Oriented Gradients descriptor for slightly better results, use this:
cv::HOGDescriptor hog;
vector<float> descriptors;
hog.compute(mat, descriptors);
The vector descriptors is your feature vector.
HOGDescriptors, when used with SVM, provides a decent classification mechanism.
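One hedged variation on that snippet: fixing the HOG window size to the (resized) image size guarantees exactly one fixed-length descriptor per image, which is what a training Mat for SVM/ANN expects. The 64x64 size and the file name are illustrative assumptions.
cv::Mat img = cv::imread("sample.png", CV_LOAD_IMAGE_GRAYSCALE);   // placeholder name
cv::resize(img, img, cv::Size(64, 64));

cv::HOGDescriptor hog(cv::Size(64, 64),   // winSize == image size -> a single window
                      cv::Size(16, 16),   // blockSize
                      cv::Size(8, 8),     // blockStride
                      cv::Size(8, 8),     // cellSize
                      9);                 // nbins
std::vector<float> descriptors;
hog.compute(img, descriptors);

// cv::Mat(descriptors, true).reshape(1, 1) gives a single training row for this image.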
You can put the pixel data of an image into a Mat called trainingData using something similar to this:
cv::Mat labelData(cv::Mat points, int equation)
{
    cv::Mat labels(points.rows, 1, CV_32FC1);
    for(int i = 0; i < points.rows; i++)
    {
        float x = points.at<float>(i,0);
        float y = points.at<float>(i,1);
        labels.at<float>(i, 0) = f(x, y, equation);
    }
    return labels;
}
Now, instead of labelData, we're going to return a Mat of pixel data. One obvious way is to use the image itself as a feature vector. However, some machine learning algorithms in OpenCV, including ANN, SVM, etc., require special formatting of the input data.
You may try something like this:
cv::Mat trainingData(cv::Mat image)
{
    cv::Mat trainingVector(image.rows*image.cols, 1, CV_32FC1);
    for(int i = 0; i < image.rows; i++)
    {
        for(int j = 0; j < image.cols; j++)
        {
            // assumes a CV_32FC1 image; for an 8-bit image either convert it first
            // with image.convertTo(image, CV_32FC1) or read it with image.at<uchar>(i,j)
            float valueOfPixel = image.at<float>(i,j);
            trainingVector.at<float>((i*image.cols)+j, 0) = valueOfPixel;
        }
    }
    return trainingVector;
}
(Please recheck the syntax of the code before using, I just typed it out here)
So, what the above block effectively does is change the 2D matrix of the image into a 1D array. Now, how and where you use it depends on your requirements.
Please make necessary modifications before invoking the machine learning modules.
Hope this answers your question.
Thanks.

How to detect object from video using SVM

This is my code for training a dataset of, for example, vehicles. When it is fully trained, I want it to predict vehicles from a video (.avi). How do I predict from video using the trained data, and how do I add that part to the code? I want the program to count a vehicle as 1 when it appears in the video and print that the object was detected, and if a second vehicle comes, increment the count to 2.
IplImage *img2;

cout<<"Vector quantization..."<<endl;
collectclasscentroids();
vector<Mat> descriptors = bowTrainer.getDescriptors();
int count=0;
for(vector<Mat>::iterator iter=descriptors.begin();iter!=descriptors.end();iter++)
{
    count += iter->rows;
}
cout<<"Clustering "<<count<<" features"<<endl;

//choosing cluster's centroids as dictionary's words
Mat dictionary = bowTrainer.cluster();
bowDE.setVocabulary(dictionary);

cout<<"extracting histograms in the form of BOW for each image "<<endl;
Mat labels(0, 1, CV_32FC1);
Mat trainingData(0, dictionarySize, CV_32FC1);
int k = 0;
vector<KeyPoint> keypoint1;
Mat bowDescriptor1;

//extracting histogram in the form of bow for each image
for(j = 1; j <= 4; j++)
    for(i = 1; i <= 60; i++)
    {
        sprintf( ch, "%s%d%s%d%s", "train/", j, " (", i, ").jpg");
        const char* imageName = ch;
        img2 = cvLoadImage(imageName, 0);
        detector.detect(img2, keypoint1);
        bowDE.compute(img2, keypoint1, bowDescriptor1);
        trainingData.push_back(bowDescriptor1);
        labels.push_back((float) j);
    }

//Setting up SVM parameters
CvSVMParams params;
params.kernel_type = CvSVM::RBF;
params.svm_type = CvSVM::C_SVC;
params.gamma = 0.50625000000000009;
params.C = 312.50000000000000;
params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 100, 0.000001);
CvSVM svm;

printf("%s\n", "Training SVM classifier");
bool res = svm.train(trainingData, labels, cv::Mat(), cv::Mat(), params);

cout<<"Processing evaluation data..."<<endl;
Mat groundTruth(0, 1, CV_32FC1);
Mat evalData(0, dictionarySize, CV_32FC1);
k = 0;
vector<KeyPoint> keypoint2;
Mat bowDescriptor2;
Mat results(0, 1, CV_32FC1);

for(j = 1; j <= 4; j++)
    for(i = 1; i <= 60; i++)
    {
        sprintf( ch, "%s%d%s%d%s", "eval/", j, " (", i, ").jpg");
        const char* imageName = ch;
        img2 = cvLoadImage(imageName, 0);
        detector.detect(img2, keypoint2);
        bowDE.compute(img2, keypoint2, bowDescriptor2);
        evalData.push_back(bowDescriptor2);
        groundTruth.push_back((float) j);
        float response = svm.predict(bowDescriptor2);
        results.push_back(response);
    }

//calculate the number of unmatched classes
double errorRate = (double) countNonZero(groundTruth - results) / evalData.rows;
The question is: this code does not predict from video. I want to know how to predict from the video, i.e. I want to detect vehicles in a movie, and it should, for example, output 1 when it finds a vehicle in the movie.
For those who didn't understand the question:
I want to play a movie in the above code
VideoCapture cap("movie.avi"); //movie.avi is with deleted background
Suppose I have trained data which contains vehicles, and "movie.avi" contains 5 vehicles; it should detect those vehicles from movie.avi and give me 5 as output.
How do I do this part in the above code?
From looking at your code setup
params.svm_type = CvSVM::C_SVC;
it appears that you train your classifier with more than two classes. A typical example in a traffic scenario could be cars/pedestrians/bikes/... However, you were asking for a way to detect cars only. Without a description of your training data and your video it's hard to tell whether your idea makes sense. I guess what the previous answers are assuming is the following:
You loop through each frame and want to output the number of cars in that frame. Thus, a frame may contain multiple cars, say 5. If you take the whole frame as input to the classifier, it might respond "car", even if the setup might be a little off, conceptually. You cannot retrieve the number of cars reliably with this approach.
Instead, the suggestion is to try a sliding-window approach. This means, for example, that you loop over each pixel of the frame and take the region around the pixel (called a sub-window or region of interest) as input to the classifier. Assuming a fixed scale, the sub-window could have a size of 150x50 px, as would your training data. You might fix the scale of the cars in your training data, but in real-world videos the cars will be of different sizes. In order to find a car at a different scale, say twice as large as in the training data, the typical approach is to scale the image (say by a factor of 2) and repeat the sliding-window approach.
By repeating this for all relevant scales you end up with an algorithm that gives you, for each pixel location and each scale, the result of your classifier. This means you have three loops, or, in other words, there are three dimensions (image width, image height, scale). This is best understood as a three-dimensional pyramid. "Why a pyramid?" you might ask. Because each time the image is scaled it gets smaller (or larger), so the next scale is an image of a different size (for example half the size).
The pixel locations indicate the position of the car and the scale indicates its size. Now, if you have an N-class classifier, each slot in this pyramid will contain a number (1,...,N) indicating the class. If you had a binary classifier (car/no car), each slot would contain 0 or 1. Even in this simple case, where you would be tempted to simply count the number of 1s and output that count as the number of cars, you still have the problem that there could be multiple responses for the same car. Thus, it would be better if you had a car detector that gives continuous responses between 0 and 1; then you could find maxima in this pyramid. Each maximum would indicate a single car. This kind of detection is successfully used with corner features, where you detect corners of interest in a so-called scale-space pyramid.
To summarize: no matter whether you simplify the problem to a binary classification ("car"/"no car") or stick to the more difficult task of distinguishing between multiple classes ("car"/"animal"/"pedestrian"/...), you still have the problem of scale and location in each frame to solve.
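A schematic sketch of the three nested loops described above (scale, y, x). The 150x50 window comes from the example above, while the 8 px stride and 1.5 scale step are illustrative values; the per-window classifier (for example a BoW descriptor plus svm.predict from the question's code) is left as a hypothetical step in the comments.
#include <opencv2/opencv.hpp>

void scanPyramid(const cv::Mat& frame)
{
    const cv::Size window(150, 50);      // assumed training window size from the text above
    const double scaleStep = 1.5;        // illustrative value

    cv::Mat level = frame.clone();
    while (level.cols >= window.width && level.rows >= window.height)
    {
        for (int y = 0; y + window.height <= level.rows; y += 8)       // 8 px stride, assumed
            for (int x = 0; x + window.width <= level.cols; x += 8)
            {
                cv::Mat roi = level(cv::Rect(x, y, window.width, window.height));
                // pass roi to your per-window classifier here and keep
                // (x, y, current scale, score) for later non-maximum suppression
            }
        cv::resize(level, level, cv::Size(), 1.0 / scaleStep, 1.0 / scaleStep);
    }
}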
The code you have for reading images is written with OpenCV's C interface, so it's probably easiest to stick with that rather than use the C++ video interface.
In that case something along these lines should work:
CvCapture *capture = cvCaptureFromFile("movie.avi");
IplImage *img = 0;
while(img = cvQueryFrame(capture))
{
    // Process image
    ...
}
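One hedged way to fill in the "Process image" part of that loop body, reusing the detector, bowDE and svm objects the question's training code already set up (untested; note that it classifies the whole frame, so, as the next answer points out, it will not count individual vehicles):
cv::Mat frameMat = cv::cvarrToMat(img);    // wrap the IplImage without copying
cv::Mat gray;
cv::cvtColor(frameMat, gray, CV_BGR2GRAY); // training images were loaded as grayscale

std::vector<cv::KeyPoint> keypoints;
cv::Mat bowDescriptor;
detector.detect(gray, keypoints);
bowDE.compute(gray, keypoints, bowDescriptor);

if (!bowDescriptor.empty())
{
    float response = svm.predict(bowDescriptor);
    std::cout << "frame predicted as class " << response << std::endl;
}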
You should implement a sliding-window approach. In each window, you should apply the SVM to get candidates. Then, once you've done it for the whole image, you should merge the candidates (if you detected an object, it is very likely that you'll detect it again shifted by a few pixels; that's the meaning of candidates).
Take a look at the V&J code in OpenCV or the latentSVM code (detection by parts) to see how it's done there.
By the way, I would use the latentSVM code (detection by parts) to detect vehicles. It has trained models for cars and for buses.
Good luck.
You need a detector, not a classifier. Take a look at Haar cascades, LBP cascades, latentSVM, as mentioned before, or the HOG detector.
I'll explain why. A detector usually scans the image with a sliding window, line by line, at several scales. In every window the detector solves the problem "object / not object". It may give you rough results, but it is very fast. Classifiers such as BOW work very slowly for this task. You should then apply classifiers to the regions found by the detector.
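A minimal sketch of the detector-then-count idea with cv::CascadeClassifier (the cascade file name "cars.xml" is a placeholder; you would need a cascade actually trained for vehicles, and counting per frame this way still double-counts the same vehicle across frames unless you add tracking):
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::CascadeClassifier carCascade;
    if (!carCascade.load("cars.xml"))          // hypothetical trained cascade
        return 1;

    cv::VideoCapture cap("movie.avi");
    cv::Mat frame, gray;
    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, CV_BGR2GRAY);
        std::vector<cv::Rect> cars;
        carCascade.detectMultiScale(gray, cars, 1.1, 3);   // sliding window over scales
        std::cout << "vehicles in this frame: " << cars.size() << std::endl;
    }
    return 0;
}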

Reducing lag during blob detection of real-time, binary b/w webcam feed using cvblobslib and opencv(c++)

I'm building a skin-detection algorithm that takes a constant, real-time feed from a webcam, converts it to a binary image (based on the skin color of the person's face), and filters out the noise by focusing only on the largest blobs (using CvBlobsLib). The output of my code, however, shows a lot of lag, and I'm not sure what to change to make it faster.
Here's (the important part of) my code:
Mat frame;
IplImage ipl, *res = new IplImage;
CBlobResult blobs;
CBlob *currentBlob;

cvNamedWindow("output");

for(;;){
    cap >> frame; //get a new frame from camera
    cvtColor(frame, lab, CV_BGR2Lab); //frame now in L*a*b*
    inRange(lab, BW_MIN, BW_MAX, bw); //frame now only shows "skin values"...BW_MIN/BW_MAX determined earlier

    ipl = bw; //IplImage header
    blobs = CBlobResult(&ipl, NULL, 0);
    blobs.Filter(blobs, B_EXCLUDE, CBlobGetArea(), B_LESS, 10000);

    res = cvCreateImage(cvGetSize(&ipl), IPL_DEPTH_8U, 3);
    cvMerge(&ipl, &ipl, &ipl, NULL, res);

    cvShowImage("output", res);
    if(waitKey(5) >= 0) break;
}
cvDestroyWindow("output");
I convert Mat to IplImage because CvBlobsLib only works with the IplImage type.
Does anyone see a way that I could make this faster? I've just recently heard other blob detection libraries do a better job with real-time video, but I'd be interested to see if there's something I'm simply overlooking in my code.
You can decrease the resolution of the camera capture using the VideoCapture set method
set(CV_CAP_PROP_FRAME_WIDTH , double width)
and
set(CV_CAP_PROP_FRAME_HEIGHT , double height)
If your default capture resolution is too high, this can increase the detection speed considerably.
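For example (assuming cap is the cv::VideoCapture object the loop reads from; the 320x240 values are only an illustration):
cap.set(CV_CAP_PROP_FRAME_WIDTH, 320);   // request a smaller capture width
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 240);  // request a smaller capture height

// Some cameras only accept specific resolutions; check what was actually applied with
// cap.get(CV_CAP_PROP_FRAME_WIDTH) and cap.get(CV_CAP_PROP_FRAME_HEIGHT).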