Matching thermographic / non-thermographic images with OpenCV feature detectors - c++

I’m currently working on building software which can match infrared and non-infrared images taken from a fixed point using a thermographic camera.
The use case is the following: a picture of a fixed point is taken from a tripod with both an infrared thermographic camera and a standard camera. After taking the pictures, the photographer wants to match the images from each camera. There will be some scenarios where an image is taken with only one camera because the other image type is unnecessary. Yes, it may be possible for the images to be matched using timestamps, but the end user demands they be matched using computer vision.
I've looked at other image matching posts on StackOverflow -- they have often focused on using histogram matching and feature detectors. Histogram matching is not an option here, as we cannot match colors between the two image types. As a result, I've developed an application which does feature detection. In addition to standard feature detection, I’ve also added some logic which says that two keypoints cannot be matching if they are not within a certain margin of each other (a keypoint on the far left of the query image cannot match a keypoint on the far right of the candidate image) -- this process occurs in stage 3 of the code below.
To give you an idea of the current output, here are a valid and an invalid match produced by the current code -- note that the thermographic image is on the left. My objective is to improve the accuracy of the matching process.
Valid match:
Invalid match:
Here is the code:
// for each candidate image specified on the command line, compare it against the query image
Mat img1 = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE); // loading query image
for(int candidateImage = 0; candidateImage < (argc - 2); candidateImage++) {
    Mat img2 = imread(argv[candidateImage + 2], CV_LOAD_IMAGE_GRAYSCALE); // loading candidate image
    if(img1.empty() || img2.empty())
    {
        printf("Can't read one of the images\n");
        return -1;
    }

    // detecting keypoints
    SiftFeatureDetector detector;
    vector<KeyPoint> keypoints1, keypoints2;
    detector.detect(img1, keypoints1);
    detector.detect(img2, keypoints2);

    // computing descriptors
    SiftDescriptorExtractor extractor;
    Mat descriptors1, descriptors2;
    extractor.compute(img1, keypoints1, descriptors1);
    extractor.compute(img2, keypoints2, descriptors2);

    // matching descriptors
    BFMatcher matcher(NORM_L1);
    vector< vector<DMatch> > matches_stage1;
    matcher.knnMatch(descriptors1, descriptors2, matches_stage1, 2);

    // use nndr to eliminate weak matches
    float nndrRatio = 0.80f;
    vector<DMatch> matches_stage2;
    for (size_t i = 0; i < matches_stage1.size(); ++i)
    {
        if (matches_stage1[i].size() < 2)
            continue;
        const DMatch &m1 = matches_stage1[i][0];
        const DMatch &m2 = matches_stage1[i][1]; // second-nearest neighbour (index 1; knnMatch was called with k = 2)
        if(m1.distance <= nndrRatio * m2.distance)
            matches_stage2.push_back(m1);
    }

    // eliminate points which are too far away from each other
    vector<DMatch> matches_stage3;
    for(size_t i = 0; i < matches_stage2.size(); i++) {
        Point queryPt = keypoints1.at(matches_stage2.at(i).queryIdx).pt;
        Point trainPt = keypoints2.at(matches_stage2.at(i).trainIdx).pt;

        // determine the lowest number here
        int lowestXAxis;
        int greaterXAxis;
        if(queryPt.x <= trainPt.x) { lowestXAxis = queryPt.x; greaterXAxis = trainPt.x; }
        else { lowestXAxis = trainPt.x; greaterXAxis = queryPt.x; }

        int lowestYAxis;
        int greaterYAxis;
        if(queryPt.y <= trainPt.y) { lowestYAxis = queryPt.y; greaterYAxis = trainPt.y; }
        else { lowestYAxis = trainPt.y; greaterYAxis = queryPt.y; }

        // determine if these points are acceptable
        bool acceptable = true;
        if( (lowestXAxis + MARGIN) < greaterXAxis) { acceptable = false; }
        if( (lowestYAxis + MARGIN) < greaterYAxis) { acceptable = false; }
        if(acceptable == false) { continue; }

        // it's acceptable -- keep the match
        matches_stage3.push_back(matches_stage2.at(i));
    }

    // output how many individual matches were found for this training image
    cout << "good matches found for candidate image # " << (candidateImage+1) << " = " << matches_stage3.size() << endl;
}
I used this site's code as an example. The problem I'm having is that the feature detection is not reliable, and I seem to be missing the purpose of the NNDR ratio. I understand that I am finding K possible matches for each point within the query image and that I have K = 2. But I don't understand the purpose of this part within the example code:
vector< DMatch > matches_stage2;
for (size_t i = 0; i < matches_stage1.size(); ++i)
{
    if (matches_stage1[i].size() < 2)
        continue;
    const DMatch &m1 = matches_stage1[i][0];
    const DMatch &m2 = matches_stage1[i][1];
    if(m1.distance <= nndrRatio * m2.distance)
        matches_stage2.push_back(m1);
}
Any ideas on how I can improve this further? Any advice would be appreciated as always.

The validation you currently use
First stage
First of all, let's talk about the part of the code that you don't understand. The idea is to keep only "strong matches". Actually, your call to knnMatch finds, for each descriptor, the two best correspondences with respect to the Euclidean distance "L2" (*) (note that your BFMatcher is constructed with NORM_L1, but the reasoning is the same for either norm). This does not mean at all that these are good matches in reality, only that those feature points are quite similar.
Let me try to explain your validation now, considering only one feature point in image A (it generalizes to all of them):
You match the descriptor of this point against image B
You get the two best correspondences with respect to the Euclidean distance (i.e. you get the two most similar points in image B)
If the distance from your point to the best correspondence is much smaller than the one from your point to the second-best correspondence, then you assume that it is a good match. In other words, there was only one point in image B that was really similar (i.e. had a small Euclidean distance) to the point in image A.
If both matches are too similar (i.e. !(m1.distance <= nndrRatio * m2.distance)), then you cannot really discriminate between them and you don't consider the match.
This validation has some major weaknesses, as you have probably observed:
Firstly, if the best matches you get from knnMatch are both terribly bad, then the best of those might be accepted anyway.
It does not take the geometry into account. Therefore, a point far on the left of the image might be similar to a point far on the right, even though in reality they clearly don't match.
* EDIT: Using SIFT, you describe each feature point in your image using a floating-point vector. By computing the Euclidean distance between two vectors, you know how similar they are. If both vectors are exactly the same, then the distance is zero. The smaller the distance, the more similar the points. But this is not geometric: a point on the left-hand side of your image might look similar to a point on the right-hand side. So you first find the pairs of points that look similar (i.e. "This point in A looks similar to this point in B because the Euclidean distance between their feature vectors is small") and then you need to verify that this match is coherent (i.e. "It is possible that those similar points are actually the same because they are both on the left-hand side of my image" or "They look similar, but that is incoherent because I know that they must lie on the same side of the image and they don't").
Second stage
What you do in your second stage is interesting since it considers the geometry: knowing that both images were taken from the same point (or almost the same point?), you eliminate matches that are not in the same region in both images.
The problem I see with this is that if both images weren't taken at the exact same position with the very same angle, then it won't work.
Proposition to improve your validation further
I would personally work on the second stage. Even though both images aren't necessarily exactly the same, they describe the same scene. And you can take advantage of the geometry of it.
The idea is that you should be able to find a transformation from the first image to the second one (i.e. the way in which a point moved from image A to image B is actually linked to the way all of the points moved). And in your situation, I would bet that a simple homography is well suited.
Here is my proposition:
Compute the matches using knnMatch, and keep stage 1 (you might want to try removing it later and observe the consequences)
Compute the best possible homography transform between those matches using cv::findHomography (choose the RANSAC algorithm).
findHomography has a mask output that will give you the inliers (i.e. the matches that were used to compute the homography transform).
The inliers will most probably be good matches, since they will be geometrically coherent (see the sketch below).
EDIT: I just found an example using findHomography here.
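To make the proposition concrete, here is a rough sketch of steps 2 and 3, reusing keypoints1, keypoints2 and matches_stage2 from the question's code; the reprojection threshold of 3.0 is an arbitrary choice of this sketch, not something prescribed by the answer:
// Sketch: replace the margin-based stage 3 with a RANSAC homography filter.
vector<Point2f> queryPts, trainPts;
for (size_t i = 0; i < matches_stage2.size(); ++i)
{
    queryPts.push_back(keypoints1[matches_stage2[i].queryIdx].pt);
    trainPts.push_back(keypoints2[matches_stage2[i].trainIdx].pt);
}

vector<DMatch> matches_stage3;
if (queryPts.size() >= 4) // findHomography needs at least 4 point pairs
{
    vector<uchar> inlierMask;
    Mat H = findHomography(queryPts, trainPts, CV_RANSAC, 3.0, inlierMask);
    for (size_t i = 0; i < inlierMask.size(); ++i)
        if (inlierMask[i])
            matches_stage3.push_back(matches_stage2[i]); // geometrically coherent matches
}
cout << "inlier matches: " << matches_stage3.size() << endl;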

Haven't tried it with infrared/visible light photography, but mutual information metrics usually do a reasonable job when you have very different histograms for similar images.
Depending on how fast you need this to be and how many candidates there are, one way to leverage this would be to register the images using a mutual information metric and find the image pair where you end up with the lowest error. It would probably be a good idea to downsample the images to speed things up and reduce noise-sensitivity.
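For illustration only, here is a minimal sketch of a mutual-information score computed from a joint histogram of two equal-sized 8-bit grayscale images; the function name, the 32-bin choice, and the assumption that the images are resized to a common size beforehand are all mine, not part of the answer above:
// Sketch: mutual information between two grayscale images of the same size.
double mutualInformation(const Mat& a, const Mat& b, int bins = 32)
{
    CV_Assert(a.size() == b.size() && a.type() == CV_8UC1 && b.type() == CV_8UC1);

    // joint probability p(x, y) from a bins x bins histogram
    Mat joint = Mat::zeros(bins, bins, CV_64F);
    for (int r = 0; r < a.rows; ++r)
        for (int c = 0; c < a.cols; ++c)
            joint.at<double>(a.at<uchar>(r, c) * bins / 256,
                             b.at<uchar>(r, c) * bins / 256) += 1.0;
    joint *= 1.0 / (a.rows * a.cols);

    // marginal probabilities p(x) and p(y)
    vector<double> pa(bins, 0.0), pb(bins, 0.0);
    for (int x = 0; x < bins; ++x)
        for (int y = 0; y < bins; ++y)
        {
            pa[x] += joint.at<double>(x, y);
            pb[y] += joint.at<double>(x, y);
        }

    // MI = sum over x,y of p(x,y) * log( p(x,y) / (p(x) * p(y)) )
    double mi = 0.0;
    for (int x = 0; x < bins; ++x)
        for (int y = 0; y < bins; ++y)
        {
            double pxy = joint.at<double>(x, y);
            if (pxy > 0)
                mi += pxy * log(pxy / (pa[x] * pb[y]));
        }
    return mi; // higher = more shared information between the images
}
You would then, for example, maximize this score over candidate alignments of the downsampled images and keep the candidate image with the best score.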

After extracting keypoints, forming descriptors and matching, use an outlier-removal algorithm like RANSAC. OpenCV provides RANSAC through the findHomography function; you can see the implementation there. I have used this with SURF and it gave me reasonably good results.

Ideas:
a) use the super-resolution module to improve your input (OpenCV 2.4.5).
b) use maximally stable local color regions as matching features (MSER).

Related

How to fix the number of SIFT keypoints?

I am trying to use SIFT descriptors directly for image classification. The SIFT detector is defined by Ptr<SIFT> sift = SIFT::create(100), so I expect 100 keypoints to be extracted. But the number of actually detected keypoints (sift->detect(img_resiz,keypoints)) is not always 100 (sometimes exceeding the preset value). How can that happen?
I want a fixed number of keypoints per image so as to produce descriptors of consistent length (after being reshaped into a row vector) across different images (alternatively, I may need more processing based on bag-of-words to bring the SIFT descriptors to the same dimension).
There was an error in the function KeyPointsFilter::retainBest(std::vector<KeyPoint>& keypoints, int n_points) as you can see here: https://github.com/opencv/opencv/commit/3f3c8823ac22e34a37d74bc824e00a807535b91b.
I could reproduce the error with an older version of OpenCV (3.4.5) and sometimes you had 1 more KeyPoint than expected e.g. 101 instead of 100 because of that marked line.
If you don't want to switch to a newer OpenCV version you could do something like:
// Detect SIFT keypoints
std::vector<cv::KeyPoint> keypoints_sift, keypoints_sift_100;
cv::Ptr<cv::xfeatures2d::SiftFeatureDetector> sift = cv::xfeatures2d::SiftFeatureDetector::create(100);
sift->detect(img, keypoints_sift);
std::cout << keypoints_sift.size() << std::endl;
for (size_t i = 0; i < keypoints_sift.size() && i < 100; ++i) { // guard against fewer than 100 detections
    keypoints_sift_100.push_back(keypoints_sift[i]);
}
So you would keep the 100 best keypoints after detection, since they are ranked by their scores (https://docs.opencv.org/4.1.0/d5/d3c/classcv_1_1xfeatures2d_1_1SIFT.html).
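If you are on an OpenCV build that already includes the linked fix, the same truncation can presumably be done with the built-in filter, which keeps keypoints by response score; a minimal sketch, reusing keypoints_sift from the snippet above:
// Sketch: keep at most the 100 strongest keypoints (a no-op if fewer were detected).
cv::KeyPointsFilter::retainBest(keypoints_sift, 100);
// Note: if fewer than 100 keypoints exist you still get fewer; a bag-of-words
// encoding is the usual way to get fixed-length descriptors regardless.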

OpenCV Dense feature detector

I am using OpenCV to do some dense feature extraction. For example, the code
DenseFeatureDetector detector(12.f, 1, 0.1f, 10);
I don't really understand the parameters in the above constructor. What do they mean? Reading the OpenCV documentation about it does not help much either. In the documentation the arguments are:
DenseFeatureDetector( float initFeatureScale=1.f, int featureScaleLevels=1,
float featureScaleMul=0.1f,
int initXyStep=6, int initImgBound=0,
bool varyXyStepWithScale=true,
bool varyImgBoundWithScale=false );
What are they supposed to do? I.e., what is the meaning of scale, initFeatureScale, featureScaleLevels, etc.? How do you determine the grid or grid spacing for the dense sampling?
I'm using OpenCV's dense detector too and I think I can help you with something. I'm not completely sure about what I'm going to say, but it is what experience has taught me.
When I use the dense detector I pass it a grayscale image. The detector applies a series of threshold filters, where OpenCV uses a minimum gray value to transform the image: pixels whose gray level is above the threshold become black points and the others become white points. This is repeated in a loop in which the threshold gets bigger and bigger. So the initFeatureScale parameter determines the first threshold used in this loop, featureScaleLevels indicates how many times the threshold grows between one loop iteration and the next, and featureScaleMul is the multiplying factor used to compute the next threshold.
Anyway, if you are looking for the optimal parameters to use the dense detector to detect your particular points, I can offer a program I made for that. It is released on GitHub. It is a program where you can test several detectors (the dense detector is one of them) and check how they behave when you change their parameters, thanks to a user interface that lets you adjust the detector parameters while the program is running, so you can see how the detected points change. To try it, just click on the link and download the files. You will need almost all the files to run the program.
Apologies in advance, I'm predominantly using Python so I'll avoid embarrassing myself by referring to C++.
DenseFeatureDetector populates a vector with KeyPoints to pass to compute feature descriptors. These keypoints have a point vector and their scale set. In the documentation, scale is the pixel radius of the keypoint.
KeyPoints are evenly spaced across the width and height of the image matrix passed to DenseFeatureDetector.
Now to the arguments:
initFeatureScale
Set the initial KeyPoint feature radius in pixels (as far as I am aware this has no effect)
featureScaleLevels
Number of scales over which we wish to make keypoints
featureScaleMul
Scale adjustment for initFeatureScale over featureScaleLevels; this adjustment can also be applied to the border (initImgBound) and the step size (initXyStep). So when we set featureScaleLevels > 1, this multiplier is applied to successive scales, to adjust the feature scale, the step and the boundary around the image.
initXyStep
Moving column and row step in pixels. Self-explanatory, I hope.
initImgBound
Row/col bounding region to ignore around the image (pixels). So a 100x100 image with an initImgBound of 10 would create keypoints in the central 80x80 portion of the image.
varyXyStepWithScale
Boolean: if we have multiple featureScaleLevels, do we want to adjust the step size using featureScaleMul.
varyImgBoundWithScale
Boolean, as varyXyStepWithScale, but applied to the border.
Here is the DenseFeatureDetector source code from detectors.cpp in the OpenCV 2.4.3 source, which will probably explain better than my words:
DenseFeatureDetector::DenseFeatureDetector( float _initFeatureScale, int _featureScaleLevels,
                                            float _featureScaleMul, int _initXyStep,
                                            int _initImgBound, bool _varyXyStepWithScale,
                                            bool _varyImgBoundWithScale ) :
    initFeatureScale(_initFeatureScale), featureScaleLevels(_featureScaleLevels),
    featureScaleMul(_featureScaleMul), initXyStep(_initXyStep), initImgBound(_initImgBound),
    varyXyStepWithScale(_varyXyStepWithScale), varyImgBoundWithScale(_varyImgBoundWithScale)
{}

void DenseFeatureDetector::detectImpl( const Mat& image, vector<KeyPoint>& keypoints, const Mat& mask ) const
{
    float curScale = static_cast<float>(initFeatureScale);
    int curStep = initXyStep;
    int curBound = initImgBound;
    for( int curLevel = 0; curLevel < featureScaleLevels; curLevel++ )
    {
        for( int x = curBound; x < image.cols - curBound; x += curStep )
        {
            for( int y = curBound; y < image.rows - curBound; y += curStep )
            {
                keypoints.push_back( KeyPoint(static_cast<float>(x), static_cast<float>(y), curScale) );
            }
        }

        curScale = static_cast<float>(curScale * featureScaleMul);
        if( varyXyStepWithScale ) curStep = static_cast<int>( curStep * featureScaleMul + 0.5f );
        if( varyImgBoundWithScale ) curBound = static_cast<int>( curBound * featureScaleMul + 0.5f );
    }

    KeyPointsFilter::runByPixelsMask( keypoints, mask );
}
You might expect a call to compute would calculate additional KeyPoint characteristics using the relevant keypoint detection algorithm (e.g. angle), based on the KeyPoints generated by DenseFeatureDetector. Unfortunately this isn't the case for SIFT under Python - I've not looked at the other feature detectors, nor at the behaviour in C++.
Also note that DenseFeatureDetector is not in OpenCV 3.2 (unsure at which release it was removed).
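For newer OpenCV versions where DenseFeatureDetector is gone, a hand-rolled grid generator along the lines of the quoted detectImpl is a common workaround. The sketch below mirrors the old parameter names but is my own helper, not an official API:
// Sketch: generate a dense grid of KeyPoints, replacing DenseFeatureDetector in OpenCV 3.x.
std::vector<cv::KeyPoint> denseKeypoints(const cv::Mat& image,
                                         float initFeatureScale = 1.f,
                                         int featureScaleLevels = 1,
                                         float featureScaleMul = 0.1f,
                                         int initXyStep = 6,
                                         int initImgBound = 0)
{
    std::vector<cv::KeyPoint> keypoints;
    float curScale = initFeatureScale;
    int curStep = initXyStep;
    int curBound = initImgBound;
    for (int level = 0; level < featureScaleLevels; ++level)
    {
        for (int x = curBound; x < image.cols - curBound; x += curStep)
            for (int y = curBound; y < image.rows - curBound; y += curStep)
                keypoints.push_back(cv::KeyPoint((float)x, (float)y, curScale));

        // adjust the scale (and, if desired, step and bound) just as the old detector did
        curScale *= featureScaleMul;
    }
    return keypoints;
}
The resulting vector can be passed directly to any descriptor extractor's compute() call.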

Matching small grayscale images

I want to test whether two images match. Partial matches also interest me.
The problem is that the images suffer from strong noise. Another problem is that the images might be rotated with an unknown angle. The objects shown in the images will roughly always have the same scale!
The images show area scans from a top-shot perspective. "Lines" are mostly walls and other objects are mostly trees and different kinds of plants.
Another problem was that the left image was very blurry and the right one's lines were very thin.
To compensate for this difference I used dilation. The resulting images are the ones I uploaded.
Although it can easily be seen that these images match almost perfectly, I cannot convince my algorithm of this fact.
My first idea was feature-based matching, but the matches are horrible. It only worked for rotation angles of -90°, 0° and 90°. Although most descriptors are rotation-invariant (in past projects they really were), the rotation invariance seems to fail for this example.
My second idea was to split the images into several smaller segments and to use template matching. So I segmented the images and, again, for the human eye they are pretty easy to match. The goal of this step was to segment the different walls and trees/plants.
The upper row are parts of the left, and the lower are parts of the right image. After the segmentation the segments were dilated again.
As already mentioned: Template matching failed, as did contour based template matching and contour matching.
I think the dilation of the images was very important, because it was nearly impossible for the human eye to match the segments without dilation before the segmentation. Another dilation after the segmentation made this even less difficult.
Your first job should be to fix the orientation. I am not sure what the best algorithm for that is, but here is an approach I would use: fix one of the images and start rotating the other. For each rotation compute a histogram of the color intensity on each of the rows/columns. Compute some distance between the resulting vectors (e.g. use the cross product). Choose the rotation that results in the smallest cross product. It may be a good idea to combine this approach with hill climbing.
Once you have the images aligned in approximately the same direction, I believe matching should be easier. As the two images are supposed to be at the same scale, compute something analogous to the geometrical center for both images: compute a weighted sum of all pixels - a completely white pixel would have a weight of 1, and a completely black one a weight of 0; the sum should be a vector of size 2 (x and y coordinate). After that, divide those values by the dimensions of the image and call this the "geometrical center of the image". Overlay the two images so that the two centers coincide and then once more compute the cross product for the difference between the images. I would say this should be their difference.
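As a small sketch of the "geometrical center" described above (the helper name and the use of cv::moments are my own choices, not part of the answer):
// Sketch: intensity-weighted centroid of a grayscale image, normalized by image size.
Point2d weightedCenter(const Mat& gray)
{
    Moments m = moments(gray);                 // spatial moments, weighted by pixel intensity
    return Point2d((m.m10 / m.m00) / gray.cols,
                   (m.m01 / m.m00) / gray.rows);
}
Overlaying the images so that these centers coincide then reduces the comparison to the residual difference between them.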
You can also try the following methods to find rotation and similarity.
Use image moments to get the rotation as shown here.
Once you rotate the image, use cross-correlation to evaluate the similarity.
EDIT
I tried this with OpenCV and C++ for the two sample images. I'm posting the code and results below as it seems to work well at least for the given samples.
Here's the function to calculate the orientation vector using image moments:
Mat orientVec(Mat& im)
{
    Moments m = moments(im);
    double cov[4] = {m.mu20/m.m00, m.mu11/m.m00, m.mu11/m.m00, m.mu02/m.m00};
    Mat covMat(2, 2, CV_64F, cov);
    Mat evals, evecs;
    eigen(covMat, evals, evecs);
    return evecs.row(0);
}
Rotate and match sample images:
Mat im1 = imread(INPUT_FOLDER_PATH + string("WojUi.png"), 0);
Mat im2 = imread(INPUT_FOLDER_PATH + string("XbrsV.png"), 0);
// get the orientation vector
Mat v1 = orientVec(im1);
Mat v2 = orientVec(im2);
double angle = acos(v1.dot(v2))*180/CV_PI;
// rotate im2. try rotating with -angle and +angle. here using -angle
Mat rot = getRotationMatrix2D(Point(im2.cols/2, im2.rows/2), -angle, 1.0);
Mat im2Rot;
warpAffine(im2, im2Rot, rot, Size(im2.cols, im2.rows)); // Size takes (width, height)
// add a border to rotated image
int borderSize = im1.rows > im2.cols ? im1.rows/2 + 1 : im1.cols/2 + 1;
Mat im2RotBorder;
copyMakeBorder(im2Rot, im2RotBorder, borderSize, borderSize, borderSize, borderSize,
BORDER_CONSTANT, Scalar(0, 0, 0));
// normalized cross-correlation
Mat& image = im2RotBorder;
Mat& templ = im1;
Mat nxcor;
matchTemplate(image, templ, nxcor, CV_TM_CCOEFF_NORMED);
// take the max
double max;
Point maxPt;
minMaxLoc(nxcor, NULL, &max, NULL, &maxPt);
// draw the match
Mat rgb;
cvtColor(image, rgb, CV_GRAY2BGR);
rectangle(rgb, maxPt, Point(maxPt.x+templ.cols-1, maxPt.y+templ.rows-1), Scalar(0, 255, 255), 2);
cout << "max: " << max << endl;
With -angle rotation in code, I get max = 0.758. Below is the rotated image in this case with the matching region.
Otherwise max = 0.293

Best algorithm for video stabilization

I am creating a program to stabilize a video stream. At the moment, my program works based on the phase correlation algorithm: I calculate an offset between two images - the base one and the current one - and then shift the current image according to the new coordinates. The program works, but the result is not satisfactory: in the links below you can see that the processed video still appears shaky, and the stabilization makes the video as a whole look worse.
Original video
Processed ("unshaken") video
Here is my current implementation:
Calculating offset between images:
Point2d calculate_offset_phase_optimized(Mat one, Mat& two) {
    if(two.type() != CV_64F) {
        cvtColor(two, two, CV_BGR2GRAY);
        two.convertTo(two, CV_64F);
    }
    cvtColor(one, one, CV_BGR2GRAY);
    one.convertTo(one, CV_64F);
    return phaseCorrelate(one, two);
}
Shifting the image according to this offset:
void move_image_roi_alt(Mat& img, Mat& trans, const Point2d& offset) {
    trans = Mat::zeros(img.size(), img.type());
    img(
        Rect(
            _0(static_cast<int>(offset.x)),
            _0(static_cast<int>(offset.y)),
            img.cols-abs(static_cast<int>(offset.x)),
            img.rows-abs(static_cast<int>(offset.y))
        )
    ).copyTo(trans(
        Rect(
            _0ia(static_cast<int>(offset.x)),
            _0ia(static_cast<int>(offset.y)),
            img.cols-abs(static_cast<int>(offset.x)),
            img.rows-abs(static_cast<int>(offset.y))
        )
    ));
}

int _0(const int x) {
    return x < 0 ? 0 : x;
}

int _0ia(const int x) {
    return x < 0 ? abs(x) : 0;
}
I was looking through the paper by the authors of the YouTube stabilizer, and their algorithm based on corner detection seemed attractive, but I'm not entirely clear on how it works.
So my question is how to effectively solve this problem.
One of the constraints is that the program will run on slower computers, so heavy algorithms may not be suitable.
Thanks!
P.S.
I apologize for any mistakes in the text - it is an automatic translation.
You can use image descriptors such as SIFT in each frame and calculate robust matches between the frames. Then you can calculate the homography between the frames and use that to align them. Using sparse features can lead to a faster implementation than using dense correlation.
Alternatively, if you know the camera parameters you can calculate the 3D positions of the points and of the cameras and reproject the images onto a stable projection plane. As a result, you also get a sparse 3D reconstruction of the scene (somewhat imprecise; usually it needs to be optimized to be usable). This is what e.g. Autostitch would do, but it is quite difficult to implement.
Note that the camera parameters can also be calculated, but that is even more difficult.
OpenCV can do it for you in 3 lines of code (it is definitely the shortest way, maybe even the best):
Mat t = estimateRigidTransform(newFrame, referenceFrame, 0); // false (0) = restricted transform: rotation, uniform scale and translation only
if(!t.empty()) {
    warpAffine(newFrame, stableFrame, t, Size(newFrame.cols, newFrame.rows)); // stableFrame should be stable now
}
You can turn off some kinds of transformation by modifying the matrix t, which can lead to a more stable result. This is just the core idea; you can then modify it in the way you want: change referenceFrame, smooth the set of transformation parameters extracted from matrix t, etc. (a rough sketch of the smoothing idea follows below).
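As a rough sketch of the "smooth the set of transformation parameters" idea: decompose t into translation and rotation, low-pass them with an exponential moving average, and rebuild the matrix before warping. The smoothing factor alpha, the file name and the fixed referenceFrame are assumptions of this sketch, not part of the answer above, and this is only an illustration rather than a complete stabilizer:
// Sketch: per-frame partial-affine estimation with smoothed parameters.
VideoCapture cap("input.avi");
Mat referenceFrame, newFrame, stableFrame;
cap >> referenceFrame;

double sx = 0, sy = 0, sa = 0;                 // smoothed parameters, kept across frames
const double alpha = 0.9;                      // higher = smoother but more lag

while (cap.read(newFrame))
{
    Mat t = estimateRigidTransform(newFrame, referenceFrame, 0);
    if (t.empty()) { stableFrame = newFrame.clone(); continue; }

    double dx = t.at<double>(0, 2);            // translation x
    double dy = t.at<double>(1, 2);            // translation y
    double da = atan2(t.at<double>(1, 0), t.at<double>(0, 0)); // rotation angle

    sx = alpha * sx + (1.0 - alpha) * dx;      // exponential moving average
    sy = alpha * sy + (1.0 - alpha) * dy;
    sa = alpha * sa + (1.0 - alpha) * da;

    Mat ts = (Mat_<double>(2, 3) << cos(sa), -sin(sa), sx,
                                    sin(sa),  cos(sa), sy);
    warpAffine(newFrame, stableFrame, ts, Size(newFrame.cols, newFrame.rows));
    // display or write stableFrame here
}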

How to detect object from video using SVM

This is my code for training on a dataset of, for example, vehicles. Once it is fully trained, I want it to predict the class (vehicle) from a video (.avi). How do I run the trained classifier on a video, and how do I add that part to the code? I want it to count 1 when a vehicle appears in the video and print that the object was detected, and when a second vehicle appears, increment the count to 2.
IplImage *img2;
cout<<"Vector quantization..."<<endl;
collectclasscentroids();
vector<Mat> descriptors = bowTrainer.getDescriptors();
int count=0;
for(vector<Mat>::iterator iter=descriptors.begin();iter!=descriptors.end();iter++)
{
    count += iter->rows;
}
cout<<"Clustering "<<count<<" features"<<endl;
//choosing cluster's centroids as dictionary's words
Mat dictionary = bowTrainer.cluster();
bowDE.setVocabulary(dictionary);
cout<<"extracting histograms in the form of BOW for each image "<<endl;
Mat labels(0, 1, CV_32FC1);
Mat trainingData(0, dictionarySize, CV_32FC1);
int k = 0;
vector<KeyPoint> keypoint1;
Mat bowDescriptor1;
//extracting histogram in the form of bow for each image
for(j = 1; j <= 4; j++)
    for(i = 1; i <= 60; i++)
    {
        sprintf( ch,"%s%d%s%d%s","train/",j," (",i,").jpg");
        const char* imageName = ch;
        img2 = cvLoadImage(imageName, 0);
        detector.detect(img2, keypoint1);
        bowDE.compute(img2, keypoint1, bowDescriptor1);
        trainingData.push_back(bowDescriptor1);
        labels.push_back((float) j);
    }
//Setting up SVM parameters
CvSVMParams params;
params.kernel_type = CvSVM::RBF;
params.svm_type = CvSVM::C_SVC;
params.gamma = 0.50625000000000009;
params.C = 312.50000000000000;
params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 100, 0.000001);
CvSVM svm;
printf("%s\n", "Training SVM classifier");
bool res = svm.train(trainingData, labels, cv::Mat(), cv::Mat(), params);
cout<<"Processing evaluation data..."<<endl;
Mat groundTruth(0, 1, CV_32FC1);
Mat evalData(0, dictionarySize, CV_32FC1);
k = 0;
vector<KeyPoint> keypoint2;
Mat bowDescriptor2;
Mat results(0, 1, CV_32FC1);
for(j = 1; j <= 4; j++)
    for(i = 1; i <= 60; i++)
    {
        sprintf( ch, "%s%d%s%d%s", "eval/", j, " (",i,").jpg");
        const char* imageName = ch;
        img2 = cvLoadImage(imageName,0);
        detector.detect(img2, keypoint2);
        bowDE.compute(img2, keypoint2, bowDescriptor2);
        evalData.push_back(bowDescriptor2);
        groundTruth.push_back((float) j);
        float response = svm.predict(bowDescriptor2);
        results.push_back(response);
    }
//calculate the number of unmatched classes
double errorRate = (double) countNonZero(groundTruth- results) / evalData.rows;
The question is: this code does not predict from video. I want to know how to run prediction on a video, i.e. I want to detect the vehicle in a movie, and it should show 1 when it finds a vehicle in the movie.
For those who didn't understand the question:
I want to play a movie in the above code:
VideoCapture cap("movie.avi"); //movie.avi is with deleted background
Suppose I have trained data which contains vehicles, and "movie.avi" contains 5 vehicles; it should then detect those vehicles in movie.avi and give me 5 as output.
How do I do this part in the above code?
From looking at your code setup
params.svm_type = CvSVM::C_SVC;
it appears that you train your classifier with more than two classes. A typical example in traffic scenario could be cars/pedestrians/bikes/... However, you were asking for a way to detect cars only. Without a description of your training data and your video it's hard to tell, if your idea makes sense. I guess what the previous answers are assuming is the following:
You loop through each frame and want to output the number of cars in that frame. Thus, a frame may contain multiple cars, say 5. If you take the whole frame as input to the classifier, it might respond "car", even if the setup might be a little off, conceptually. You cannot retrieve the number of cars reliably with this approach.
Instead, the suggestion is to try a sliding-window approach. This means, for example, you loop over each pixel of the frame and take the region around the pixel (called a sub-window or region of interest) as input to the classifier. Assuming a fixed scale, the sub-window could have a size of 150x50 px, just like your training data. You might fix the scale of the cars in your training data, but in real-world videos the cars will be of different sizes. In order to find a car at a different scale, let's say twice as large as in the training data, the typical approach is to scale the image (say by a factor of 2) and repeat the sliding-window approach.
By repeating this for all relevant scales you end up with an algorithm that gives you, for each pixel location and each scale, the result of your classifier. This means you have three loops, or, in other words, there are three dimensions (image width, image height, scale). This is best understood as a three-dimensional pyramid. "Why a pyramid?" you might ask. Because each time the image is scaled (say by 2) the image gets smaller (or larger) and the next scale is an image of a different size (for example half the size).
The pixel locations indicate the position of the car and the scale indicates the size of it. Now, if you have an N-class classifier, each slot in this pyramid will contain a number (1,...,N) indicating the class. If you had a binary classifier (car/no car), then you would end up with each slot containing 0 or 1. Even in this simple case, where you would be tempted to simply count the number of 1s and output the count as the number of cars, you still have the problem that there could be multiple responses for the same car. Thus, it would be better if you had a car detector that gives continuous responses between 0 and 1, and then you could find maxima in this pyramid. Each maximum would indicate a single car. This kind of detection is successfully used with corner features, where you detect corners of interest in a so-called scale-space pyramid.
To summarize, no matter if you are simplifying the problem to a binary classification problem ("car"/"no car"), or if you are sticking to the more difficult task of distinguishing between multiple classes ("car"/"animal"/"pedestrian"/...), you still have the problem of scale and location in each frame to solve.
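As a rough sketch of the sliding-window / pyramid idea described above (classifyWindow, the 150x50 window, the 16-pixel step and the four pyramid levels are all placeholders of this sketch, not part of the answer or the question's code):
// Hypothetical helper: compute the BOW descriptor of the window and call svm.predict().
bool classifyWindow(const Mat& window);

// Sketch: count raw positive responses over a sliding window at several scales.
int detectCars(const Mat& frame, Size window = Size(150, 50))
{
    int detections = 0;
    Mat level = frame.clone();
    for (int scale = 0; scale < 4 && level.cols >= window.width
                                  && level.rows >= window.height; ++scale)
    {
        for (int y = 0; y + window.height <= level.rows; y += 16)
            for (int x = 0; x + window.width <= level.cols; x += 16)
            {
                Mat roi = level(Rect(x, y, window.width, window.height));
                if (classifyWindow(roi))
                    ++detections;              // raw hits; still needs non-maximum suppression
            }
        resize(level, level, Size(), 0.5, 0.5);   // next pyramid level
    }
    return detections;
}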
The code you have for using images is written with OpenCV's C interface so it's probably easy to stick with that rather than use the C++ video interface.
In which case something along these lines should work:
CvCapture *capture = cvCaptureFromFile("movie.avi");
IplImage *img = 0;
while((img = cvQueryFrame(capture)) != NULL)
{
    // Process image
    ...
}
You should implement a sliding-window approach. In each window, you should apply the SVM to get candidates. Then, once you've done it for the whole image, you should merge the candidates (if you detected an object, it is very likely that you'll detect it again at a shift of a few pixels - that's the meaning of candidates).
Take a look at the V&J code in OpenCV or the latentSVM code (detection by parts) to see how it's done there.
By the way, I would use the LatentSVM code (detection by parts) to detect vehicles. It has trained models for cars and for buses.
Good luck.
You need a detector, not a classifier. Take a look at Haar cascades, LBP cascades, latentSVM (as mentioned before), or a HOG detector.
I'll explain why. A detector usually scans the image with a sliding window, line by line, at several scales. In every window the detector solves the problem "object / not object". It may give you rough results, but very fast. Classifiers such as BOW work very slowly for this task. You should then apply the classifier to the regions found by the detector. (A rough sketch using a cascade detector follows below.)
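As a rough sketch of the detector route with cv::CascadeClassifier (assumed to live inside main; "cars.xml" is a placeholder for a cascade you would need to train or obtain, since OpenCV does not ship one for cars):
// Sketch: run a cascade detector on every frame of the video and count detections.
CascadeClassifier carDetector;
if (!carDetector.load("cars.xml"))
    return -1;                                  // cascade file missing

VideoCapture cap("movie.avi");
Mat frame, gray;
while (cap.read(frame))
{
    cvtColor(frame, gray, CV_BGR2GRAY);
    equalizeHist(gray, gray);

    vector<Rect> cars;
    carDetector.detectMultiScale(gray, cars, 1.1, 3);   // scale step 1.1, min 3 neighbours

    for (size_t i = 0; i < cars.size(); ++i)
        rectangle(frame, cars[i], Scalar(0, 255, 0), 2);
    cout << "vehicles in this frame: " << cars.size() << endl;
}
Counting unique vehicles across the whole movie (5 in your example) additionally requires tracking detections between frames, which is a separate problem.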