Hi, I've been trying to train an SVM with features, but I don't understand what to do with the descriptors computed from the keypoints using ORB. I know that the SVM needs a data matrix and a label matrix, but I don't know how to convert the descriptors Mat into a valid format.
I've read about BoF (Bag of Words/Features) but I don't know how to use it.
Thanks for any help.
The code below lets me get the descriptors of an image. What's the next step?
std::vector<KeyPoint> kp;
Mat desc;
// ORB parameters (mostly the defaults)
int nfeatures = 128; // default is 500
float scaleFactor = 1.2f;
int nlevels = 8;
int edgeThreshold = 15; // Changed default (31);
int firstLevel = 0;
int WTA_K = 2;
int scoreType = ORB::HARRIS_SCORE;
int patchSize = 31;
int fastThreshold = 20;
Ptr<ORB> myORB = ORB::create(nfeatures, scaleFactor, nlevels, edgeThreshold, firstLevel, WTA_K, scoreType,
patchSize, fastThreshold);
myORB->detectAndCompute(src, Mat(), kp, desc);
features.push_back(desc);
I would highly recommend using Python with OpenCV; it will save you a lot of time. In Python, this would be just 10 lines of code.
You can refer to this link for ORB. Once you have the features, you can use scikit-learn's SVM to train a classifier.
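If you prefer to stay in C++, here is a minimal sketch of the bag-of-words step that usually comes next. It assumes you have collected the per-image ORB descriptor Mats (like the features vector above) plus one integer class label per image; the function name, the vocabulary size and the SVM parameters are placeholders for illustration, not something prescribed by OpenCV.
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;
// features[i]   : ORB descriptors of image i (CV_8U, one row per keypoint)
// imageLabels[i]: integer class label of image i
void trainBowSvm(const std::vector<Mat>& features, const std::vector<int>& imageLabels)
{
    // 1) Stack all descriptors (as float) and build a visual vocabulary with k-means.
    Mat allDesc;
    for (const Mat& d : features) {
        Mat f;
        d.convertTo(f, CV_32F); // kmeans and L2 matching need float data
        allDesc.push_back(f);
    }
    const int vocabSize = 100;  // number of visual words (tune this)
    Mat clusterLabels, vocabulary;
    kmeans(allDesc, vocabSize, clusterLabels,
           TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 100, 1.0),
           3, KMEANS_PP_CENTERS, vocabulary);
    // 2) Turn every image into a fixed-length, normalized histogram of visual words.
    BFMatcher matcher(NORM_L2);
    Mat trainData((int)features.size(), vocabSize, CV_32F, Scalar(0));
    for (size_t i = 0; i < features.size(); ++i) {
        Mat f;
        features[i].convertTo(f, CV_32F);
        std::vector<DMatch> matches;
        matcher.match(f, vocabulary, matches); // nearest visual word per descriptor
        Mat hist = Mat::zeros(1, vocabSize, CV_32F);
        for (const DMatch& m : matches)
            hist.at<float>(0, m.trainIdx) += 1.f;
        normalize(hist, hist);
        hist.copyTo(trainData.row((int)i));
    }
    // 3) Train the SVM on the histograms; labels must be CV_32S for classification.
    Ptr<ml::SVM> svm = ml::SVM::create();
    svm->setType(ml::SVM::C_SVC);
    svm->setKernel(ml::SVM::RBF);
    svm->train(trainData, ml::ROW_SAMPLE, Mat(imageLabels, true));
    svm->save("bow_svm.yml");
    // Keep the vocabulary too: new images must be described against the same visual words.
}
At prediction time, compute ORB descriptors for the new image, build its histogram against the same vocabulary exactly as above, and pass that single 1 x vocabSize row to svm->predict.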
I am playing with OpenCV and SVM to make a classifier to predict facial expressions. I have no problem classifying the test dataset, but when I try to predict a new image, I get this:
OpenCV Error: Assertion failed (samples.cols == var_count && samples.type() == CV_32F) in cv::ml::SVMImpl::predict
The error is pretty clear: I have a different number of columns, though the type is the same.
I do not know how to fix that, because my sample is a matrix of dimensions 1 x number_of_features, but number_of_features is not the same as in the trained and tested samples. How can I extract the same number of features from another image? Am I missing something?
To train the classifier I did:
Detect face and save ROI;
SIFT to extract features;
kmeans to cluster them;
bag of words to get the same number of features for each image;
PCA to reduce dimensionality;
train on the train dataset;
predict on the test dataset;
On the new image I did the same thing.
I tried to resize the new image to the same size, but nothing: same error (and a different number of columns, i.e. features). The vectors are of the same type (CV_32F).
[EDIT 1] Let's try to be more specific.
After successfully training my classifier, I save the SVM model in this way:
svmClassifier->save(baseDatabasePath);
Then I load it when I need to do real-time prediction, in this way:
cv::Ptr<cv::ml::SVM> svmClassifier;
svmClassifier = cv::ml::StatModel::load<ml::SVM>(path);
Then, in a loop:
while (true)
{
getOneImage();
cv::Mat feature = extractFeaturesFromSingleImage();
float labelPredicted = svmClassifier->predict(feature);
cout << "Label predicted is: " << labelPredicted << endl;
}
But predict returns the error above. The feature dimension is 1x66, for example, while, as you can see below, the model expects 140 features:
<?xml version="1.0"?>
<opencv_storage>
<opencv_ml_svm>
<format>3</format>
<svmType>C_SVC</svmType>
<kernel>
<type>RBF</type>
<gamma>5.0625000000000009e-01</gamma></kernel>
<C>1.2500000000000000e+01</C>
<term_criteria><epsilon>1.1920928955078125e-07</epsilon>
<iterations>1000</iterations></term_criteria>
<var_count>140</var_count>
<class_count>7</class_count>
<class_labels type_id="opencv-matrix">
<rows>7</rows>
<cols>1</cols>
<dt>i</dt>
<data>
0 1 2 3 4 5 6</data></class_labels>
<sv_total>172</sv_total>
I do not know how to get 140 features when SIFT, FAST or SURF give me only around 60. What am I missing?
EDIT 2: I am going to try to be more formal: how can I bring my real-time sample to the same dimension as the train and test datasets?
EDIT 3:
Extract features with SIFT and push them onto a vector of Mat.
std::vector<cv::Mat> featuresVector;
for (int i = 0; i < numberImages; ++i)
{
cv::Mat face = cv::imread(facePath, CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat featuresExtracted = runExtractFeature(face, featuresExtractionAlgorithm);
featuresVector.push_back(featuresExtracted);
}
Get the total number of features extracted from all images.
int numberFeatures = 0;
for (int i = 0; i < featuresVector.size(); ++i)
{
numberFeatures += featuresVector[i].rows;
}
Prepare a Mat to cluster the features (I tried to follow this example):
cv::Mat featuresData = cv::Mat::zeros(numberFeatures, featuresVector[0].cols, CV_32FC1);
int currentIndex = 0;
for (int i = 0; i < featuresVector.size(); ++i)
{
featuresVector[i].copyTo(featuresData.rowRange(currentIndex, currentIndex + featuresVector[i].rows));
currentIndex += featuresVector[i].rows;
}
Perform clustering (I do not know how well these parameters suit my case, but I think they are OK for now):
cv::Mat labels;
cv::Mat centers;
int binSize = 1000;
kmeans(featuresData, binSize, labels, cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 100, 1.0), 3, KMEANS_PP_CENTERS, centers);
Prepare a Mat to perform BoW.
cv::Mat featuresDataHist = cv::Mat::zeros(numberImages, binSize, CV_32FC1);
currentIndex = 0; // reset: the copy loop above left it at numberFeatures
for (int i = 0; i < numberImages; ++i)
{
    cv::Mat feature = cv::Mat::zeros(1, binSize, CV_32FC1);
    int numberImageFeatures = featuresVector[i].rows;
    for (int j = 0; j < numberImageFeatures; ++j)
    {
        int bin = labels.at<int>(currentIndex + j);
        feature.at<float>(0, bin) += 1;
    }
    cv::normalize(feature, feature);
    feature.copyTo(featuresDataHist.row(i));
    currentIndex += featuresVector[i].rows;
}
PCA to try to reduce the dimensionality.
cv::PCA pca(featuresDataHist, cv::Mat(), CV_PCA_DATA_AS_ROW, 50/*0.90*/);
cv::Mat feature;
for (int i = 0; i < numberImages; ++i)
{
    // note: feature is overwritten on every pass; the projected rows need to be
    // collected (e.g. into a Mat with one row per image) to form the reduced dataset
    feature = pca.project(featuresDataHist.row(i));
}
I'm learning about SVM, so I'm making a sample program that trains an SVM to detect whether a symbol is in an image or not. All the images are black and white (the symbols are black on a white background). I have 12 training images: 6 positives (with the symbol) and 6 negatives (without it). I'm using Hu moments to get the descriptors of every image and then I build the training matrix from those descriptors. I also have a labels matrix, which contains a label for each image: 1 if it's positive and 0 if it's negative. But I'm getting an error (something like a segmentation fault) at the line where I train the SVM. Here is my code:
#include <opencv2/opencv.hpp> // core, imgproc, imgcodecs/highgui, ml
#include <cstdio>
#include <string>
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
//arrays where the labels and the features will be stored
float labels[12] ;
float trainingData[12][7] ;
Moments moment;
double hu[7];
//===============extracting the descriptos for each positive image=========
for ( int i = 0; i <= 5; i++){
//the images are called t0.png ... t5.png and are in the folder train
std::string path("train/t");
path += std::to_string(i);
path += ".png";
Mat input = imread(path, 0); //read the images
bitwise_not(input, input); //invert black and white
Mat BinaryInput;
threshold(input, BinaryInput, 100, 255, cv::THRESH_BINARY); //apply theshold
moment = moments(BinaryInput, true); //calculate the moments of the current image
HuMoments(moment, hu); //calculate the hu moments (this will be our descriptor)
//setting the row i of the training data as the hu moments
for (int j = 0; j <= 6; j++){
trainingData[i][j] = (float)hu[j];
}
labels[i] = 1; //label=1 because is a positive image
}
//===============extracting the descriptos for each negative image=========
for (int i = 0; i <= 5; i++){
//the images are called tn0.png ... tn5.png and are in the folder train
std::string path("train/tn");
path += std::to_string(i);
path += ".png";
Mat input = imread(path, 0); //read the images
bitwise_not(input, input); //invert black and white
Mat BinaryInput;
threshold(input, BinaryInput, 100, 255, cv::THRESH_BINARY); //apply theshold
moment = moments(BinaryInput, true); //calculate the moments of the current image
HuMoments(moment, hu); //calculate the hu moments (this will be our descriptor)
for (int j = 0; j <= 6; j++){
trainingData[i + 6][j] = (float)hu[j];
}
labels[i + 6] = 0; //label=0 because is a negative image
}
//===========================training the SVM================
//we convert the labels and trainingData matrixes to Mat objects
Mat labelsMat(12, 1, CV_32FC1, labels);
Mat trainingDataMat(12, 7, CV_32FC1, trainingData);
//create the SVM
Ptr<ml::SVM> svm = ml::SVM::create();
//set the parameters of the SVM
svm->setType(ml::SVM::C_SVC);
svm->setKernel(ml::SVM::LINEAR);
TermCriteria criteria(TermCriteria::MAX_ITER, 100, 1e-6);
svm->setTermCriteria(criteria);
//Train the SVM !!!!!HERE OCCURS THE ERROR!!!!!!
svm->train(trainingDataMat, ml::ROW_SAMPLE, labelsMat);
//Testing the SVM...
Mat test = imread("train/t1.png", 0); //this should be a positive test
bitwise_not(test, test);
Mat testBin;
threshold(test, testBin, 100, 255, cv::THRESH_BINARY);
Moments momentP = moments(testBin, true); //calculate the moments of the test image
double huP[7];
HuMoments(momentP, huP);
Mat testMat(1, 7, CV_32FC1, huP); //setting the hu moments to the test matrix
double resp = svm->predict(testMat); //pretiction of the SVM
printf("%f", resp); //Response
getchar();
}
I know that the program is running fine until that line because I printed labelsMat and trainingDataMat and the values inside them are OK. Even in the console I can see that the program runs fine until that exact line executes. The console then shows this message:
OpenCV error: Bad argument (in the case of classification problem the responses must be categorical; either specify varType when creating TrainData, or pass integer responses)
I don't really know what this means. Any idea what could be causing the problem? If you need any other details, please tell me.
EDIT
For future readers:
The problem was in the way I defined the labels array as an array of float and labelsMat as a Mat of CV_32FC1. The array that contains the labels needs to hold integers, so I changed:
float labels[12];
to
int labels[12];
and also changed
Mat labelsMat(12, 1, CV_32FC1, labels);
to
Mat labelsMat(12, 1, CV_32SC1, labels);
and that solved the error. Thank you
Try changing:
Mat labelsMat(12, 1, CV_32FC1, labels);
to
Mat labelsMat(12, 1, CV_32SC1, labels);
From: http://answers.opencv.org/question/63715/svm-java-opencv-3/
If that doesn't work, hopefully one of these posts will help you:
Opencv 3.0 SVM train classification issues
OpenCV SVM Training Data
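For completeness, the error message also mentions a second route: instead of switching the labels to integers, you can keep them and tell OpenCV explicitly that the response is categorical by building a cv::ml::TrainData object with a varType vector. A hedged sketch of that variant (I have not tested it, and the integer-label fix above is simpler; trainingDataMat, labelsMat and svm are the names from the question):
// One varType entry per feature column plus one for the response column.
Mat varType(1, trainingDataMat.cols + 1, CV_8U, Scalar(ml::VAR_ORDERED));
varType.at<uchar>(trainingDataMat.cols) = ml::VAR_CATEGORICAL; // the response is a class id
Ptr<ml::TrainData> td = ml::TrainData::create(trainingDataMat, ml::ROW_SAMPLE, labelsMat,
                                              noArray(), noArray(), noArray(), varType);
svm->train(td);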
I am trying to classify an image using OpenCV's support vector machines (SVM) and I get this error during SVM training:
OpenCV Error: Bad argument (train data must be floating-point matrix) in cvCheckTrainData, file ......\modules\ml\src\inner_functions.cpp, line 857
So far I have converted the data to rows of floating-point type but I still get the error. I would also like to predict and save the final classes of the image. Kindly advise me; my code is below.
Is there a multiclass SVM classification code example somewhere I can follow?
Thanks.
#include <opencv2/opencv.hpp> // assumes the old OpenCV 2.x API (CvSVM, CvSVMParams)
int main()
{
//Read image;
register int iii, jjj;
Mat image = imread("E:\\DATA\\Dummy\\Input\\ImageFeatures.jpg");
const int rows = image.rows, cols = image.cols, bands = image.channels();
Mat reshapedImage;
reshapedImage = image.reshape(bands, image.rows * image.cols);
reshapedImage.convertTo(reshapedImage,CV_32FC1);
// Set up labels
Mat imagelabels = imread("E:\\DATA\\Dummy\\Input\\TrainingSet.bmp", CV_8UC1);
Mat labels;
labels = imagelabels.reshape(1, rows * cols);
labels.convertTo(labels, CV_32FC1);
// Set up SVM's parameters
CvSVMParams params;
params.svm_type = CvSVM::C_SVC;
params.kernel_type = CvSVM::RBF; //RBF
params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);
params.C = 10;
params.gamma = 0.9;
// Train the SVM
CvSVM SVM;
SVM.train(reshapedImage, labels, Mat(), Mat(), params);
//Vec3b green(0,255,0), blue (255,0,0), red(0,0,255), gray(230,230,230);
Mat solution(rows, cols, CV_32FC1); // allocate before writing with at<float>()
for (iii = 0; iii < rows; iii++)
    for (jjj = 0; jjj < cols; jjj++)
    {
        // predict a single pixel: one row of the reshaped matrix, not the whole matrix
        float response = SVM.predict(reshapedImage.row(iii * cols + jjj));
        solution.at<float>(iii, jjj) = response;
    }
// Convert to 8-bit, then save and show the result
solution.convertTo(solution, CV_8UC1);
imwrite("result.bmp", solution); // save the image
imshow("SVM Simple Example", solution); // show it to the user
waitKey(0);
}
In my case I have a 1000 by 1000 pixel image and a corresponding fully labelled 1000 by 1000 pixel image with five classes (with labels from 0 to 4). How best do I deal with this scenario?
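For the 1000 by 1000 scenario, here is a rough sketch using the current C++ ml API (not the old CvSVM), assuming the feature image is a 3-channel BGR image and the label image is single-channel with values 0 to 4; every pixel becomes one training sample. The file names are placeholders.
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
    // One feature image and one fully labelled image of the same size.
    Mat image = imread("features.png", IMREAD_COLOR);        // hypothetical path
    Mat labelsImg = imread("labels.png", IMREAD_GRAYSCALE);  // values 0..4
    const int N = image.rows * image.cols;
    // Each pixel -> one row with 3 features (B, G, R), CV_32F.
    Mat samples = image.reshape(1, N); // N x 3, still 8-bit
    samples.convertTo(samples, CV_32F);
    // Labels: one integer class id per row (CV_32S).
    Mat labels = labelsImg.reshape(1, N);
    labels.convertTo(labels, CV_32S);
    Ptr<ml::SVM> svm = ml::SVM::create();
    svm->setType(ml::SVM::C_SVC); // C_SVC handles multiclass directly
    svm->setKernel(ml::SVM::RBF);
    svm->setC(10);
    svm->setGamma(0.9);
    svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, 100, 1e-6));
    svm->train(samples, ml::ROW_SAMPLE, labels);
    // Predict all pixels at once and fold the result back into an image.
    Mat responses;
    svm->predict(samples, responses); // N x 1, CV_32F class ids
    Mat result = responses.reshape(1, image.rows);
    result.convertTo(result, CV_8U, 255.0 / 4.0); // spread classes 0..4 over 0..255 for viewing
    imwrite("result.png", result);
    return 0;
}
Be warned that training a kernel SVM on a million pixel samples is very slow in practice; it is common to subsample the labelled pixels (or switch to a LINEAR kernel) for training and then predict on everything.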
I'm a beginner in OpenCV and haven't grasped its main concepts in detail yet.
So maybe my code is too naive.
Out of curiosity I want to try machine learning functions like KNN and ANN.
I have a set of images of size 28*28 pixels. I want to train a classifier for digit recognition, so first I need to assemble the training data and training classes:
Mat train_data = Mat(rows, cols, CV_32FC1);
Mat train_classes = Mat(rows, 1, CV_32SC1);
Mat img = imread(image);
Mat float_data(1, cols, CV_32FC1);
img.convertTo(float_data, CV_32FC1);
How do I fill train_data with float_data?
I thought it was something like this:
for (int i = 0; i < train_data.rows; ++i)
{
    // ... image is a string which contains the next image path
    Mat img = imread(image);
    img.convertTo(float_data, CV_32FC1);
    for (int x = 0; x < train_data.cols; x++) {
        train_data.at<float>(i, x) = float_data.at<float>(x);
    }
}
KNearest knn;
knn.train(train_data, train_classes);
but of course it doesn't work...
Please tell me how to do it right, or at least suggest some books for dummies :)
Mat train_data; // initially empty
Mat train_labels; // empty, too.
// for each img in the train set :
Mat img = imread("image_path");
Mat float_data;
img.convertTo(float_data, CV_32FC1); // to float
train_data.push_back( float_data.reshape(1,1) ); // add 1 row (flattened image)
train_labels.push_back( label_for_image ); // add 1 item
KNearest knn;
knn.train(train_data, train_labels);
It's all the same for the other ML algorithms!
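The snippet above uses the pre-3.0 ml API. With OpenCV 3.x/4.x roughly the same idea looks like this (a sketch, not a drop-in replacement):
#include <opencv2/opencv.hpp>
using namespace cv;
// train_data  : one flattened CV_32F row per image (built as above)
// train_labels: one CV_32S label per row
Ptr<ml::KNearest> trainKnn(const Mat& train_data, const Mat& train_labels)
{
    Ptr<ml::KNearest> knn = ml::KNearest::create();
    knn->setDefaultK(3);
    knn->train(train_data, ml::ROW_SAMPLE, train_labels);
    return knn;
}
// Classify one 28x28 image, prepared the same way as the training images.
float classify(const Ptr<ml::KNearest>& knn, const Mat& img)
{
    Mat sample;
    img.convertTo(sample, CV_32F);
    sample = sample.reshape(1, 1); // 1 x (28*28) row vector
    Mat results;
    return knn->findNearest(sample, 3, results); // returns the predicted label
}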
I'm having problems getting PCA and Eigenfaces working using the latest C++ syntax with the Mat and PCA classes. The older C syntax took an array of IplImage* as a parameter to perform its processing, while the current API only takes a Mat formatted by column or row. I took the row approach, using the reshape function to fit my image's matrix into a single row. I eventually want to take this data and use the SVM algorithm to perform detection, but when I do, all my data is just a stream of 0s. Can someone please help me out? What am I doing wrong? Thanks!
I saw this question and it's somewhat related, but I'm not sure what the solution is.
This is basically what I have:
vector<Mat> images; //This variable will be loaded with a set of images to perform PCA on.
Mat values(images.size(), 1, CV_32SC1); //Values are the corresponding values to each of my images.
int nEigens = images.size() - 1; //Number of Eigen Vectors.
//Load the images into a Matrix
Mat desc_mat(images.size(), images[0].rows * images[0].cols, CV_32FC1);
for (int i=0; i<images.size(); i++) {
desc_mat.row(i) = images[i].reshape(1, 1);
}
Mat average;
PCA pca(desc_mat, average, CV_PCA_DATA_AS_ROW, nEigens);
Mat data(desc_mat.rows, nEigens, CV_32FC1); //This Mat will contain all the Eigenfaces that will be used later with SVM for detection
//Project the images onto the PCA subspace
for(int i=0; i<images.size(); i++) {
Mat projectedMat(1, nEigens, CV_32FC1);
pca.project(desc_mat.row(i), projectedMat);
data.row(i) = projectedMat.row(0);
}
CvMat d1 = (CvMat)data;
CvMat d2 = (CvMat)values;
CvSVM svm;
svm.train(&d1, &d2);
svm.save("svmdata.xml");
What etarion said is correct.
To copy data into a column or row of a matrix you always have to write:
Mat B = mat.col(i); // B is just a header pointing at column i of mat
A.copyTo(B);        // this actually copies A's data into that column
The following program shows how to perform a PCA in OpenCV. It'll show the mean image and the first three eigenfaces. The images I used are available from http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html:
#include "cv.h"
#include "highgui.h"
using namespace std;
using namespace cv;
Mat normalize(const Mat& src) {
Mat srcnorm;
normalize(src, srcnorm, 0, 255, NORM_MINMAX, CV_8UC1);
return srcnorm;
}
int main(int argc, char *argv[]) {
vector<Mat> db;
// load greyscale images (these are from http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html)
db.push_back(imread("s1/1.pgm",0));
db.push_back(imread("s1/2.pgm",0));
db.push_back(imread("s1/3.pgm",0));
db.push_back(imread("s2/1.pgm",0));
db.push_back(imread("s2/2.pgm",0));
db.push_back(imread("s2/3.pgm",0));
db.push_back(imread("s3/1.pgm",0));
db.push_back(imread("s3/2.pgm",0));
db.push_back(imread("s3/3.pgm",0));
db.push_back(imread("s4/1.pgm",0));
db.push_back(imread("s4/2.pgm",0));
db.push_back(imread("s4/3.pgm",0));
int total = db[0].rows * db[0].cols;
// build matrix (column)
Mat mat(total, db.size(), CV_32FC1);
for(int i = 0; i < db.size(); i++) {
Mat X = mat.col(i);
db[i].reshape(1, total).col(0).convertTo(X, CV_32FC1, 1/255.);
}
// Change to the number of principal components you want:
int numPrincipalComponents = 12;
// Do the PCA:
PCA pca(mat, Mat(), CV_PCA_DATA_AS_COL, numPrincipalComponents);
// Create the Windows:
namedWindow("avg", 1);
namedWindow("pc1", 1);
namedWindow("pc2", 1);
namedWindow("pc3", 1);
// Mean face:
imshow("avg", pca.mean.reshape(1, db[0].rows));
// First three eigenfaces:
imshow("pc1", normalize(pca.eigenvectors.row(0)).reshape(1, db[0].rows));
imshow("pc2", normalize(pca.eigenvectors.row(1)).reshape(1, db[0].rows));
imshow("pc3", normalize(pca.eigenvectors.row(2)).reshape(1, db[0].rows));
// Show the windows:
waitKey(0);
}
and if you want to build the matrix by row (like in your original question above) use this instead:
// build matrix
Mat mat(db.size(), total, CV_32FC1);
for(int i = 0; i < db.size(); i++) {
Mat X = mat.row(i);
db[i].reshape(1, 1).row(0).convertTo(X, CV_32FC1, 1/255.);
}
and set the flag in the PCA to:
CV_PCA_DATA_AS_ROW
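If the end goal is still SVM detection, the projected coefficients are what gets fed to the classifier. A small follow-on sketch under the row layout above (assuming mat holds the flattened CV_32F images as rows and the PCA was constructed on it with CV_PCA_DATA_AS_ROW):
// Project every training row onto the PCA subspace; the result has one
// numPrincipalComponents-dimensional row per image, ready for an SVM.
Mat projected(mat.rows, numPrincipalComponents, CV_32FC1);
for (int i = 0; i < mat.rows; i++) {
    Mat p = pca.project(mat.row(i)); // 1 x numPrincipalComponents
    p.copyTo(projected.row(i));      // deep copy into row i (not operator=)
}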
Regarding machine learning: I wrote a document on machine learning with the OpenCV C++ API that has examples for most of the classifiers, including Support Vector Machines. Maybe you can get some inspiration there: http://www.bytefish.de/pdf/machinelearning.pdf.
data.row(i) = projectedMat.row(0);
This will not work. operator= is a shallow copy, meaning no data is actually copied. Use
cv::Mat sample = data.row(i); // also a shallow copy, points to old data!
projectedMat.row(0).copyTo(sample);
The same goes for:
desc_mat.row(i) = images[i].reshape(1, 1);
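Applied to that line, a possible fix (note the images are presumably 8-bit while desc_mat is CV_32F, so convertTo does the copy and the type conversion in one step):
// Writes the flattened image into row i of desc_mat in place,
// converting CV_8U -> CV_32F at the same time.
cv::Mat row_i = desc_mat.row(i);
images[i].reshape(1, 1).convertTo(row_i, CV_32FC1);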
I would suggest looking at the newly checked-in tests in SVN head:
modules/core/test/test_mat.cpp
online here: https://code.ros.org/svn/opencv/trunk/opencv/modules/core/test/test_mat.cpp
It has examples for PCA in both the old C and the new C++ API.
Hope that helps!