C++ OpenCV svm train crashes

I have two vectors:
vector<int> features;
vector<int> labels;
At some point in my program I fill them with some values (both vectors have the same size). Then, when I want to train the SVM, I copy the vectors into two new cv::Mat objects like this:
Mat trainMat(features.size(), 1, CV_32FC1);
Mat labelsMat(labels.size(), 1, CV_32FC1);
for (int i = 0; i < features.size(); i++) {
trainMat.at<int>(i, 1) = features.at(i);
labelsMat.at<int>(i, 1) = labels.at(i);
}
Then I create the SVM and its params:
cv::SVMParams params;
params.svm_type = cv::SVM::C_SVC;
params.kernel_type = cv::SVM::POLY;
params.gamma = 3;
cv::SVM svm;
And finally I train it:
svm.train(trainMat, labelsMat, Mat(), Mat(), params);
But, the program crashes and gives this error:
Unhandled exception at 0x7484D928 in cvtest.exe: Microsoft C++ exception: cv::Exception at memory location 0x0017F04.
At first, I thought the problem was the size of the data (because I compile it as 32-bit). So I used only 20, even just 4 samples, to test it. But it still crashes. What else could cause a memory error?

Finally, I found the problem. svm.train() accepts only float type features and not int. I just changed vector<int> features; to vector<float> features; and it works.

You are creating trainMat and labelsMat as float matrices with CV_32FC1 but setting the values with trainMat.at<int>, which is wrong.
It has to be trainMat.at<float>.
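For reference, a minimal corrected fill loop might look like this (just a sketch, assuming both vectors are already filled; note the column index is 0, since column indices are zero-based and the matrices have a single column):
Mat trainMat((int)features.size(), 1, CV_32FC1);
Mat labelsMat((int)labels.size(), 1, CV_32FC1);
for (size_t i = 0; i < features.size(); i++) {
    // at<float> matches the CV_32FC1 element type; column 0 is the only column
    trainMat.at<float>((int)i, 0) = (float)features[i];
    labelsMat.at<float>((int)i, 0) = (float)labels[i];
}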

Related

Access violation reading location when push_back Mat in another Mat

I want to write simple code to extract HOG features and then train an SVM, but this exception occurs. I tried different OpenCV versions such as 3.4.5 and 4.0, but it made no difference.
cv::HOGDescriptor hogDetector = cv::HOGDescriptor();
hogDetector.winSize = cv::Size(256, 256);
hogDetector.blockSize = cv::Size(64, 64);
hogDetector.blockStride = cv::Size(192, 192);
hogDetector.cellSize = cv::Size(32, 32);
and a function that returns HOG features:
cv::Mat computeHOG(cv::Mat img)
{
std::vector<float> descriptors;
std::vector<cv::Point> locations;
hogDetector.compute(img, descriptors, cv::Size(8, 8), cv::Size(0, 0), locations);
cv::Mat row = cv::Mat(descriptors);
return row;
}
and the main code to extract features:
cv::Mat trainFeatures;
cv::Mat trainLables;
while (!PFile.eof())
{
std::string name; std::getline(PFile, name);
std::vector<std::string> parts = splitString(name, ' ');
cv::Mat img = cv::imread(basePath + parts[0]);
cv::cvtColor(img, img, cv::COLOR_BGR2GRAY);
cv::resize(img, img,cv::Size(1250, 320));
cv::Mat f = computeHOG(img);
trainFeatures.push_back(f);
trainLables.push_back(std::stoi(parts[1]));
}
The exception occurs at the line trainFeatures.push_back(f);, and the shape of f is 1 * 1 * 162000.
Full exception:
Exception thrown at 0x00007FFF5A9C17E5 (opencv_world345d.dll) in vehicleRecognition.exe: 0xC0000005: Access violation reading location 0x000002A830658140.
While debugging I found that the f Mat (the HOG features) is FLOAT32 but trainFeatures is UINT8. First I changed cv::Mat trainFeatures; to cv::Mat trainFeatures = cv::Mat1f();, which made no difference, and then changed it to cv::Mat trainFeatures = cv::Mat(1, 162000, CV_32FC1);, which works and fixed the issue.
I also changed row with: row = row.reshape(1, 1);
I don't know why this fixed the issue, and it's weird that OpenCV can't detect this automatically. If you have a better solution, please write it.
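For reference, here is a sketch of how the accumulation inside the question's while loop could be kept type-consistent from the start (it reuses computeHOG, img and parts from the question; cv::Mat::push_back requires every appended row to match the destination Mat in type and column count):
cv::Mat trainFeatures;                          // empty; the first push_back fixes its type and width
cv::Mat trainLables;                            // empty; pushing an int makes it CV_32S, one label per row
// ... inside the loop over the training images, as in the question:
cv::Mat f = computeHOG(img);                    // CV_32F column vector built from the descriptors
f = f.reshape(1, 1);                            // same data viewed as a single 1 x N row
trainFeatures.push_back(f);                     // every later row must be CV_32F with the same N
trainLables.push_back(std::stoi(parts[1]));     // integer class label for this image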

OpenCV Assertion failed - convertTo

I'm trying to convert my matrix into CV_32FC1 to train my SVM. I always get this error message:
OpenCV Error: Assertion failed (func != 0) in convertTo, file /opt/opencv/modules/core/src/convert.cpp, line 1115
/opt/opencv/modules/core/src/convert.cpp:1115: error: (-215) func != 0 in function convertTo
Basically I'm trying to
Mat eyes_train_data = Mat::zeros(Eyes.features.size(), CV_32FC1);
Eyes.features.copyTo(eyes_train_data);
eyes_train_data.convertTo(eyes_train_data, CV_32FC1);
I already tried getting the depth() of the matrix, which returns 7. I'm not sure what that means. The Eyes.features matrix is (or should be) a floating-point matrix.
To get Eyes.features I use a gotHogFeatures method with
vector<float> descriptorsValues;
vector<Point> location;
for( Mat patch : patches) {
hog.compute( patch, descriptorsValues, Size(0,0), Size(0,0), location);
features.push_back(descriptorsValues);
}
descriptorsValues represents a row vector, and features should then look like:
features:
{
descriptorValues0
descriptorValues1
...
}
Thanks for any help.

Your conversion code doesn't seem right.
It should be something like:
Mat eyes_train_data;
Eyes.features.convertTo(eyes_train_data, CV_32FC1);
What's the type of Eyes.features?
It seems that it should already be a Mat1f. However, are you sure that features.push_back works as expected? It seems that push_back needs a const Mat& m.
You can get a row matrix from a vector:
Mat1f m;
vector<float> v1 = {1.f, 1.5f, 2.1f};
vector<float> v2 = {3.f, 3.5f, 4.1f};
Mat temp1(Mat1f(v1).t());
Mat temp2(Mat1f(v2).t());
m.push_back(temp1);
m.push_back(temp2);

SVM on OpenCV Mat Error (train data must be floating-point matrix)

I'm trying to do training using OpenCV and SVM.
But I have a problem, specifically this error:
OpenCV Error: Bad argument (train data must be floating-point matrix) in cvCheckTrainData
I have to train on a dataset of images where every image has 68 points (X, Y) that I use for the SVM.
In the beginning this was my code:
//for each image
fin_land.open(str_app); assert(fin_land);
for (int i=(0+(68*count_FOR)); i<(num_landCKplus+(68*count_FOR)); i++) {
fin_land >> f1;
fin_land >> f2;
int data[2] = {(int)f1, (int)f2};
Mat actual(1, 2, CV_32FC1, &data);
trainData.push_back(actual);
}
// SVM
CvSVMParams params;
params.svm_type = CvSVM::NU_SVC;
params.kernel_type = CvSVM::POLY;
trainData = trainData.reshape(1, #numImage);
SVM.train(trainData, trainLabels, Mat(), Mat(), params);
The problem with this code was that I thought to use a test Mat with 68 rows and 2 columns, because every training class in my SVM has 2 columns, but I received this error:
OpenCV Error: Incorrect size of input array (Input sample must be 1-dimensional vector) in cvPreparePredictData
If I have understood correctly, the problem was that the test Mat needs to be one-dimensional. So I thought to modify my code like this:
//for each image
fin_land.open(str_app); assert(fin_land);
for (int i=(0+(68*count_FOR)); i<(num_landCKplus+(68*count_FOR)); i++) {
fin_land >> f1;
fin_land >> f2;
int data = (int)f1;
trainData.push_back(&data);
data = (int)f2;
trainData.push_back(&data);
}
Now every training class has only one column, and so does the test Mat, but I get a new error that says:
OpenCV Error: Bad argument (train data must be floating-point matrix) in cvCheckTrainData
Is the problem that the type of the new training-set Mat is wrong?
I don't know how to fix it...
You need float data (and integer labels): one row per feature vector, one label per row.
float f1,f2;
for (int i=(0+(68*count_FOR)); i<(num_landCKplus+(68*count_FOR)); i++) {
fin_land >> f1;
fin_land >> f2;
trainData.push_back(f1); // pushing the 1st thing will determine the type of trainData
trainData.push_back(f2);
}
trainData = trainData.reshape(1, numItems);
SVM.train(trainData, trainLabels, Mat(), Mat(), params);
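The labels matrix can be built the same way; a small sketch (the classIds values are placeholders I made up, not part of the original answer):
std::vector<int> classIds = {0, 1, 0};   // hypothetical: one class id per training item
Mat trainLabels;
for (int id : classIds)
    trainLabels.push_back(id);           // the first push makes trainLabels CV_32SC1, one label per row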

Unwieldy CvMat* in ANN using OpenCV

I'm trying to use OpenCV to train a neural network in C++.
I can't convert from cv::Mat* (or Mat*, if namespace cv is used) to CvMat*, and I would appreciate some help with this.
Let me elaborate:
I've got two data structures of cv::Mat* type. The first is the set of feature vectors and the second is the set of expected output.
cv::Mat *feat = new cv::Mat(3000, 100, CV_32F, featureData);
cv::Mat *op = new cv::Mat(3000, 2, CV_32F, expectedOutput);
(These are 3000 data points of feature vector length = 100 and output state = 2)
These two matrices had been populated with data of correct dimensions and seem to be working fine when sample data were printed on the console.
The neural network has been initialized as:
int layers_array[] = {100,200,2}; //hidden layer nodes = 200
CvMat* layer = cvCreateMatHeader(1, 3, CV_32SC1);
cvInitMatHeader(layer, 1,3,CV_32SC1, layers_array);
CvANN_MLP nnetwork;
nnetwork.create(layer, CvANN_MLP::SIGMOID_SYM, SIGMOID_ALPHA, SIGMOID_BETA);
Now, the train method of ANN is of the following template:
virtual int train( const CvMat* inputs, const CvMat* outputs,
const CvMat* sampleWeights, const CvMat* sampleIdx=0,
CvANN_MLP_TrainParams params = CvANN_MLP_TrainParams(),
int flags=0 );
I tried to convert between cv::Mat * and CvMat * using the following code:
CvMat featMat,opMat;
(&featMat)->cols = feat->cols;
(&featMat)->rows = feat->rows;
(&featMat)->type = CV_32F;
(&featMat)->data.fl = (float *)feat->data;
(&opMat)->cols = op->cols;
(&opMat)->rows = op->rows;
(&opMat)->type = CV_32F;
(&opMat)->data.fl = (float *)op->data;
//setting up the ANN training parameters
int iterations = network.train(&featMat, &opMat, NULL, NULL, trainingParams);
When I run this code, I get the following error message in my console:
OpenCV Error: Bad argument (input training data should be a floating-point matrix with the number of rows equal to the number of training samples and the number of columns equal to the size of 0-th (input) layer) in CvANN_MLP::prepare_to_train, file ..\..\OpenCV-2.3.0-win-src\OpenCV-2.3.0\modules\ml\src\ann_mlp.cpp, line 694
I understand the error message. However, to the best of my knowledge, I believe I haven't made a mess of the number of nodes in the input/output layer.
Can you please help me understand what is going wrong?
Please try to avoid pointers to cv::Mat, as well as CvMat*, in general.
Luckily, there's an overload of CvANN_MLP::train that takes cv::Mat as args, so use that instead:
cv::Mat feat = cv::Mat(3000, 100, CV_32F, featureData);
cv::Mat op = cv::Mat(3000, 2, CV_32F, expectedOutput);
int layers_array[] = {100,200,2}; //hidden layer nodes = 200
cv::Mat layers = cv::Mat (3, 1, CV_32SC1, layers_array );
CvANN_MLP nnetwork;
nnetwork.create(layers, CvANN_MLP::SIGMOID_SYM, SIGMOID_ALPHA, SIGMOID_BETA);
int iterations = nnetwork.train(feat, op, cv::Mat(), cv::Mat(), CvANN_MLP_TrainParams());

PCA + SVM using C++ Syntax in OpenCV 2.2

I'm having problems getting PCA and Eigenfaces working using the latest C++ syntax with the Mat and PCA classes. The older C syntax took an array of IplImage* as a parameter to perform its processing, and the current API only takes a Mat that is formatted by column or row. I took the row approach, using the reshape function to fit my image's matrix into a single row. I eventually want to take this data and then use the SVM algorithm to perform detection, but when I do that, all my data is just a stream of 0s. Can someone please help me out? What am I doing wrong? Thanks!
I saw this question and it's somewhat related, but I'm not sure what the solution is.
This is basically what I have:
vector<Mat> images; //This variable will be loaded with a set of images to perform PCA on.
Mat values(images.size(), 1, CV_32SC1); //Values are the corresponding values to each of my images.
int nEigens = images.size() - 1; //Number of Eigen Vectors.
//Load the images into a Matrix
Mat desc_mat(images.size(), images[0].rows * images[0].cols, CV_32FC1);
for (int i=0; i<images.size(); i++) {
desc_mat.row(i) = images[i].reshape(1, 1);
}
Mat average;
PCA pca(desc_mat, average, CV_PCA_DATA_AS_ROW, nEigens);
Mat data(desc_mat.rows, nEigens, CV_32FC1); //This Mat will contain all the Eigenfaces that will be used later with SVM for detection
//Project the images onto the PCA subspace
for(int i=0; i<images.size(); i++) {
Mat projectedMat(1, nEigens, CV_32FC1);
pca.project(desc_mat.row(i), projectedMat);
data.row(i) = projectedMat.row(0);
}
CvMat d1 = (CvMat)data;
CvMat d2 = (CvMat)values;
CvSVM svm;
svm.train(&d1, &d2);
svm.save("svmdata.xml");
What etarion said is correct.
To copy a column or row you always have to write:
Mat B = mat.col(i);
A.copyTo(B);
The following program shows how to perform a PCA in OpenCV. It'll show the mean image and the first three Eigenfaces. The images I used in there are available from http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html:
#include "cv.h"
#include "highgui.h"
using namespace std;
using namespace cv;
Mat normalize(const Mat& src) {
Mat srcnorm;
normalize(src, srcnorm, 0, 255, NORM_MINMAX, CV_8UC1);
return srcnorm;
}
int main(int argc, char *argv[]) {
vector<Mat> db;
// load greyscale images (these are from http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html)
db.push_back(imread("s1/1.pgm",0));
db.push_back(imread("s1/2.pgm",0));
db.push_back(imread("s1/3.pgm",0));
db.push_back(imread("s2/1.pgm",0));
db.push_back(imread("s2/2.pgm",0));
db.push_back(imread("s2/3.pgm",0));
db.push_back(imread("s3/1.pgm",0));
db.push_back(imread("s3/2.pgm",0));
db.push_back(imread("s3/3.pgm",0));
db.push_back(imread("s4/1.pgm",0));
db.push_back(imread("s4/2.pgm",0));
db.push_back(imread("s4/3.pgm",0));
int total = db[0].rows * db[0].cols;
// build matrix (column)
Mat mat(total, db.size(), CV_32FC1);
for(int i = 0; i < db.size(); i++) {
Mat X = mat.col(i);
db[i].reshape(1, total).col(0).convertTo(X, CV_32FC1, 1/255.);
}
// Change to the number of principal components you want:
int numPrincipalComponents = 12;
// Do the PCA:
PCA pca(mat, Mat(), CV_PCA_DATA_AS_COL, numPrincipalComponents);
// Create the Windows:
namedWindow("avg", 1);
namedWindow("pc1", 1);
namedWindow("pc2", 1);
namedWindow("pc3", 1);
// Mean face:
imshow("avg", pca.mean.reshape(1, db[0].rows));
// First three eigenfaces:
imshow("pc1", normalize(pca.eigenvectors.row(0)).reshape(1, db[0].rows));
imshow("pc2", normalize(pca.eigenvectors.row(1)).reshape(1, db[0].rows));
imshow("pc3", normalize(pca.eigenvectors.row(2)).reshape(1, db[0].rows));
// Show the windows:
waitKey(0);
}
and if you want to build the matrix by row (like in your original question above) use this instead:
// build matrix
Mat mat(db.size(), total, CV_32FC1);
for(int i = 0; i < db.size(); i++) {
Mat X = mat.row(i);
db[i].reshape(1, 1).row(0).convertTo(X, CV_32FC1, 1/255.);
}
and set the flag in the PCA to:
CV_PCA_DATA_AS_ROW
Regarding machine learning. I wrote a document on machine learning with the OpenCV C++ API that has examples for most of the classifiers, including Support Vector Machines. Maybe you can get some inspiration there: http://www.bytefish.de/pdf/machinelearning.pdf.
data.row(i) = projectedMat.row(0);
This will not work. operator= is a shallow copy, meaning no data is actually copied. Use
cv::Mat sample = data.row(i); // also a shallow copy, points to old data!
projectedMat.row(0).copyTo(sample);
The same also applies to:
desc_mat.row(i) = images[i].reshape(1, 1);
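A sketch of the row version of that fix, mirroring what the longer column-based example above already does (converting to CV_32FC1 on the way in is my assumption, since desc_mat was created as CV_32FC1 while the loaded images are most likely 8-bit):
Mat row = desc_mat.row(i);                         // header that points into desc_mat's storage
images[i].reshape(1, 1).convertTo(row, CV_32FC1);  // actually writes the pixels into that row, as floats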
I would suggest looking at the newly checked-in tests in SVN head:
modules/core/test/test_mat.cpp
online here: https://code.ros.org/svn/opencv/trunk/opencv/modules/core/test/test_mat.cpp
It has examples for PCA in the old C and the new C++ API.
Hope that helps!