Unwieldy CvMat* in ANN using OpenCV - C++

I'm trying to use OpenCV to train a neural network in C++.
I can't convert from cv::Mat* (or Mat*, if the cv namespace is used) to CvMat*, and I would appreciate some help with this.
Let me elaborate:
I've got two data structures of type cv::Mat*. The first holds the set of feature vectors and the second the set of expected outputs.
cv::Mat *feat = new cv::Mat(3000, 100, CV_32F, featureData);
cv::Mat *op = new cv::Mat(3000, 2, CV_32F, expectedOutput);
(These are 3000 data points, with feature vector length = 100 and output size = 2.)
These two matrices have been populated with data of the correct dimensions, and the sample values printed to the console look fine.
The neural network has been initialized as:
int layers_array[] = {100,200,2}; //hidden layer nodes = 200
CvMat* layer = cvCreateMatHeader(1, 3, CV_32SC1);
cvInitMatHeader(layer, 1,3,CV_32SC1, layers_array);
CvANN_MLP nnetwork;
nnetwork.create(layer, CvANN_MLP::SIGMOID_SYM, SIGMOID_ALPHA, SIGMOID_BETA);
Now, the train method of ANN is of the following template:
virtual int train( const CvMat* inputs, const CvMat* outputs,
const CvMat* sampleWeights, const CvMat* sampleIdx=0,
CvANN_MLP_TrainParams params = CvANN_MLP_TrainParams(),
int flags=0 );
I tried to convert from cv::Mat* to CvMat* using the following code:
CvMat featMat,opMat;
(&featMat)->cols = feat->cols;
(&featMat)->rows = feat->rows;
(&featMat)->type = CV_32F;
(&featMat)->data.fl = (float *)feat->data;
(&opMat)->cols = op->cols;
(&opMat)->rows = op->rows;
(&opMat)->type = CV_32F;
(&opMat)->data.fl = (float *)op->data;
//setting up the ANN training parameters
int iterations = nnetwork.train(&featMat, &opMat, NULL, NULL, trainingParams);
When I run this code, I get the following error message in my console:
**OpenCV Error: Bad argument (input training data should be a floating-point matrix with the number of rows equal to the number of training samples and the number of columns equal to the size of 0-th (input) layer) in CvANN_MLP::prepare_to_train, file ..\..\OpenCV-2.3.0-win-src\OpenCV-2.3.0\modules\ml\src\ann_mlp.cpp, line 694**
I understand the error message. However, to the best of my knowledge, I don't think I've mismatched the number of nodes in the input/output layers.
Can you please help me understand what is going wrong?

Please try to avoid pointers to cv::Mat, as well as CvMat* in general.
Luckily, there's an overload of CvANN_MLP::train that takes cv::Mat arguments, so use that instead:
cv::Mat feat = cv::Mat(3000, 100, CV_32F, featureData);
cv::Mat op = cv::Mat(3000, 2, CV_32F, expectedOutput);
int layers_array[] = {100,200,2}; //hidden layer nodes = 200
cv::Mat layers = cv::Mat(3, 1, CV_32SC1, layers_array);
CvANN_MLP nnetwork;
nnetwork.create(layers, CvANN_MLP::SIGMOID_SYM, SIGMOID_ALPHA, SIGMOID_BETA);
int iterations = nnetwork.train(feat, op, cv::Mat(), cv::Mat(), CvANN_MLP_TrainParams());
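If you really do need a CvMat header for a legacy C-API function, note that in the OpenCV 2.x API (which the error path above suggests you are using) cv::Mat has a conversion operator to CvMat that builds a header sharing the same data, so there is no need to fill in the fields by hand. A minimal sketch, assuming a cv::Mat named feat as above (the C-API call is just a hypothetical placeholder):
CvMat featHeader = feat;         // header only, no data is copied
someLegacyFunction(&featHeader); // hypothetical function taking a CvMat*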

Related

C++ OpenCV svm train crashes

I have two vectors:
vector<int> features;
vector<int> labels;
At some point in my program I fill them with values (both vectors have the same size). Then, when I want to train the SVM, I copy the vectors into two new cv::Mat objects like this:
Mat trainMat(features.size(), 1, CV_32FC1);
Mat labelsMat(labels.size(), 1, CV_32FC1);
for (int i = 0; i < features.size(); i++) {
trainMat.at<int>(i, 1) = features.at(i);
labelsMat.at<int>(i, 1) = labels.at(i);
}
Then I create the SVM and its params:
cv::SVMParams params;
params.svm_type = cv::SVM::C_SVC;
params.kernel_type = cv::SVM::POLY;
params.gamma = 3;
cv::SVM svm;
And finally I train it:
svm.train(trainMat, labelsMat, Mat(), Mat(), params);
But the program crashes and gives this error:
Unhandled exception at 0x7484D928 in cvtest.exe: Microsoft C++ exception: cv::Exception at memory location 0x0017F04.
At first, I thought the problem was the size of the data (because I compile it as 32-bit). So I used only 20, or even just 4, samples to test it, but it still crashes. What else could cause a memory error?
Finally, I found the problem: svm.train() accepts only float-type features, not int. I just changed vector<int> features; to vector<float> features; and it works.
You are creating trainMat and labelsMat as float matrices with CV_32FC1 but setting the values with trainMat.at<int>, which is wrong.
It has to be trainMat.at<float>.
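Putting the two fixes together (float vectors and float element access), a corrected copy loop might look like the sketch below. Note that the column index should also be 0, not 1, since the matrices have a single column:
vector<float> features;
vector<float> labels;
// ... fill both vectors with the same number of values ...
Mat trainMat((int)features.size(), 1, CV_32FC1);
Mat labelsMat((int)labels.size(), 1, CV_32FC1);
for (size_t i = 0; i < features.size(); i++) {
    trainMat.at<float>((int)i, 0) = features.at(i);
    labelsMat.at<float>((int)i, 0) = labels.at(i);
}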

Training an SVM using Hu moments

I'm learning about SVMs, so I'm writing a sample program that trains an SVM to detect whether a symbol is present in an image or not. All the images are black and white (the symbols are black and the background white). I have 12 training images: 6 positives (with the symbol) and 6 negatives (without it). I use Hu moments to get the descriptors of every image and then build the training matrix from those descriptors. I also have a labels matrix, which contains a label for each image: 1 if it is positive and 0 if it is negative. But I am getting an error (something like a segmentation fault) at the line where I train the SVM. Here is my code:
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
//arrays where the labels and the features will be stored
float labels[12] ;
float trainingData[12][7] ;
Moments moment;
double hu[7];
//===============extracting the descriptors for each positive image=========
for ( int i = 0; i <= 5; i++){
//the images are called t0.png ... t5.png and are in the folder train
std::string path("train/t");
path += std::to_string(i);
path += ".png";
Mat input = imread(path, 0); //read the images
bitwise_not(input, input); //invert black and white
Mat BinaryInput;
threshold(input, BinaryInput, 100, 255, cv::THRESH_BINARY); //apply threshold
moment = moments(BinaryInput, true); //calculate the moments of the current image
HuMoments(moment, hu); //calculate the hu moments (this will be our descriptor)
//setting the row i of the training data as the hu moments
for (int j = 0; j <= 6; j++){
trainingData[i][j] = (float)hu[j];
}
labels[i] = 1; //label = 1 because it is a positive image
}
//===============extracting the descriptors for each negative image=========
for (int i = 0; i <= 5; i++){
//the images are called tn0.png ... tn5.png and are in the folder train
std::string path("train/tn");
path += std::to_string(i);
path += ".png";
Mat input = imread(path, 0); //read the images
bitwise_not(input, input); //invert black and white
Mat BinaryInput;
threshold(input, BinaryInput, 100, 255, cv::THRESH_BINARY); //apply threshold
moment = moments(BinaryInput, true); //calculate the moments of the current image
HuMoments(moment, hu); //calculate the hu moments (this will be our descriptor)
for (int j = 0; j <= 6; j++){
trainingData[i + 6][j] = (float)hu[j];
}
labels[i + 6] = 0; //label = 0 because it is a negative image
}
//===========================training the SVM================
//convert the labels and trainingData arrays to Mat objects
Mat labelsMat(12, 1, CV_32FC1, labels);
Mat trainingDataMat(12, 7, CV_32FC1, trainingData);
//create the SVM
Ptr<ml::SVM> svm = ml::SVM::create();
//set the parameters of the SVM
svm->setType(ml::SVM::C_SVC);
svm->setKernel(ml::SVM::LINEAR);
CvTermCriteria criteria = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);
svm->setTermCriteria(criteria);
//Train the SVM !!!!! THE ERROR OCCURS HERE !!!!!
svm->train(trainingDataMat, ml::ROW_SAMPLE, labelsMat);
//Testing the SVM...
Mat test = imread("train/t1.png", 0); //this should be a positive test
bitwise_not(test, test);
Mat testBin;
threshold(test, testBin, 100, 255, cv::THRESH_BINARY);
Moments momentP = moments(testBin, true); //calculate the moments of the test image
double huP[7];
HuMoments(momentP, huP);
Mat testMat(1, 7, CV_32FC1, huP); //setting the hu moments to the test matrix
double resp = svm->predict(testMat); //prediction of the SVM
printf("%f", resp); //Response
getchar();
}
I know that the program runs fine until that line because I printed labelsMat and trainingDataMat and the values inside them are OK. Even in the console I can see that the program runs fine until that exact line executes. The console then shows this message:
OpenCV Error: Bad argument (in the case of classification problem the responses must be categorical; either specify varType when creating TrainData, or pass integer responses)
I don't really know what this means. Any idea what could be causing the problem? If you need any other details, please tell me.
EDIT
For future readers:
The problem was in the way I defined the labels array as an array of float and labelsMat as a Mat of type CV_32FC1. The array that contains the labels needs to hold integers, so I changed:
float labels[12];
to
int labels[12];
and also changed
Mat labelsMat(12, 1, CV_32FC1, labels);
to
Mat labelsMat(12, 1, CV_32SC1, labels);
and that solved the error. Thank you
Try changing:
Mat labelsMat(12, 1, CV_32FC1, labels);
to
Mat labelsMat(12, 1, CV_32SC1, labels);
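As the asker's edit above confirms, the backing labels array must also hold integers for the CV_32SC1 header to be meaningful; a minimal sketch:
int labels[12]; // integer class labels, e.g. 0 for negative and 1 for positive samples
Mat labelsMat(12, 1, CV_32SC1, labels);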
From: http://answers.opencv.org/question/63715/svm-java-opencv-3/
If that doesn't work, hopefully one of these posts will help you:
Opencv 3.0 SVM train classification issues
OpenCV SVM Training Data

OpenCV calcHist results are not what I expected

In OpenCV, I have a matrix of integers (a 4000x1 Mat). Each time, I read a different range of this matrix: Mat labelsForHist = labels(Range(from, to), Range(0, 1));
The size of the range is variable. Then I convert the labelsForHist matrix to float (because calcHist doesn't accept int values!) using:
labelsForHist.convertTo(labelsForHistFloat, CV_32F);
After this I call calcHist with these parameters:
Mat hist;
int histSize = 4000;
float range[] = { 0, 4000 } ;
int channels[] = {0};
const float* histRange = { range };
bool uniform = true; bool accumulate = false;
calcHist(&labelsForHistFloat,1,channels,Mat(),hist,1,&histSize,&histRange,uniform,accumulate);
The results are normalized by using:
normalize(hist,hist,1,0,NORM_L1,-1,Mat());
The problem is that my histograms don't look like what I was expecting. Any idea what I am doing wrong, or does the problem come from another part of the code (and not the histogram calculation)?
I expect a sparse histogram (the reference plot was calculated in Python from the same data), but instead I get a flat histogram. I want to reproduce the Python result in C++.
There is a clustering step before the histograms are calculated, so if there is no problem with creating the histograms, then the problem definitely comes from the earlier clustering part!

Multi-Channel Back Projection Assertion (j < nimages)

Attempting to do histogram back-projection on a three-channel image results in the following error:
OpenCV Error: Assertion failed (j < nimages) in histPrepareImages, file ../modules/imgproc/src/histogram.cpp, line 148
The code which fails:
cv::Mat _refImage; //contains reference image of type CV_8UC3
cv::Mat output; //contains image data of type CV_8UC3
int histSize[] = {16, 16, 16};
int channels[] = {0, 1, 2};
const float hRange[] = {0.f, 256.f};
const float* ranges[] = {hRange, hRange, hRange};
int nChannels = 3;
cv::Mat hist;
cv::calcHist(&_refImage, 1, channels, cv::noArray(), hist, nChannels, histSize, ranges);
cv::calcBackProject(&output, 1, channels, hist, output, ranges); //This line causes assertion failure
Running nearly identical code on a single-channel image works. According to the documentation, multi-channel images are also supported. Why won't this code work?
The short answer is that cv::calcBackProject() does not support in-place operation, although this is not mentioned in the documentation.
Explanation
Digging into the OpenCV source yields the following snippet:
void calcBackProject( const Mat* images, int nimages, const int* channels,
InputArray _hist, OutputArray _backProject,
const float** ranges, double scale, bool uniform )
{
//Some code...
_backProject.create( images[0].size(), images[0].depth() );
Mat backProject = _backProject.getMat();
assert(backProject.type() == CV_8UC1);
histPrepareImages( images, nimages, channels, backProject, dims, hist.size, ranges,
uniform, ptrs, deltas, imsize, uniranges );
//More code...
}
The line which causes the problem is:
_backProject.create( images[0].size(), images[0].depth() );
which, if the source and destination are the same, reallocates the input image data. images[0].depth() evaluates to CV_8U, which is numerically equivalent to the type specifier CV_8UC1. Thus, the data is created as a single-channel image.
This is a problem because histPrepareImages still expects the input image to have 3 channels, and the assertion is thrown.
Solution
Fortunately, the workaround is simple. The output parameter must be different from the input, like so:
cv::Mat result;
cv::calcBackProject(&output, 1, channels, hist, result, ranges);
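For reference, a minimal end-to-end sketch under the same assumptions (a CV_8UC3 reference image and a CV_8UC3 target image, both already loaded), with the back projection written to a separate Mat:
cv::Mat refImage;   // reference image, CV_8UC3 (assumed loaded elsewhere)
cv::Mat target;     // image to back-project onto, CV_8UC3 (assumed loaded elsewhere)
int histSize[] = {16, 16, 16};
int channels[] = {0, 1, 2};
const float hRange[] = {0.f, 256.f};
const float* ranges[] = {hRange, hRange, hRange};
cv::Mat hist;
cv::calcHist(&refImage, 1, channels, cv::noArray(), hist, 3, histSize, ranges);
cv::Mat backProj;   // separate output; ends up CV_8UC1 with the same size as target
cv::calcBackProject(&target, 1, channels, hist, backProj, ranges);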

PCA + SVM using C++ Syntax in OpenCV 2.2

I'm having problems getting PCA and Eigenfaces working using the latest C++ syntax with the Mat and PCA classes. The older C syntax took an array of IplImage* as a parameter, whereas the current API only takes a Mat formatted by column or row. I took the row approach, using the reshape function to fit each image's matrix into a single row. I eventually want to take this data and then use the SVM algorithm to perform detection, but when I do that all my data is just a stream of 0s. Can someone please help me out? What am I doing wrong? Thanks!
I saw this question and it's somewhat related, but I'm not sure what the solution is.
This is basically what I have:
vector<Mat> images; //This variable will be loaded with a set of images to perform PCA on.
Mat values(images.size(), 1, CV_32SC1); //The value corresponding to each of my images.
int nEigens = images.size() - 1; //Number of eigenvectors.
//Load the images into a Matrix
Mat desc_mat(images.size(), images[0].rows * images[0].cols, CV_32FC1);
for (int i=0; i<images.size(); i++) {
desc_mat.row(i) = images[i].reshape(1, 1);
}
Mat average;
PCA pca(desc_mat, average, CV_PCA_DATA_AS_ROW, nEigens);
Mat data(desc_mat.rows, nEigens, CV_32FC1); //This Mat will contain all the Eigenfaces that will be used later with SVM for detection
//Project the images onto the PCA subspace
for(int i=0; i<images.size(); i++) {
Mat projectedMat(1, nEigens, CV_32FC1);
pca.project(desc_mat.row(i), projectedMat);
data.row(i) = projectedMat.row(0);
}
CvMat d1 = (CvMat)data;
CvMat d2 = (CvMat)values;
CvSVM svm;
svm.train(&d1, &d2);
svm.save("svmdata.xml");
What etarion said is correct.
To copy data into a column or row you always have to write:
Mat B = mat.col(i); // B is just a header pointing to the i-th column of mat, no data copied
A.copyTo(B);        // copies the contents of A into that column
The following program shows how to perform a PCA in OpenCV. It'll show the mean image and the first three Eigenfaces. The images I used in there are available from http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html:
#include "cv.h"
#include "highgui.h"
using namespace std;
using namespace cv;
Mat normalize(const Mat& src) {
Mat srcnorm;
normalize(src, srcnorm, 0, 255, NORM_MINMAX, CV_8UC1);
return srcnorm;
}
int main(int argc, char *argv[]) {
vector<Mat> db;
// load greyscale images (these are from http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html)
db.push_back(imread("s1/1.pgm",0));
db.push_back(imread("s1/2.pgm",0));
db.push_back(imread("s1/3.pgm",0));
db.push_back(imread("s2/1.pgm",0));
db.push_back(imread("s2/2.pgm",0));
db.push_back(imread("s2/3.pgm",0));
db.push_back(imread("s3/1.pgm",0));
db.push_back(imread("s3/2.pgm",0));
db.push_back(imread("s3/3.pgm",0));
db.push_back(imread("s4/1.pgm",0));
db.push_back(imread("s4/2.pgm",0));
db.push_back(imread("s4/3.pgm",0));
int total = db[0].rows * db[0].cols;
// build matrix (column)
Mat mat(total, db.size(), CV_32FC1);
for(int i = 0; i < db.size(); i++) {
Mat X = mat.col(i);
db[i].reshape(1, total).col(0).convertTo(X, CV_32FC1, 1/255.);
}
// Change to the number of principal components you want:
int numPrincipalComponents = 12;
// Do the PCA:
PCA pca(mat, Mat(), CV_PCA_DATA_AS_COL, numPrincipalComponents);
// Create the Windows:
namedWindow("avg", 1);
namedWindow("pc1", 1);
namedWindow("pc2", 1);
namedWindow("pc3", 1);
// Mean face:
imshow("avg", pca.mean.reshape(1, db[0].rows));
// First three eigenfaces:
imshow("pc1", normalize(pca.eigenvectors.row(0)).reshape(1, db[0].rows));
imshow("pc2", normalize(pca.eigenvectors.row(1)).reshape(1, db[0].rows));
imshow("pc3", normalize(pca.eigenvectors.row(2)).reshape(1, db[0].rows));
// Show the windows:
waitKey(0);
}
and if you want to build the matrix by row (like in your original question above) use this instead:
// build matrix
Mat mat(db.size(), total, CV_32FC1);
for(int i = 0; i < db.size(); i++) {
Mat X = mat.row(i);
db[i].reshape(1, 1).row(0).convertTo(X, CV_32FC1, 1/255.);
}
and set the flag in the PCA to:
CV_PCA_DATA_AS_ROW
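That is, keeping the same variables as above, the constructor call for the row layout would simply become:
PCA pca(mat, Mat(), CV_PCA_DATA_AS_ROW, numPrincipalComponents);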
Regarding machine learning: I wrote a document on machine learning with the OpenCV C++ API that has examples for most of the classifiers, including Support Vector Machines. Maybe you can get some inspiration there: http://www.bytefish.de/pdf/machinelearning.pdf.
data.row(i) = projectedMat.row(0);
This will not work. operator= is a shallow copy, meaning no data is actually copied. Use
cv::Mat sample = data.row(i); // also a shallow copy, points to old data!
projectedMat.row(0).copyTo(sample);
The same also applies to:
desc_mat.row(i) = images[i].reshape(1, 1);
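Applied to that line, the fix would look like the following sketch (using the question's variable names; since the images are 8-bit while desc_mat is CV_32FC1, convertTo is used to write and convert in place, mirroring the pattern in the PCA example above):
Mat row_i = desc_mat.row(i);                        // header pointing at row i, no data copied
images[i].reshape(1, 1).convertTo(row_i, CV_32FC1); // writes the converted pixel data into that row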
I would suggest looking at the newly checked-in tests in SVN head:
modules/core/test/test_mat.cpp
online here: https://code.ros.org/svn/opencv/trunk/opencv/modules/core/test/test_mat.cpp
It has examples of PCA in both the old C and the new C++ API.
Hope that helps!