I'm new to OpenCV SVM. I'm running Xcode 7.0 with OpenCV 3.0. Below is my code:
Mat labels(0,1,CV_32FC1);
//Mat labels(0,1,CV_32S); // I also tried this after seeing some posts, but it gives an error too.
...
Mat samples_32f; samples.convertTo(samples_32f, CV_32F);
//Mat samples_32f; samples.convertTo(samples_32f, CV_32FC1); //tried!
Ptr<SVM> classifier = SVM::create();
classifier->SVM::train(samples_32f, 0, labels); // <- the error happens here
The OpenCV Error: Bad argument (in the case of classification problem the responses must be categorical; either specify varType when creating TrainData, or pass integer responses) in train.
When I searched around for solutions, the error message seemed to come from labels not being defined with integer values. So I tried changing it to Mat labels(0,1,CV_32S), but the error is still the same.
So I have no idea what is going wrong with the code. Can anyone help?
The error occurs because labels does not contain any values: it is defined with 0 rows and 1 column. So the fix is to make sure labels actually holds one row per training record before SVM training.
My solution:
Mat labels(0,1,CV_32S);
/*
use a for loop with push_back to add one integer label per sample into labels
{...}
*/
/*
or
define another Mat labeled(labels.rows, 1, CV_32S); after the for loop
and use that one in the SVM train call
*/
Mat samples_32f; samples.convertTo(samples_32f, CV_32F);
Ptr<SVM> classifier = SVM::create();
classifier->train(samples_32f, ROW_SAMPLE, labels); // <- use the new labels Mat here if you defined one after the for loop
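For concreteness, a minimal sketch of the push_back step mentioned in the comments above; classIdForSample is a hypothetical helper that returns the integer class of each training row:
for (int i = 0; i < samples_32f.rows; ++i)
    labels.push_back(classIdForSample(i)); // hypothetical helper; labels ends up as samples_32f.rows x 1, CV_32S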
Thanks, that's what I can share.
Related
I'm currently using OpenCV (in C++) to normalize some data in the form of images. Since I'm planning to train an autoencoder, I found some articles and papers suggesting that it's better if the data is normalised to the range from -1 to 1 (tanh vs. sigmoid). I managed to do this with the following code:
cv::VideoCapture cap(video);
cv::Mat frame;
cap.read(frame);
//convert to grayscale
cv::cvtColor(frame, frame, cv::COLOR_BGR2GRAY);
//normalise
frame.convertTo(frame, CV_32FC1); //change data type from CV_8UC1 to CV_32FC1
cv::normalize(frame, frame, -1, 1, cv::NORM_MINMAX);
//downscale
cv::resize(frame, frame, cv::Size(256, 256));
The next thing I want to do is save the normalised image, and for this I'm using cv::imwrite(), so I have the following line: cv::imwrite("normImage.tiff", frame);.
I'm wondering, however, whether writing the image to the .TIFF format actually reverses the normalisation; I haven't been able to verify whether that's the case. I'd also like to ask if there's a better way/format to write the image using OpenCV?
Cheers
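Not a definitive answer to the question above, but a minimal sketch of one way to check whether the normalised values survive the round trip: read the written file back without any conversion and inspect its type and value range.
// read the file back exactly as stored, without any implicit conversion
cv::Mat reloaded = cv::imread("normImage.tiff", cv::IMREAD_UNCHANGED);
double minVal, maxVal;
cv::minMaxLoc(reloaded, &minVal, &maxVal);
std::cout << "type: " << reloaded.type()          // CV_32FC1 prints as 5
          << ", min: " << minVal << ", max: " << maxVal << std::endl;
// if the type is still CV_32FC1 and min/max are close to -1 and 1,
// the normalisation was preserved; if it comes back as 8-bit, it was rescaled
Whether the TIFF writer keeps 32-bit floats depends on the OpenCV build, which is exactly what this check would reveal.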
I want to train an MLP in OpenCV to recognize whether a specified object is present in an image.
The problem is that, as far as I know, the constructors of the Mat object (which the MLP operates on) can only take simple variable types. So I can't use a Mat of Mats, a vector, or a Mat of histograms, even though the data consists of floats, and I don't see a way to keep the objects separate if I use a single Mat object to collect all the histograms.
Sorry if the question is stupid.
P.S. I need to use the MLP specifically, because a Haar cascade is already in use and an alternative approach is needed.
Mat trainingDataMat(600, 8, CV_32FC1, trainingData);
Mat labelsMat(600, 1, CV_32SC1, labels);
Ptr<SVM> svm = SVM::create();
svm->setType(SVM::C_SVC);
svm->setKernel(SVM::LINEAR);
svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, 100, 1e-6));
svm->train(trainingDataMat, ROW_SAMPLE, labelsMat);
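Regarding packing several per-image histograms into a single training matrix: a minimal sketch (hists is a hypothetical vector holding one histogram Mat per training image) of flattening each histogram into one row of a single CV_32F Mat, which is the row-sample layout both SVM and ANN_MLP expect:
std::vector<cv::Mat> hists;          // hypothetical: one histogram Mat per training image
cv::Mat trainSamples;
for (const cv::Mat& h : hists) {
    cv::Mat row;
    h.reshape(1, 1).convertTo(row, CV_32F); // flatten each histogram to a single row of floats
    trainSamples.push_back(row);            // trainSamples grows to one row per image
}
// trainSamples can then be passed as the samples argument to train(),
// together with a labels Mat that has the same number of rows.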
I am trying to do classification of images by combining SIFT features, Bag of Visual Words and SVM.
Now I am on training part. I need to get BoW histograms for each training image to be able to train SVM. For this I am using BOWImgDescriptorExtractor from OpenCV. I am using OpenCV version 3.1.0.
The problem is that it computes the histogram for some images, but for some images it gives me this error:
OpenCV Error: Assertion failed (queryIdx == (int)i) in compute,
file /Users/opencv-3.1.0/modules/features2d/src/bagofwords.cpp, line 200
libc++abi.dylib: terminating with uncaught exception of type
cv::Exception: /Users/opencv-3.1.0/modules/feature/src/bagofwords.cpp:200: error: (-215) queryIdx == (int)i in function compute
The training images are all the same size and all have the same number of channels.
For creating the dictionary I use a different image set than for training the SVM.
Here's part of the code:
Ptr<FeatureDetector> detector(cv::xfeatures2d::SIFT::create());
Ptr<DescriptorMatcher> matcher(new BFMatcher(NORM_L2, true));
BOWImgDescriptorExtractor bow_descr(detector, matcher);
bow_descr.setVocabulary(dict);
Mat features_svm;
for (int i = 0; i < num_svm_data; ++i) {
Mat hist;
std::vector<KeyPoint> keypoints;
detector->detect(data_svm[i], keypoints);
bow_descr.compute(data_svm[i], keypoints, hist);
features_svm.push_back(hist);
}
data_svm is of type vector<Mat>. It holds my training set images, which I will use for the SVM.
What can the problem be?
I ran into the same issue. I was able to fix it by changing the cross-check option of the brute-force matcher to false. I think it's because the cross-check option keeps only a select few matches and removes the rest, which messes up the indexes in the process.
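In the code above that would be a one-line change, keeping everything else the same:
Ptr<DescriptorMatcher> matcher(new BFMatcher(NORM_L2, false)); // crossCheck disabled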
I am quite new to both OpenCV and Support Vector Machines. I want to use SVM to train a dataset with two labels and then predict the label of a given set. My current set contains about 600 rows with an equal class distribution (300 labelled 1 and 300 labelled -1) and 34 columns.
This is my current code for setting up OpenCV's SVM. I am using OpenCV 3.0.0.
// trainingData is an int array with size 600x34
// labels is an int array with size 600, they're the labels corresponding to the trainingData rows
cv::Mat trainingDataMat(600, 34, CV_32FC1, trainingData);
cv::Mat labelsMat(600, 1, CV_32SC1, labels);
cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
cv::Ptr<cv::ml::TrainData> tempData = cv::ml::TrainData::create(trainingDataMat, cv::ml::ROW_SAMPLE, labelsMat);
svm->setType(cv::ml::SVM::C_SVC);
svm->setKernel(cv::ml::SVM::RBF);
svm->setTermCriteria(cv::TermCriteria(cv::TermCriteria::MAX_ITER, 100, 0.001));
// Assign the SVM parameters to the most accurate result
svm->trainAuto(tempData);
// Train the SVM
svm->train(trainingDataMat, cv::ml::ROW_SAMPLE, labelsMat);
// predictRow contains a row of data with 34 columns to predict against the SVM Model
cv::Mat sampleMat(1, 34, CV_32FC1, predictRow);
// Prediction
float response = svm->predict(sampleMat);
std::cout << response << std::endl;
The SVM training seems to work fine. But when I predict a row, the response is always "1" no matter what the input looks like. Even when I predict using training rows with the "-1" label that I used earlier, the response is still "1".
I tried to increase the max iteration parameter for the termination criteria to a large number. The training process takes more time but the results are still the same.
I tried using the libsvm library (https://www.csie.ntu.edu.tw/~cjlin/libsvm/) to see if the same behaviour occurs. Interestingly, it worked well. I used the Windows "svm-train.exe" and "svm-predict.exe" commands to validate it, and the responses were accurate.
I even tried to run the executables from the OpenCV program using some dirty system calls and file I/O. The resulting responses on the training rows were correct.
I suspect there is something wrong with my SVM parameters. Even when using the trainAuto function, the SVM model still shows strange behaviour. Can anyone help me set the SVM parameters correctly in OpenCV 3.0?
I'm stuck on this one.
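Not an answer to the parameter question, but one detail worth double-checking, given the comment that trainingData is an int array: constructing a Mat header with CV_32FC1 over an int buffer reinterprets the raw bytes instead of converting the values. A minimal sketch of copying the ints into a float Mat explicitly (same variable names as the snippet above):
// build the Mat with the type that actually matches the buffer, then convert
cv::Mat trainingDataInt(600, 34, CV_32SC1, trainingData);
cv::Mat trainingDataMat;
trainingDataInt.convertTo(trainingDataMat, CV_32FC1); // now genuinely holds floats
Whether this is what causes the constant "1" predictions is not confirmed here; it is only an assumption based on the declared types.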
I am trying to do some object classification with the OpenCV features2d framework, but I am running into trouble training my SVM.
I am able to extract vocabularies and cluster them using BOWKMeansTrainer, but after I extract features from the training data, add them to the trainer, and run the SVM train method, I get the following exception.
OpenCV Error: Bad argument (There is only a single class) in cvPreprocessCategoricalResponses, file /home/tbu/prog/OpenCV-2.4.2/modules/ml/src/inner_functions.cpp, line 729
terminate called after throwing an instance of 'cv::Exception'
what(): /home/tbuchy/prog/OpenCV-2.4.2/modules/ml/src/inner_functions.cpp:729: error: (-5) There is only a single class in function cvPreprocessCategoricalResponses
I have tried modifying the dictionary size, using different trainers, and ensuring my matrix types are correct (to the best of my ability; I'm still new to OpenCV).
Has anyone seen this error, or does anyone have any insight into how to fix it?
My code looks like this:
trainingPaths = getFilePaths();
extractTrainingVocab(trainingPaths);
cout<<"Clustering..."<<endl;
Mat dictionary = bowTrainer.cluster();
bowDE.setVocabulary(dictionary);
Mat trainingData(0, dictionarySize, CV_32FC1);
Mat labels(0, 1, CV_32FC1);
extractBOWDescriptor(trainingPaths, trainingData, labels);
//making the classifier
CvSVM classifier;
CvSVMParams params;
params.svm_type = CvSVM::C_SVC;
params.kernel_type = CvSVM::LINEAR;
params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);
classifier.train(trainingData, labels, Mat(), Mat(), params);
Based on the error, it looks like your labels only contains one category of data. That is, all of the features in your trainingData have the same label.
For example, say you're trying to use the SVM to determine whether an image contains a cat or not. If every entry in the labels is the same, then either...
all your training images are labeled as "yes this is a cat"
or, all your training images are labeled as "no, this is not a cat."
SVMs try to separate two (or sometimes more) classes of data, so the SVM library complains if you only provide one class of data.
To see if this is the problem, I recommend adding a print statement to check whether labels only contains one category. Here's some code to do this:
//check: are the printouts all the same?
for(int i=0; i<labels.rows; i++)
for(int j=0; j<labels.cols; j++)
printf("labels(%d, %d) = %f \n", i, j, labels.at<float>(i,j));
Once your extractBOWDescriptor() loads data into labels, I'm assuming that labels is of size (trainingData.rows, 1), i.e. one label per training row. If not, this could be a problem.
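For reference, a minimal sketch of how labels typically ends up holding two classes in this kind of pipeline; positivePaths and negativePaths are hypothetical lists of training image paths:
// while (or after) extracting BOW descriptors for each image:
for (size_t i = 0; i < positivePaths.size(); i++)
    labels.push_back(1.0f);   // class "this is a cat"
for (size_t i = 0; i < negativePaths.size(); i++)
    labels.push_back(-1.0f);  // class "this is not a cat"
// labels must end up as (trainingData.rows x 1), CV_32FC1 here,
// and must contain at least two distinct values for CvSVM::train to work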