I am doing some machine learning in OpenCV and I'm using Decision Trees, currently with OpenCV 3.0.0-rc1. Whenever I attempt to train the Decision Trees with my training data and labels, I get either
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
or
Segmentation fault
Depending on what I put into setMaxDepth(): if the number is larger than 22 it's the bad_alloc, otherwise it's the segmentation fault.
Here's my source code:
//import data
Mat trainData=imread("/home/jetson/Documents/CB/ml/td.jpg",CV_LOAD_IMAGE_GRAYSCALE);
Mat labels=imread("/home/jetson/Documents/CB/ml/lab.jpg",CV_LOAD_IMAGE_GRAYSCALE);
//convert to the right type
trainData.convertTo(trainData,CV_32FC1);
labels.convertTo(labels,CV_32SC1);
transpose(trainData,trainData);
Ptr<ml::TrainData> tData = ml::TrainData::create(trainData, ml::ROW_SAMPLE, labels);
cout <<"Training data ready\n";
Ptr<ml::DTrees> dec_trees = ml::DTrees::create();
//params
dec_trees->setMaxDepth(1);
dec_trees->setMinSampleCount(10);
dec_trees->setRegressionAccuracy(0.01f);
dec_trees->setUseSurrogates(false);
dec_trees->setMaxCategories(2);
dec_trees->setCVFolds(10);
dec_trees->setUse1SERule(true);
dec_trees->setTruncatePrunedTree(true);
dec_trees->setPriors(Mat());
cout <<"Params set\n";
dec_trees->train(tData);
cout <<"Done!\n";`
In addition to this, when I try to train an SVM model with the same data, using the same steps (below), it works just fine.
Ptr<ml::SVM> svm = ml::SVM::create();
//params
svm->setType(ml::SVM::C_SVC);
svm->setKernel(ml::SVM::POLY);
svm->setGamma(3);
svm->setDegree(0.1);
cout <<"Params set\n";
svm->train(tData);
cout <<"Done!\n";
I need to point out that the error occurs when I try to train the model. I'm using the default parameters for decision trees, as suggested on the OpenCV documentation page.
Does anybody know what's wrong here and how to go about fixing my problem?
Thanks in advance.
EDIT: I upgraded OpenCV to version 3.0.0 and the issue stays the same.
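For anyone debugging the same crash: with ml::ROW_SAMPLE, ml::TrainData::create treats every row of trainData as one sample, so the labels Mat needs exactly one entry per row. A minimal sanity check along these lines (a sketch, assuming the OpenCV 3.x C++ API, not part of the original question) can rule out shape and type mismatches before train() is called:
// Sanity-check sketch: verify what ml::TrainData::create will receive.
CV_Assert(trainData.type() == CV_32FC1);   // features as 32-bit floats
CV_Assert(labels.type() == CV_32SC1);      // classification labels as 32-bit ints
CV_Assert(labels.rows == trainData.rows && labels.cols == 1);  // one label per sample row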
Related
I built DIGITS from this tutorial recently, everything is OK, and I finally trained my AlexNet model (I also trained a SqueezeNet so that I can upload the model here)! The problem is that when I download my model from DIGITS, I cannot load it into my program for testing! I have tested my program with GoogleNet downloaded from this link and it works fine!
I'm using OpenCV's readNetFromCaffe in this function to load the Caffe model:
void deepNetwork::loadModel(cv::String model, cv::String weight, string lablesPath, int ps)
{
    patchSize = ps;
    labeslPath = lablesPath;
    try
    {
        net = dnn::readNetFromCaffe(weight, model);
        cerr << "loaded succ" << endl;
    }
    catch (cv::Exception& e)
    {
        std::cerr << "Exception: " << e.what() << std::endl;
    }
}
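One thing worth double-checking with a wrapper like this: cv::dnn::readNetFromCaffe takes the network definition (.prototxt) as its first argument and the weights (.caffemodel) as its second. A minimal usage sketch (file names are placeholders):
// Net readNetFromCaffe(const String& prototxt, const String& caffeModel)
cv::dnn::Net net = cv::dnn::readNetFromCaffe("deploy.prototxt", "squeezenet.caffemodel");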
I get the following error loading my model
OpenCV Error: Assertion failed (pbBlob.raw_data_type() == caffe::FLOAT16) in blobFromProto, file /home/nvidia/build-opencv/opencv/modules/dnn/src/caffe/caffe_importer.cpp, line 242
Exception: /home/nvidia/build-opencv/opencv/modules/dnn/src/caffe/caffe_importer.cpp:242: error: (-215) pbBlob.raw_data_type() == caffe::FLOAT16 in function blobFromProto
OpenCV Error: Requested object was not found (Requested blob "data" not found) in setInput, file /home/nvidia/build-opencv/opencv/modules/dnn/src/dnn.cpp, line 1606
terminate called after throwing an instance of 'cv::Exception'
what(): /home/nvidia/build-opencv/opencv/modules/dnn/src/dnn.cpp:1606: error: (-204) Requested blob "data" not found in function setInput
Aborted (core dumped)
Any help would be appreciated <3
OpenCV version 3.3.1; also tested on 3.3.0 and 3.4.1, same error!
I'm testing on a system without CUDA, cuDNN, or Caffe, just pure C++ and OpenCV, but I trained my model on an AWS EC2 instance (p3.2xlarge) with CUDA, cuDNN, and Caffe!
You can download the trained SqueezeNet model (.prototxt and .caffemodel) here
Finally, I found the problem!
It's a version problem: I have DIGITS 6.1.1 working with NVCaffe 0.17.0 for training, which is not compatible with earlier Caffe and OpenCV libraries. You have to downgrade NVCaffe to version 0.15.14 and the model will open with OpenCV easily!
The OpenCV DNN module expects a caffemodel in BVLC format, but NVCaffe stores the model in a more efficient format that differs from BVLC Caffe's. If you want a model compatible with both BVLC Caffe and NVCaffe, add this flag in solver.prototxt:
store_blobs_in_old_format = true
Please read the DIGITS NVCaffe documentation:
NVCaffe Documentation - store_blobs_in_old_format
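For reference, a minimal solver.prototxt sketch with the flag added might look like this (every other field below is a placeholder assumption, and prototxt normally uses colon syntax, so confirm the exact form against the NVCaffe documentation):
# hypothetical minimal solver.prototxt; only the last line is the relevant flag
net: "train_val.prototxt"
base_lr: 0.01
max_iter: 10000
solver_mode: GPU
store_blobs_in_old_format: true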
I am currently trying to implement the Lucy-Richardson algorithm in OpenCV. When it comes to running the cv::subtract method in my program, it throws an InteropServices exception (stack trace below).
************** Exception Text **************
System.Runtime.InteropServices.SEHException (0x80004005): External has thrown an exception.
at cv.Mat.=(Mat* , MatExpr* expr) in e:\opencv\opencv\build\include\opencv2\core\mat.inl.hpp:line 3107
at LucyRichardson.LucyRich(LucyRichardson* , Mat* , basic_string<char\,std::char_traits<char>\,std::allocator<char> >* imagePath) in e:\documents\development\realtimeimageprocessing\imageprocessing\imageprocessing\lucyrichardson.cpp:line 63
Below is the block of code where the error occurs; the exception is thrown on the second line.
im_correction = cv::Mat (cvSize(383, 357), 8, 1);
cv::subtract(im, im_conv_kernel, im_correction);
cv::namedWindow("Sub");
cv::imshow("Sub", im_correction);
The variables im and im_conv_kernel are both of type cv::Mat and are correctly populated; I have also tried creating im_correction beforehand and saving the result of the subtraction into it.
I am using cv::subtract fine in other parts of the program.
Does anyone know why this error occurs and how I could fix it? Or if there is a different method I could try for the subtraction?
I have worked out where the problem was: I needed to make sure all of the images were of the same type. After adding the three lines below before performing the subtraction, it worked fine.
im.convertTo(im, CV_8UC1);
im_conv_kernel.convertTo(im_conv_kernel, CV_8UC1);
im_correction.convertTo(im_correction, CV_8UC1);
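A defensive check along these lines (a sketch, not from the original post) makes the same-size, same-type requirement of cv::subtract explicit at the call site:
// cv::subtract needs inputs of matching size and type; failing fast here
// is clearer than the SEHException thrown deeper inside the library.
CV_Assert(im.size() == im_conv_kernel.size());
CV_Assert(im.type() == im_conv_kernel.type());
cv::subtract(im, im_conv_kernel, im_correction);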
I'm building an OCR application with Visual Studio 2010, C++, and SVM in OpenCV. Training the SVM works with under 181 different labels but fails with over 181 labels. Below are the IDE and OpenCV error messages and my code. Please help me, thank you so much!
IDE error message
First-chance exception at 0x771e4b32 in OCR.exe: Microsoft C++ exception: cv::Exception at memory location 0x0081da74.
The thread 'Win32 Thread' (0xdac) has exited with code -1073741510 (0xc000013a).
The program '[2512] OCR.exe: Native' has exited with code -1073741510 (0xc000013a).
OpenCV error message
......\src\opencv\modules\core\src\datastructs.cpp:332: error: (-211) requested size is negative or too big
SVM's configuration
CvSVMParams params;
params.svm_type = CvSVM::C_SVC;
params.kernel_type = CvSVM::LINEAR;
params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);
SVM.train( training_vectors, training_labels, cv::Mat(), cv::Mat(), params );
libSVM uses the "one vs all" technique to represent a multi-class problem with the binary SVM classifier. This means that if you have N (>2) labels, libSVM will generate N distinct classifiers, each with a different data labeling (so it expresses the "one vs all" scheme). This can lead to the memory problems you are experiencing. Some other models, for example neural networks or kNN, can represent multi-class classification without such overhead. So if your data is too large to treat it the way libSVM does, you have at least three possible options:
Change SVM to some other model which can directly address multi-class classification
Try to use another, lighter implementation of the library, especially since OpenCV does not use the most recent implementation of libSVM (it could help, but does not have to)
You can manually do the "one vs all" implementation and save each separate model; a sketch of this approach follows the list. This way you should avoid the memory problems, as at any time you will allocate at most as much memory as is needed for a binary problem. At the end you just load your models from file and apply a simple voting scheme. If the saved models are too big, it means that your model overfitted (in SVMs, overfitting usually shows up as too large a number of support vectors, which are in fact the only thing needed to define the model, so if the model has too many SVs to load into memory it was most probably trained incorrectly and you should try different parameters/kernels)
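A rough sketch of option 3 with the old CvSVM API the question uses (the label layout and types below are assumptions; adapt them to your data):
#include <opencv2/core/core.hpp>
#include <opencv2/ml/ml.hpp>
#include <sstream>

// Train one binary SVM per class and save each to disk, so only one
// binary problem is ever held in memory at a time.
void trainOneVsAll(const cv::Mat& training_vectors,  // CV_32FC1, one row per sample
                   const cv::Mat& training_labels,   // CV_32FC1, one label per row
                   int num_classes)
{
    CvSVMParams params;
    params.svm_type = CvSVM::C_SVC;
    params.kernel_type = CvSVM::LINEAR;
    params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);
    for (int c = 0; c < num_classes; ++c)
    {
        // Relabel: +1 for the current class, -1 for all other classes.
        cv::Mat binary_labels(training_labels.rows, 1, CV_32FC1);
        for (int r = 0; r < training_labels.rows; ++r)
            binary_labels.at<float>(r) = (training_labels.at<float>(r) == (float)c) ? 1.f : -1.f;
        CvSVM svm;
        svm.train(training_vectors, binary_labels, cv::Mat(), cv::Mat(), params);
        std::ostringstream name;
        name << "svm_class_" << c << ".xml";
        svm.save(name.str().c_str());  // reload later with CvSVM::load()
    }
}
At prediction time, load the N saved models with CvSVM::load() and apply the voting scheme described above.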
I am getting an error while trying to update the CvBoost classifier in OpenCV; the error I am getting is as follows:
OpenCV Error: Bad argument (The new training data must have the same types and the input and output variables and the same categories for categorical variables) in CvDTreeTrainData::set_data, file /home/bsoni/Downloads/OpenCV-2.4.1/modules/ml/src/tree.cpp, line 172
Basically I am working on a 2-class problem, and I initially train the classifier with a set of SURF features:
data.surf_features is a set of 128-dimensional SURF descriptors
data.surf_classes is a set of class labels which are either +1 or -1
Initially I train the classifier using
void train()
{
    CvBoostParams params(CvBoost::REAL, 80, 0.95, 2, false, 0);
    aSurfBoost.train(data.surf_features, CV_ROW_SAMPLE, data.surf_classes, Mat(), Mat(), Mat(), Mat(), params, false);
}
Following that, I try to re-train the classifier using the code below:
void train()
{
    CvBoostParams params(CvBoost::REAL, 80, 0.95, 2, false, 0);
    aSurfBoost.train(data.surf_features, CV_ROW_SAMPLE, data.surf_classes, Mat(), Mat(), Mat(), Mat(), params, true);
}
The only thing I am changing is setting the update parameter to true.
I have checked the Mat type of the descriptors, and in both cases they are exactly the same.
Any suggestions, solutions, or possibly even workarounds would be welcome.
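A diagnostic sketch, not from the original post: printing the type and shape of both training sets side by side can confirm they really match, since the error text shows set_data also compares the categorical structure, not just the element type.
std::cout << "features: type=" << data.surf_features.type()
          << " rows=" << data.surf_features.rows
          << " cols=" << data.surf_features.cols << std::endl;
std::cout << "labels:   type=" << data.surf_classes.type()
          << " rows=" << data.surf_classes.rows << std::endl;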
My MFC app runs various APIs from OpenCV 2. Everything else is working fine, but when my program runs
cv::Mat result;
cv::equalizeHist(m_cvImage,result);
I get following runtime exception.
Unhandled exception at 0x7727fbae in OpenCVTest.exe: Microsoft C++ exception: cv::Exception at memory location 0x0029e944..
"C:\slave\WinInstallerMegaPack\src\opencv\modules\imgproc\src\histogram.cpp:2430: error: (-215) CV_ARE_SIZES_EQ(src, dst) && CV_ARE_TYPES_EQ(src, dst) && CV_MAT_TYPE(src->type) == CV_8UC1"
According to the debugger, the exception was thrown in the middle of processing the image (about 40%) in equalizeHist. Is there anything I need to do? FYI: I am using the prebuilt OpenCV binaries provided by its web site.
UPDATE:
I've resolved this issue by converting the image to grayscale before equalizing it. I just didn't know that:
the function only works with grayscale images
images that look like grayscale can be non-gray
I imagine the problem you are encountering is that m_cvImage is a 3-channel image. So, you need to convert it to a grayscale image before you can call equalizeHist.
cvtColor(m_cvImage, m_cvImage, CV_BGR2GRAY);  // collapse the 3 BGR channels into 1
cv::Mat result;
cv::equalizeHist(m_cvImage, result);          // src is now CV_8UC1, as equalizeHist requires
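If the input might already be single-channel, guarding the conversion avoids an error from cvtColor (a small sketch, assuming the source is BGR whenever it has 3 channels):
// Convert only when there really are 3 channels; CV_BGR2GRAY would
// fail on an image that is already grayscale.
if (m_cvImage.channels() == 3)
    cvtColor(m_cvImage, m_cvImage, CV_BGR2GRAY);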
Also, have a look at the EqualizeHist_Demo.cpp tutorial sample to see how it is used.