How to train a neural network in OpenCV on histograms - C++

I want to train an MLP in OpenCV to recognize whether a specified object is present in an image.
The problem is that, as far as I know, the constructors of the Mat object (which the MLP operates on) only accept simple variable types. So I can't use a Mat of Mats, a vector of Mats, or a Mat of histograms, even though the histograms consist of floats. And if I collect all the histograms in a single Mat object, I don't see a way to keep the individual objects separated.
Sorry if the question is stupid.
P.S. I need to use an MLP specifically, because a Haar cascade is already in use and an alternative approach is needed.

Mat trainingDataMat(600, 8, CV_32FC1, trainingData);
Mat labelsMat(600, 1, CV_32SC1, labels);
Ptr<SVM> svm = SVM::create();
svm->setType(SVM::C_SVC);
svm->setKernel(SVM::LINEAR);
svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, 100, 1e-6));
svm->train(trainingDataMat, ROW_SAMPLE, labelsMat);

Related

Opencv 3.0 SVM train classification issues

I'm new to OpenCV's SVM. I'm running Xcode 7.0 with OpenCV 3.0. Below is my code:
Mat labels(0,1,CV_32FC1);
//Mat labels(0,1,CV_32S); //I also tried this after seeing some posts, but it errors too.
...
Mat samples_32f; samples.convertTo(samples_32f, CV_32F);
//Mat samples_32f; samples.convertTo(samples_32f, CV_32FC1); //tried!
Ptr<SVM> classifier = SVM::create();
classifier->train(samples_32f, ROW_SAMPLE, labels); // <- here the error occurs
The OpenCV error: Bad argument (in the case of classification problem the responses must be categorical; either specify varType when creating TrainData, or pass integer responses) in train.
When I searched for solutions, the error message seemed to come from labels not holding integer values. So I tried changing it to Mat labels(0,1,CV_32S), but the error stayed the same.
So I have no idea what is going wrong with the code. Can anyone help?
The error occurs because labels does not contain any values: it is defined with 0 rows and 1 column. Make sure labels actually holds one row per training sample before calling SVM::train.
My Solutions:
Mat labels(0,1,CV_32S);
/*
loop over your samples and use push_back to add one label value to labels per sample
{...}
*/
/*
or define another Mat labeled(labels.rows, 1, CV_32S); after the loop
and use it in SVM::train
*/
Mat samples_32f; samples.convertTo(samples_32f, CV_32F);
Ptr<SVM> classifier = SVM::create();
classifier->train(samples_32f, ROW_SAMPLE, labels); // <- change the name here if you defined a new labels Mat after the loop
Thanks, that's what I can share.

OpenCV: Why create multiple Mat objects to transform an image's format?

I have not worked with OpenCV for a while, so please bear with my beginner questions. Something caught my curiosity as I was looking through OpenCV tutorials and sample code.
Why do people create multiple Mat images when going through multiple transformations? Here is an example:
Mat mat, gray, thresh, equal;
mat = imread("E:/photo.jpg");
cvtColor(mat, gray, CV_BGR2GRAY);
equalizeHist(gray, equal);
threshold(equal, thresh, 50, 255, THRESH_BINARY);
Here is an example that uses only two Mat images:
Mat mat, process;
mat = imread("E:/photo.jpg");
cvtColor(mat, process, CV_BGR2GRAY);
equalizeHist(process, process);
threshold(process, process, 50, 255, THRESH_BINARY);
Is there anything different between the two examples? Also, another beginner question: will OpenCV run faster when it only creates two Mat images, or will it still be the same?
Thank you in advance.
The question comes down to, do you still need the unequalized image later on in the code? If you want to further process the gray image then the first option is better. If not, then use the second option.
Some functions might not work in-place; specifically, ones that transform the matrix to a different format, either by changing its dimensions (such as copyMakeBorder) or number of channels (such as cvtColor).
For your use case, the two blocks of code perform the same number of calculations, so the speed wouldn't change at all. The second option is obviously more memory efficient.

Custom SIFT detector in OpenCV

Is there a way to specify custom SIFT detector parameters in OpenCV?
It seems that the FeatureDetector constructor does not take any parameter, whereas it seems possible to specify those parameters in the SIFT constructor.
I am working on logo detection.
Some of the logos have very little texture, so I would like to detect more keypoints when there are too few (perhaps by increasing SIFT's edgeThreshold?).
It is possible to create a custom SIFT descriptor extractor:
SIFT siftDetectorExtractor = SIFT(200, 3, 0.04, 15, 1.6);
Mat logo = imread("logoName.jpg");
vector<KeyPoint> keyPoints;
Mat sifts;
siftDetectorExtractor(logo, Mat(), keyPoints, sifts);
or use the detector-class:
Ptr<FeatureDetector> detector = Ptr<FeatureDetector>(new SIFT(/* your arguments */));

OpenCV SVM throwing exception on train, "Bad argument (There is only a single class)"

I'm stuck on this one.
I am trying to do some object classification through OpenCV feature 2d framework, but am running into troubles on training my SVM.
I am able to extract vocabularies and cluster them using BOWKMeansTrainer, but after I extract features from the training data, add them to the trainer, and run the SVM train method, I get the following exception.
OpenCV Error: Bad argument (There is only a single class) in cvPreprocessCategoricalResponses, file /home/tbu/prog/OpenCV-2.4.2/modules/ml/src/inner_functions.cpp, line 729
terminate called after throwing an instance of 'cv::Exception'
what(): /home/tbuchy/prog/OpenCV-2.4.2/modules/ml/src/inner_functions.cpp:729: error: (-5) There is only a single class in function cvPreprocessCategoricalResponses
I have tried modifying the dictionary size, using different trainers, and ensuring my matrix types are correct (to the best of my ability; I'm still new to OpenCV).
Has anyone seen this error, or does anyone have insight into how to fix it?
My code looks like this:
trainingPaths = getFilePaths();
extractTrainingVocab(trainingPaths);
cout<<"Clustering..."<<endl;
Mat dictionary = bowTrainer.cluster();
bowDE.setVocabulary(dictionary);
Mat trainingData(0, dictionarySize, CV_32FC1);
Mat labels(0, 1, CV_32FC1);
extractBOWDescriptor(trainingPaths, trainingData, labels);
//making the classifier
CvSVM classifier;
CvSVMParams params;
params.svm_type = CvSVM::C_SVC;
params.kernel_type = CvSVM::LINEAR;
params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);
classifier.train(trainingData, labels, Mat(), Mat(), params);
Based on the error, it looks like your labels only contains one category of data. That is, all of the features in your trainingData have the same label.
For example, say you're trying to use the SVM to determine whether an image contains a cat or not. If every entry in the labels is the same, then either...
all your training images are labeled as "yes this is a cat"
or, all your training images are labeled as "no, this is not a cat."
SVMs try to separate two (or sometimes more) classes of data, so the SVM library complains if you provide only one class of data.
To see if this is the problem, I recommend adding a print statement to check whether labels only contains one category. Here's some code to do this:
//check: are the printouts all the same?
for (int i = 0; i < labels.rows; i++)
    for (int j = 0; j < labels.cols; j++)
        printf("labels(%d, %d) = %f \n", i, j, labels.at<float>(i, j));
Once your extractBOWDescriptor() loads data into labels, labels should be of size (trainingData.rows, 1), i.e. one label per training sample. If not, this could be a problem.

OpenCV C++, getting Region Of Interest (ROI) using cv::Mat

I'm very new to OpenCV (I started using it two days ago). I'm trying to cut a hand image out of a depth image obtained from a Kinect; I need the hand image for gesture recognition. I have the image as a cv::Mat type. My questions are:
Is there a way to convert cv::Mat to CvMat so that I can use the cvGetSubRect method to get the region of interest?
Are there any methods in cv::Mat that I can use for getting the part of the image?
I wanted to use IplImage but I read somewhere that cv::Mat is the preferred way now.
You can use the overloaded function call operator on the cv::Mat:
cv::Mat img = ...;
cv::Mat subImg = img(cv::Range(0, 100), cv::Range(0, 100));
Check the OpenCV documentation for more information and for the overloaded function that takes a cv::Rect. Note that using this form of slicing creates a new matrix header, but does not copy the data.
Another approach could be:
//Create the rectangle
cv::Rect roi(10, 20, 100, 50);
//Create the cv::Mat with the ROI you need, where "image" is the cv::Mat you want to extract the ROI from
cv::Mat image_roi = image(roi);
I hope this can help.