How to port the MATLAB libSVM parameters to C++

In my cross-validation in MATLAB with libSVM I found that these are the best parameters to use:
model = svmtrain( labels, training, '-s 0 -t 2 -c 10000 -g 100');
Now I want to replicate the classification in C++ with OpenCV.
But I do not understand how to set the C++ parameters so that they match the MATLAB ones.
Based on this documentation I tried the following:
CvSVMParams params;
params.svm_type = CvSVM::C_SVC;
params.kernel_type = CvSVM::RBF;
//params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 10000, 1e-6);
params.Cvalue = 10000;
params.gamma = 100;
CvSVM SVM;
SVM.train(train, labels, Mat(), Mat(), params);
but I get this error:
error: no member named 'Cvalue' in 'CvSVMParams' params.Cvalue = 10000;
Last thing: should I uncomment
//params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 10000, 1e-6);
and try other values, or is it not important? I ask because I can't even work out how to set the equivalent criteria in MATLAB.

Not every parameter has an exact equivalent when porting from LibSVM in MATLAB to OpenCV's SVM; the termination criteria are one example. Keep in mind that OpenCV's SVM implementation may have bugs depending on the version you use (not an issue with the latest version).
You should uncomment the line to get better control of your termination criteria. As written, it says the algorithm should stop after 10000 iterations. If you use CV_TERMCRIT_EPS instead, it will stop once the specified precision (in your case, 1e-6) is achieved. Use both flags together and it will stop as soon as either criterion is met.
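As for the compile error: in the OpenCV 2.x C API the regularization constant is the member C, not Cvalue. A minimal sketch of the corrected setup (a sketch, assuming OpenCV 2.x; the values mirror your MATLAB call):
CvSVMParams params;
params.svm_type = CvSVM::C_SVC;
params.kernel_type = CvSVM::RBF;
params.C = 10000;     // -c 10000; the member is C, not Cvalue
params.gamma = 100;   // -g 100
// Combine both flags so training stops on whichever criterion is met first.
params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 10000, 1e-6);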
Alternatively, you could also try using LibSVM from C++ by linking it as a library. This will give you exactly the same algorithms and functions that you are using in MATLAB.
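If you go that route, the MATLAB flags map directly onto LibSVM's svm_parameter struct. A rough sketch (assuming LibSVM's C API from svm.h; eps and cache_size are LibSVM defaults, not values from your call, and the remaining fields must also be initialized before calling svm_train):
#include "svm.h"
svm_parameter param;
param.svm_type = C_SVC;    // -s 0
param.kernel_type = RBF;   // -t 2
param.C = 10000;           // -c 10000
param.gamma = 100;         // -g 100
param.eps = 1e-3;          // LibSVM's default stopping tolerance
param.cache_size = 100;    // kernel cache size in MB (LibSVM default)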

Related

Extracting output data from typed_output_tensor in TFLite

Thanks in advance for your support.
I'm trying to get the output of a tensor after inference on a .tflite U-Net neural network. I'm using the TensorFlow Lite image classification code as a baseline.
I need to adapt the code for a segmentation task. My question is how I can access the output of the inferenced model (which is 128x128x1) and write the result to an image.
I already debugged the code and explored many different approaches. Unfortunately, I'm not confident with the C++ language. What I found is that the call interpreter->typed_output_tensor<float>(0) should be what I need, as also referenced here: https://www.tensorflow.org/lite/guide/inference#loading_a_model. However, I cannot access the 128x128 tensor generated by the network.
You can find the code at the address: https://github.com/tensorflow/tensorflow/blob/770481fb3e9126f9a29db5667f528e450d54d719/tensorflow/lite/examples/label_image/label_image.cc
The interesting part is here (lines 217-224):
const float threshold = 0.001f;
std::vector<std::pair<float, int>> top_results;
int output = interpreter->outputs()[0];
TfLiteIntArray* output_dims = interpreter->tensor(output)->dims;
// assume output dims to be something like (1, 1, ... ,size)
auto output_size = output_dims->data[output_dims->size - 1];
I expect the values to be saved in an image; an alternative way of saving the output tensor would also work.
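For what it's worth, a minimal sketch of pulling the mask out and writing it with OpenCV (an assumption-laden sketch: it presumes a single float output of shape 1x128x128x1 with values in [0, 1], and that OpenCV is available):
#include <opencv2/opencv.hpp>
// typed_output_tensor<float>(0) returns a raw pointer to the first output tensor.
float* mask_data = interpreter->typed_output_tensor<float>(0);
// Wrap the buffer in a cv::Mat header (no copy), then scale to 8-bit for saving.
cv::Mat mask(128, 128, CV_32F, mask_data);
cv::Mat mask8u;
mask.convertTo(mask8u, CV_8U, 255.0); // assumes sigmoid outputs in [0, 1]
cv::imwrite("mask.png", mask8u);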

Cannot Obtain Similar DL Prediction Result in Pytorch C++ API Compared to Python

I have trained a deep learning model with a U-Net architecture in Python and PyTorch in order to segment nuclei. I would like to load this pretrained model and make predictions in C++. For this reason, I obtained a trace file (with the .pt extension). Then I ran this code:
#include <torch/script.h> // One-stop header.
#include <iostream>
#include <memory>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main(int argc, const char* argv[]) {
    Mat image;
    image = imread("C:/Users/Sercan/PycharmProjects/samplepyTorch/test_2.png", CV_LOAD_IMAGE_COLOR);

    std::shared_ptr<torch::jit::script::Module> module = torch::jit::load("C:/Users/Sercan/PycharmProjects/samplepyTorch/epistroma_unet_best_model_trace.pt");
    module->to(torch::kCUDA);

    std::vector<int64_t> sizes = { 1, 3, image.rows, image.cols };
    torch::TensorOptions options(torch::ScalarType::Byte);
    torch::Tensor tensor_image = torch::from_blob(image.data, torch::IntList(sizes), options);
    tensor_image = tensor_image.toType(torch::kFloat);

    auto result = module->forward({ tensor_image.to(at::kCUDA) }).toTensor();
    result = result.squeeze().cpu();
    result = at::sigmoid(result);

    cv::Mat img_out(image.rows, image.cols, CV_32F, result.data<float>());
    cv::imwrite("img_out.png", img_out);
}
Image outputs (first image: the test image; second image: the Python prediction result; third image: the C++ prediction result):
As you see, C++ prediction output is not similar to python prediction output. Could you offer a solution to fix this problem?
Even though the question is old, it might be useful to some. This answer is based on the PyTorch 1.5.0 release (the first stable version of the C++ frontend); the situation might be a little different in previous versions (though 1.4.0+ should work the same, IIRC).
PyTorch C++ frontend code
There is no need to explicitly create a torch::TensorOptions object if you only want to specify the type in torch::from_blob. Check the Configuring Properties of Tensor notes in the PyTorch docs; they will clear this up further. Basically, you can just use torch::ScalarType::Byte.
This type is equivalent to torch::kUInt8, which is easier to find in the docs IMO.
There is no need to create a std::vector object to hold the shape, as torch::from_blob takes its second argument as an IntArrayRef, which is a typedef for ArrayRef<int64_t> (see the ArrayRef documentation). This class, in turn, has multiple overloaded constructors, one of which takes a std::initializer_list (which is exactly your { 1, 3, image.rows, image.cols }).
With all that in mind, you can create tensor_image in a single line like so (auto is used since the return type is obvious, and const since the tensor won't be modified further, the type being changed in the same expression):
const auto tensor_image =
torch::from_blob(image.data, {1, 3, image.rows, image.cols},
torch::kUInt8)
.toType(torch::kFloat);
Actual error
OpenCV loads images in BGR (blue-green-red) format, while PyTorch usually uses RGB (as torchvision does in Python). The solution is to permute your image so the colors match.
Including the above change, the whole code becomes:
const auto tensor_image =
    torch::from_blob(image.data, {1, 3, image.rows, image.cols},
                     torch::kUInt8)
        .toType(torch::kFloat)
        .permute({0, 3, 2, 1}); // permute takes an IntArrayRef, hence the braces
And your predictions should be fine now. It might also be beneficial to take tensor > 0 instead of applying sigmoid, as this is probably binary classification and the sigmoid is not needed per se.
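For illustration, a sketch of that shortcut (relying on the fact that sigmoid(x) > 0.5 exactly when x > 0):
// Threshold the raw logits directly instead of applying sigmoid first.
auto binary_mask = (result > 0).toType(torch::kFloat);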
Other PyTorch related stuff
There is no need to use the at:: namespace (ATen, as described in the docs, is the foundational tensor and mathematical operation library on which all else is built) anymore, as the torch:: namespace redirects to it.
Clearer and less confusing options would be:
torch::kCUDA instead of at::kCUDA
torch::sigmoid instead of at::sigmoid
Also, .data<T> is deprecated in favor of .data_ptr<T>.
All in all, you rarely need to use a namespace other than torch:: and its sub-namespaces.
In the general case the output of a unet is (batch, classes, height, width), where classes refers to the segment classes in your final mask. This means that each pixel has an associated vector of raw scores in dim 1, which should be passed through softmax across this dimension so that they become probabilities summing to 1. After that, you can take argmax over the same dimension to obtain the most probable class for each pixel. In your case this would just be one of two classes: object or background.
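As a sketch in the C++ frontend (assuming result holds the raw (batch, classes, height, width) scores):
// Per-pixel class probabilities, summing to 1 across the class dimension.
auto probs = torch::softmax(result, /*dim=*/1);
// Most probable class per pixel; shape becomes (batch, height, width).
auto mask = probs.argmax(/*dim=*/1);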
If by any chance you were using FastAI to train your model, you can have a look here. It is a lookup that maps the loss function used during training to the activation function that should be applied to the final layer. Unet uses the cross_entropy_loss loss function.

Which algorithm is used to train/predict the OpenCV LBPH face recognizer?

I couldn't understand how the training stage and prediction stage work. Is it using another algorithm, like SVM or k-nearest neighbour, after extracting the LBPH features?
If you check: https://github.com/Itseez/opencv_contrib/blob/master/modules/face/src/lbph_faces.cpp
Then you will see that they use 1-nearest neighbour; here is an excerpt from the prediction code:
// find 1-nearest neighbor
collector->init((int)_histograms.size(), state);
for (size_t sampleIdx = 0; sampleIdx < _histograms.size(); sampleIdx++) {
    double dist = compareHist(_histograms[sampleIdx], query, HISTCMP_CHISQR_ALT);
    int label = _labels.at<int>((int)sampleIdx);
    if (!collector->collect(label, dist, state)) return;
}
A 1-nearest-neighbour classifier is used since the Local Binary Pattern descriptor is simple enough. For a more in-depth explanation, see the paper "Face Recognition with Local Binary Patterns".
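Stripped of the collector machinery, the idea is plain 1-NN over chi-square histogram distances. A simplified sketch (histograms, labels, and query are hypothetical stand-ins for the trained data):
// Classify `query` by the label of its nearest training histogram
// under the alternative chi-square distance.
int bestLabel = -1;
double bestDist = std::numeric_limits<double>::max(); // needs <limits>
for (size_t i = 0; i < histograms.size(); ++i) {
    double d = cv::compareHist(histograms[i], query, cv::HISTCMP_CHISQR_ALT);
    if (d < bestDist) { bestDist = d; bestLabel = labels[i]; }
}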
On a side note: this is not really an implementation/practical question and thus does not really belong on this forum. I would suggest using the OpenCV forum.

SVM parameter optimization in OpenCV

I want to optimize SVM parameters in OpenCV. But every time I use train_auto I get C=1 and gamma=1. Some people use LibSVM, but I could not write a wrapper for that. Both trainingData and labels are taken from an existing code base that gives good results, so I am trying to obtain the same parameters for that code with train_auto. In the original code C=312.5 and gamma=0.50625. I saw that somebody used CvStatModel for Python; is it necessary for C++? Where am I making a mistake?
Thanks in advance.
The Code:
CvParamGrid CvParamGrid_C(pow(2.0,-5), pow(2.0,15), pow(2.0,2));
CvParamGrid CvParamGrid_gamma(pow(2.0,-15), pow(2.0,3), pow(2.0,2));
if (!CvParamGrid_C.check() || !CvParamGrid_gamma.check())
    cout << "The grid is NOT VALID." << endl;
CvSVMParams paramz;
paramz.kernel_type = CvSVM::RBF;
paramz.svm_type = CvSVM::C_SVC;
paramz.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 100, 0.000001);
svm.train_auto(trainingData, labels, Mat(), Mat(), paramz, 10,
               CvParamGrid_C, CvParamGrid_gamma,
               CvSVM::get_default_grid(CvSVM::P),
               CvSVM::get_default_grid(CvSVM::NU),
               CvSVM::get_default_grid(CvSVM::COEF),
               CvSVM::get_default_grid(CvSVM::DEGREE), true);
svm.get_params();
cout << "gamma:" << paramz.gamma << endl;
cout << "C:" << paramz.C << endl;
I modified the code as follows: paramz = svm.get_params(); and it worked fine.
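That fix makes sense: train_auto stores the optimized parameters inside the model, and get_params() returns them by value, so the result has to be assigned back before printing. A sketch of the corrected tail (same variables as above):
paramz = svm.get_params(); // get_params() returns the optimized CvSVMParams by value
cout << "gamma:" << paramz.gamma << endl;
cout << "C:" << paramz.C << endl;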

OpenCV classifier update error

I am getting an error while trying to update a CvBoost classifier in OpenCV. The error I am getting is as follows:
OpenCV Error: Bad argument (The new training data must have the same types and the input and output variables and the same categories for categorical variables) in CvDTreeTrainData::set_data, file /home/bsoni/Downloads/OpenCV-2.4.1/modules/ml/src/tree.cpp, line 172
Basically I am working on a 2-class problem, and I initially train the classifier on a set of SURF features:
data.surf_features is a set of 128-dimensional SURF descriptors
data.surf_classes is a set of class labels, each either +1 or -1
I initially train the classifier using:
void train()
{
    CvBoostParams params(CvBoost::REAL, 80, 0.95, 2, false, 0);
    aSurfBoost.train(data.surf_features, CV_ROW_SAMPLE, data.surf_classes,
                     Mat(), Mat(), Mat(), Mat(), params, false);
}
Following that, I try to re-train the classifier using the code below:
void train()
{
    CvBoostParams params(CvBoost::REAL, 80, 0.95, 2, false, 0);
    aSurfBoost.train(data.surf_features, CV_ROW_SAMPLE, data.surf_classes,
                     Mat(), Mat(), Mat(), Mat(), params, true);
}
The only thing I am changing is setting the update parameter to true.
I have checked the Mat type of the descriptors, and in both cases it is exactly the same.
Any suggestions, solutions, or possibly even workarounds would be welcome.