Using Matlab SVM model in C++

I have used libsvm in Matlab to create an SVM model. I can't create the model in the code where I do the prediction, so I need to save the model and use it later. I want to use that model in my C++ code to make predictions. I know how to predict in Matlab itself using svmpredict, but I want to save the model created in Matlab and use it in C++ for predictions. First of all, is it possible? If so, how do I save the model in Matlab and load it back in C++?

One option is to save the parameters learned by the model in a CSV file. The model returned by svmtrain is a struct, and one of its fields holds the model parameters. You could then read that file from your C++ code.
However, this is somewhat redundant, because libSVM is itself written in C, so the predict function you call from Matlab already runs C code.

If all you need is to predict values in your C++ code, one option is to extract the model parameters in Matlab and use them for prediction in your C++ code.
You may already know that you can do the prediction manually by plugging the extracted values into the decision function and taking its sign.
This answer has information about which parameters to extract in the case of the RBF kernel and how to make predictions with them.
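For illustration, here is a minimal NumPy sketch of that manual RBF-kernel prediction; the file names, gamma value, and export format are assumptions (you would dump model.SVs, model.sv_coef and model.rho from Matlab yourself), and the same arithmetic ports directly to C++:

import numpy as np

# Assumed exports from the Matlab model struct (placeholder file names):
SVs = np.loadtxt("svs.csv", delimiter=",")          # support vectors, one per row
sv_coef = np.loadtxt("sv_coef.csv", delimiter=",")  # coefficients alpha_i * y_i
rho = float(np.loadtxt("rho.csv"))                  # bias term
gamma = 0.5                                         # RBF gamma used during training

def predict(x):
    # RBF kernel between x and every support vector
    k = np.exp(-gamma * np.sum((SVs - x) ** 2, axis=1))
    decision = np.dot(sv_coef, k) - rho
    # the sign convention follows libsvm's internal label ordering
    return 1 if decision > 0 else -1

print(predict(np.zeros(SVs.shape[1])))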

Related

Word2Vec model output types

When a Word2Vec model is trained, three outputs are created:
model
model.wv.syn0
model.syn1neg
I have a couple of questions regarding these files.
How are these outputs essentially different from each other?
Which file should I look at if I want to access the trained results?
Thanks in advance!
Those are the 3 files created by gensim's Word2Vec .save() function. The model file is a Python pickle of the main model; the other files are some of the over-large numpy arrays stored separately for efficiency. The syn0 file happens to contain the raw word vectors, and syn1neg the model's internal weights, but neither is cleanly interpretable without the other data.
So, the only support for re-loading them is to use the matching .load() function, with all three available. A successful re-load() will result in a model object just like the one you save()d, and you'd access the results via that loaded object.
(If you only need the raw word-vectors, you can also use the .save_word2vec_format() method, which writes in a format compatible with the original Google-released word2vec.c code. But that format has strictly less information than gensim's native save, so you'd only use it if you absolutely need it for compatibility with other software. Working with the gensim native files ensures you could always save the other format later, while you can't go the other way.)
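For example, a minimal re-load sketch (the model path and the word "dog" are placeholders for whatever you saved and whatever is in your training vocabulary):

from gensim.models import Word2Vec

# .load() picks up the companion .wv.syn0 / .syn1neg files automatically
model = Word2Vec.load("mymodel")

vector = model.wv["dog"]                       # one trained word vector
print(model.wv.most_similar("dog", topn=5))    # nearest neighbours by cosine similarity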

How can I print the output of a hidden layer in Lasagne

I am trying to use Lasagne to train a simple neural network, and then use my own C++ code to do inference. I use the weights generated by Lasagne, but I am not able to get good results. Is there a way I can print the output of a hidden layer and/or the calculations themselves? I want to see how it works under the hood, so I can implement it the same way in C++.
I can help with Lasagne + Theano in Python; I am not sure from your question whether you work entirely in C++ or only need the results from Python + Lasagne in your C++ code.
Suppose you have a simple network like this:
l_in = lasagne.layers.InputLayer(...)
l_in_drop = lasagne.layers.DropoutLayer(l_in, ...)
l_hid1 = lasagne.layers.DenseLayer(l_in_drop, ...)
l_out = lasagne.layers.DenseLayer(l_hid1, ...)
You can get the (symbolic) output of each layer by calling lasagne.layers.get_output on that layer:
lasagne.layers.get_output(l_in, deterministic=False) # this will just give you the input tensor
lasagne.layers.get_output(l_in_drop, deterministic=True)
lasagne.layers.get_output(l_hid1, deterministic=True)
lasagne.layers.get_output(l_out, deterministic=True)
When you are dealing with dropout and you are not in the training phase, remember to call get_output with the deterministic parameter set to True, so that dropout is disabled and the output is not random. This applies to every layer that is preceded by one or more dropout layers. Also note that get_output only builds a symbolic Theano expression; to actually print values you still have to compile it into a function.
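Continuing from the snippet above, a minimal sketch of how to print a hidden layer's activations for a concrete input (num_inputs is a placeholder for your input dimensionality):

import numpy as np
import theano

# symbolic expression for the hidden layer, with dropout disabled
hid1_expr = lasagne.layers.get_output(l_hid1, deterministic=True)

# compile it into a callable mapping network input -> hidden activations
get_hid1 = theano.function([l_in.input_var], hid1_expr)

x = np.random.rand(1, num_inputs).astype(theano.config.floatX)  # one dummy sample
print(get_hid1(x))  # the hidden layer's output for that sample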
I hope this answers your question.

Extracting MatConvnet model weights

I am currently developing an application for facial recognition.
The algorithms are implemented and trained using the MatConvNet library (http://www.vlfeat.org/matconvnet/). At the end, I have a network stored as a .mat file.
I would like to know whether it is possible to extract the weights of the network from its .mat file, write them to an XML file, and read them with Caffe in C++. I would like to reuse them in Caffe in order to do some testing and a hardware implementation. Is there an efficient and practical way to proceed?
Thank you very much for your help.
The layer whose parameters you'd like to store must be set as 'precious'. In net.var you can access the parameters and write them out.
There is a conversion script that converts MatConvNet models to Caffe models here, which you may find useful.
You can't use the weights of a network trained with MatConvNet in Caffe directly; you can only convert your model between MatConvNet and Caffe (https://github.com/vlfeat/matconvnet/blob/4ce2871ec55f0d7deed1683eb5bd77a8a19a50cd/utils/import-caffe.py). But this script does not support all layers, so you may have difficulties using it.
The best way is to define your Caffe prototxt in Python so that it mirrors the MatConvNet model, and then transfer the weights.
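As a rough illustration of the weight-extraction step, here is a Python sketch that reads the .mat file and dumps each layer's learnable parameters to CSV; the variable name "net", the SimpleNN-style net.layers{i}.weights layout, and the file names are all assumptions that may not match your network:

import numpy as np
import scipy.io

mat = scipy.io.loadmat("net.mat", struct_as_record=False, squeeze_me=True)
net = mat["net"]

for i, layer in enumerate(net.layers):
    # only layers with learnable parameters (e.g. convolutions) carry weights
    if hasattr(layer, "weights") and np.size(layer.weights) > 0:
        filters, biases = layer.weights[0], layer.weights[1]
        np.savetxt("layer%d_filters.csv" % i, np.asarray(filters).ravel(), delimiter=",")
        np.savetxt("layer%d_biases.csv" % i, np.asarray(biases).ravel(), delimiter=",")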

Using Microsoft Solver Foundation to solve a linear programming task requiring thousands of data points

Using Microsoft Solver Foundation, I am trying to solve a linear program of the form Ax <= b, where A is a matrix containing thousands of data points.
I know that I can new up a Model object and then use the AddConstraint method to add constraints in equation form. However, writing those equations out by hand when each contains thousands of variables is just not feasible. I looked at the Model class and cannot find a way to simply hand it the matrix and the other data.
How can I do this?
Thanks!
You can make A a parameter and bind data to it. Warning: Microsoft Solver Foundation was discontinued a while ago, so you are advised to consider an alternative modeling system.
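For illustration, an alternative route that accepts the constraint matrix directly, so you never spell out per-row equations; this sketch uses SciPy's linprog with placeholder data rather than Solver Foundation's API:

import numpy as np
from scipy.optimize import linprog

# placeholder problem: minimize c.x subject to A x <= b, x >= 0
A = np.random.rand(5000, 200)   # thousands of constraint rows
b = np.random.rand(5000)
c = np.ones(200)

result = linprog(c, A_ub=A, b_ub=b, bounds=(0, None))
print(result.status, result.fun)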

Read svm data and retrain with more data?

I am implementing a facial expression recognition and am using SVM to classify given expression.
When I train, I use this code
svm.train(myFeatureVector,myLabels,Mat(),Mat(), myParameters);
svm.save("myClassifier.yml");
and later, when I want to predict, I use
response = svm.predict(incomingFeatureVector);
But when I want to train more than once (exit the program and start it again), it seems to overwrite my previous svm file. Is there any way I could read the previous svm file and add more data to it (and then resave it, etc.)? I looked at the OpenCV documentation and found nothing. However, on this page there is a method called CvSVM::read; I don't know what it does or how to use it.
Hope anyone can help me :(
What you are trying to do is incremental learning, but unfortunately the Support Vector Machine is a batch algorithm: if you want to add more data, you have to retrain on the whole set again.
There are online learning alternatives, like Pegasos SVM, but I am not aware of any that are implemented in OpenCV.
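A minimal sketch of that full retrain, assuming you keep the raw feature vectors and labels around; this uses OpenCV's Python ml bindings and placeholder file names, but the same flow applies to the C++ API in the question:

import numpy as np
import cv2

# load the features/labels you trained on before, plus the new batch
old_X = np.load("old_features.npy").astype(np.float32)
old_y = np.load("old_labels.npy").astype(np.int32)
new_X = np.load("new_features.npy").astype(np.float32)
new_y = np.load("new_labels.npy").astype(np.int32)

X = np.vstack([old_X, new_X])
y = np.concatenate([old_y, new_y])

svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_RBF)
svm.train(X, cv2.ml.ROW_SAMPLE, y)   # retrain on the combined set
svm.save("myClassifier.yml")         # overwrite the old classifier on purpose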