I have a dataset of custom abstract objects and a custom distance function. Are there any good SVM libraries that allow me to train on my custom objects (not 2D points) with my custom distance function?
I searched the answers to this similar Stack Overflow question, but none of them allow me to use custom objects and distance functions.
First things first.
An SVM does not work on distance functions; it only accepts dot products. So your distance function (actually a similarity; usually 1 - distance is used as a similarity) has to:
be symmetric: s(a,b) = s(b,a)
be positive definite: s(a,a) >= 0, and s(a,a) = 0 <=> a = 0
be linear in its first argument: s(ka, b) = k s(a,b) and s(a+b, c) = s(a,c) + s(b,c)
This can be tricky to check, as you are actually asking "is there a function phi from my objects into some vector space such that s(x, y) = <phi(x), phi(y)>?", which leads to the definition of the so-called kernel, K(x,y) = <phi(x), phi(y)>. If your objects are themselves elements of a vector space, then sometimes it is enough to put phi(x) = x, so that K = s is just the ordinary dot product, but this is not true in general.
Once you have this kind of similarity, nearly any SVM library (for example libSVM) can be trained by providing the Gram matrix, which is simply defined as
G_ij = K(x_i, x_j)
This requires O(N^2) memory and time. Consequently, it does not matter what your objects are, since the SVM only works on pairwise dot products, nothing more.
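For illustration, here is a minimal C++ sketch of building such a Gram matrix for arbitrary objects and a user-supplied kernel functor (the Object type and the kernel K are placeholders for your own data and similarity); the resulting matrix is what you would feed to, e.g., libSVM's precomputed-kernel mode:

#include <cstddef>
#include <vector>

// Build the N x N Gram matrix G_ij = K(x_i, x_j). This is all an SVM with a
// precomputed kernel ever sees of your data, regardless of what the objects are.
template <typename Object, typename Kernel>
std::vector<std::vector<double>> gram_matrix(const std::vector<Object>& xs, Kernel K) {
    const std::size_t n = xs.size();
    std::vector<std::vector<double>> G(n, std::vector<double>(n));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = i; j < n; ++j)
            G[i][j] = G[j][i] = K(xs[i], xs[j]);  // kernels are symmetric, fill both halves
    return G;
}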
If you lack the appropriate mathematical tools to show this property, what you can do is look into kernel learning from similarity. These methods are able to create a valid kernel which behaves similarly to your similarity measure.
Check out the following:
MLPack: a lightweight library that provides lots of functionality.
DLib: a very popular toolkit that is used both in industry and academia.
Apart from these, you can also use Python packages and call them from C++.
I trained a Word2Vec model and I am trying to formulate the most_similar function mathematically.
I thought about a set that contains the n most similar words, given a word as reference.
Is there a good definition somewhere?
You can view the source code which implements most_similar() for the gensim Python library's KeyedVectors abstraction (for holding & performing common actions on sets of word-vectors):
https://github.com/RaRe-Technologies/gensim/blob/fbc7d0952f1461fb5de3f6423318ae33d87524e3/gensim/models/keyedvectors.py#L491
Roughly, it first computes a target vector – by combining any positive & negative examples the caller has provided. In the common case, this might just be a single ('positive') word-vector.
Then it calculates the cosine similarity between that target vector and every other vector, sorts those similarities from highest to lowest, and returns the top-N results.
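One way to write that down (just a sketch of the behaviour described above, not an official gensim definition): using cosine similarity cos(u, v) = (u . v) / (||u|| ||v||), and given a reference word w with vector v_w, a vocabulary V and a cut-off n, the result set is

N_n(w) = argmax over { S subset of V \ {w} with |S| = n } of sum_{u in S} cos(v_w, v_u)

i.e. the n words (excluding w itself) whose vectors have the largest cosine similarity to v_w. When several positive and negative examples are supplied, v_w is replaced by a combined target vector, roughly the normalized mean of the positive vectors minus the mean of the negative ones.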
I need to do some polygon computations in the 2D plane, typically an isInside operation.
I found the Boost.Polygon API, but my points are stored in a single big array.
That's what I call indexed geometry.
See http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-9-vbo-indexing/
So my best option seems to be to use Boost.Polygon and give it my array plus the indices of the points to use.
The objective is to avoid copying my millions of points (because each is shared by at least two polygons).
I don't know if the API allows this (or whether I need to write my own derived class :-( ).
Maybe someone knows another API (inside Boost or elsewhere).
Thanks
Documentation: the within demo: https://www.boost.org/doc/libs/1_68_0/libs/geometry/doc/html/geometry/reference/algorithms/within/within_2.html
Boost Geometry allows for adapted user-defined data types.
Specifically, C arrays are adapted here: https://www.boost.org/doc/libs/1_68_0/boost/geometry/geometries/adapted/c_array.hpp
I have another answer up where I show how to use Boost Geometry algorithms on a direct C array of structs (in that case I type punned using tuple as the point type): How to calculate the convex hull with boost from arrays instead of setting each point separately? (the other answers show alternatives that may be easier if you can afford to copy some data).
The relevant algorithms would be:
https://www.boost.org/doc/libs/1_68_0/libs/geometry/doc/html/geometry/reference/algorithms/within.html
https://www.boost.org/doc/libs/1_68_0/libs/geometry/doc/html/geometry/reference/algorithms/disjoint.html
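To make the call concrete, here is a minimal sketch using Boost.Geometry's built-in point and polygon models (essentially the within demo linked in the question); adapting your indexed array as a user-defined point/ring type, as described above, would replace these models without changing the algorithm call:

#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>

namespace bg = boost::geometry;

int main() {
    using point   = bg::model::d2::point_xy<double>;
    using polygon = bg::model::polygon<point>;

    // A simple square; in your case the ring would be backed by your shared
    // vertex array through an adapted (user-defined) type instead.
    polygon poly;
    bg::read_wkt("POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))", poly);

    point p(4, 4);
    bool inside = bg::within(p, poly);  // the isInside operation
    return inside ? 0 : 1;
}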
I'm new to C++ and I think a good way for me to jump in is to build some basic models that I've built in other languages. I want to start with just Linear Regression solved using first order methods. So here's how I want things to be organized (in pseudocode).
class LinearRegression
    LinearRegression:
        tol = <a supplied tolerance or defaulted to 1e-5>
        max_iter = <a supplied max iter or defaulted to 1k>
    fit(X, y):
        // model learns weights specific to this data set
    _gradient(X, y):
        // compute the gradient
    score(X, y):
        // model uses weights learned from fit to compute accuracy of
        // y_predicted to actual y
My question: when I use the fit, score and _gradient methods, I don't actually need to copy the arrays (X and y) or store them anywhere, so I want to pass a reference or a pointer to those structures around instead. My problem is that if a method accepts a pointer to a 2D array, I need to supply the second dimension's size ahead of time or use templates. If I use templates, I now have something like this for every method that accepts a 2D array:
template <std::size_t rows, std::size_t cols>
void fit(double (&X)[rows][cols], double (&y)[rows]) { ... }
It seems like there is probably a better way. I want my regression class to work with input of any size. How is this done in industry? I know that in some situations the array is just flattened into row-major or column-major order and a pointer to the first element is passed, but I don't have enough experience to know what people actually use in C++.
You raised quite a few points in your question, so here are some points addressing them:
Contemporary C++ discourages working directly with heap-allocated data that you need to manually allocate or deallocate. You can use, e.g., std::vector<double> to represent vectors, and std::vector<std::vector<double>> to represent matrices. Even better would be to use a matrix class, preferably one that is already in mainstream use.
Once you use such a class, you can easily get the dimension at runtime. With std::vector, for example, you can use the size() method. Other classes have other methods. Check the documentation for the one you choose.
You probably really don't want to use templates for the dimensions.
a. If you do so, you will need to recompile each time you get a different input. Your code will be duplicated (by the compiler) to the number of different dimensions you simultaneously use. Lots of bad stuff, with little gain (in this case). There's no real drawback to getting the dimension at runtime from the class.
b. Templates (in your setting) are fitting for the type of the matrix (e.g., is it a matrix of doubles or floats), or possibly the number of dimensions (e.g., for specifying tensors).
Your regressor doesn't need to store the matrix and/or vector. Pass them by const reference. Your interface looks like that of sklearn. If you like, check the source code there. The result of calling fit just causes the class object to store the parameter corresponding to the prediction vector β. It doesn't copy or store the input matrix and/or vector.
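To illustrate, here is a minimal sketch of what such an interface might look like using std::vector and const references (names follow the pseudocode in the question; the gradient-descent internals are elided):

#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<double>>;
using Vector = std::vector<double>;

class LinearRegression {
public:
    explicit LinearRegression(double tol = 1e-5, std::size_t max_iter = 1000)
        : tol_(tol), max_iter_(max_iter) {}

    // X and y are passed by const reference: nothing is copied or stored;
    // only the learned weights live in the object.
    void fit(const Matrix& X, const Vector& y) {
        weights_.assign(X.empty() ? 0 : X.front().size(), 0.0);
        // ... gradient-descent loop using _gradient(X, y), tol_ and max_iter_ ...
    }

    double score(const Matrix& X, const Vector& y) const {
        // ... compare predictions made with weights_ against y (e.g. R^2) ...
        return 0.0;
    }

private:
    Vector _gradient(const Matrix& X, const Vector& y) const {
        return Vector(weights_.size(), 0.0);  // placeholder
    }

    double tol_;
    std::size_t max_iter_;
    Vector weights_;
};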
I'd like to learn how best to set up an SVM in OpenCV (or another C++ library) for my particular problem (or whether there is in fact a more appropriate algorithm).
My goal is to receive a weighting of how well an input set of labeled points on a 2D plane compares or fits with a set of ‘ideal’ sets of labeled 2D points.
I hope my illustrations make this clear: the first three boxes, labeled A through C, indicate different ideal placements of 3 points; in my illustrations the labelling is managed by colour:
The second graphic gives examples of possible inputs:
If I then pass for instance example input set 1 to the algorithm it will compare that input set with each ideal set, illustrated here:
I would suggest that most observers would agree that the example input 1 is most similar to ideal set A, then B, then C.
My problem is to get not only this ordering out of an algorithm, but ideally also a weighting of how much the input is like A relative to B and C.
For the example given it might be something like:
A:60%, B:30%, C:10%
Example input 3 might yield something such as:
A:33%, B:32%, C:35% (i.e. different order, and a less 'determined' result)
My end goal is to interpolate between the ideal settings using these weights.
To get the ordering, I'm guessing the 'cost' involved in fitting the input to each set may simply be compared anyway (?) ... if so, could this cost be used to find the weighting? Or maybe it is non-linear and some kind of transformation needs to happen? (But still, obviously, relative comparisons were fine for determining the order.)
Am I on track?
Direct question>> Is the OpenCV SVM appropriate? Or, more specifically:
A series of separate binary SVM classifiers, one for each ideal state, and then a final ordering somehow? (i.e. what is the metric?)
A version of an SVM such as multiclass, structured and so on from another library? (...that I still find hard to conceptually grasp as the examples seem so unrelated)
Another critical component I'm not fully grasping yet is how to define what determines a good fit between any example input set and an ideal set. I was thinking Euclidean distance: do I simply sum the distances? What about outliers? My vector calc needs a brush-up, but maybe dot products could nose in there somewhere?
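For concreteness, the naive cost I have in mind is something like the following sketch (points matched by label, a plain sum of Euclidean distances, no outlier handling):

#include <cmath>
#include <vector>

struct LabeledPoint { int label; double x, y; };

// Naive fit cost between an input set and one ideal set: assumes both sets
// contain the same labels, matches points by label, and sums the distances.
double fit_cost(const std::vector<LabeledPoint>& input,
                const std::vector<LabeledPoint>& ideal) {
    double cost = 0.0;
    for (const auto& p : input)
        for (const auto& q : ideal)
            if (p.label == q.label)
                cost += std::hypot(p.x - q.x, p.y - q.y);
    return cost;
}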
Direct question>> How best to define a metric that describes a fit in this case?
The real case would have 10~20 points per set and, time permitting, as many 'ideal' sets of points as possible; let's go with 30 for now. Could I expect to get away with ~2 ms per iteration on a reasonable machine (a MacBook Pro), or does this kind of thing blow up?
(disclaimer, I have asked this question more generally on Cross Validated, but there isn't much activity there (?))
I am trying to fit a curve to a number of pixels in an image so I can do further processing regarding its shape. Does anyone know how to implement a least-squares method in C/C++, preferably using the following parameters: an x array, a y array, and an answers array (the length of the answers array should tell how many coefficients need to be calculated)?
If this is not some exercise in implementing it yourself, I would suggest you use a ready-made library like the GNU GSL. Have a look at the functions whose names start with gsl_multifit_; see e.g. the second example here.
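For example, a minimal sketch of a polynomial fit with gsl_multifit_linear (illustrative only: the design matrix assumes you want coefficients of 1, x, x^2, ..., and error checking is omitted; link with -lgsl -lgslcblas):

#include <gsl/gsl_matrix.h>
#include <gsl/gsl_multifit.h>
#include <gsl/gsl_vector.h>

// Fit y ~ c[0] + c[1]*x + ... + c[p-1]*x^(p-1) to n data points.
void polyfit_gsl(const double* x, const double* y, size_t n, size_t p, double* c_out) {
    gsl_matrix* X   = gsl_matrix_alloc(n, p);   // Vandermonde design matrix
    gsl_vector* Y   = gsl_vector_alloc(n);
    gsl_vector* c   = gsl_vector_alloc(p);
    gsl_matrix* cov = gsl_matrix_alloc(p, p);

    for (size_t i = 0; i < n; ++i) {
        double xij = 1.0;
        for (size_t j = 0; j < p; ++j) {
            gsl_matrix_set(X, i, j, xij);       // X[i][j] = x_i^j
            xij *= x[i];
        }
        gsl_vector_set(Y, i, y[i]);
    }

    double chisq;
    gsl_multifit_linear_workspace* work = gsl_multifit_linear_alloc(n, p);
    gsl_multifit_linear(X, Y, c, cov, &chisq, work);
    gsl_multifit_linear_free(work);

    for (size_t j = 0; j < p; ++j) c_out[j] = gsl_vector_get(c, j);

    gsl_matrix_free(X); gsl_vector_free(Y); gsl_vector_free(c); gsl_matrix_free(cov);
}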
If you are trying to fit ordered points (x, y), as in a graph, you can use linear least-squares methods, but with such methods you will always need to specify the degree of the polynomial you approximate with (the length of your answers array, presumably). If your points are general ordered points in the plane that can form a closed loop or some outline of a structure (for example, points that describe an ellipse, a circle, or some other closed or more complex geometry), then you are going to need something more sophisticated. You can still use least squares, but you will need a parametric curve such as a spline. Take a look at the PDF at this link, which may give you what you need (or at the very least illustrate what I am saying): http://folk.uio.no/in329/nchap6.pdf
Without seeing an image of exactly what you are trying to fit it is hard to say. It is quite possible that your data can be fit in a non-parametric way with linear least-squares polynomials; if so, all you will need is a linear algebra library, and you can code the approximation yourself as described here: http://en.wikipedia.org/wiki/Ordinary_least_squares
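A self-contained sketch of that do-it-yourself route, fitting a polynomial whose degree is set by the length of the coefficient ("answers") array, as asked: it builds the normal equations A^T A c = A^T y for the Vandermonde matrix A and solves them by Gauss-Jordan elimination (fine for low degrees; for higher degrees a QR-based solver is numerically preferable).

#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Ordinary least-squares polynomial fit. coeffs.size() determines the number
// of coefficients (degree + 1). Returns false if the system is near-singular.
bool polyfit(const std::vector<double>& x, const std::vector<double>& y,
             std::vector<double>& coeffs) {
    const std::size_t n = x.size(), p = coeffs.size();
    // Accumulate the augmented normal-equation matrix [A^T A | A^T y].
    std::vector<std::vector<double>> M(p, std::vector<double>(p + 1, 0.0));
    for (std::size_t i = 0; i < n; ++i) {
        std::vector<double> pow_x(p, 1.0);
        for (std::size_t j = 1; j < p; ++j) pow_x[j] = pow_x[j - 1] * x[i];
        for (std::size_t r = 0; r < p; ++r) {
            for (std::size_t c = 0; c < p; ++c) M[r][c] += pow_x[r] * pow_x[c];
            M[r][p] += pow_x[r] * y[i];
        }
    }
    // Gauss-Jordan elimination with partial pivoting.
    for (std::size_t k = 0; k < p; ++k) {
        std::size_t piv = k;
        for (std::size_t r = k + 1; r < p; ++r)
            if (std::fabs(M[r][k]) > std::fabs(M[piv][k])) piv = r;
        if (std::fabs(M[piv][k]) < 1e-12) return false;
        std::swap(M[k], M[piv]);
        for (std::size_t r = 0; r < p; ++r) {
            if (r == k) continue;
            const double f = M[r][k] / M[k][k];
            for (std::size_t c = k; c <= p; ++c) M[r][c] -= f * M[k][c];
        }
    }
    for (std::size_t k = 0; k < p; ++k) coeffs[k] = M[k][p] / M[k][k];
    return true;
}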
Even so, all forms of approximation require you to decide on your form (function basis, degree, etc.) before you fit it. For example, if you want to decide whether you need a 4th, 5th, 6th, or 7th degree polynomial to fit your data, you would need to fit each one and assess its suitability yourself. There is no generic way (at least none that I know of) to tell you the degree of approximation you need for your data.