FastMarching method from MATLAB alternative in OpenCV C++

I am currently porting image processing software from MATLAB to C++ with OpenCV.
I came upon the imsegfmm function in MATLAB (Link), which uses the internal MEX function fastMarchingMex and returns a distance map.
There seems to be no equivalent in OpenCV at this time, and implementations using OpenCV are not very common (in fact, nonexistent on the web).
My question is: do other functions exist that do the exact same job, even with higher computation time? (Computation time is not an issue in my case.)
EDIT: as pointed out in the comments, my question is not very clear.
Let's say that for me, performance is not important; I am only looking for a function that outputs the exact same results as the fast marching method.
I tested the distanceTransform function, which did not return the same results at all on my test sample.
There seems to be something I don't understand about the fast marching method that differs from other distance calculation methods.
My MATLAB code calls fastMarching with a binary image of zeros and ones and the indices of the non-zero points (as linear indices), and it returns a distance map.
Please also consider that this is the first time I have posted a question on Stack Overflow. I hope I have not been annoying, and that my question, if it gets an answer, will help other people find an alternative to this function.
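
As an illustration of what a substitute could look like, below is a rough C++/OpenCV sketch of a Dijkstra-style geodesic distance propagation from a set of seed pixels. This is only an approximation of the fast marching idea on an 8-connected grid and will not reproduce the output of MATLAB's fastMarchingMex exactly (fast marching solves the Eikonal equation with a more accurate update scheme); the function name and interface are made up for the example.

#include <opencv2/core.hpp>
#include <cmath>
#include <limits>
#include <queue>
#include <vector>

// Hypothetical helper: Dijkstra-style grid propagation from seed pixels.
// Approximates a geodesic distance map; NOT the same numerical scheme as
// MATLAB's fastMarchingMex, which solves the Eikonal equation.
cv::Mat approximateFastMarching(const cv::Mat& mask,               // CV_8U, non-zero = pixels to propagate into
                                const std::vector<cv::Point>& seeds)
{
    const float INF = std::numeric_limits<float>::max();
    cv::Mat dist(mask.size(), CV_32F, cv::Scalar(INF));

    using Node = std::pair<float, cv::Point>;                      // (distance, pixel)
    auto cmp = [](const Node& a, const Node& b) { return a.first > b.first; };
    std::priority_queue<Node, std::vector<Node>, decltype(cmp)> pq(cmp);

    for (const cv::Point& s : seeds) {
        dist.at<float>(s) = 0.f;
        pq.push({0.f, s});
    }

    const int dx[8] = {-1, 0, 1, -1, 1, -1, 0, 1};
    const int dy[8] = {-1, -1, -1, 0, 0, 1, 1, 1};

    while (!pq.empty()) {
        auto [d, p] = pq.top();
        pq.pop();
        if (d > dist.at<float>(p)) continue;                       // stale queue entry

        for (int k = 0; k < 8; ++k) {
            cv::Point q(p.x + dx[k], p.y + dy[k]);
            if (q.x < 0 || q.y < 0 || q.x >= mask.cols || q.y >= mask.rows) continue;
            if (mask.at<uchar>(q) == 0) continue;                  // blocked pixel
            float step = (dx[k] != 0 && dy[k] != 0) ? std::sqrt(2.f) : 1.f;
            if (d + step < dist.at<float>(q)) {
                dist.at<float>(q) = d + step;
                pq.push({d + step, q});
            }
        }
    }
    return dist;
}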

Related

equivalent of wavedec (matlab function) in opencv

I am trying to rewrite some MATLAB code in C++ and I am still blocked on this line:
[c, l]=wavedec(S,4,'Dmey');
Is there something like that in OpenCV?
If someone has an idea about it, please share it. Thanks in advance.
Maybe if you could integrate your code with Python, then PyWavelets might be an option.
I see that the function you're looking for is in there. That function is the Discrete Meyer (Dmey) wavelet. I'm not sure what you're planning to do with it; maybe you're processing some images or something, but Dmey is OK, just not very widely used. You might want to find some GitHub code and integrate it into whatever you're doing to see if it works first, and based on that you can also change the details of your currently posted function (you might find something more efficient).
1-D wavelet decomposition (wavedec)
In your code, c and l stand for coefficients and level. You're passing level four with a Dmey wavelet. If you had one-dimensional data, the following diagram is roughly how your decomposition would look:
There are usually two types of decomposition models used with wavelets. One is called packet decomposition, which is similar to a full binary tree from an architectural standpoint:
The other one, which is the one you're most likely using, is less computationally expensive because it does not decompose both branches of the tree; it only continues the decomposition in one branch. Maybe these images will shed some light:
[Figure: 1-D decomposition tree]
[Figure: 2-D decomposition tree]
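
To make the standard (non-packet) decomposition concrete, here is a rough C++ sketch of a multi-level 1-D decomposition in the spirit of wavedec(S, 4, 'Dmey'). It uses the trivial Haar filter pair purely for illustration, since the Discrete Meyer ('Dmey') filter bank is not built into OpenCV and its coefficients would have to be taken from elsewhere (for example PyWavelets); the struct and function names are made up.

#include <cstddef>
#include <vector>

// Result of an illustrative multi-level 1-D decomposition.
struct WaveDecResult {
    std::vector<std::vector<float>> details;   // detail coefficients, one vector per level
    std::vector<float> approximation;          // final (coarsest) approximation
};

WaveDecResult wavedecHaar(std::vector<float> signal, int levels)
{
    WaveDecResult out;
    for (int lev = 0; lev < levels && signal.size() >= 2; ++lev) {
        std::vector<float> approx, detail;
        // Odd-length tails are dropped for brevity in this sketch.
        for (std::size_t i = 0; i + 1 < signal.size(); i += 2) {
            // Haar analysis filters: lowpass = (x0+x1)/sqrt(2), highpass = (x0-x1)/sqrt(2)
            approx.push_back((signal[i] + signal[i + 1]) * 0.70710678f);
            detail.push_back((signal[i] - signal[i + 1]) * 0.70710678f);
        }
        out.details.push_back(detail);   // keep this level's detail coefficients
        signal = approx;                 // only the approximation branch is decomposed further
    }
    out.approximation = signal;
    return out;
}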
Notes:
If you have a working model in MATLAB, you might want to look at C/C++ code generation in MATLAB (MATLAB Coder), which can automatically convert MATLAB code to C++.
References:
Images are from Wikipedia or mathworks.com
Wiki
mathworks
Wavelet 2D

C++: find a minimum set of rectangles that cover a binary image

Task
I have a binary image, and I want to extract a relatively small number of rectangles that cover the non-zero area.
I don't really need the smallest set. Of course, finding it at least some of the time wouldn't hurt, but average speed on non-pathological cases is much more important to me.
If it helps, you can think of the binary image as a set of 1x1 rectangles. Such a set is trivial to find, and this solution basically already works for me; however, finding any smaller set would speed up other things considerably.
Yet another alternative (more or less equivalent to the above, at least from my point of view) is to have a set of arbitrary contours (including holes) as the input.
Related work
This question has been asked and answered many times; for example, the accepted answer here is great.
There is even a Javascript implementation here.
So the question is not about whether an algorithm exists, or which algorithm is optimal.
The question
Instead, the question is this:
Given that I'm using C++ (and OpenCV, by the way), is there an existing implementation that I could easily integrate into my code?
Even better, is there an implementation with a permissive license?
Ideal solution
Ideally, I could just do:
const cv::Mat binary_image = ...; // get input image from somewhere
const std::vector<cv::Rect> rects = dissect(binary_image);
Or:
const std::vector<cv::Rect> initial_rects = ...;
const std::vector<cv::Rect> rects = find_small_dissection(initial_rects);
Of course, a solution that doesn't use OpenCV is just fine too; it's no problem at all to convert back and forth. But since OpenCV already has types for images (cv::Mat) and rectangles (cv::Rect), it was convenient to use them above.
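
For what it's worth, here is a rough sketch of what a greedy dissect() could look like under the assumptions above (CV_8U input, non-zero pixels are the area to cover). It is not an existing library function, just an illustration: it grows each rectangle rightwards and then downwards and clears the covered pixels, so it makes no claim of minimality.

#include <opencv2/core.hpp>
#include <vector>

// Greedy dissection sketch: take the first remaining non-zero pixel, grow the
// widest run to the right, extend that run downward while every row stays
// fully non-zero, record the rectangle, and clear the covered area.
std::vector<cv::Rect> dissect(const cv::Mat& binary_image)   // CV_8U, non-zero = area to cover
{
    cv::Mat remaining = binary_image.clone();
    std::vector<cv::Rect> rects;

    for (int y = 0; y < remaining.rows; ++y) {
        for (int x = 0; x < remaining.cols; ++x) {
            if (remaining.at<uchar>(y, x) == 0) continue;

            // Grow horizontally.
            int w = 1;
            while (x + w < remaining.cols && remaining.at<uchar>(y, x + w) != 0) ++w;

            // Grow vertically while the whole row segment stays non-zero.
            int h = 1;
            bool rowOk = true;
            while (y + h < remaining.rows && rowOk) {
                for (int k = 0; k < w; ++k)
                    if (remaining.at<uchar>(y + h, x + k) == 0) { rowOk = false; break; }
                if (rowOk) ++h;
            }

            cv::Rect r(x, y, w, h);
            rects.push_back(r);
            remaining(r).setTo(0);   // mark the covered pixels as done
            x += w - 1;              // skip past this rectangle on the current row
        }
    }
    return rects;
}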
Motivation for this question
It's not that I can't take the pseudocode, or the Javascript code, and start porting it to C++ and OpenCV. However, it'll probably take quite a while before my implementation runs bug-free. I'd much rather use that same time testing, and possibly fixing, an existing open-source implementation. If none can be found, then maybe I'll need to write the code myself, however I might not be able to then open-source the result (because I'm developing a commercial product for my client).

Least Median of Squares robust regression C++

I have a set of data z(0), z(1), z(2), ..., z(n) that I am currently fitting with a two-variable polynomial of the form p(x,y) = a(1)*x^2 + a(2)*y^2 + a(3)*x*y + a(4). I have coordinates (x(i), y(i)), i = 1, ..., n, for which I impose p(x(i), y(i)) = z(i). In this way I get an overdetermined system that I can solve using Eigen's SVD. I am looking for a more sophisticated method that can take care of outliers, like Least Median of Squares robust regression (as described here), but I haven't found a C++ implementation for two variables. I looked in GSL, but it seems there is nothing for two-variable functions. The only other solution I can think of is using a TGraph2D in ROOT. Do you know any other solution? Numerical Recipes maybe? Since I am writing C++ code, I would prefer C or C++ implementations.
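
For context, here is a minimal sketch of the plain (non-robust) least-squares step described above, using Eigen's SVD; the function name fitQuadric and the interface are just illustrative. A robust LMedS/LTS wrapper would repeatedly run something like this on subsets of the data.

#include <Eigen/Dense>
#include <vector>

// Least-squares fit of p(x,y) = a1*x^2 + a2*y^2 + a3*x*y + a4 to samples z(i).
Eigen::Vector4d fitQuadric(const std::vector<double>& x,
                           const std::vector<double>& y,
                           const std::vector<double>& z)
{
    const int n = static_cast<int>(z.size());
    Eigen::MatrixXd A(n, 4);
    Eigen::VectorXd b(n);
    for (int i = 0; i < n; ++i) {
        A(i, 0) = x[i] * x[i];   // coefficient of a1
        A(i, 1) = y[i] * y[i];   // coefficient of a2
        A(i, 2) = x[i] * y[i];   // coefficient of a3
        A(i, 3) = 1.0;           // coefficient of a4
        b(i)    = z[i];
    }
    // Solve the overdetermined system A*a ≈ b in the least-squares sense.
    return A.bdcSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(b);
}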
Since no answer has been given yet, and I am still working on this problem, I will share my progress here.
The ROOT class TLinearFitter has a fit method that allows you to select robust fitting with Least Trimmed Squares regression (LTS):
https://root.cern.ch/root/html532/TLinearFitter.html
Another possible solution, perhaps more time-consuming but maybe more efficient in the long run, is to write my own function to be minimized and then use
https://projects.coin-or.org/Ipopt to minimize it. This approach involves a bigger "step", though: I don't know how to use the library and I haven't (yet?) found a good tutorial for understanding it.
Here, https://wis.kuleuven.be/stat/robust/software, there is a Fortran implementation of the LMedS algorithm called PROGRESS. So another possible solution could be to port this software to C/C++ and make a library out of it.
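
If porting PROGRESS turns out to be too much work, a crude random-sampling LMedS scheme can be sketched directly on top of Eigen. This is not the actual PROGRESS algorithm; the trial count, seed, and function name below are illustrative only.

#include <Eigen/Dense>
#include <algorithm>
#include <limits>
#include <random>
#include <vector>

// Rough LMedS sketch: fit the 4-parameter model to many random minimal subsets
// (4 points each) and keep the candidate with the smallest median squared residual.
Eigen::Vector4d lmedsFit(const Eigen::MatrixXd& A,   // n x 4 design matrix
                         const Eigen::VectorXd& b,   // n observations z(i)
                         int trials = 500)
{
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> pick(0, static_cast<int>(b.size()) - 1);

    Eigen::Vector4d best = Eigen::Vector4d::Zero();
    double bestMedian = std::numeric_limits<double>::max();

    for (int t = 0; t < trials; ++t) {
        // Draw a minimal subset of 4 rows and solve it exactly.
        Eigen::Matrix4d As;
        Eigen::Vector4d bs;
        for (int i = 0; i < 4; ++i) {
            int r = pick(rng);
            As.row(i) = A.row(r);
            bs(i) = b(r);
        }
        Eigen::Vector4d cand = As.fullPivLu().solve(bs);

        // Median of the squared residuals over all points.
        Eigen::ArrayXd res = (A * cand - b).array().square();
        std::vector<double> sq(res.data(), res.data() + res.size());
        std::nth_element(sq.begin(), sq.begin() + sq.size() / 2, sq.end());
        double med = sq[sq.size() / 2];

        if (med < bestMedian) { bestMedian = med; best = cand; }
    }
    return best;
}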

Support Vector Machine works in MATLAB, doesn't work in C++

I'm writing an application that uses an SVM to do classification on some images (specifically these). My Matlab implementation works really well. Using a SIFT bag-of-words approach, I'm able to get near 100% accuracy with a linear kernel.
I need to implement this in C++ for speed/portability reasons, and so I've tried using both libsvm and dlib. I've tried multiple SVM types (c_svm, nu_svm, one_class) and multiple kernels (linear, polynomial, rbf). The best I've been able to achieve is around 50% accuracy - even on the same samples that I've trained on. I've confirmed that my feature generators are working, because when I export my c++-generated features to Matlab and train on those, I'm able to get near-perfect results again.
Is there something magical about Matlab's SVM implementation? Are there any common pitfalls or areas that I might look into that would explain the behavior I'm seeing? I know this is a little vague, but part of the problem is that I don't know where to go. Please let me know in the comments if there is other info I can provide that would be helpful.
There is nothing magical about the Matlab version of the libraries, other than that it runs in Matlab, which makes it harder to shoot yourself in the foot.
A check list:
Are you normalizing your data, making all values lie between 0 and 1 (or between -1 and 1), either linearly or using the mean and the standard deviation? (A minimal scaling sketch follows after this check list.)
Are you searching for a good value of C (or C and gamma in the case of an RBF kernel), doing cross-validation or using a hold-out set?
Are you sure that you're handling NaN and all the other floating-point nastiness? Matlab is very good at hiding this from you; C++, not so much.
Could it be that you're loading your data incorrectly, reading a "%s" into a double or something else that is adding noise to your input data?
Could it be that libsvm/dlib expects the data in row-major order and you're sending it in column-major (or the other way around)? Again, Matlab makes this almost impossible; C++, not so much.
32/64-bit nastiness: one version of the library, executable compiled with the other?
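
Regarding the first item, a minimal per-feature scaling to [-1, 1] (the same kind of scaling libsvm's svm-scale performs) could look like the sketch below; the struct name is made up, and the key point is that the training-set min/max must be reused unchanged on the test data.

#include <algorithm>
#include <limits>
#include <vector>

// Illustrative per-feature scaling to [-1, 1]. fit() on the training set only,
// then transform() both training and test samples with the same min/max.
struct FeatureScaler {
    std::vector<double> lo, hi;

    void fit(const std::vector<std::vector<double>>& samples) {
        const std::size_t dims = samples.front().size();
        lo.assign(dims,  std::numeric_limits<double>::max());
        hi.assign(dims, -std::numeric_limits<double>::max());
        for (const auto& s : samples)
            for (std::size_t d = 0; d < dims; ++d) {
                lo[d] = std::min(lo[d], s[d]);
                hi[d] = std::max(hi[d], s[d]);
            }
    }

    std::vector<double> transform(const std::vector<double>& s) const {
        std::vector<double> out(s.size());
        for (std::size_t d = 0; d < s.size(); ++d)
            out[d] = (hi[d] > lo[d]) ? -1.0 + 2.0 * (s[d] - lo[d]) / (hi[d] - lo[d]) : 0.0;
        return out;
    }
};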
Some other things:
Could it be that in Matlab you're somehow leaking the class (y) into the preprocessing? No one does this on purpose, but I've seen it happen. If you make almost any f(y) a feature, you'll get almost 100% every time.
Sometimes it helps to verify that everything is numerically identical by printing to a file before training, both in C++ and Matlab.
I'm very happy with libsvm using the RBF kernel. carlosdc pointed out the most common errors in the correct order :-). For libsvm: did you use the Python tools shipped with libsvm? If not, I recommend doing so. Write your feature vectors to a file (from Matlab and/or C++) and do a meta-training for the RBF kernel with easy.py. You get the parameters and a prediction for the generated model. If this prediction is OK, continue with C++. From training you also get a scaled feature file (min/max transformed to -1.0/1.0 for every feature). Compare these to your C++ implementation as well.
Some libsvm issues: a nasty habit is (if I remember correctly) that values scaling to 0 (zero) are omitted in the scaled file. In grid.py there is a parameter "nr_local_worker" which defines the number of threads; you might wish to increase it.

CUDA convolutionFFT2D example - I can't understand it

I studied the Cooley-Tukey algorithm and I understood it. I followed everything in the CUDA convolutionFFT2D example up to these kernels:
spProcess2D calls spProcess2D_kernel, which in turn makes many calls to spPostprocessC2C, mulAndScale and spPreprocessC2C.
Here's the complete code:
http://nopaste.info/30c13e44fe.html (convolutionFFT2D.cu, here is the spProcess2D function)
http://nopaste.info/78d22afac2.html (convolutionFFT2D.cuh, here are the other functions)
I have already read all the NVIDIA SDK papers, but I still can't figure out what these functions do (they use twiddle factors, but nothing there looks like a Cooley-Tukey algorithm).
Please help me if you can, or at least point me to where I could solve my problem.
Update: I found this link: http://cnx.org/content/m16336/latest/#uid38
Maybe these functions are performing a breadth-first algorithm? I can't say for sure yet, but the shape seems the same.
It looks like the algorithm is doing something similar to the algorithm mentioned here. The preprocess step appears to re-order the real input of size N (after padding) into complex input of size N/2. The postprocess step re-orders the data to get back the FFT of the original input array.
spPostprocessC2C looks like a single FFT butterfly. The complexity in the calling routines just comes from fitting the FFT algorithm into a SIMT model for CUDA.
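
For reference, a single radix-2 decimation-in-time butterfly written as plain C++ looks like the snippet below. This is only the textbook operation that spPostprocessC2C appears to be built around, not the actual SDK kernel.

#include <complex>

// One radix-2 butterfly: combine two values using a twiddle factor W_N^k.
inline void butterfly(std::complex<float>& a,
                      std::complex<float>& b,
                      const std::complex<float>& twiddle)
{
    const std::complex<float> t = b * twiddle;  // multiply the "odd" input by the twiddle factor
    b = a - t;                                  // second output
    a = a + t;                                  // first output
}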
Perhaps if you explained what it is that you are trying to achieve (beyond just understanding how this particular FFT implementation works) then you might get some more specific answers.