I want to learn how to do high-precision calculations of a (Matlab) function in C++ via MEX files – how to define new types if required, and what requirements, installations, or build steps I need. I am fairly new (again) to C++; I am on Windows 7 with g++ / Microsoft SDK 7.1 as my C++ compiler, and I also have Visual Studio 2013.
In detail: in my Matlab code I want to run a function f, which takes several parameters and arrays as arguments and returns a scalar – in C++ via MEX files for speed, and with high precision for accuracy, that is, more than double: up to 50 significant decimal digits is enough for now (later maybe up to 80).
The function f finds the root of another function g, where g finds the roots of a fourth-order polynomial (which are known to be real), does some basic computations with them (exp, power, multiplication, division), and is known to be numerically unstable unless used with high precision.
I managed to run it through MEX files in C++ with "standard" double precision. Now, which library or tool could I use for high precision in C++ – or is it possible without any further tool – and how do I define a new type and use it in the MEX file?
I tried GMP, but from the manual I do not see how to install it on Windows. Can you show me how (ideally with step-by-step instructions)?
Related
I converted some Matlab code to C++. Some lines of the code are about 250,000 characters long. In addition, they involve numbers with very long mantissas, such as "2.209647215146515615616515615615103202897891316e-258", and the precision is important to me (I know the number is very near to zero, but I can't replace it with zero).
This code runs perfectly in Matlab (fast and exact), but in C++ there are problems:
First, the build takes far too long.
Second, after spending a long time on the build, it runs very, very slowly!
I'm using Visual Studio 2015, and when I write this code in it, the IDE stops working because of the huge line sizes and preprocessing tasks, and I have to restart it.
Is there any way to work with long lines of code and very big numbers in C++ and the Visual Studio IDE?
You might want to try GMP from gmplib.org
GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating-point numbers. There is no practical limit to the precision except the ones implied by the available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a regular interface.
Your question is very broad, though, and since you're using Visual Studio it might be a nightmare to compile this with your existing library. I suggest you move over to Linux and work there for this kind of "scientific computation".
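To give a flavour of the API, here is a minimal sketch using GMP's mpf floating-point type (the precision and the arithmetic are just for illustration; compile with something like g++ gmp_demo.cpp -lgmp):

    #include <cstdio>
    #include <gmp.h>

    int main()
    {
        // ~50 significant decimal digits need about 170 bits; use 256 for headroom
        mpf_set_default_prec(256);

        mpf_t a, b, c;
        // the tiny constant from the question, read in from a string
        mpf_init_set_str(a, "2.209647215146515615616515615615103202897891316e-258", 10);
        mpf_init_set_d(b, 2.0);
        mpf_init(c);

        mpf_mul(c, a, b);           // c = a * b at full precision
        gmp_printf("%.50Fe\n", c);  // print ~50 significant digits

        mpf_clear(a);
        mpf_clear(b);
        mpf_clear(c);
        return 0;
    }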
I am trying to use the FFTW library in a Visual Studio project, and I am having trouble getting the floating-point precision to work. I have created the library files and linked them as in this post, and I have only included the libfftw3f-3.lib library as it says in their documentation. Everything else is fine except for the fftw_plan_dft_r2c_2d() function: it will not accept my array of type float* for parameter 3, because it is incompatible with the expected argument of type double*.
I understand it is just precision, but I would prefer to use single-precision floats because FFTW does support them, and because of the way I am memcpy()'ing data from a float array. I haven't seen anything specific on this in their Windows section, unlike in their Unix section, besides linking to the libfftw3f-3.lib library. Can anyone give me clues on how to make this happen?
For single precision routines in FFTW you need to use routines with an fftwf_ prefix rather than fftw_, so your call to fftw_plan_dft_r2c_2d() should actually be fftwf_plan_dft_r2c_2d().
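A minimal sketch of the single-precision API (the transform size is arbitrary; link against libfftw3f-3.lib as you already do):

    #include <fftw3.h>

    int main()
    {
        const int n0 = 64, n1 = 64;

        // single-precision buffers; note the fftwf_ prefix on every call
        float *in = (float *) fftwf_malloc(sizeof(float) * n0 * n1);
        fftwf_complex *out =
            (fftwf_complex *) fftwf_malloc(sizeof(fftwf_complex) * n0 * (n1 / 2 + 1));

        fftwf_plan p = fftwf_plan_dft_r2c_2d(n0, n1, in, out, FFTW_ESTIMATE);

        // ... memcpy() your float data into `in` here ...

        fftwf_execute(p);

        fftwf_destroy_plan(p);
        fftwf_free(in);
        fftwf_free(out);
        return 0;
    }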
I use C++ and odeint to solve a set of differential equations. I compile the code in Matlab using mex and g++ on Mac OS X. For some time everything worked perfectly, but now something curious is happening:
I can run the same simulation (with the same parameters) twice, and in one run the results are fine while in the other a whole column (or multiple columns) of the output is NaN (this also influences the other outputs, hinting at a problem occurring during the integration process).
I tried switching between various solvers. Now I am using runge_kutta_fehlberg78 with a fixed step size. (One possibility for NaN output is that the initial step size of an adaptive solver is too large.)
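For reference, this is roughly how I drive the stepper (a minimal sketch; state_type and rhs stand in for my actual system):

    #include <boost/numeric/odeint.hpp>
    #include <vector>

    typedef std::vector<double> state_type;

    // stand-in for the actual right-hand side of my ODE system
    void rhs(const state_type &x, state_type &dxdt, double /*t*/)
    {
        dxdt[0] = -x[0];
    }

    int main()
    {
        using namespace boost::numeric::odeint;

        state_type x(1, 1.0);
        runge_kutta_fehlberg78<state_type> stepper;
        // fixed step size dt = 0.01, so no adaptive step-size control is involved
        integrate_const(stepper, rhs, x, 0.0, 10.0, 0.01);
        return 0;
    }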
What can be the cause of this? Especially the randomness makes me wonder.
Edit: I am starting to suspect that the problem has something to do with Matlab. I compiled a version without the Matlab interface with Xcode as a normal executable, and so far I haven't had an issue with NaN results. It's hard to tell whether the problem is truly resolved, and I don't understand why that would resolve it.
I am writing a simulation of some differential equation. My idea was the following:
1. Write the core simulation (moving forward in time, takes a lot of time) in C++
2. Do initialisation and the analysis of the results with a program like Matlab/Scilab
The reason for (1) is that C++ is faster if implemented correctly.
The reason for (2) is that for me it is easier to do the analysis, like plotting etc., with a program like Matlab.
Is it possible to do it like this, and how do I call C++ from Matlab?
Or do you have some suggestions to do it in a different way?
You could certainly do as you suggest. But I suggest instead that you start by developing your entire solution in Matlab and only then, if its performance is genuinely holding your work back, consider translating key elements into C++. This will optimise the use of your time, possibly at the cost of your computer's time. But a computer is a modern donkey without a humane society to intervene when you flog it to death.
As you suggest, well-written C++ can be expected to be faster than interpreted Matlab. But ask yourself: what is Matlab written in? For much of its computationally intensive core functionality, Matlab calls libraries written in C++ (or whatever). Your task would be not to write code faster than interpreted Matlab, but faster than C++ (or whatever) written by specialists urged on by a huge market of installed software.
Yes, Matlab has a C/C++ API.
This API lets you:
Write C++ functions which can be invoked from Matlab
Read/Write data from a .mat file
Invoke the Matlab engine from C++
I am working on something similar to what you are trying to do; my approach is (a minimal sketch follows the list):
Import the input data from a .mat file into C++
Run the simulation
Export the results back in a .mat file
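Here is what that workflow might look like with the MAT-File C API (a sketch; the file and variable names input.mat, output.mat, data and result are made up, and you link against MATLAB's libmat and libmx):

    #include "mat.h"

    int main()
    {
        // read the input variable from a .mat file
        MATFile *in = matOpen("input.mat", "r");
        if (!in) return 1;
        mxArray *data = matGetVariable(in, "data");
        matClose(in);

        // ... run the simulation on mxGetPr(data) ...

        // write the results back to another .mat file
        MATFile *out = matOpen("output.mat", "w");
        matPutVariable(out, "result", data);
        matClose(out);

        mxDestroyArray(data);
        return 0;
    }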
The Matlab API is in C, and I suggest you write a C++ wrapper for your convenience.
In order to work with Matlab mxArray objects, I suggest taking a look at the boost::multi_array library.
In particular you can initialize an object of type multi_array_ref from a Matlab mxArray like this:
boost::multi_array_ref<double,2> vec( mxGetPr(p), boost::extents[10][10], boost::fortran_storage_order() );
This approach made the code much more readable.
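Put into a MEX gateway, the idea looks roughly like this (a sketch; the fixed 10x10 size and the trace computation are only for illustration, and fortran_storage_order() is needed because MATLAB stores matrices column-major):

    #include "mex.h"
    #include <boost/multi_array.hpp>

    void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
    {
        if (nrhs != 1 || !mxIsDouble(prhs[0]))
            mexErrMsgTxt("Expected one 10x10 double matrix.");

        // vec[i][j] aliases the MATLAB element (i+1, j+1) without copying
        boost::multi_array_ref<double, 2> vec(
            mxGetPr(prhs[0]), boost::extents[10][10],
            boost::fortran_storage_order());

        // e.g. sum the diagonal
        double trace = 0.0;
        for (int i = 0; i < 10; ++i)
            trace += vec[i][i];

        plhs[0] = mxCreateDoubleScalar(trace);
    }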
You can call your own C, C++, or Fortran subroutines from the MATLAB command line as if they were built-in functions. These programs, called binary MEX-files, are dynamically-linked subroutines that the MATLAB interpreter loads and executes.
You should set up the compiler first; see "Setting up mex to use the Visual Studio 2010 compiler".
All about MEX-files here: http://www.mathworks.com/help/matlab/matlab_external/using-mex-files-to-call-c-c-and-fortran-programs.html.
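A minimal gateway looks like this (a hypothetical example; mysum is a made-up name):

    #include "mex.h"

    // y = mysum(x): returns the sum of a double array
    void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
    {
        if (nrhs != 1 || !mxIsDouble(prhs[0]))
            mexErrMsgTxt("Expected one double array.");

        const double *x = mxGetPr(prhs[0]);
        mwSize n = mxGetNumberOfElements(prhs[0]);

        double s = 0.0;
        for (mwSize i = 0; i < n; ++i)
            s += x[i];

        plhs[0] = mxCreateDoubleScalar(s);
    }

Compiled from the MATLAB prompt with mex mysum.cpp, it can then be called like a built-in function: y = mysum(x).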
I want to achieve exactly the same floating-point results in a gcc/Linux port of a Windows software. For that reason I want all double operations to be performed in 64-bit precision. This can be done using, for example, -mpc64, -msse2, or -ffloat-store (all with side effects). However, one thing I can't fix is transcendental functions like sin/asin etc. The docs say that they internally expect (and, I suppose, use) long double precision, and whatever I do they produce results different from their Windows counterparts.
How is it possible to make these functions calculate results using 64-bit floating-point precision?
UPDATE: I was wrong; it is printf("%.17f") that incorrectly rounds the correct double result. "print x" in gdb shows that the number itself is correct. I suppose I need a different question on this one... perhaps on how to make printf not treat the double internally as extended. Maybe using a stringstream will give the expected results... Yes, it does.
Different libm libraries use different algorithms for elementary functions, so you have to use the same library on both Windows and Linux to achieve exactly the same results. I would suggest compiling FDLibM and statically linking it with your software.
I found that it is printf("%.17f") that uses incorrect precision to print the results (probably going through extended precision internally); when I use stringstream << setprecision(17), the result is correct. So the answer is not really related to the question, but at least it works for me.
Still, I would be glad if someone provided a way to make printf produce the expected results.
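For illustration, the two formatting paths side by side (a minimal sketch; on many platforms both lines agree, and the mismatch I saw appears to be platform-specific):

    #include <cstdio>
    #include <iomanip>
    #include <iostream>
    #include <sstream>

    int main()
    {
        double x = 1.0 / 3.0;

        // the path that rounded incorrectly on my platform
        std::printf("%.17f\n", x);

        // formatting through a stringstream gave the expected 64-bit result
        std::ostringstream ss;
        ss << std::setprecision(17) << x;
        std::cout << ss.str() << '\n';
        return 0;
    }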
An excellent solution for the transcendental function problem is to use the GNU MPFR library. But be aware that Microsoft compilers do not support extended-precision floating point: with the Microsoft compiler, double and long double are both 53-bit precision, while with gcc, long double is 64-bit precision. To get matching results across Windows/Linux, you must either avoid use of long double or avoid use of Microsoft compilers.
For many Windows projects, the Windows port of gcc (mingw) works well. This lets the Windows project use 64-bit-precision long doubles. A problem with mingw's long double support is that mingw uses Microsoft libraries for calls such as printf; for that reason, printing a long double doesn't work correctly. A work-around for this problem is to use mpfr_printf.
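For example, a minimal MPFR sketch (the 64-bit precision and the sin(pi) computation are just for illustration; link with -lmpfr -lgmp):

    #include <mpfr.h>

    int main()
    {
        mpfr_t x;
        mpfr_init2(x, 64);            // 64-bit significand, like gcc's long double

        mpfr_const_pi(x, MPFR_RNDN);  // x = pi, correctly rounded
        mpfr_sin(x, x, MPFR_RNDN);    // correctly rounded, so identical on
                                      // Windows and Linux

        mpfr_printf("%.20Rf\n", x);   // mpfr_printf instead of printf
        mpfr_clear(x);
        return 0;
    }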