Using single precision floating-point with FFTW in Visual Studio - c++

I am trying to use the FFTW library in a Visual Studio project, and I am having trouble getting single-precision floating point to work. I have created the library files and linked them as in this post, and I have included only the libfftw3f-3.lib library as their documentation says. Everything else is fine except for the fftw_plan_dft_r2c_2d() function: it will not accept my array of type float* for parameter 3 because it is incompatible with the parameter's declared type double*. I understand this is just a precision mismatch, but I would prefer single precision because FFTW does support it, and because of the way I am memcpy()'ing data from a float array. Apart from the instruction to link against libfftw3f-3.lib, I haven't seen anything specific about this in their Windows section the way their Unix section covers it. Can anyone give me clues on how to make this happen?

For single precision in FFTW you need to use the routines with an fftwf_ prefix rather than fftw_, so your call to fftw_plan_dft_r2c_2d() should actually be fftwf_plan_dft_r2c_2d().
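For example, a minimal single-precision r2c plan might look like this (the dimensions and the FFTW_ESTIMATE planning flag are just placeholder choices):

```cpp
#include <fftw3.h>

int main() {
    const int n0 = 8, n1 = 8;  // example dimensions
    // fftwf_malloc returns memory aligned for FFTW's SIMD code paths
    float *in = (float*) fftwf_malloc(sizeof(float) * n0 * n1);
    fftwf_complex *out =
        (fftwf_complex*) fftwf_malloc(sizeof(fftwf_complex) * n0 * (n1 / 2 + 1));

    // Note the fftwf_ prefix throughout: plan type, planner, execute, cleanup
    fftwf_plan p = fftwf_plan_dft_r2c_2d(n0, n1, in, out, FFTW_ESTIMATE);

    // ... memcpy() your float data into 'in' here, then:
    fftwf_execute(p);

    fftwf_destroy_plan(p);
    fftwf_free(in);
    fftwf_free(out);
    return 0;
}
```

With libfftw3f-3.lib linked as you describe, this should compile without the double*/float* mismatch.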

Related

Matlab high precision calculation in c++ via mex files

I want to learn how to do high-precision calculations of a (Matlab) function in C++ via mex files: how to define new types if required, and what tools, installations, or build steps I need. I am pretty new (again) to C++; I am on Windows 7 with g++ / Microsoft SDK 7.1 as C++ compilers, plus Visual Studio 2013.
In detail: my Matlab code has a function f, which takes several parameters and arrays as arguments and returns a scalar. I want to run it in C++ via mex files for speed, and with high precision for accuracy, that is, more than double: up to 50 significant decimal digits (enough for now; later maybe up to 80).
The function f finds the root of another function g, where g finds the roots of a fourth-order polynomial (which are known to be real), does some basic computations (exp, power, mult., div.) with them, and is known to be numerically unstable unless run with high precision.
I managed to run it through mex files in C++ with "standard" precision. Now, which library or tool could I use to run it in C++ – or is it possible without any further tool – and how do I define a new type, and how in mex…
I tried GMP but cannot see from the manual how to install it on Windows. Can you show me how (step-by-step instructions would be best)?
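One possible route, sketched below under the assumption that Boost is available: Boost.Multiprecision's cpp_dec_float_50 is header-only, so it sidesteps the GMP-on-Windows build problem entirely. The arithmetic in the body is just a stand-in for the real g:

```cpp
// Hypothetical mex gateway computing with 50 significant decimal digits.
#include "mex.h"
#include <boost/multiprecision/cpp_dec_float.hpp>

typedef boost::multiprecision::cpp_dec_float_50 mp50;

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    if (nrhs < 1) mexErrMsgTxt("expected one scalar input");

    // Promote the incoming double to 50 digits, do all the unstable
    // arithmetic at high precision, and truncate back to double on return.
    mp50 x(mxGetScalar(prhs[0]));
    mp50 y = exp(x) / (mp50(1) + x * x);   // stand-in for the real function g

    plhs[0] = mxCreateDoubleScalar(static_cast<double>(y));
}
```

Since the inputs and outputs cross the mex boundary as doubles, only the intermediate computation gains the extra digits, which is usually what matters for this kind of instability.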

C++ compilers/platforms that don't use IEEE754 floating point

I'm working on updating a serialization library to add support for serializing floating point in a portable manner. Ideally I'd like to be able to test the code in an environment where IEEE754 isn't supported. Would it be sufficient to test using a soft-float library? Do you have any other suggestions for how I can properly test the code?
Most free toolchains you can find for ARM (embedded Linux) development do not support hard-float operations, only soft-float. You could try one of these (e.g. CodeSourcery), but you would need some kind of platform to run the compiled code (real hardware or QEMU).
Or if you would want to do the same but on x86 machine, take a look at: Using software floating point on x86 linux
Should your library work on a system where neither hardware floating point nor soft-float is available? If so, testing with a soft-float compiler won't be enough: your code may still not compile or work on such a system.
Personally, I would test the library on an ARM9 system with a gcc compiler without soft-float.
This is not an answer to your actual question, but a description of what you must do to solve the problem.
If you want to support "different" floating point formats, your code has to understand the internal format of floats [unless you only support "same architecture on both ends"]: pick the floating point number apart into your own format [which may of course be IEEE-754, but beware of denormals, 128-bit long doubles, NaN, INFINITY and other "exception values", and of course out-of-range numbers], and then put it back together in the format required by the other end. If you are not doing this, there is no point in hunting down a non-IEEE-754 system, because it won't work.
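A common way to do that picking-apart portably is frexp/ldexp, which uses only arithmetic and so is independent of the host's float format. A minimal sketch (the struct layout and the kind encoding are arbitrary choices for illustration):

```cpp
#include <cmath>
#include <cstdint>
#include <limits>

// Illustrative interchange format: sign + base-2 exponent + integer mantissa.
struct PortableDouble {
    uint8_t  sign;      // 0 or 1
    int32_t  exponent;  // base-2 exponent from frexp
    uint64_t mantissa;  // fraction scaled up to an integer
    uint8_t  kind;      // 0 = finite, 1 = infinity, 2 = NaN
};

PortableDouble pack(double v) {
    PortableDouble p = {};
    if (std::isnan(v)) { p.kind = 2; return p; }
    if (std::isinf(v)) { p.kind = 1; p.sign = v < 0; return p; }
    p.sign = std::signbit(v) ? 1 : 0;
    int exp = 0;
    double frac = std::frexp(std::fabs(v), &exp);  // v = frac * 2^exp, frac in [0.5, 1)
    p.exponent = exp;
    p.mantissa = (uint64_t)std::ldexp(frac, 62);   // exact for a 53-bit significand
    return p;
}

double unpack(const PortableDouble& p) {
    if (p.kind == 2) return std::numeric_limits<double>::quiet_NaN();
    if (p.kind == 1) return (p.sign ? -1.0 : 1.0) * std::numeric_limits<double>::infinity();
    double v = std::ldexp(std::ldexp((double)p.mantissa, -62), p.exponent);
    return p.sign ? -v : v;
}
```

Zeros and denormals fall out naturally here (frexp handles both), which is exactly the class of edge case the answer above warns about.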

gcc and sin/cos/transcendental functions precision like in Windows

I want to achieve exactly the same floating-point results in a gcc/Linux-ported version of a Windows program. For that reason I want all double operations to be performed with 64-bit precision. This can be done using, for example, -mpc64, -msse2, or -ffloat-store (all with side effects). However, one thing I can't fix is transcendental functions like sin/asin etc. The docs say that they internally expect (and, I suppose, use) long double precision, and whatever I do they produce results different from their Windows counterparts.
How is it possible to make these functions calculate their results using 64-bit floating-point precision?
UPDATE: I was wrong; it is printf("%.17f") that incorrectly rounds the correct double result. "print x" in gdb shows that the number itself is correct. I suppose I need a different question for this one... perhaps on how to make printf not treat the double internally as extended. Maybe using stringstream will give the expected results... Yes, it does.
Different libm libraries use different algorithms for elementary functions, so you have to use the same library on both Windows and Linux to achieve exactly the same results. I would suggest compiling FDLibM and statically linking it with your software.
I found that it is printf("%.17f") that uses incorrect precision to print the results (probably going through extended precision internally); when I use stringstream << setprecision(17) the result is correct. So the answer is not really related to the question, but at least it works for me.
But I would be glad if someone could show a way to make printf produce the expected results.
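For reference, a tiny side-by-side of the two formatting paths described above (whether %.17f misbehaves depends on the platform's printf; the stringstream path is what worked for the poster):

```cpp
#include <cstdio>
#include <iomanip>
#include <iostream>
#include <sstream>

int main() {
    double x = 0.1;

    std::printf("%.17f\n", x);          // the path the poster saw round incorrectly

    std::ostringstream ss;
    ss << std::setprecision(17) << x;   // formats the 64-bit double directly
    std::cout << ss.str() << "\n";
    return 0;
}
```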
An excellent solution for the transcendental function problem is to use the GNU MPFR Library. But be aware that Microsoft compilers do not support extended-precision floating point: with the Microsoft compiler, double and long double are both 53-bit precision, while with gcc, long double is 64-bit precision. To get matching results across Windows/Linux, you must either avoid use of long double or avoid use of Microsoft compilers. For many Windows projects, the Windows port of gcc (MinGW) works well; this lets the Windows project use 64-bit-precision long doubles. One problem with MinGW's long double support is that MinGW uses Microsoft libraries for calls such as printf, so printing a long double doesn't work correctly. A work-around for this problem is to use mpfr_printf.
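A minimal sketch of that combination: MPFR evaluating sin at an explicitly chosen precision, and mpfr_printf doing the printing (link with -lmpfr -lgmp; the 64-bit precision here mirrors the x87 long double significand):

```cpp
#include <mpfr.h>

int main() {
    mpfr_t x, s;
    mpfr_init2(x, 64);                // 64-bit significand, like gcc's long double
    mpfr_init2(s, 64);

    mpfr_set_d(x, 0.5, MPFR_RNDN);
    mpfr_sin(s, x, MPFR_RNDN);        // correctly rounded, identical on every platform

    mpfr_printf("sin(0.5) = %.17Rf\n", s);  // the R length modifier formats an mpfr_t

    mpfr_clear(x);
    mpfr_clear(s);
    return 0;
}
```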

Integers greater than 4294967295 on 32-bit Windows

I'm trying to get to grips with C++ basics by building a simple arithmetic calculator application. Right now I'm trying to figure out how to make it capable of dealing with integers greater than 4294967295 on 32-bit Windows. I know that Windows' integrated Calculator is capable of this. What have I missed?
Note that this application should be compilable with both MSVC compiler and g++ (MinGW/GCC).
Thank you.
If you want to be compatible with both gcc and MSVC, use <stdint.h>. It's source-compatible with both.
You probably want uint64_t for this. It will get you up to 18,446,744,073,709,551,615.
There are also libraries that will get you integers as large as you have the memory to handle.
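A minimal sketch of the <stdint.h> route; iostream sidesteps the printf format-specifier differences between the two compilers:

```cpp
#include <stdint.h>
#include <iostream>

int main() {
    uint64_t big = 4294967296ULL * 10;  // well past the 32-bit limit
    std::cout << big << std::endl;      // prints 42949672960
    return 0;
}
```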
Use __int64 to get 64-bit int calculations in Visual C++; not sure whether GCC will like this, though.
You could create a header file that typedefs (say) MyInt64 to the appropriate thing for each compiler. Then you can work internally with MyInt64, and the compiled code will be correct for each target. This is a pretty standard way of supporting different target compilers on one source codebase.
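For instance, a minimal sketch of such a header (the MyInt64 names and the file name are placeholders):

```cpp
// myint64.h -- hypothetical wrapper header: one typedef per compiler
#ifndef MYINT64_H
#define MYINT64_H

#ifdef _MSC_VER
typedef __int64            MyInt64;    // Visual C++ builtin
typedef unsigned __int64   MyUInt64;
#else
typedef long long          MyInt64;    // g++ / MinGW
typedef unsigned long long MyUInt64;
#endif

#endif // MYINT64_H
```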
As far as I can tell, long long would work OK for both, but I have not used GCC, so YMMV; see here for GCC info and here for Visual C++.
You could also create a "Large Number" class that would basically store the value across multiple variables in one form or another.
There are different solutions. If 2^64 is big enough for you, you can use a 64-bit integer type (these are implementation dependent, so check your particular compiler). On the other hand, if you want to be able to handle numbers of any size, you will have to use or implement a BigInteger type that encapsulates them. The implementation is an interesting exercise: basically, store the value in a vector of a smaller type, operate on each subelement, and then merge and normalize the result.
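As a rough illustration of that vector-of-subelements idea, here is addition only, with little-endian base-2^32 limbs (all names made up):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Each uint32_t is one "digit" in base 2^32, least significant first.
typedef std::vector<uint32_t> BigUInt;

BigUInt add(const BigUInt& a, const BigUInt& b) {
    BigUInt r;
    uint64_t carry = 0;
    size_t n = std::max(a.size(), b.size());
    for (size_t i = 0; i < n; ++i) {
        uint64_t sum = carry;
        if (i < a.size()) sum += a[i];
        if (i < b.size()) sum += b[i];
        r.push_back((uint32_t)(sum & 0xFFFFFFFFu));  // low 32 bits become the limb
        carry = sum >> 32;                           // high bits ripple onward
    }
    if (carry) r.push_back((uint32_t)carry);         // normalize: append final carry
    return r;
}
```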

Alternatives for accurately representing Visual Basic Decimal variables in C++

I am currently working on a program that takes Visual Basic data in the form of a text file and then stores this data in C++. Some of the data from Visual Basic is of type Decimal, and C++ has no built-in type equivalent to Decimal. I don't want to use double because there is a possible loss of significant figures if the numbers are large enough.
One option is to write my own decimal class. I was wondering if there are any other alternatives for solving this problem before I attempt to do that.
Thanks for your help.
There's the decNumber library, a C library designed for working with decimal numbers without losing precision/accuracy.
Given that it's a C library, you should be able to wrap it in a C++ class easily, or just use the C functions directly.
It's an IBM-sponsored lib, available under an open-source license (ICU).
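A rough sketch of what calling the decNumber C API directly might look like (the precision, include style, and buffer sizing follow my reading of the decNumber docs; treat the details as assumptions):

```cpp
#define DECNUMDIGITS 34   // compiled-in coefficient size; covers VB Decimal's ~29 digits
extern "C" {
#include "decContext.h"
#include "decNumber.h"
}
#include <cstdio>

int main() {
    decContext ctx;
    decContextDefault(&ctx, DEC_INIT_BASE);
    ctx.digits = DECNUMDIGITS;

    decNumber a, b, sum;
    decNumberFromString(&a, "79228162514264337593543950.335", &ctx);
    decNumberFromString(&b, "0.005", &ctx);
    decNumberAdd(&sum, &a, &b, &ctx);   // exact decimal addition, no binary rounding

    char buf[DECNUMDIGITS + 14];        // sizing rule from the decNumber docs
    decNumberToString(&sum, buf);
    std::puts(buf);                     // 79228162514264337593543950.340
    return 0;
}
```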
Using a Decimal class is the best solution in my opinion. Before writing your own implementation, do a quick web search first: it seems that others have had the same problem before. The first Google result reveals a CodeProject solution; there may be many others...