I converted some MATLAB code to C++. Some lines of the code are about 250,000 characters long. In addition, they involve numbers with very long mantissas, such as "2.209647215146515615616515615615103202897891316e-258", and the precision is important to me (I know the number is very close to zero, but I can't replace it with zero).
This code runs perfectly in MATLAB (fast and exact), but in C++ there are problems:
First, the build takes too long.
Second, after the long build, it runs very, very slowly!
I'm using Visual Studio 2015, and when I write this code in it, it stops responding because of the huge line lengths and the preprocessing work, and I have to restart it.
Is there any way to work with long lines of code and very high-precision numbers in C++ and the Visual Studio IDE?
You might want to try GMP from gmplib.org
GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating-point numbers. There is no practical limit to the precision except the ones implied by the available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a regular interface.
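For illustration, here is a minimal sketch using GMP's C++ interface (gmpxx); the digit string and the 1000-bit precision are just example values, not anything from your code:

    #include <gmpxx.h>

    int main() {
        // Request ~1000 bits of mantissa (roughly 300 decimal digits).
        mpf_class x("2.209647215146515615616515615615103202897891316e-258", 1000);
        mpf_class y = x * x;                      // arithmetic keeps the requested precision
        gmp_printf("%.50Fe\n", y.get_mpf_t());    // %Fe prints an mpf_t in scientific form
    }

Compile and link with -lgmpxx -lgmp.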
Your question is very broad, though, and since you're using Visual Studio it might be a nightmare to compile this with your existing library. I suggest you move over to Linux and work there for "scientific computation".
I am unable to print a double value using wsprintf().
I tried sprintf() and it worked fine.
The syntax I used for wsprintf() and sprintf() is as follows:
wsprintf(str,TEXT("Square is %lf "),iSquare); // Does not show value
sprintf(str," square is %lf",iSquare); // works okay
Am I making a mistake in how I'm using wsprintf()?
wsprintf doesn't support floating point. The mistake is using it at all.
If you want something like sprintf, but for wide characters/strings, you want swprintf instead.
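For example, a minimal sketch of the swprintf version (note that, unlike sprintf, the standard swprintf takes the buffer size as its second argument):

    #include <cwchar>

    int main() {
        wchar_t str[64];
        double iSquare = 2.25;
        std::swprintf(str, 64, L"square is %lf", iSquare);  // wide-character analogue of snprintf
        std::wprintf(L"%ls\n", str);                        // prints: square is 2.250000
    }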
Actually, since you're using the TEXT macro, you probably want _stprintf instead: it shifts between narrow and wide implementations in sync with the same preprocessor macros TEXT uses to decide whether the string will be narrow or wide.

This whole approach, however, is largely a relic from the days when Microsoft still sold and supported versions of Windows based on both the 32-bit NT kernel and the 16-bit kernel. The 16-bit versions had only extremely minimal wide-character support, so Microsoft worked hard at allowing a single source code base to be compiled to use either narrow characters (targeting the 16-bit kernels) or wide characters (targeting the 32-bit kernels). The 16-bit kernels have been gone for long enough that almost nobody really has much reason to support them any more.
For what it's worth: wsprintf is almost entirely a historic relic. The w apparently stands for Windows. It was included as part of Windows way back when (back to the 16-bit days). It was written without support for floating point because at that time Windows didn't use any floating point internally; this is part of why it has routines like MulDiv built in, even though doing (roughly) the same with floating point is quite trivial.
The wsprintf() function does not support floating-point parameters; try using swprintf() instead if you're working with floating-point values.
More information about swprintf can be found in its documentation.
wsprintf does not support floating point. See its documentation: %lf is not listed as a valid format code.
The swprintf function, part of the standard library shipped with Visual Studio, is what you want. It supports all of the format codes that sprintf does.
Presumably you're not compiling for UNICODE, so TEXT is #defined to just a regular narrow string.
I want to learn how to do high-precision calculations of a (MATLAB) function in C++ via MEX files: how to define new types if required, and what requirements/installations/makes I need. I am pretty new (again) to C++ and have Windows 7 with g++ / Microsoft SDK 7.1 as C++ compilers, and also Visual Studio 2013.
In detail: in my MATLAB code I want to run a function f, which takes several parameters and arrays as arguments and returns a scalar, in C++ via MEX files for speed, and with high precision for accuracy; that is, more than double, so up to 50 significant decimal digits (enough for now, later maybe up to 80).
The function f finds the root of another function g, where g finds the roots of a fourth-order polynomial (which are known to be real), does some basic computations (exp, power, multiplication, division) with them, and is known to be numerically unstable if not used with high precision.
I managed to run it through MEX files in C++ with "standard" precision. Now, which library or tool could I use to run it in C++ with high precision (or is it possible without any further tool), and how do I define a new type and use it in MEX…
I tried GMP but do not see from the manual how to install it on Windows. Can you show me how (step-by-step instructions would be best)?
I want to achieve exactly the same floating-point results in a gcc/Linux port of a piece of Windows software. For that reason I want all double operations to be carried out in 64-bit precision. This can be done using, for example, -mpc64, -msse2, or -ffloat-store (all with side effects). However, one thing I can't fix is transcendental functions like sin/asin etc. The docs say that they internally expect (and, I suppose, use) long double precision, and whatever I do they produce results different from their Windows counterparts.
How is it possible to make these functions calculate their results using 64-bit floating-point precision?
UPDATE: I was wrong; it is printf("%.17f") that incorrectly rounds the correct double result. "print x" in gdb shows that the number itself is correct. I suppose I need a different question on this one... perhaps on how to make printf not treat the double internally as extended. Maybe using stringstream will give the expected results... Yes, it does.
Different libm implementations use different algorithms for the elementary functions, so you have to use the same library on both Windows and Linux to achieve exactly the same results. I would suggest compiling FDLIBM and statically linking it with your software.
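As a minimal sketch of how to verify that the two builds really match bit for bit (assuming the statically linked FDLIBM sin is the one that wins at link time on both platforms):

    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
        double r = std::sin(0.5);             // should resolve to the statically linked libm
        std::uint64_t bits;
        std::memcpy(&bits, &r, sizeof bits);  // exact bit pattern, immune to printf rounding
        std::printf("%016llx\n", (unsigned long long)bits);  // compare this output across platforms
    }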
I found that it is printf("%.17f") that uses incorrect precision to print results (it probably works at extended precision internally); when I use stringstream << setprecision(17), the result is correct. So the answer is not really related to the question, but at least it works for me.
I would still be glad if someone provided a way to make printf produce the expected results.
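For reference, a minimal sketch of the stringstream approach that gave the expected rounding:

    #include <iomanip>
    #include <iostream>
    #include <sstream>

    int main() {
        double x = 0.1;
        std::ostringstream oss;
        oss << std::setprecision(17) << x;   // formats from the 64-bit double value itself
        std::cout << oss.str() << '\n';      // prints 0.10000000000000001
    }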
An excellent solution for the transcendental-function problem is to use the GNU MPFR library.

But be aware that Microsoft compilers do not support extended-precision floating point: with the Microsoft compiler, double and long double are both 53-bit precision, while with gcc, long double is 64-bit precision. To get matching results across Windows/Linux, you must either avoid use of long double or avoid use of Microsoft compilers.

For many Windows projects, the Windows port of gcc (MinGW) works well. This lets the Windows project use 64-bit-precision long doubles. A problem with MinGW's long double support is that MinGW uses the Microsoft libraries for calls such as printf, so printing a long double doesn't work correctly. A work-around for this problem is to use mpfr_printf.
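For illustration, a minimal sketch with MPFR, computing sine at a fixed 64-bit mantissa so both platforms round identically (assuming libmpfr and libgmp are installed and linked; the input 0.5 is just an example):

    #include <mpfr.h>

    int main() {
        mpfr_t x, s;
        mpfr_init2(x, 64);              // 64-bit mantissa, like x87 extended precision
        mpfr_init2(s, 64);
        mpfr_set_d(x, 0.5, MPFR_RNDN);
        mpfr_sin(s, x, MPFR_RNDN);      // correctly rounded, identical on every platform
        mpfr_printf("%.20Rf\n", s);     // mpfr_printf prints mpfr_t values reliably
        mpfr_clears(x, s, (mpfr_ptr) 0);
    }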
I'm researching methods to port a large (>10M lines) amount of C++ code to 64-bit. I have looked at static code analyzers and compiler flags, and I am now looking at macros or other tools that can make common, repetitive changes.
I've written a few regular expressions to see how well they work in practice, and as predicted, they're quite effective. That said, it takes a while to build the expressions in the first place, so I'd like to see if there are any lists of such expressions or software tools that can perform changes automatically.
The following lines are prototypical examples of code to be matched and fixed. (To clarify, these lines are not meant to represent a single block of code, but instead are lines pulled from different places.)
int i = 0;
long objcount;
int count = channels.count(ch);
for (int k = 0; k < n; k++) { /*...*/ }
The objective is not to thoroughly port code to 64-bit, but instead to perform a first pass over the code to reduce the amount of code that needs to be manually inspected. It's okay for some needed changes to be missed, and it's probably okay for some wrong changes to be made, but those should be minimized.
Visual Studio is the IDE that will be used for conversion work, so something that works well with VS is a plus. Cost is not an issue.
Regexps suffer from a high false-positive rate; by definition, a "regular expression" cannot parse a context-free language such as C++. Furthermore, regexps cannot take type information into account; is
fooT i=0;
OK, for some typedef'd fooT? Finally, a regexp cannot change code; you might consider Perl or sed (using regexps to drive changes), but you'll get erroneous changes due to the false positives of regexps. At 10M SLOC that can't be fun; a 5% error rate means possibly 500,000 lines of code to fix by hand.
You might consider a program transformation tool. Such engines operate on language structures, not text, and the more sophisticated ones know scopes, types, and the meaning of symbols (e.g., what is fooT, exactly?). They let you write language- and context-specific patterns and propose structurally correct code changes, using the surface syntax of the target language. This enables the reliable application of code changes at scale.
Our DMS Software Reengineering Toolkit with its C++ Front End has been used to carry out massive changes to large C++ systems in a syntax- and type-accurate way. (See Akers, R., Baxter, I., Mehlich, M., Ellis, B., Luecke, K., "Case Study: Re-engineering C++ Component Models Via Automatic Program Transformation", Information & Software Technology 49(3):275-291, 2007.)
What version of the compiler are you using? Did you try running the compiler with the /Wp64 flag to detect 64-bit portability issues?
From the MS website:
"/Wp64 detects 64-bit portability problems on types that are also marked with the __w64 keyword. /Wp64 is off by default in the Visual C++ 32-bit compiler and on by default in the Visual C++ 64-bit compiler."
http://msdn.microsoft.com/en-us/library/yt4xw8fh%28v=vs.71%29.aspx
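As a sketch of how the two interact (the typedef name here is just an example; __w64 is MSVC-specific and was deprecated in later compiler versions):

    // Marks a type that is 32 bits wide on Win32 but 64 bits wide on Win64.
    typedef __w64 int MyIndex;

    void f(void* p)
    {
        MyIndex i = (MyIndex)p;   // with /Wp64 the 32-bit compiler warns about
        (void)i;                  // pointer truncation here
    }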
I am currently working on a program that takes Visual Basic data in the form of a text file and then stores this data in C++. Some of the data from Visual Basic is of the type Decimal. C++ has no built-in type equivalent to Decimal. I don't want to use double because there is a possible loss of significant figures if the numbers are large enough.
One option is to write my own decimal class. I was wondering if there are any other alternatives for solving this problem before I attempt that.
Thanks for your help.
There's the decNumber library. This is a C library designed for working with decimal numbers without losing precision/accuracy.
Given that it's a C library, you should be able to wrap it in a C++ class easily, or just use the C functions directly.
This is an IBM-sponsored library, and it's available under an open-source license (ICU).
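For example, a minimal sketch of calling it from C++ (assuming the decNumber sources are on the include path and compiled into the project; the 34-digit precision is just an example value):

    #define DECNUMDIGITS 34            // per-number precision; define before the include
    extern "C" {
    #include "decContext.h"
    #include "decNumber.h"
    }
    #include <cstdio>

    int main() {
        decContext ctx;
        decContextDefault(&ctx, DEC_INIT_BASE);   // base context; then disable traps
        ctx.traps = 0;
        ctx.digits = DECNUMDIGITS;

        decNumber a, b, sum;
        decNumberFromString(&a, "0.1", &ctx);
        decNumberFromString(&b, "0.2", &ctx);
        decNumberAdd(&sum, &a, &b, &ctx);         // exactly 0.3, unlike binary double

        char buf[DECNUMDIGITS + 14];              // decNumberToString needs digits+14 bytes
        decNumberToString(&sum, buf);
        std::printf("%s\n", buf);
    }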
Using a decimal class is the best solution in my opinion. As for writing your own implementation, try a short web search first: it seems that others have had the same problem before. The first Google result reveals a CodeProject solution, and there may be many others...