Convert very large decimal numbers to binary - fortran

For example, how would one convert an arbitrarily long string of decimal digits to binary? I think it's possible, once the length is known, by processing the digits from left to right, but I'm not able to find a way to do this. How can it be achieved?

Your best bet would be to use a library which already does what you want: GMP. It does not have Fortran bindings, but it is easy enough to call from Fortran using Fortran's ISO C binding features and possibly a wrapper function.
If you have gcc installed, you are already using GMP; you may just have to install the relevant development files.
Using gmp, you would set your integer from a string value using mpz_set_str and convert it to another base using mpz_get_str.
No need to reinvent that particular wheel.
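For illustration, here is a minimal sketch of the kind of C-callable wrapper the answer has in mind, which you would then bind to from Fortran with ISO_C_BINDING (the name dec_to_bin and the test value are just examples; assumes the GMP development headers are installed and you link with -lgmp):

#include <cstdio>
#include <cstdlib>
#include <gmp.h>

// Convert a decimal string to a newly allocated binary string.
// The extern "C" linkage is what lets Fortran bind to it via ISO_C_BINDING.
extern "C" char *dec_to_bin(const char *decimal)
{
    mpz_t n;
    mpz_init(n);
    if (mpz_set_str(n, decimal, 10) != 0) {   // parse the base-10 input
        mpz_clear(n);
        return nullptr;                       // not a valid decimal number
    }
    char *bin = mpz_get_str(nullptr, 2, n);   // GMP allocates the base-2 string
    mpz_clear(n);
    return bin;
}

int main()
{
    char *bin = dec_to_bin("123456789012345678901234567890");
    if (bin) {
        std::printf("%s\n", bin);
        std::free(bin);                       // fine with GMP's default allocator
    }
    return 0;
}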

Related

Not getting expected result from wsprintf [duplicate]

I am unable to print a double value using wsprintf().
I tried sprintf() and it worked fine.
The syntax used for wsprintf() and sprintf() is as follows:
wsprintf(str,TEXT("Square is %lf "),iSquare); // Does not show value
sprintf(str," square is %lf",iSquare); // works okay
Am I making any mistakes while using wsprintf()?
wsprintf doesn't support floating point. The mistake is using it at all.
If you want something like sprintf, but for wide characters/strings, you want swprintf instead.
Actually, since you're using the TEXT macro, you probably want _stprintf instead though: it'll shift from a narrow to wide implementation in sync with the same preprocessor macros as TEXT uses to decide whether the string will be narrow or wide.

This whole approach, however, is largely a relic from the days when Microsoft still sold and supported versions of Windows based on both the 32-bit NT kernel and on the 16-bit kernel. The 16-bit versions had only extremely minimal wide-character support, so Microsoft worked hard at allowing a single source code base to be compiled to use either narrow characters (targeting the 16-bit kernels) or wide characters (targeting the 32-bit kernels). The 16-bit kernels have been gone for long enough that almost nobody really has much reason to support them any more.
For what it's worth: wsprintf is almost entirely a historic relic. The w apparently stands for Windows. It was included as part of Windows way back when (back to the 16-bit days). It was written without support for floating point because at that time, Windows didn't use any floating point internally--this is part of why it has routines like MulDiv built-in, even though doing (roughly) the same with floating point is quite trivial.
The function wsprintf() does not support floating point parameters; try using swprintf() instead if you're working with floating point values.
More information about swprintf can be found in its documentation.
wsprintf does not support floating point. See its documentation: %lf is not listed as a valid format code.
The swprintf function, part of the Visual Studio standard library, is what you want. It supports all of the format codes that sprintf does.
Presumably you're not compiling for UNICODE, so TEXT is #defined to just a regular narrow string.
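For reference, a minimal sketch of the swprintf approach suggested above (standard C++; the buffer size and the value of iSquare are just examples):

#include <cwchar>

int main()
{
    double iSquare = 3.5 * 3.5;
    wchar_t str[64];

    // Standard swprintf takes the buffer size and does support %lf / %f for doubles.
    std::swprintf(str, sizeof(str) / sizeof(str[0]), L"Square is %lf", iSquare);

    std::wprintf(L"%ls\n", str);   // prints: Square is 12.250000
    return 0;
}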

gcc and sin/cos/transcendental functions precision like in Windows

I want to achieve exactly the same floating-point results in a gcc/Linux ported version of a Windows program. For that reason I want all double operations to use 64-bit precision. This can be done using, for example, -mpc64, -msse2 or -ffloat-store (all with side effects). However, one thing I can't fix is transcendental functions like sin/asin etc. The docs say that they internally expect (and, I suppose, use) long double precision, and whatever I do they produce results different from the Windows counterparts.
How is it possible to make these functions calculate their results using 64-bit floating-point precision?
UPDATE: I was wrong; it is printf("%.17f") that incorrectly rounds the correct double result, and "print x" in gdb shows that the number itself is correct. I suppose I need a different question on this one... perhaps on how to make printf not treat the double as extended internally. Maybe using stringstream will give the expected results... Yes, it does.
Different libm implementations use different algorithms for elementary functions, so you have to use the same library on both Windows and Linux to achieve exactly the same results. I would suggest compiling FDLibM and statically linking it with your software.
I found that it is printf("%.17f") that uses incorrect precision to print the results (probably treating them as extended internally); when I use stringstream << setprecision(17), the result is correct. So the answer is not really related to the question, but at least it works for me.
I would still be glad if someone provided a way to make printf produce the expected results.
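A minimal sketch of the stringstream work-around described above:

#include <iostream>
#include <sstream>
#include <iomanip>

int main()
{
    double x = 0.1 + 0.2;                   // any computed double

    std::ostringstream out;
    out << std::setprecision(17) << x;      // 17 significant digits round-trip a double
    std::cout << out.str() << '\n';         // e.g. 0.30000000000000004
    return 0;
}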
An excellent solution for the transcendental function problem is to use the GNU MPFR Library.

But be aware that Microsoft compilers do not support extended precision floating point. With the Microsoft compiler, double and long double are both 53-bit precision. With gcc, long double is 64-bit precision. To get matching results across Windows/Linux, you must either avoid use of long double or avoid use of Microsoft compilers.

For many Windows projects, the Windows port of gcc (MinGW) works well. This lets the Windows project use 64-bit precision long doubles. A problem with MinGW long double support is that MinGW uses Microsoft libraries for calls such as printf. For that reason, printing a long double doesn't work correctly. A work-around for this problem is to use mpfr_printf.
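As an illustration of the MPFR suggestion, a minimal sketch (assumes MPFR and GMP are installed and you link with -lmpfr -lgmp; the precision and input value are just examples):

#include <cstdio>
#include <mpfr.h>

int main()
{
    mpfr_t x, s;
    mpfr_init2(x, 53);                       // 53 bits matches IEEE double precision
    mpfr_init2(s, 53);

    mpfr_set_d(x, 0.5, MPFR_RNDN);
    mpfr_sin(s, x, MPFR_RNDN);               // correctly rounded, compiler-independent

    mpfr_printf("sin(0.5) = %.17Rf\n", s);   // MPFR's own formatting, not the CRT's

    mpfr_clear(x);
    mpfr_clear(s);
    return 0;
}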

Integers greater than 4294967295 on 32-bit Windows

I'm trying to get to grips with C++ basics by building a simple arithmetic calculator application. Right now I'm trying to figure out how to make it capable of dealing with integers greater than 4294967295 on 32-bit Windows. I know that Windows' integrated Calculator is capable of this. What have I missed?
Note that this application should be compilable with both MSVC compiler and g++ (MinGW/GCC).
Thank you.
If you want to be both gcc- and MSVC-compatible, use <stdint.h>. It's source compatible with both.
You probably want uint64_t for this. It will get you up to 18,446,744,073,709,551,615.
There are also libraries that give you integers as large as you have memory to handle.
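A minimal sketch using <cstdint>, which compiles with both current MSVC and MinGW/g++:

#include <cstdint>
#include <cinttypes>
#include <cstdio>

int main()
{
    std::uint64_t big = UINT64_C(4294967296);   // one past the 32-bit limit
    big *= 1000;

    std::printf("%" PRIu64 "\n", big);          // portable 64-bit format macro
    return 0;
}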
Use __int64 to get 64-bit int calculations in Visual C++ - not sure if GCC will like this, though.
You could create a header file that typedefs (say) MyInt64 to the appropriate thing for each compiler. Then you can work internally with MyInt64, and the compiled code will be correct for each target. This is a pretty standard way of supporting different target compilers on one source codebase.
As far as I can tell, long long would work OK for both, but I have not used GCC so YMMV - see the GCC and Visual C++ documentation for the details.
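A minimal sketch of the per-compiler typedef header described above (MyInt64/MyUInt64 are just example names):

// myint64.h
#if defined(_MSC_VER)
typedef __int64          MyInt64;     // Visual C++
typedef unsigned __int64 MyUInt64;
#else
typedef long long          MyInt64;   // g++ / MinGW
typedef unsigned long long MyUInt64;
#endif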
You could also create a "Large Number" class that would basically store the value across multiple variables in one form or another.
There are different solutions. If 2^64 is big enough for you, you can use a 64-bit integer type (these are implementation dependent, so check your particular compiler). On the other hand, if you want to be able to handle any number, you will have to use or implement a BigInteger type that encapsulates it. The implementation is an interesting exercise... basically, use a vector of a smaller type, operate on each sub-element and then merge and normalize the result.
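A minimal sketch of that "vector of smaller elements" idea, showing just addition with base-10^9 digits (the type and function names are hypothetical):

#include <cstdint>
#include <vector>
#include <iostream>
#include <iomanip>

using Digits = std::vector<std::uint32_t>;      // least-significant element first
constexpr std::uint32_t BASE = 1000000000u;     // 10^9 per element

Digits add(const Digits& a, const Digits& b)
{
    Digits sum;
    std::uint64_t carry = 0;
    for (std::size_t i = 0; i < a.size() || i < b.size() || carry; ++i) {
        std::uint64_t s = carry;
        if (i < a.size()) s += a[i];
        if (i < b.size()) s += b[i];
        sum.push_back(static_cast<std::uint32_t>(s % BASE));   // normalize the element
        carry = s / BASE;                                      // propagate the carry
    }
    return sum;
}

int main()
{
    Digits a = {294967295u, 4u};    // 4294967295
    Digits b = {1u};                // 1
    Digits c = add(a, b);           // 4294967296

    std::cout << c.back();          // print most-significant element first
    for (std::size_t i = c.size() - 1; i-- > 0; )
        std::cout << std::setw(9) << std::setfill('0') << c[i];
    std::cout << '\n';
    return 0;
}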

Alternatives for accurately representing Visual Basic Decimal variables in C++

I am currently working on a program that takes Visual Basic data in the form of a text file and then stores this data in C++. Some of the data from Visual Basic is of the type Decimal. C++ has no built-in type equivalent to Decimal. I don't want to use double because significant figures could be lost if the numbers are large enough.
One option is write my own decimal class. I was wondering if there were any other alternatives for solving this problem before I attempted to do that.
Thanks for your help.
There's the decNumber library. This is a C library designed for use with decimal numbers without losing precision/accuracy.
Given that it's a C library, you should be able to easily wrap it in a C++ class, or just use the C functions directly.
This is an IBM-sponsored library and it's available under an open source license (ICU).
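For a feel of what calling the C functions directly looks like, here is a minimal sketch modelled on the decNumber package's own example (DECNUMDIGITS sets the working precision and must be defined before the include; 34 digits is just an example, and the extern "C" wrapper is a precaution when compiling as C++):

#define DECNUMDIGITS 34             // working precision; must precede the include
extern "C" {
#include "decNumber.h"
}
#include <cstdio>

int main()
{
    decContext set;
    decNumber a, b, total;
    char string[DECNUMDIGITS + 14];   // worst-case string length per the docs

    decContextDefault(&set, DEC_INIT_BASE);
    set.traps = 0;                    // return results instead of raising traps
    set.digits = DECNUMDIGITS;

    decNumberFromString(&a, "1.10", &set);
    decNumberFromString(&b, "2.20", &set);
    decNumberAdd(&total, &a, &b, &set);     // exact decimal arithmetic
    decNumberToString(&total, string);

    std::printf("%s\n", string);            // prints 3.30 with no binary rounding
    return 0;
}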
Using a Decimal class is the best solution in my opinion. As for writing your own implementation, do a short web search first: it seems that others have had the same problem before. The first Google result reveals a CodeProject solution, and there may be many others...