How to store 1,000,000-digit integers in C++

In my problem I have to store very big integers, up to 1,000,000 digits, and do some operations on them. How can I do that? I know that a long int in C++ can store only about 10 digits.

You can use GMP, the GNU arbitrary precision library. Just be aware that it's not a very good library if you run out of memory.
By that, I mean it will just exit out from underneath you if it cannot allocate memory. I find this an ... interesting ... architectural decision for a general purpose library but it's popular for this sort of stuff so, provided you're willing to wear that restriction, it's probably a good choice.
Another good one is MPIR, a fork of GMP which, despite the name "Multiple Precision Integers and Rationals", handles floating point quite well. I've found these guys far more helpful than the GMP developers when requesting help or suggesting improvements (but, just be aware, that's my experience, your mileage may vary).
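For a sense of what using GMP looks like, here is a minimal sketch with its C++ interface, gmpxx (typically linked with -lgmpxx -lgmp, though flags vary by installation):

#include <gmpxx.h>
#include <iostream>

int main()
{
    mpz_class n;
    // Build a 1,000,000-digit integer: 10^999999 is a 1 followed
    // by 999,999 zeros.
    mpz_pow_ui(n.get_mpz_t(), mpz_class(10).get_mpz_t(), 999999);
    n += 42;  // the usual operators work on mpz_class
    std::cout << n.get_str().size() << " digits\n";
    return 0;
}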

Related

Any reason why Fortran is outputting some strange numbers for Project Euler 57?

I learned Python at the beginning of the summer and need to switch to Fortran for lab work. Could someone please help me discern why Fortran is outputting such odd numbers when doing simple addition? The photo below should be a good explanation of what I am trying to do with the program.
[Image: Fortran vs Python program]
From Python's floating-point tutorial:

"almost all platforms map Python floats to IEEE-754 double precision"

which in Fortran terms is a double precision or real(kind=REAL64) variable. Note that Python is dynamically typed; you can stuff pretty much whatever you want into a Python variable and it just sort of knows what to do with it. Fortran is statically typed, so if you want your floating-point data stored as REAL32, REAL64, or REAL128 (whatever your compiler defines in the ISO_Fortran_env module), you have to explicitly tell Fortran which specific type of float you want. By default, Fortran reals are REAL32 (so-called "single precision"), so you shouldn't be surprised that the results don't match what Python is generating.
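The trap is easy to reproduce in any language. Here is a C++ analogue (C++ only because that's what the parent question uses; the Fortran behavior is the same): the single-precision sum drifts far off while the double-precision sum stays close.

#include <cstdio>

int main()
{
    float  sum32 = 0.0f;  // analogous to Fortran's default REAL (REAL32)
    double sum64 = 0.0;   // analogous to Python's float (REAL64)
    for (int i = 0; i < 1000000; ++i) {
        sum32 += 0.1f;
        sum64 += 0.1;
    }
    // single precision accumulates visible rounding error;
    // double precision is off only far out in the tail
    std::printf("single: %f\n", sum32);
    std::printf("double: %f\n", sum64);
    return 0;
}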
That, of course, presumes you know the Secret Mystery Knowledge of the default numerical precision of both Fortran and Python, something we are all born with but which most of us lose along with our baby teeth.
Put another way, there's no way you could know this unless you knew the right question to ask in the first place, which nobody does the first time they see weird, seemingly-inconsistent floating point behavior. Back when FORTRAN was still taught, this sort of problem was introduced pretty early in the curriculum because the language is intended for crunching numbers and the problems with mixed-type and mixed-precision arithmetic are serious and well known. You had to learn about these pitfalls quickly because it was the difference between getting believable answers and garbage.
Modern languages are designed to simplify the delivery of cat videos. No real computer scientist would be caught dead discussing floating point mathematics so you need to search obscure backwater websites for information on how to make your numbers add up good and do other stuff good too. There is good info out there but again, you need to know what you're looking for in order to find it which most programmers don't when they hit this problem for the first time.
The short answer is to understand how computers simulate real numbers, how the languages you're using store those sorts of numbers, and ensure that the precision your application needs is supported by the data types you use. Hopefully that's more helpful than telling you to rephrase your question or RTFM.
And for what it's worth, I've been bitten by a similar problem recently where I had converted a code from single to double precision, forgetting that one of the binary files I was writing expected a single precision value. I only found this out during testing when visualization software choked on the broken binary file. The solution was obvious in hindsight; I reverted one variable back to single precision and all was well. The point is, even experienced people get tripped up by floating point. Barbie was right; math is hard...

Using an extremely large integer holding 3001 digits

For example, how can I use the result of 1000^1000 for arithmetic? I don't think there's a library that can accommodate that; the most I've seen handled is around 100 digits.
Use an arbitrary-precision arithmetic library like GMP or Boost.Multiprecision.
What you are looking for is a library like GMP, Boost.Multiprecision, or TTmath.
Or, you might challenge yourself to write a low level representation that handles longer than standard bit representations, and do arithmetic with it.
Stick with the first option though, if it does the job you have in mind.
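As a quick sketch of the Boost.Multiprecision route (header-only; names as in current Boost, but check your version):

#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

int main()
{
    namespace mp = boost::multiprecision;
    // 1000^1000 from the question: 10^3000, a 3001-digit integer
    mp::cpp_int n = mp::pow(mp::cpp_int(1000), 1000);
    std::cout << n.str().size() << " digits\n";  // prints "3001 digits"
    return 0;
}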

C++ library for integer trigonometry, speed optimized with optional approximations?

I've reached the point in a project where it makes more sense to start building some support classes for vectors and misc trigonometry than keep using ad-hoc functions. I expect there to be many C++ libraries for this, but I don't want to sacrifice speed and features I am used to.
Specifically, I want to be able to use integer angles, and I want to keep the blazing speed afforded by approximations like this:
#include <cstdint>
#include <cstdlib>

// Angle is -32768 to 32767: Return -32768 to 32767
static inline int32_t sin_approx(int32_t angle)
{
    return (angle << 1) - ((angle * abs(angle)) >> 14);
}
So, before I needlessly roll my own, are there any really fast fixed-point libraries for C++, with template classes such as vectors where I can specify the width of the integer used, and with fast approximations such as the one above, that I should look at?
I went down this path a few years ago when I had to convert some audio fingerprinting code from floating-point to fixed-point. The hard parts were the DCT (which used a large cosine table) and a high-precision logarithm. I found surprisingly little in the way of existing libraries. Since then, I have heard that the original Sony PlayStation (PS1) had no floating-point support, so development forums (fora?) for it, if they still exist, might have what you are looking for.
Some people I have worked with have had good luck with the NewMat library, though it is geared toward linear algebra rather than trigonometry, and seems to focus on floating-point numbers. Still, its site leads to this list, which looks to be worth checking out. I also found spuc, a signal processing library that might be good for fixed-point support. And years ago I saw a signal processing template library (sptl) from Fraunhofer. I think it was proprietary, but may be available somehow.
All that being said, I think you are pretty close with what you have already. Since you have a sine function, you basically also have a cosine function, provided you transform the input appropriately (cos(x) == sin(x + pi/2)). Since the tangent is the quotient of the sine and cosine (tan(x) = sin(x) / cos(x)) you are basically there for the trigonometry.
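For example, with the question's convention that a full circle spans the 16-bit range (so a quarter turn is 16384), a cosine can be built directly on sin_approx; wrapping the sum through int16_t keeps the angle on the circle. A sketch, not a tested library routine:

// +16384 is a quarter turn (pi/2) in this angle scale; the int16_t
// cast wraps angles that overshoot the range back onto the circle
static inline int32_t cos_approx(int32_t angle)
{
    return sin_approx((int16_t)(angle + 16384));
}

A fixed-point tangent would then be the quotient of the two, shifted up to preserve precision before the divide, with care near the poles where the cosine approaches zero.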
Regarding vectors, don't the STL vector and valarray classes combined with STL algorithms get you pretty close, too? If not, there's always Boost's math libraries.
Sorry I can't point you to the silver bullet you are looking for, but what you are trying to do is rather uncommon these days. People who want precision usually go straight to floating-point, which has decent performance on modern processors, and lots of library support. Those who want speed on resource-constrained hardware usually don't need precision and aren't doing trig by the vector, and probably aren't doing C++ either. I think your best option is to roll your own. Try to think of it as applying the wheel design pattern in a new context, rather than reinventing it. :)

C++ compiler maximum number of classes

In metaprogramming, the number of classes grows quite fast.
Is the maximum number of classes a modern compiler (for example g++) allows something to be concerned about?
Thank you
I'd guess this question is best answered by the standard published by the C++ committee. But looking at this place, I can't see any upper limit on the number of classes, although there are minimum quantity limits on many items (meaning the compiler should support at least the given number of each kind of item, and even that is not a binding limit). If your compiler can support these minimums, you should be OK.
But what factors determine the upper limit on the number of classes kindles my academic curiosity. I'd be glad if a compiler guru could answer that.
If you run on a 64-bit computer, you're unlikely to run into any limit in any modern compiler. Type information is likely to be dynamically allocated rather than put into some hard-coded, limited-size container.
I can think of some systems that might conceivably grow to be hard to compile in the 2 GB memory space you'll have on a 32-bit computer. However, even though I've worked on some pretty big C++ code bases with lots of template metaprogramming, that hasn't actually been a problem in practice. The slowness of compilation and the annoyance of debugging will probably kill you before memory size does :-)
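If you want to probe it yourself, a tiny stress test is easy to write: each distinct template instantiation below is a separate class, so the compiler's real constraints (recursion depth, memory, time) show up long before any class-count ceiling. A hypothetical sketch:

#include <cstddef>

// Each Gen<N> is a distinct class type; Gen<500> forces the compiler
// to materialize 501 of them. Raise N (and -ftemplate-depth if the
// compiler complains) to probe compile time and memory instead.
template <std::size_t N>
struct Gen { static constexpr std::size_t value = Gen<N - 1>::value + 1; };

template <>
struct Gen<0> { static constexpr std::size_t value = 0; };

static_assert(Gen<500>::value == 500, "501 distinct classes instantiated");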
Given that the compiler's parse trees are just that—trees—I think it's safe to assume that compiler limits are a matter of overall complexity rather than the number of any one sort of entity.
Of course, someone with the source in front of them can give a more definite answer :)

Platform independent math library

Is there a publicly available library that will produce the exact same results for sin, cos, floor, ceil, exp, and log on 32-bit and 64-bit Linux, Solaris, and possibly other platforms?

I am considering the following alternatives:

a) cephes compiled with gcc -mfpmath=sse and the same optimization levels on each platform ... but it's not clear that this would work.

b) MPFR, but I am worried that this would be too slow.

Regarding precision (edited): For this particular application I don't really need something that produces the value that is numerically closest to the exact value. I just need the answers to be exactly the same on all platforms, OSes, and "bitness". That being said, the values need to be reasonable (5 digits would probably be enough). I apologize for not having made this clear in my initial question.

I guess MAPM or MPFR with a low enough precision setting might do the trick, but I was hoping to find something that did not have the "multiple precision" machinery/flavor to it. In any case, I will try this out.
Would something like http://lipforge.ens-lyon.fr/www/crlibm/index.html be what you are searching for? It is a library whose aim is to be able to replace the standard C99 math library -- keeping good enough performance in the normal cases -- while ensuring correctly rounded results according to the IEEE 754 rounding modes.
crlibm is the correct tool for this. An earlier poster linked to it. Because it is correctly rounded, it will deliver bit-identical results on all platforms with IEEE-754 compliant hardware if compiled properly. It is much, much faster than MPFR.
You shouldn't need one. floor and ceil will be exact since their computation is straightforward.
What you are concerned with is rounding of the last bit for the transcendentals like sin, cos, and exp. These are native to the CPU microcode and can be computed consistently at high quality regardless of library. However, the rounding does vary from chip architecture to architecture.
So, if exact answers for the transcendentals are indeed your goal, you do need a portable library, and you will be giving up significant efficiency by doing so.
You could use a portable library like MAPM, which gives you not only consistent ULP results but, as a side benefit, lets you define arbitrary precision.
You can check your math precision with tools like this one and this one.
You mention using SSE. If you're planning on only running on x86 chips, then what exactly are the inconsistencies you're expecting?
As for MPFR, don't worry - test it! By the way, if it's good enough to be included in GCC, it's probably good enough for you.
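Testing it takes only a few lines. A minimal sketch using MPFR's C API (usable from C++; typically linked with -lmpfr -lgmp):

#include <cstdio>
#include <mpfr.h>

int main()
{
    mpfr_t x, s;
    mpfr_init2(x, 53);  // 53-bit significand, same as IEEE-754 double
    mpfr_init2(s, 53);
    mpfr_set_d(x, 1.0, MPFR_RNDN);
    mpfr_sin(s, x, MPFR_RNDN);  // correctly rounded: identical everywhere
    mpfr_printf("sin(1) = %.17Rg\n", s);
    mpfr_clear(x);
    mpfr_clear(s);
    return 0;
}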
You want to use MPFR. That library has been around for years and has been ported to every platform under the sun and optimized by tons of people.
If MPFR isn't sufficient for your needs, we're talking about fully custom ASM implementations, in which case it might be more efficient to consider implementing it in dedicated hardware.