How do integers multiply in C++?

I was wondering what method is used to multiply numbers in C++. Is it traditional schoolbook long multiplication? Fürer's algorithm? Toom-Cook?
I was wondering because I am going to need to multiply extremely large numbers and need a high degree of efficiency. The traditional schoolbook long multiplication, at O(n^2), might be too inefficient, and I would need to resort to another method of multiplication.
So what kind of multiplication does C++ use?

You seem to be missing several crucial things here:
There's a difference between native arithmetic and bignum arithmetic.
You seem to be interested in bignum arithmetic.
C++ doesn't support bignum arithmetic. The primitive data types generally map to arithmetic that is native to the processor.
To get bignum (arbitrary-precision) arithmetic, you need to implement it yourself or use a library such as GMP. Unlike Java and C# (among others), C++ does not ship with a built-in library for arbitrary-precision arithmetic.
All of those fancy algorithms:
Karatsuba: O(n^1.585)
Toom-Cook: < O(n^1.465)
FFT-based: ~ O(n log(n))
are applicable only to bignum arithmetic, which is what bignum libraries implement. What the processor uses for its native arithmetic operations is somewhat irrelevant, as it's usually constant time.
In any case, I don't recommend that you try to implement a bignum library. I've done it before and it's quite demanding (especially the math). So you're better off using a library.
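For illustration, a minimal sketch using GMP's C++ interface (gmpxx.h); this assumes GMP and its C++ bindings are installed and you link with -lgmpxx -lgmp:

    #include <gmpxx.h>   // GMP's C++ interface (mpz_class)
    #include <iostream>

    int main() {
        // Construct two 50-digit integers from decimal strings.
        mpz_class a("12345678901234567890123456789012345678901234567890");
        mpz_class b("98765432109876543210987654321098765432109876543210");

        // GMP chooses the multiplication algorithm (schoolbook, Karatsuba,
        // Toom-Cook, or FFT-based) automatically, based on operand size.
        mpz_class product = a * b;

        std::cout << product << '\n';
    }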

What do you mean by "extremely large numbers"?
C++, like most other programming languages, uses the multiplication hardware built into the processor. Exactly how that works is not specified by the C++ language. But for normal integers and floating-point numbers, you will not be able to write something faster in software.
The largest numbers that can be represented by the various data types can vary between different implementations, but some typical values are 2147483647 for int, 9223372036854775807 for long, and 1.79769e+308 for double.
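You can query the exact limits for your own implementation with std::numeric_limits; a quick check:

    #include <iostream>
    #include <limits>

    int main() {
        // These values are implementation-defined; this prints the actual
        // limits for the compiler/platform you build with.
        std::cout << "int    max: " << std::numeric_limits<int>::max() << '\n';
        std::cout << "long   max: " << std::numeric_limits<long>::max() << '\n';
        std::cout << "double max: " << std::numeric_limits<double>::max() << '\n';
    }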

In C++ integer multiplication is handled by the chip. There is no equivalent of Perl's BigNum in the standard language, although I'm certain such libraries do exist.

That all depends on the library and compiler used.

It is performed in hardware, which is also why huge numbers won't work. The largest number C++ can represent in a 64-bit integer type is 18446744073709551615 (unsigned). If you need larger numbers, you need an arbitrary-precision library.

If you work with numbers too large for the built-in types, the standard integer multiplication in C++ will no longer work, and you should use a library providing arbitrary-precision multiplication, like GMP: http://gmplib.org/
Also, you should not worry about performance prior to writing your application (premature optimization). These multiplications will be fast, and most likely many other components in your software will cause much more slowdown.

Plain C++ uses the CPU's multiply instructions (or schoolbook multiplication using bit shifts and additions if your CPU does not have such an instruction; a sketch of that fallback follows below).
If you need fast multiplication for large numbers, I would suggest looking at GMP ( http://gmplib.org ) and using the C++ interface from gmpxx.h.
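To illustrate the shift-and-add fallback mentioned above (not something you would normally write yourself, since the hardware does it far faster), a minimal sketch for unsigned 64-bit operands:

    #include <cstdint>
    #include <iostream>

    // Schoolbook binary multiplication: add a shifted copy of 'a' for every
    // set bit of 'b'. Results wrap modulo 2^64, like built-in unsigned math.
    std::uint64_t shift_add_mul(std::uint64_t a, std::uint64_t b) {
        std::uint64_t result = 0;
        while (b != 0) {
            if (b & 1)        // lowest bit of b is set: add the current shift of a
                result += a;
            a <<= 1;
            b >>= 1;
        }
        return result;
    }

    int main() {
        std::cout << shift_add_mul(1234567, 7654321) << '\n';  // 9449772114007
    }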

Just how big are these numbers going to be? Even languages like Python can multiply two 100-digit integers (10**100 * 10**100) with arbitrary-precision integers over 3 million times a second on a standard processor. That's multiplication to 100 significant digits taking less than a millionth of a second. To put that into context, there are only about 10^80 atoms in the observable universe.
Write what you want to achieve first, and optimise later if necessary.

Related

The fastest way to divide large integers

I need to divide numbers represented as digits in byte arrays with a non-standard number of bytes. It may be 5 bytes or 1 GB or more. The division should be done on the numbers represented as byte arrays, without any conversion to built-in numeric types.
Divide-and-conquer division winds up being a whole lot faster than the schoolbook method for really big integers.
GMP is a state-of-the-art big-number library. For just about everything, it has several implementations of different algorithms that are each tuned for specific operand sizes.
Here is GMP's "division algorithms" documentation. The algorithm descriptions are a little bit terse, but they at least give you something to google when you want to know more.
Brent and Zimmermann's Modern Computer Arithmetic is a good book on the theory and implementation of big-number arithmetic. Probably worth a read if you want to know what's known.
The standard long division algorithm, which is similar to grade-school long division, is Algorithm D, described in Knuth 4.3.1. Knuth has an extensive discussion of division in that section of his book. The upshot is that there are faster methods than Algorithm D, but they are not a whole lot faster and they are a lot more complicated.
If you are determined to get the fastest possible algorithm, you can look into what is known as the SRT algorithm.
All of this and more is covered, by the way, in the Wikipedia article on division algorithms.
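Since the question involves numbers stored as raw byte arrays, GMP's mpz_import / mpz_export functions are the usual bridge. A minimal sketch, assuming the arrays hold unsigned, big-endian magnitudes and the divisor is nonzero (link with -lgmp):

    #include <gmp.h>
    #include <cstdint>
    #include <vector>

    // Divide two big-endian, unsigned byte arrays; return the quotient
    // as a big-endian byte array.
    std::vector<std::uint8_t> divide_bytes(const std::vector<std::uint8_t>& num,
                                           const std::vector<std::uint8_t>& den) {
        mpz_t n, d, q;
        mpz_init(n); mpz_init(d); mpz_init(q);

        // count = number of 1-byte words, order = 1 (most significant first),
        // endian = 1, nails = 0 (use all 8 bits of each byte).
        mpz_import(n, num.size(), 1, 1, 1, 0, num.data());
        mpz_import(d, den.size(), 1, 1, 1, 0, den.data());

        mpz_tdiv_q(q, n, d);   // quotient, truncated toward zero

        std::vector<std::uint8_t> out((mpz_sizeinbase(q, 2) + 7) / 8);
        size_t written = 0;
        mpz_export(out.data(), &written, 1, 1, 1, 0, q);
        out.resize(written ? written : 1);   // a zero quotient exports no bytes

        mpz_clear(n); mpz_clear(d); mpz_clear(q);
        return out;
    }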

Using an extremely large integer holding 3001 digits

For example, how can I use the result of 1000^1000 for arithmetic? I don't think there's a library that can accommodate that; the most I've seen is about 100 digits.
Use an arbitrary-precision arithmetic library like GMP or Boost.Multiprecision.
What you are looking for is a library like GMP, Boost.Multiprecision, or TTMath.
Or, you might challenge yourself to write a low-level representation that handles longer-than-standard bit widths, and do arithmetic with it.
Stick with the first option, though, if it does the job you have in mind.
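As a concrete example for the 1000^1000 case, a minimal sketch with Boost.Multiprecision's header-only cpp_int (GMP's mpz_class would look almost the same):

    #include <boost/multiprecision/cpp_int.hpp>
    #include <iostream>

    int main() {
        namespace mp = boost::multiprecision;

        // 1000^1000 = 10^3000, which has exactly 3001 decimal digits.
        mp::cpp_int x = mp::pow(mp::cpp_int(1000), 1000);

        std::cout << x.str().size() << " digits\n";   // prints: 3001 digits
        std::cout << x << '\n';                       // the full number
    }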

What should I know when using floats/doubles between different machines?

I've heard that there are many problems with floats/doubles on different CPUs.
If I want to make a game that uses floats for everything, how can I be sure the float calculations are exactly the same on every machine, so that my simulation will look exactly the same on every machine?
I am also concerned about writing/reading files or sending/receiving float values to different computers. What conversions must be done, if any?
I need to be 100% sure that my float values are computed exactly the same, because even a slight difference in the calculations will result in a totally different future. Is this even possible?
Standard C++ does not prescribe any details about floating point types other than range constraints, and possibly that some of the maths functions (like sine and exponential) have to be correct up to a certain level of accuracy.
Other than that, at that level of generality, there's really nothing else you can rely on!
That said, it is quite possible that you will not actually require binarily identical computations on every platform, and that the precision and accuracy guarantees of the float or double types will in fact be sufficient for simulation purposes.
Note that you cannot even produce a reliable result of an algebraic expression inside your own program when you modify the order of evaluation of subexpressions, so asking for the sort of reproducibility that you want may be a bit unrealistic anyway. If you need real floating point precision and accuracy guarantees, you might be better off with an arbitrary precision library with correct rounding, like MPFR - but that seems unrealistic for a game.
Serializing floats is an entirely different story, and you'll have to have some idea of the representations used by your target platforms. If all platforms were in fact to use IEEE 754 floats of 32 or 64 bit size, you could probably just exchange the binary representation directly (modulo endianness). If you have other platforms, you'll have to think up your own serialization scheme.
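For the serialization part, if every target platform does use IEEE 754 binary32 floats, one simple approach is to exchange the raw bit pattern with an explicit byte order. A minimal sketch (just one possible scheme, assuming 8-bit bytes):

    #include <array>
    #include <cstdint>
    #include <cstring>
    #include <limits>

    static_assert(std::numeric_limits<float>::is_iec559,
                  "this scheme assumes IEEE 754 binary32 floats");
    static_assert(sizeof(float) == sizeof(std::uint32_t),
                  "float must be 32 bits wide");

    // Serialize a float as 4 bytes, least significant byte first.
    std::array<std::uint8_t, 4> pack_float(float f) {
        std::uint32_t bits;
        std::memcpy(&bits, &f, sizeof bits);   // reinterpret the bits, no conversion
        return {{ std::uint8_t(bits),       std::uint8_t(bits >> 8),
                  std::uint8_t(bits >> 16), std::uint8_t(bits >> 24) }};
    }

    float unpack_float(const std::array<std::uint8_t, 4>& b) {
        std::uint32_t bits = std::uint32_t(b[0])
                           | std::uint32_t(b[1]) << 8
                           | std::uint32_t(b[2]) << 16
                           | std::uint32_t(b[3]) << 24;
        float f;
        std::memcpy(&f, &bits, sizeof f);
        return f;
    }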
What every programmer should know: Goldberg's "What Every Computer Scientist Should Know About Floating-Point Arithmetic": http://docs.sun.com/source/806-3568/ncg_goldberg.html

Large number of float digits without an extra library

I have a float value that is hundreds of digits long (like the first 100 digits of pi - 3) and need a way to operate on it. Is there any way to store and operate on a float that has a large number of decimals while maintaining precision, using built-in libraries? Is there anything like Python's Decimal module in C++?
The other answers all point to high precision integer libraries. There are however a few floating point libraries around:
The High Precision Arithmetic library
The GNU Multiple Precision Arithmetic Library (GMP) "Arithmetic without limitations"
The GNU multiple-precision floating-point computations with correct rounding (the GNU MPFR library). There's also a C++ wrapper.
NTL: A Library for doing Number Theory. Together with NTL::RR you can use this even within boost.
The LBNL double-double precision, quad-double precision and arbitrary precision software.
... and don't forget the possibility that you can always implement your own solution. (It might not be the most effective or fastest solution, but it's "the" solution if you want to learn something.)
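For a feel of what using one of these looks like, a minimal sketch with the GNU MPFR C API, printing pi to 100 decimal places (assumes MPFR is installed; link with -lmpfr -lgmp):

    #include <cstdio>
    #include <mpfr.h>

    int main() {
        mpfr_t pi;
        mpfr_init2(pi, 400);              // 400 bits of precision, ~120 decimal digits

        mpfr_const_pi(pi, MPFR_RNDN);     // pi, correctly rounded to nearest
        mpfr_printf("%.100Rf\n", pi);     // print with 100 digits after the point

        mpfr_clear(pi);
    }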
No built-in library, but you can do that using bignum arithmetic :) http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic.
What a bignum is: an array (vector) of digits. You can easily implement sum/difference (see the sketch below)....
I've actually asked something similar here: STL big int class implementation
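To illustrate the digit-array idea, a minimal sketch of schoolbook addition on vectors of decimal digits (stored least-significant digit first):

    #include <cstddef>
    #include <vector>

    // Add two non-negative numbers stored as decimal digits,
    // least significant digit first, e.g. 123 -> {3, 2, 1}.
    std::vector<int> add_digits(const std::vector<int>& a, const std::vector<int>& b) {
        std::vector<int> sum;
        int carry = 0;
        for (std::size_t i = 0; i < a.size() || i < b.size() || carry; ++i) {
            int digit = carry;
            if (i < a.size()) digit += a[i];
            if (i < b.size()) digit += b[i];
            sum.push_back(digit % 10);   // keep one digit
            carry = digit / 10;          // propagate the rest
        }
        return sum;
    }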
Unless it is some exotic platform where a float is 100+ bytes long, you will find it hard to achieve what you want without a big-number library.

Fast way to compute n times 10 raised to the power of minus m

I want to compute 10 raised to the power minus m. Besides using the math function pow(10, -m), is there any fast and efficient way to do that?
The reason I ask such a simple question of the C++ gurus on SO is that, as you know, just like base 2, base 10 is also special. Multiplying some value n by 10 to the power minus m is equivalent to moving n's decimal point m places to the left. I think there must be a fast and efficient way to exploit that.
For floating point m, so long as your standard library implementation is well written, pow will be efficient.
If m is an integer, and you hinted that it is, then you could use an array of precalculated values (a sketch follows below).
You should only be worrying about this kind of thing if that routine is a bottleneck in your code, that is, if calls to it take a significant proportion of the total running time.
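A minimal sketch of the precalculated-table idea for integer m (the helper name and the table size are arbitrary choices for illustration):

    #include <cmath>

    // n * 10^-m for integer m, via a small table of negative powers of ten.
    // Falls back to std::pow outside the table's range.
    double times_pow10_neg(double n, int m) {
        static const double kNegPow10[] = {
            1e0,  1e-1, 1e-2,  1e-3,  1e-4,  1e-5,  1e-6,  1e-7,
            1e-8, 1e-9, 1e-10, 1e-11, 1e-12, 1e-13, 1e-14, 1e-15
        };
        if (m >= 0 && m < 16)
            return n * kNegPow10[m];
        return n * std::pow(10.0, -m);
    }

Note that negative powers of ten are not exactly representable in binary floating point, so the table lookup can differ from dividing by 10^m in the last bit; for most uses that does not matter.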
Ten is not a special value on a binary machine, only two is. Use pow or exponentiation by squaring.
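A minimal sketch of the exponentiation-by-squaring idea just mentioned, for a non-negative integer exponent (take the reciprocal afterwards for 10^-m):

    // Compute base^e by repeated squaring: O(log e) multiplications.
    double pow_int(double base, unsigned e) {
        double result = 1.0;
        while (e > 0) {
            if (e & 1)          // odd exponent: multiply in the current power of base
                result *= base;
            base *= base;       // square
            e >>= 1;
        }
        return result;
    }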
Unfortunately there is no fast and efficient way to calculate it using IEEE 754 floating point representation. The fastest way to get the result is to build a table for every value of m that you care about, and then just perform a lookup.
If there's a fast and efficient way to do it then I'm sure your CPU supports it, unless you're running on an embedded system in which case I'd hope that the pow(...) implementation is well written.
10 is special to us as most of us have ten fingers. Computers only have two digits, so 2 is special to them. :)
Use a lookup table; there can't be more than about a thousand relevant values, especially if m is an integer.
If you could operate on log n instead of n for a significant part of the computation, you could save time, because instead of
n = pow(10*n,-m)
you now have to calculate (using the definition l = log10(n))
l = -m*(l+1)
Just some more ideas which may lead you to further solutions...
If you are interested in optimization at the algorithm level, you might look for a parallelized approach.
You may be able to speed things up at the system/architecture level by using IPP (Intel Integrated Performance Primitives) on Intel processors, or e.g. the AMD Core Math Library (ACML) on AMD.
Using the power of your graphics card may be another way (e.g. CUDA for NVIDIA cards).
I think it's also worth looking at OpenCL.
IEEE 754 specifies a bunch of floating-point formats. Those that are in widespread use are binary, which means that base 10 isn't in any way special. This is contrary to your assumption that "10 is also a special base".
Interestingly, IEEE 754-2008 does add decimal floating-point formats (decimal32 and friends). However, I'm yet to come across hardware implementations of those.
In any case, you shouldn't be micro-optimizing your code before you've profiled it and established that this is indeed the bottleneck.