Algorithm for dividing very large numbers - c++

I need to write an algorithm (I cannot use any 3rd-party library, because this is an assignment) to divide (integer division; the fractional part is not important) very large numbers, on the order of 100 to 1000 digits. I found the http://en.wikipedia.org/wiki/Fourier_division algorithm, but I don't know if it's the right way to go. Do you have any suggestions?
1) check that the divisor is less than the dividend; otherwise the quotient is zero (because it's integer division)
2) start from the left
3) take a portion of digits from the dividend, equal in length to the divisor
4) if the divisor is still bigger than that portion, extend the portion by one digit
5) multiply the divisor by 1-9 in a loop
6) when the product exceeds the dividend portion, the previous multiplier is the next quotient digit
7) repeat steps 3 to 6 until reaching the end (a sketch of these steps follows below)
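A minimal sketch of these steps (helper names are my own), assuming both numbers are decimal strings without leading zeros; instead of multiplying the divisor by 1-9, it finds each quotient digit by repeated subtraction, which comes to the same thing:
#include <cstddef>
#include <string>

// Compare two digit strings numerically: negative, zero, or positive.
int cmpDigits(const std::string& a, const std::string& b) {
    if (a.size() != b.size()) return a.size() < b.size() ? -1 : 1;
    return a.compare(b);   // same length: lexicographic == numeric
}

// a - b for digit strings with a >= b; strips leading zeros.
std::string subDigits(std::string a, const std::string& b) {
    int borrow = 0;
    int i = static_cast<int>(a.size()) - 1;
    int j = static_cast<int>(b.size()) - 1;
    for (; i >= 0; --i, --j) {
        int d = (a[i] - '0') - borrow - (j >= 0 ? b[j] - '0' : 0);
        borrow = d < 0 ? 1 : 0;
        a[i] = static_cast<char>('0' + d + borrow * 10);
    }
    std::size_t nz = a.find_first_not_of('0');
    return nz == std::string::npos ? "0" : a.substr(nz);
}

std::string divDigits(const std::string& dividend, const std::string& divisor) {
    std::string quotient, rem = "0";
    for (char c : dividend) {
        rem = (rem == "0" ? std::string() : rem) + c;   // extend the portion (steps 3-4)
        int q = 0;                                      // find the digit (steps 5-6)
        while (cmpDigits(rem, divisor) >= 0) { rem = subDigits(rem, divisor); ++q; }
        quotient += static_cast<char>('0' + q);
    }
    std::size_t nz = quotient.find_first_not_of('0');
    return nz == std::string::npos ? "0" : quotient.substr(nz);
}
// divDigits("453", "13") == "34" (the remainder, 11, is discarded)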

I'd imagine that dividing the 'long' way like in grade school would be a potential route. I'm assuming you are receiving the original number as a string, so what you do is parse each digit. Example:
Step 0:
/-----------------
13 | 453453453435....
Step 1: "How many times does 13 go into 4? 0
0
/-----------------
13 | 453453453435....
Step 2: "How many times does 13 go into 45? 3
03
/-----------------
13 | 453453453435....
- 39
--
6
Step 3: "How many times does 13 go into 63? 4
etc etc. With this strategy, you can have any number length and only really have to hold enough digits in memory for an int (the divisor) and the running partial dividend (assuming I got those terms right). You append each quotient digit as the last digit of your result string.
When you hit a point where no digits remain and the divisor won't go in one or more times, you return your result, which is already formatted as a string (because it could potentially be larger than an int).
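A minimal sketch of this strategy, assuming the dividend arrives as a decimal string and the divisor fits in a built-in integer (names are my own):
#include <cstddef>
#include <cstdint>
#include <string>

std::string divideByInt(const std::string& dividend, std::uint32_t divisor) {
    std::string result;
    std::uint64_t rem = 0;                        // running remainder, always < divisor
    for (char c : dividend) {
        rem = rem * 10 + (c - '0');               // bring down the next digit
        result += static_cast<char>('0' + rem / divisor);
        rem %= divisor;
    }
    std::size_t nz = result.find_first_not_of('0');   // strip leading zeros
    return nz == std::string::npos ? "0" : result.substr(nz);
}
// divideByInt("453453453435", 13) == "34881034879" (remainder 8 is discarded)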

The easiest division algorithm to implement for large numbers is shift and subtract.
if numerator is less than denominator then finish
shift denominator as far left as possible while it is still smaller than numerator
set bit in quotient for amount shifted
subtract shifted denominator from numerator
repeat
the numerator is now the remainder
The shifting need not be literal. For example, you can write an algorithm to subtract a left shifted value from another value, instead of actually shifting the whole value left before subtracting. The same goes for comparison.
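A minimal sketch of shift and subtract on a built-in 64-bit type, just to show the structure (std::countl_zero is C++20); a bignum version would use your big type and, per the note above, do the shifted compare and subtract without materializing the shift:
#include <bit>
#include <cstdint>

// Returns num / den and sets remainder; den must be nonzero.
std::uint64_t shiftSubDivide(std::uint64_t num, std::uint64_t den,
                             std::uint64_t& remainder) {
    std::uint64_t quotient = 0;
    while (num >= den) {
        // Align den's top bit with num's top bit: the largest safe shift...
        int shift = std::countl_zero(den) - std::countl_zero(num);
        if ((den << shift) > num) --shift;      // ...backing off if we overshot
        quotient |= std::uint64_t{1} << shift;  // set that bit in the quotient
        num -= den << shift;                    // subtract the shifted denominator
    }
    remainder = num;                            // what's left is the remainder
    return quotient;
}
// shiftSubDivide(100, 7, r) == 14, with r == 2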
Long division is difficult to implement because one of the steps in long division is long division. If the divisor is an int, then you can do long division fairly easily.

Knuth, Donald, The Art of Computer Programming, Volume 2: Seminumerical Algorithms, Section 4.3.1: The Classical Algorithms. ISBN 0-201-89684-2

You should probably try something like long division, but using computer words instead of digits.
In a high-level language, it will be most convenient to consider your "digit" to be half the size of your largest fixed-precision integer. For the long division method, you will need to handle the case where your partial intermediate result may be off by one, since your fixed-precision division can only handle the most-significant part of your arbitrary-precision divisor.
There are faster and more complicated means of doing arbitrary-precision arithmetic. Check out the appropriate wikipedia page. In particular, the Newton-Raphson method, when implemented carefully, can ensure that the time performance of your division is within a constant factor of your arbitrary-precision multiplication.
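As a concrete illustration of word-sized digits, here is a minimal sketch of long division by a single-word divisor (the easy case mentioned in an earlier answer), using 32-bit digits and a 64-bit intermediate; names are my own. A full multi-word divisor needs the off-by-one correction described above (Knuth's Algorithm D):
#include <cstdint>
#include <vector>

// digits[0] is the most significant 32-bit digit of the dividend.
std::vector<std::uint32_t> divideByWord(const std::vector<std::uint32_t>& digits,
                                        std::uint32_t divisor,
                                        std::uint32_t& remainder) {
    std::vector<std::uint32_t> quotient(digits.size());
    std::uint64_t rem = 0;                            // always < divisor
    for (std::size_t i = 0; i < digits.size(); ++i) {
        std::uint64_t cur = (rem << 32) | digits[i];  // remainder:digit pair
        quotient[i] = static_cast<std::uint32_t>(cur / divisor);
        rem = cur % divisor;
    }
    remainder = static_cast<std::uint32_t>(rem);
    return quotient;
}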

Unless part of your assignment was to be completely original, I would go with the algorithm I (and I assume you) were taught in grade school for doing large division by hand.

Looking for the fastest way to divide by 2

I've searched half the day and found some very interesting things about using fixed point data types and bit shifting in C++ to accomplish division operations while avoiding floating point math. However, I have only been able to understand a small fraction of it and I can't seem to get anything to work.
All I'm wanting to do is to take two integers, add them up, and divide by two to get the average. I need to be able to do this very quickly though, since I'm interpolating camera pixel data on an Arduino and I also have other operations to do.
So I'm confused about shifting in general. Say the integer I want to divide by two is 27. Half of 27 is 13.5. But no matter what fixed point datatype I try, I can only get 13 as an output. For example:
uint8_t x = 27;
Serial.println( x >> 1 );
returns 13
There's got to be some simple way to do this, right?
Fixed point does give you a way to represent 13.5. The Wikipedia article on the Q number format is informative: https://en.wikipedia.org/wiki/Q_(number_format)
Think of it this way: You keep using integers, but instead of taking them at face value, divide them all implicitly by a power of 2 to obtain their semantic value.
So, if using an unsigned byte as your base type (values between 0 and 255, inclusive), you might implicitly divide by 2**3 (8). Now to represent 27, you need an integer set to 27 * 8 = 216. To divide by two, you shift right by one; now your integer is 108, which when divided by the implicit denominator of 8 gives 13.5, the value you're expecting.
You have to realize that fixed-point number systems (and floating point too, though it's less immediately evident) still have limits, of course; certain operations will overflow no matter what you do, and some operations cause a loss of precision. This is a normal consequence of working with limited-size types.
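A minimal sketch of that implicit-denominator idea with 3 fractional bits (the layout described above; names are my own):
#include <cstdint>

const int kFracBits = 3;   // implicit denominator 2**3 = 8

std::uint8_t toFixed(int units) { return static_cast<std::uint8_t>(units << kFracBits); }
double toReal(std::uint8_t f)   { return f / static_cast<double>(1 << kFracBits); }

// toFixed(27) == 216; halving: 216 >> 1 == 108; toReal(108) == 13.5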
Say the integer I want to divide by two is 27. Half of 27 is 13.5. But no matter what fixed point data type I try, I can only get 13 as an output.
From wikipedia Fixed-Point Arithmetic:
The scaling factor is usually a power of 10 (for human convenience) or a power of 2 (for computational efficiency).
You actually mentioned fixed point data type, and I think that is the best approach. But no matter what you tried? Perhaps we have different understandings of fixed-point-arithmetic.
while avoiding floating point math.
Another worthwhile goal, though one of diminishing value. Even in embedded systems, I have seldom had to deal with a processor that did not have floating point hardware, and that hardware has gotten reasonably good.
Anyway, using fixed point avoids any need for floating point, even for display purposes.
I think I need to proceed with a few examples.
Fixed point Example 1: Dollars and pennies
The unit of American money is based on the dollar. The Dollar is a fixed point data type.
So, if you have 27 dollars, how do you split it with your sibling?
One way (of several) that you all know is to convert 27 dollars into 2700 pennies. Dividing this value by 2 is trivial. Now you and your sibling can each get 1350 pennies. (i.e. the penny is a fixed point data type that easily converts to/from dollars, and vice versa)
Note that this is completely integer arithmetic: adding 2 integers, and dividing by 2 (any modern compiler will choose the fastest implementation... either integer divide or a right-shift-by-1). On my desktop these 2 actions take less than a microsecond to complete.
You should waste no more time measuring the relative performance of those two options (divide vs. right-shift); simply enable -O3 once your code tests correct. Your compiler should be able to choose correctly.
The choice of units in any problem is based on a scale factor that covers the range of values (in your problem) AND the understandable and quickly implemented conversion between units. And note that uint64_t can describe a large amount of cash, even in pennies. (challenge to the student.)
In General, about fixed point:
Given
uint8_t x = 27;
and the desire to divide by 2 evenly and quickly... can any scale factor be something that serves your needs? I say yes.
Example 2: 50-cent coins and a dollar
How about we try, for example, a simple scale factor of 2, i.e. the unit is a hu, or half unit. (analogous to the 50-cent-coin)
uint8_t x = 27 * 2; // 54 hu, where 1 hu = 1/2 unit
This means that 54 hu represents 27 units. (i.e. it takes 54 50-cent coins to add up to 27 dollars)
The fixed point solution is to scale your integer values to achieve the arithmetic required. If you scale to even values, all your integers will divide evenly in hu units, as the sketch below shows.
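A minimal sketch of this half-unit idea applied to the original averaging problem (variable names are my own):
#include <cstdint>
#include <iostream>

int main() {
    // 27 units -> 54 hu, 14 units -> 28 hu (1 hu = 1/2 unit).
    std::uint16_t a_hu = 27 * 2;
    std::uint16_t b_hu = 14 * 2;
    // The sum of two even numbers is even, so this halving is exact:
    std::uint16_t avg_hu = (a_hu + b_hu) >> 1;                   // 41 hu == 20.5 units
    std::cout << avg_hu / 2 << "." << (avg_hu % 2) * 5 << "\n";  // prints 20.5
}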
Example 3: Nickels and a dollar
Another possible scale might be 20, both decimal (for readability) and binary for performance. (note that there are 20 nickels in a dollar)
uint16_t x = 27 * 20; // 540 tu, where 1 tu = 1/20 unit
Now 540 represents a scaled 27, i.e. 540 nickels.
All examples are fully integer, provide exact answers, and there is a trivial mechanism to convert the values for presentation to the user: whichever fixed point is used, convert to the analogue of pennies, and thus 1350 pennies.
Display the penny count as dollars (using <iomanip> to zero-pad the cents, so that 5 cents prints as ".05"):
std::cout << (pennyCount / 100) << "." << std::setw(2) << std::setfill('0') << (pennyCount % 100) << std::endl;
For 1350 pennies this prints
13.50
Now your challenge is to make it look nice on the output.
The reason you only get 13 is that the bit shift cuts off the least significant bit. Since it is cut off, there is no remainder to check. If you are interested in what your remainder is, you could do something like:
uint8_t x = 27;
Serial.println(x - (x >> 1) - (x >> 1));
(x - (x >> 1)) should give 14 here.
It would be pretty simple to add .5 to the result once you determine whether the remainder is 1.
The following should work and should be fast:
float y = (x >> 1) + (0.5 * (x & 0x01));
What it does
(x >> 1) Divide by 2 using the bit shift
(0.5 * (x & 0x01)) Add 0.5 if the last bit was 1 (odd number)

How to calculate the number of digits of a huge number? C++

The problem I have is that there are two integers (a, b) in the interval [1, 10^16], and I need to find out how many digits the number a^b will have. Those numbers are too big to store in single variables, and if I wrote them into an array it would take a lot of time.
Is there a way to count the number of digits of a^b with some kind of formula, or any way simpler than arrays?
after fixing the off-by-one error suggested in the comments
number of digits of a^b = floor( b * log10(a) ) + 1
karakfa has it right.
The base-k logarithm of a number n, rounded up to the nearest whole number, will give you the number of digits required to represent n in base k.
EDIT: as pointed out in comments, it should not be rounded up, but rounded down and then incremented by one. This accounts for round powers of 10 having an extra digit.
If your number is a^b then take the base-10 logarithm, log(a^b), and use the laws of logarithms to simplify it as b * log(a). Note that this simplification happens inside the rounding, so it is valid. Computing log(a) should not be an issue (it will be between 0 and 16) and b is known. Just make sure to round after multiplying, not before.
Note that the limited precision of floating-point numbers may introduce some errors into this method. If the true value of b * log(a) differs from its nearest floating-point representation in such a way that they fall on different sides of an integer, the method fails. You can possibly detect when you are close to this condition and remediate it somehow.
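A minimal sketch of the corrected formula (the function name is my own); the precision caveat above still applies:
#include <cmath>
#include <cstdint>

// Number of decimal digits of a^b, for a, b in [1, 10^16].
long long digitsOfPow(std::uint64_t a, std::uint64_t b) {
    // Caution: when b * log10(a) lands within floating-point error of an
    // integer (e.g. exact powers of 10), the floor below can be off by one.
    double t = static_cast<double>(b) * std::log10(static_cast<double>(a));
    return static_cast<long long>(std::floor(t)) + 1;
}
// digitsOfPow(2, 10) == 4 (1024 has 4 digits); digitsOfPow(10, 16) == 17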
You could use a library that supports arbitrarily large numbers, like GMP.
The core C++ language itself offers no types to work with such large numbers. So either you use a pre-existing library or write one yourself (I suggest the former - don't re-invent the wheel).

What's the best multiplication algorithm for fixed point where precision is necessary

I know, I know, people are probably going to say "just switch to floating point", but currently that is not an option due to the nature of the project that I am working on. I am helping write a programming language in C++, and I am currently having difficulty trying to get a very accurate algorithm for multiplication. I have a VM with operations for mod/smod, div/sdiv (i.e. signed numbers are not a concern here), mul, a halving number for fully fractional numbers, and a pushed shift number that I multiply and divide by to create my shifting. For simplicity, let's say I'm working with a 32-byte space. My algorithms work fine for pretty much anything involving integers; it's just that when my fractional portion gets over 16 bytes I run into problems with precision. If I were to round it, the number would be fairly accurate, but I want it as accurate as possible, and I am even willing to sacrifice a tad of performance for it, so long as it stays fixed point and doesn't go into floating-point land. The algorithms I'm concerned with I will map out in a sort of pseudocode. I would love any insight into how I could make this better, or any reasoning as to why, by the laws of computational science, what I'm asking for is a fruitless endeavor.
For fully fractional numbers (all bytes are fractional):
A = num1 / halfShift // truncates the number down to 16 bytes so that when multiplied, we get a full 32-byte number
B = num2 / halfShift
finalNum = A * B
For the rest of my numbers that are larger than 16 bytes I use this algorithm:
this algorithm can essentially be broken down into the int.frac form
essentially A.B * C.D taking the mathematical form of
D*B/shift + C*A*shift + D*A + C*B
if the fractional numbers are larger than the integer, I halve them, then multiply them together in my D*B/shift
just like in the fully fractional example above
Is there some kind of "magic" rounding method that I should be aware of? Please let me know.
You get the most accurate result if you do the multiplication first and scale afterwards. Of course that means you need to store the result of the multiplication in a type twice as wide as the operands (e.g. a 64-bit int for 32-bit operands).
If that is not an option, your approach of shifting in advance makes sense. But you certainly lose precision.
Either way, you can increase accuracy a little if you round instead of truncate.
I support Aconcagua's recommendation to round to nearest.
For that you need to add the highest bit which is going to be truncated before you apply the division.
In your case that would look like this:
A = (num1 + (1 << (halfshift-1))) >> halfshift
B = (num2 + (1 << (halfshift-1))) >> halfshift
finalNum = A * B
EDIT:
Example on how to dynamically scale the factors and the result depending on the values of the factors (this improves resolution and therefore the accuracy of the result):
shiftA and shiftB need to be set such that A and B are 16 byte fractionals each and therefore the 32 byte result cannot overflow. If shiftA and shiftB is not known in advance, it can be determined by counting the leading zeros of num1 and num2.
A = (num1 + (1 << (shiftA-1))) >> shiftA
B = (num2 + (1 << (shiftB-1))) >> shiftB
finalNum = (A * B) >> (fullshift - (shiftA + shiftB))
The number of fractional digits of a product equals the sum of the numbers of fractional digits in the operands. You have to carry out the multiplication to that precision and then round or truncate according to the desired target precision.
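A minimal sketch of both points (multiply first in a wider type, round to nearest when scaling back down), using 32-bit Q16.16 values and a 64-bit intermediate rather than the question's 32-byte types:
#include <cstdint>

// Q16.16 multiply; assumes the true product still fits in Q16.16.
std::uint32_t mulQ16(std::uint32_t a, std::uint32_t b) {
    std::uint64_t wide = static_cast<std::uint64_t>(a) * b;  // exact Q32.32 product
    wide += std::uint64_t{1} << 15;                          // add half an output LSB...
    return static_cast<std::uint32_t>(wide >> 16);           // ...so the shift rounds to nearest
}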

c++ bitwise addition, calculating the final number of representative bits

I am currently developing a utility that handles all arithmetic operations on bitsets.
The bitset can auto-resize to fit any number, so it can perform addition / subtraction / division / multiplication and modulo on very big bitsets (I've gone as far as loading a 700 MB movie into one, to treat it just as a primitive integer).
I'm facing one problem though: I need my addition to resize the bitset to fit the exact number of bits needed after an addition, but I couldn't come up with an absolute law for knowing exactly how many bits are needed to store everything, knowing only the number of bits that both numbers occupy (whether the representation is positive or negative doesn't matter).
I have the whole code that i can share with you to point out the problem if my question is not clear enough.
Thanks in advance.
jav974
but i couldn't come up with an absolute law to know exactly how many bits would be needed to store everything, knowing only the number of bits that both numbers are handling (either its representation is positive or negative, it doesn't matter)
Nor will you: there's no way given "only the number of bits that both numbers are handling".
In the case of same-signed numbers, you may need one extra bit - you can start at the most significant bit of the smaller number, and scan for 0s that would absorb the impact of a carry. For example:
1010111011101 +
..10111010101
..^ start here
As both numbers have a 1 here you need to scan left until you hit a 0 (in which case the result has the same number of digits as the larger input), or until you reach the most significant bit of the larger number (in which case there's one more digit in the result).
1001111011101 +
..10111010101
..^ start here
In this case where the longer input has a 0 at the starting location, you first need to do a right-moving scan to establish whether there'll be a carry from the right of that starting position before launching into the left-moving scan above.
When signs differ:
if one value has 2 or more digits less than the other, then the number of digits required in the result will be either the same or one less than the digits in the larger input
otherwise, you'll have to do more of the work for an addition just to work out how many digits the result needs.
This is assuming the sign bit is separate from the count of magnitude bits.
Finally, the number of representative bits after an addition is at most the bit count of the larger operand, plus 1.
Here is an explanation, using an unsigned char:
For max unsigned char :
  11111111 (255)
+ 11111111 (255)
= 111111110 (510)
Naturally, if max + max needs (bits of max) + 1 bits, then for any x and y between 0 and max, the result needs at most (bits of max) + 1 bits (the very maximum).
This works the same way with signed integers.
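A quick check of that max + 1 bound with C++20's std::bit_width (for unsigned values; a sign bit, if any, is assumed to be tracked separately):
#include <bit>
#include <cstdint>
#include <iostream>

int main() {
    std::uint32_t x = 255, y = 255;           // each needs 8 bits
    std::uint32_t sum = x + y;                // 510
    std::cout << std::bit_width(x) << ' '     // 8
              << std::bit_width(sum) << '\n'; // 9 == max(8, 8) + 1
}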

Conversion Big Integer <-> double in C++

I am writing my own long-arithmetic library in C++ for fun and it is already fairly complete; I have even implemented several cryptographic algorithms with it. But one important thing is still missing: I want to convert doubles (and floats/long doubles) into my number and vice versa. My numbers are represented as a variable-sized array of unsigned long ints plus a sign bit.
I tried to find the answer with google, but the problem is that people rarely ever implement such things themselves, so I only find things about how to use Java BigInteger etc.
Conceptually, it is rather easy: I take the mantissa, shift it by the number of bits dictated by the exponent and set the sign. In the other direction I truncate it so that it fits into the mantissa and set the exponent depending on my log2 function.
But I am having a hard time figuring out the details. I could either play around with some bit patterns and cast them to a double, but I didn't find an elegant way to achieve that, or I could "calculate" it by starting with 2, exponentiating, multiplying, etc., but that doesn't seem very efficient.
I would appreciate a solution that doesn't use any library calls, because I am trying to avoid libraries for my project; otherwise I could just have used GMP. Furthermore, I often have two solutions on other occasions: one using inline assembler (which is efficient) and one that is more platform independent, so either kind of answer is useful for me.
edit: I use uint64_t for my parts, but I would like to be able to change that depending on the machine; I am willing to do some different implementations with some #ifdefs to achieve that.
I'm going to make a non-portable assumption here: namely, that unsigned long long has more significant bits than a double's mantissa. (This is true on all modern desktop systems that I know of.)
First, convert the most significant integer(s) into an unsigned long long. Then convert that to a double S. Let M be the number of integers below those used in that first step. Multiply S by (1ull << (sizeof(unsigned)*CHAR_BIT*M)). (If shifting by more than 63 bits, you will have to split it into separate shifts and do some arithmetic.) Finally, if the original number was negative, multiply the result by -1.
This rounds a lot, but even with this rounding, due to the above assumption, no digits are lost that wouldn't be lost anyway with the conversion to a double. I think this is a similar process to what Mark Ransom said, but I'm not certain.
For converting from a double to a big integer, first separate the mantissa into a double M and the exponent into an int E, using frexp. Multiply M by UNSIGNED_MAX, and store that result in an unsigned R. If std::numeric_limits<double>::radix is 2 (I don't know whether it is for x86/x64), you can easily shift R left by E-(sizeof(unsigned)*CHAR_BIT) bits and you're done. Otherwise the result will instead be R*(E**(sizeof(unsigned)*CHAR_BIT)) (where ** means to the power of).
If performance is a concern, you can add an overload to your bignum class for multiplying by std::integral_constant<unsigned, 10>, which simply returns (LHS<<3)+(LHS<<1). You can similarly optimize other constants if you wish.
This blog post might help you: Clarifying and optimizing Integer>>asFloat.
Otherwise, you can get an idea of the algorithm from this SO question: Converting from unsigned long long to float with round to nearest even.
You don't say explicitly, but I assume your library is integer only and the unsigned longs are 32 bit and binary (not decimal). The conversion to double is simple, so I'll tackle that first.
Start with a multiplier for the current piece; if the number is positive it will be 1.0, if negative it will be -1.0. For each of the unsigned long ints in your bignum, multiply by the current multiplier and add it to the result, then multiply your multiplier by pow(2.0, 32) (4294967296.0) for 32 bits or pow(2.0, 64) (18446744073709551616.0) for 64 bits.
You can optimize this process by working with only the 2 most significant values. You need to use 2 even if the number of bits in your integer type is larger than the precision of a double, since the number of used bits in the most significant value might only be 1. You can generate the multiplier by taking a power of 2 to the number of skipped bits, e.g. pow(2.0, most_significant_count*sizeof(bit_array[0])*8). You can't use a bit shift as given in another answer because it will overflow after the first value.
To convert from double, you can get the exponent and mantissa separated from each other with the frexp function. The mantissa will come as a floating point value between 0.5 and 1.0 so you'll want to multiply it by pow(2.0, 32) or pow(2.0, 64) to convert it to an integer, then adjust the exponent by -32 or -64 to compensate.
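A minimal sketch of the two-most-significant-values optimization described above, assuming 32-bit pieces stored least significant first (names are my own); std::ldexp applies the power-of-two scaling in one step, and the skipped words only affect bits beyond a double's precision:
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// pieces[0] is the least significant 32-bit word.
double toDouble(const std::vector<std::uint32_t>& pieces, bool negative) {
    std::size_t n = pieces.size();
    double top = 0.0;
    if (n >= 1) top = pieces[n - 1];
    if (n >= 2) top = top * 4294967296.0 + pieces[n - 2];  // fold in the next word
    // Scale by 2 to the number of bits skipped below the top two words.
    int skipped = static_cast<int>((n >= 2 ? n - 2 : 0) * 32);
    double result = std::ldexp(top, skipped);
    return negative ? -result : result;
}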
To go from a big integer to a double, just do it the same way you parse numbers. For example, you parse the number "531" as "1 + (3 * 10) + (5 * 100)". Compute each portion using doubles, starting with the least significant portion.
To go from a double to a big integer, do it the same way but in reverse starting with the most significant portion. So, to convert 531, you first see that it's more than 100 but less than 1000. You find the first digit by dividing by 100. Then you subtract to get the remainder of 31. Then find the next digit by dividing by 10. And so on.
Of course, you won't be using tens (unless you store your big integers as digits). Exactly how you break it apart depends on how your big integer class is constructed. For example, if it uses 64-bit units, then you'll use powers of 2^64 instead of powers of 10.
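In the same spirit, a minimal sketch of the double-to-pieces direction with base-2^32 units (names are my own): peel off the low "digit" with fmod, then divide by 2^32, which is exact for doubles because it only adjusts the exponent, just as you would peel decimal digits with % 10:
#include <cmath>
#include <cstdint>
#include <vector>

// Assumes value is nonnegative; any fractional part is truncated.
std::vector<std::uint32_t> fromDouble(double value) {
    std::vector<std::uint32_t> pieces;   // least significant first
    value = std::floor(value);
    while (value >= 1.0) {
        pieces.push_back(static_cast<std::uint32_t>(std::fmod(value, 4294967296.0)));
        value = std::floor(value / 4294967296.0);
    }
    return pieces;
}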