ADT Integer class questions - C++

I am pretty new to programming and I have to build an Abstract Data Type (ADT) for integer numbers.
I've browsed the web for tips, examples, and tutorials, but I couldn't find anything useful, so I hope I will get some answers here.
I thought a lot about how I should format the ADT that stores my integer, and I'm thinking of something like this:
int length; // stores the number of digits (a limit, since these numbers can grow without bound)
int* digits; // stores the digits of my number, with dimension equal to length
Now I'm confused about how I should tackle the sign representation. Is it OK to hold the sign in a char, something like char sign?
But then comes the question of what to do when I have to add and multiply two integers, and what about the cases where these operations overflow?
So, if some of you have ideas about how I should represent the number (the format) and how I should do the multiplication and addition, I would be very grateful. I don't need any code, I'm in the learning stage; just some ideas. Thank you.

One good way to do this is to store the sign as a bool (e.g. bool is_neg;). That way it's completely clear what that data means (versus a char, where it's not entirely clear).
You might want to store each digit in an unsigned short (or, if you want to be precise about the width, uint16_t). Then, when you multiply two digits, you can just multiply them as unsigned ints (uint32_t): the low 16 bits are your result digit and the overflow is in the high 16 bits. You can then add this into the result array fairly easily. You know that the product of an n-bit number and a k-bit number is at most n + k bits long, so you can preallocate your result array to that size and worry about removing extra zeros later.
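To make that concrete, here is a minimal sketch of one digit-pair multiply under those assumptions (the helper name mul_digit is illustrative, not from the answer):

#include <cstdint>

// The 32-bit product of two 16-bit digits splits into a low result digit
// and a high "overflow" digit that carries into the next position.
void mul_digit(uint16_t a, uint16_t b, uint16_t *lo, uint16_t *hi)
{
    uint32_t p = (uint32_t)a * b;  // at most 0xFFFF * 0xFFFF = 0xFFFE0001
    *lo = (uint16_t)(p & 0xFFFF);  // low 16 bits: digit at this position
    *hi = (uint16_t)(p >> 16);     // high 16 bits: carry into the next digit
}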
Hope this helps, and let me know if you want more tips.

The first design decision you have to make is the choice of a basis.
You seem to lean towards plain decimal. That could be unpacked (one full byte per digit, numeric or ASCII representation) or packed digit pairs (Binary-Coded Decimal, two four-bit digits per byte).
Other schemes are more convenient for faster operations: basis being a power of 2 or a power of 10, fitting in a byte, a short, an int...
Powers of 10 have the benefit that conversion to and from base 10 can be done word by word.
Addition is an easy matter: add the words in pairs and handle the carries. The same goes for subtraction, with borrows.
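As a concrete sketch of that word-by-word addition, assuming base 10^9 limbs stored little-endian in 32-bit words (the function name and layout here are illustrative, not from the answer):

#include <cstdint>
#include <vector>

std::vector<uint32_t> add(const std::vector<uint32_t>& a,
                          const std::vector<uint32_t>& b)
{
    const uint32_t BASE = 1000000000;      // 10^9 fits comfortably in 32 bits
    std::vector<uint32_t> r;
    uint32_t carry = 0;
    for (size_t i = 0; i < a.size() || i < b.size(); ++i) {
        uint64_t s = (uint64_t)carry
                   + (i < a.size() ? a[i] : 0)
                   + (i < b.size() ? b[i] : 0);
        r.push_back((uint32_t)(s % BASE)); // this word of the sum
        carry = (uint32_t)(s / BASE);      // at most 1
    }
    if (carry) r.push_back(carry);
    return r;
}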
Multiplication is a whole different story if you care about efficiency. The written method taught at school can be used, but it requires length1 x length2 digit operations. For long numbers, more efficient (but also more complex) methods are preferred, such as Karatsuba multiplication (http://en.wikipedia.org/wiki/Multiplication_algorithm#Karatsuba_multiplication).

16,32 etc.-byte variable for a utopic application

The following lines are part of my really "useless" C++ program, which calculates powers of 2 only up to 2^63 instead of the 2^128 that is being asked for, due to the size of the unsigned long long variable, which is proposed for numbers with 15 digits of accuracy...!!!
Just that... I need a variable of 16 bytes or more, which is not provided by:
- __int128 (Visual Studio 2010 turns the letters blue, but gives a red underline and an error when debugging: "keyword not supported on this architecture"; 32-bit system)
- Boost projects... after I googled it; being a newcomer, "I was lost in the universe" when I came across professional sites (does boost::bigint exist? not a rhetorical question)
(- Multiple longs, of course)
#include <iostream>
#include <iomanip>
#include <cstdlib>
using namespace std;

int main()
{
    unsigned long long result;
    int i;
    const int max = 128;
    for (i = 0, result = 1ull; i <= max; ++i, result *= 2)
        cout << setw(3) << i << setw(32) << result << endl;
    system("pause");
    return 0;
}
You could find a "bigint" implementation in C++ that implements operator<<() to output to ostreams, but if all you want to do is print powers of 2 to a console or a text string, and you don't need to do general "bigint" math (beyond computing those powers of 2), there's a simpler approach that will give you powers of 2 out to pretty much as large as you have the patience to look through:
Store each decimal digit (0 through 9) as a separate entity, perhaps in an array of chars or ints, or in a std::list of digits. Using a std::list has the advantage that you can easily add new digit places at the front as your number gets bigger, but you can do that almost as easily by storing the digits in reverse order in a std::vector (of course, to print them you iterate from the back to the front so the digits come out in their proper order).
Once you've decided how to store the digits, the algorithm for doubling the number is as follows: iterate over the digits of the large number, doubling each (mod 10, of course) and noting any overflow (i.e. a bool that says whether the result, before the % 10, was greater than 9). On the next digit, double it first, then add 1 if the previous digit overflowed; if that result overflows too, carry the overflow on to the next digit, and so on through all of the digits. If doubling the last digit (plus any carry from the previous one) overflows, append a new digit set to 1. Then print the resulting list of digits.
With this algorithm you can print powers of 2 as large as you like. Of course they're not "numbers" in the sense that you could use them directly in C++ math ops.
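A minimal sketch of that doubling loop, storing the digits in reverse order in a std::vector as suggested:

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> digits{1};          // 2^0, least significant digit first
    for (int p = 1; p <= 128; ++p) {
        int carry = 0;
        for (int& d : digits) {
            d = d * 2 + carry;           // double, then add the carry-in
            carry = d / 10;              // carry-out is 0 or 1
            d %= 10;
        }
        if (carry) digits.push_back(1);  // the number grew by one digit
        std::cout << "2^" << p << " = ";
        for (auto it = digits.rbegin(); it != digits.rend(); ++it)
            std::cout << *it;            // print back to front
        std::cout << '\n';
    }
}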
SSE and AVX intrinsics go up to 256 bits on a modern CPU. The types are named __m128i and __m256i.
A 128-bit integer is a really big integer. You should implement your own data type: create an array of shorts, store the digits there, and implement multiplication just as you do in your math notebook; that's probably the simplest approach.
This one is not finished, of course! The '2' is still missing ;)
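A sketch of that notebook-style multiplication over arrays of 16-bit digits (little-endian; the names are illustrative, and products are accumulated in 32 bits so the carries cannot overflow):

#include <cstdint>
#include <vector>

std::vector<uint16_t> mul(const std::vector<uint16_t>& a,
                          const std::vector<uint16_t>& b)
{
    std::vector<uint16_t> r(a.size() + b.size(), 0); // product always fits
    for (size_t i = 0; i < a.size(); ++i) {
        uint32_t carry = 0;
        for (size_t j = 0; j < b.size(); ++j) {
            // digit product + previous partial sum + carry fits in 32 bits
            uint32_t cur = (uint32_t)a[i] * b[j] + r[i + j] + carry;
            r[i + j] = (uint16_t)(cur & 0xFFFF);
            carry = cur >> 16;
        }
        r[i + b.size()] = (uint16_t)carry; // this slot is still zero here
    }
    return r; // leading zero digits can be trimmed by the caller
}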

Conversion Big Integer <-> double in C++

I am writing my own long-arithmetic library in C++ for fun, and it is already fairly complete; I have even implemented several cryptographic algorithms with it. But one important thing is still missing: I want to convert doubles (and floats/long doubles) into my numbers and vice versa. My numbers are represented as a variable-sized array of unsigned long ints plus a sign bit.
I tried to find the answer with Google, but the problem is that people rarely implement such things themselves, so I only found material about how to use Java's BigInteger and the like.
Conceptually, it is rather easy: I take the mantissa, shift it by the number of bits dictated by the exponent, and set the sign. In the other direction, I truncate the number so that it fits into the mantissa and set the exponent based on my log2 function.
But I am having a hard time figuring out the details. I could play around with some bit patterns and cast them to a double, but I haven't found an elegant way to do that; or I could "calculate" the value by starting with 2, exponentiating, multiplying, etc., but that doesn't seem very efficient.
I would appreciate a solution that doesn't use any library calls, because I am trying to avoid libraries in this project; otherwise I could just have used GMP. Furthermore, on several other occasions I keep two solutions, one using inline assembler which is efficient and one that is more platform-independent, so either kind of answer is useful to me.
Edit: I use uint64_t for my parts, but I would like to be able to change that depending on the machine; I am willing to write a few different implementations with some #ifdefs to achieve that.
I'm going to make a non-portable assumption here: namely, that unsigned long long has more bits of precision than double. (This is true on all modern desktop systems that I know of.)
First, convert the most significant integer(s) into an unsigned long long. Then convert that to a double S. Let M be the number of integers below those used in the first step. Multiply S by (1ull << (sizeof(unsigned) * CHAR_BIT * M)). (If shifting by more than 63 bits, you will have to split that into separate shifts and do some arithmetic.) Finally, if the original number was negative, multiply the result by -1.
This rounds a lot, but even with this rounding, thanks to the above assumption, no digits are lost that wouldn't be lost anyway in the conversion to a double. I think this is a similar process to what Mark Ransom described, but I'm not certain.
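A minimal sketch of that approach, assuming little-endian base-2^32 limbs and using std::ldexp for the scaling, which avoids building a huge shifted integer (the function name is illustrative):

#include <cmath>
#include <cstdint>
#include <vector>

double to_double(const std::vector<uint32_t>& limbs, bool negative)
{
    if (limbs.empty()) return 0.0;
    size_t top = limbs.size() - 1;
    unsigned long long hi = limbs[top];  // fold in the most significant limb(s)
    size_t skipped = top;                // limbs below the ones we used
    if (top > 0) {
        hi = (hi << 32) | limbs[top - 1];
        --skipped;
    }
    double s = std::ldexp((double)hi, 32 * (int)skipped); // scale by 2^(32*M)
    return negative ? -s : s;
}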
For converting from a double to a big integer, first separate the mantissa into a double M and the exponent into an int E using frexp. Multiply M by 2^32 (one more than UINT_MAX) and store that result in an unsigned R. If std::numeric_limits<double>::radix is 2 (it is on x86/x64), you can simply shift R left by E - sizeof(unsigned)*CHAR_BIT bits and you're done. Otherwise the result will instead be R * (radix ** (E - sizeof(unsigned)*CHAR_BIT)) (where ** means "to the power of").
If performance is a concern, you can add an overload to your bignum class for multiplying by std::integral_constant<unsigned, 10>, which simply returns (LHS << 3) + (LHS << 1). You can similarly optimize other constants if you wish.
This blog post might help you: Clarifying and optimizing Integer>>asFloat.
Otherwise, you can get an idea of an algorithm from this SO question: Converting from unsigned long long to float with round to nearest even.
You don't say explicitly, but I assume your library is integer-only and the unsigned longs are 32-bit and binary (not decimal). The conversion to double is simple, so I'll tackle that first.
Start with a multiplier for the current piece: if the number is positive it will be 1.0, if negative -1.0. For each of the unsigned long ints in your bignum, multiply it by the current multiplier and add the product to the result, then multiply your multiplier by pow(2.0, 32) (4294967296.0) for 32 bits or pow(2.0, 64) (18446744073709551616.0) for 64 bits.
You can optimize this process by working with only the 2 most significant values. You need to use 2 even if the number of bits in your integer type is larger than the precision of a double, since the most significant value might use only 1 bit. You can generate the multiplier by raising 2 to the number of skipped bits, e.g. pow(2.0, most_significant_count*sizeof(bit_array[0])*8). You can't use a bit shift as suggested in another answer, because it will overflow after the first value.
To convert from double, you can separate the exponent and mantissa with the frexp function. The mantissa comes out as a floating-point value between 0.5 and 1.0, so you'll want to multiply it by pow(2.0, 32) or pow(2.0, 64) to convert it to an integer, then adjust the exponent by -32 or -64 to compensate.
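A sketch of that direction under the same little-endian base-2^32 assumption; std::frexp splits the double and std::ldexp peels off up to 32 bits at a time (negative values and subnormals are ignored for brevity, and the function name is illustrative):

#include <cmath>
#include <cstdint>
#include <vector>

std::vector<uint32_t> from_double(double d)
{
    std::vector<uint32_t> limbs;
    if (d < 1.0) return limbs;            // truncates to zero
    int exp;
    double m = std::frexp(d, &exp);       // d == m * 2^exp, 0.5 <= m < 1
    int nlimbs = (exp + 31) / 32;
    limbs.assign(nlimbs, 0);
    for (int i = nlimbs - 1; i >= 0; --i) {
        int bits = (i == nlimbs - 1) ? exp - 32 * i : 32; // top limb is partial
        m = std::ldexp(m, bits);          // shift 'bits' places left
        uint32_t chunk = (uint32_t)m;     // integer part = this limb
        m -= chunk;                       // keep the fractional remainder
        limbs[i] = chunk;
    }
    return limbs;                         // little-endian; fraction is dropped
}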
To go from a big integer to a double, just do it the same way you parse numbers. For example, you parse "531" as 1 + (3 * 10) + (5 * 100). Compute each portion using doubles, starting with the least significant portion.
To go from a double to a big integer, do it the same way but in reverse, starting with the most significant portion. So, to convert 531, you first see that it's more than 100 but less than 1000. You find the first digit by dividing by 100, then subtract to get the remainder of 31, then find the next digit by dividing by 10, and so on.
Of course, you won't be using tens (unless you store your big integers as decimal digits). Exactly how you break the number apart depends on how your big-integer class is constructed. For example, if it uses 64-bit units, you'll use powers of 2^64 instead of powers of 10.

How do I find the largest integer fully supported by hardware arithmetics?

I am implementing a BigInt class that must support arbitrary-precision operations on integers.
Quote from "The Algorithm Design Manual" by S.Skiena:
What base should I do [editor's note: arbitrary-precision] arithmetic in? - It is perhaps simplest to implement your own high-precision arithmetic package in decimal, and thus represent each integer as a string of base-10 digits. However, it is far more efficient to use a higher base, ideally equal to the square root of the largest integer supported fully by hardware arithmetic.
How do I find the largest integer fully supported by hardware arithmetic? If I understand correctly, since my machine is an x64-based PC, the largest fully supported integer type should be 64 bits wide (http://en.wikipedia.org/wiki/X86-64 - Architectural features: 64-bit integer capability), so I should use base 2^32. But is there a way in C++ to get this size programmatically, so that I can typedef my base_type to it?
You might be searching for std::uintmax_t and std::intmax_t.
static_cast<unsigned>(-1) is the maximum unsigned int, i.e. all bits set to 1. Is that what you are looking for?
You can also use std::numeric_limits<unsigned>::max() or UINT_MAX; all of these yield the same result. What these values tell you is the maximum capacity of the unsigned type, i.e. the maximum value that can be stored in it.
int (and, by extension, unsigned int) is the "natural" size for the architecture, so a type that has half the bits of an int should work reasonably well. Beyond that, you really need to configure for the particular hardware: the type of the storage unit and the type of the calculation unit should be typedefs in a header, their types selected to match the particular processor. Typically you'd make this selection after running some speed tests.
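A sketch of what such a header might look like, picking the digit type as half the width of the widest arithmetic type so that digit products always fit (the type names are illustrative):

#include <cstdint>

// Digit products must fit in the wide type without overflow, so the digit
// is half the width of the widest convenient hardware integer.
typedef std::uintmax_t wide_t;  // typically 64 bits on x64
typedef std::uint32_t digit_t;  // adjust per platform after profiling
static_assert(sizeof(wide_t) >= 2 * sizeof(digit_t),
              "digit_t products must fit in wide_t");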
INT_MAX doesn't help here; it tells you the largest value that can be stored in an int, which may or may not be the largest value that the hardware can support directly. Similarly, INTMAX_MAX is no help, either; it tells you the largest value that can be stored as an integral type, but doesn't tell you whether operations on such a value can be done in hardware or require software emulation.
Back in the olden days, the rule of thumb was that operations on ints were done directly in hardware, and operations on longs were done as multiple integer operations, so operations on longs were much slower than operations on ints. That's no longer a good rule of thumb.
Things are not so black and white. There are MANY issues here, and you may have other things worth considering. I've now written two variable-precision tools (in MATLAB, VPI and HPF) and I chose different approaches in each. It also matters whether you are writing an integer form or a high-precision floating-point form.
The difference is that integers can grow without bound in the number of digits, whereas if you are doing a floating-point implementation with a user-specified number of digits, you always know the number of digits in the mantissa. It is fixed.
First of all, it is simplest to use a single integer for each decimal digit. This makes many things work nicely, so I/O is easy. It is a bit inefficient in terms of storage, though. Adds and subtracts are easy, and if you use one integer per digit, multiplies are easy too. In MATLAB, for example, conv is pretty fast, though it is still O(n^2). (I think GMP uses an FFT multiply, so faster yet.)
But assuming you use a basic conv-based multiply, you need to worry about overflow for numbers with a huge number of digits. For example, suppose I store decimal digits as 8-bit signed integers; using conv followed by carries, I can do a multiply. Take the number 9999:
N = repmat(9,1,4)
N =
9 9 9 9
conv(N,N)
ans =
81 162 243 324 243 162 81
Thus even to form the product 9999*9999, I'd need to be careful, as the digits will overflow an 8-bit signed integer. Even if I use 16-bit integers to accumulate the convolution products, a multiply between a pair of 1000-digit integers can cause an overflow:
N = repmat(9,1,1000);
max(conv(N,N))
ans =
81000
So if you are worried about the possibility of millions of digits, you need to watch out.
One alternative is to use what I call migits, essentially working in a base higher than 10. By using base 1000000 and doubles to store the elements, I can store 6 decimal digits per element. A convolution will still cause overflows for larger numbers, though:
N = repmat(999999,1,10000);
log2(max(conv(N,N)))
ans =
53.151
Thus a convolution between two numbers of 10000 base-1000000 migits each (60000 decimal digits) will exceed the point beyond which a double cannot represent an integer exactly (2^53).
So again, if you will use numbers with millions of digits, beware. A nice thing about using a higher base of migits with a convolution-based multiply is that, since the conv operation is O(n^2), going from base 10 to base 100 gives you a 4-to-1 speedup, and going to base 1000 yields a 9-to-1 speedup in the convolutions.
Finally, the use of a base other than 10 for migits makes it logical to implement guard digits (for floats). In floating-point arithmetic, you should never trust the least significant bits of a computation, so it makes sense to keep a few digits hidden in the shadows. So when I wrote my HPF tool, I gave the user control of how many guard digits would be carried along. This is not an issue for integers, of course.
There are many other issues. I discuss them in the docs carried with those tools.

Extracting digits from a float C++

Given a double x, assume that it lies in [0,1]; for example, x = 0.3.
In binary (keeping 10 digits after the binary point), it is represented as
x = 0.0100110011...
I want to write some C++ code that extracts the 10 digits shown after the binary point. In other words, I want to extract the integer (0100110011)_2.
Now, I am quite new to bit shifting, and the (naive) solution I have for the problem is the following:
int temp = (int)(x * (1 << 10));
Then temp in binary will have the necessary 10 digits.
Is this a safe way to perform the above process, or are there safer / more correct ways to do this?
Note: I don't want the digits extracted in the form of a character array; I specifically want an integer (or unsigned integer). The reason is that in the generation of octrees, points in space are given hash keys based on their position, known as Morton keys. These keys are usually stored as integers; after computing the integer keys for all the points, they are sorted. Theoretically these keys can be obtained by scaling the coordinates to [0,1], extracting the bits, and interleaving them.
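To illustrate that use case, here is a sketch of the scale-truncate-interleave pipeline for a 2-D Morton key with 10 bits per axis (the function name is illustrative; an input of exactly 1.0 would need clamping to the top key):

#include <cstdint>

uint32_t morton2d(double x, double y)
{
    uint32_t xi = (uint32_t)(x * (1u << 10)); // scale [0,1) to a 10-bit integer
    uint32_t yi = (uint32_t)(y * (1u << 10));
    uint32_t key = 0;
    for (int b = 0; b < 10; ++b) {
        key |= ((xi >> b) & 1u) << (2 * b);     // bit b of x -> even position
        key |= ((yi >> b) & 1u) << (2 * b + 1); // bit b of y -> odd position
    }
    return key;
}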
Use memcpy to copy the double into an array of 32-bit numbers, like this:
#include <cstring>
unsigned int b[2]; // assumes unsigned int is 32 bits
memcpy(b, &x, 8);
The 10 most significant binary digits are in b[0] or b[1], depending on whether your machine is big- or little-endian.
EDIT: The same can be achieved with a cast instead of memcpy, but that would violate strict aliasing rules. An alternative is to use a union.
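For instance, a memcpy-based sketch that pulls the IEEE-754 fields apart explicitly (assumes x in (0,1) and ignores subnormals; frac_bits is an illustrative name):

#include <cstdint>
#include <cstring>

uint32_t frac_bits(double x, int nbits) // returns floor(x * 2^nbits)
{
    uint64_t bits;
    std::memcpy(&bits, &x, sizeof bits);          // well-defined, unlike a cast
    int exp = (int)((bits >> 52) & 0x7FF) - 1023; // unbiased exponent (< 0 here)
    uint64_t frac = (bits & ((1ull << 52) - 1))   // 52 stored mantissa bits...
                  | (1ull << 52);                 // ...plus the implicit 1
    int shift = 52 - exp - nbits;                 // align the binary point
    return shift >= 64 ? 0 : (uint32_t)(frac >> shift);
}

For x = 0.3 and nbits = 10 this returns 307, i.e. 0100110011 in binary.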
Read this: http://chrishecker.com/images/f/fb/Gdmfp.pdf
If you can grok what that article is telling you, you can derive the algorithm you're looking for. Just remember the bias factor in the exponent and the implicit leading one in the mantissa and the rest should fall into place.

Bit manipulation for big integer classes?

I'm having a problem coming up with an algorithm for a big-integer class in C++. My initial idea was to use arrays/lists, but it's very inefficient. I then discovered classes like the following:
http://www.codeproject.com/KB/cpp/CppIntegerClass.aspx
However, I find that approach really confusing. I don't know how to work with bit manipulation, and I barely understood the code. Could someone please explain to me how to utilise bit manipulation and how it works? Eventually I would like to create my own big-integer class, but I'm barely a novice programmer and I've only just learned how to use classes.
Basically my question is:
How do I use bit manipulation to create a big integer class? How does it work??
Thanks!
Start by reading up on binary numbers in general. That page shows how the common arithmetic operations (addition, subtraction etc) work on binary numbers, i.e. how the numbers are manipulated bit by bit to get the desired result.
Mapping that into a programming language such as C++ should be pretty straightforward once you know why bit-manipulating operations are being used.
In my experience, the most obvious bit-oriented thing needed when implementing something like this is bit testing, to check for overflow. Let's say you represent your big binary number as an array of uint16_t, i.e. chunks of 16 bits. When implementing addition, you will start at the least significant end of both numbers, and add those. If the sum is larger than 65,535, you need to "carry" one to the next uint16_t, just as when you add decimal numbers one digit at a time.
This can be implemented with a test like so:
#include <cstdint>

const uint16_t *number1;
const uint16_t *number2;
/* Assume code goes here to set up the number1 and number2 pointers. */

/* Compute the sum of the least significant 16-bit digits. */
uint16_t carry = 0;
uint32_t sum = number1[0] + number2[0];
/* One way of testing for overflow: */
if (sum & (1 << 16))
    carry = 1;
Here, the 1 << 16 expression creates a mask by shifting a 1 sixteen steps to the left. The & (bitwise AND) operator tests the sum against the mask; the result will be non-zero (i.e. true, in C++) if bit 16 is set in sum.
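Extending that test into a full addition loop over n-digit numbers might look like this sketch (little-endian digit order; the names are illustrative):

#include <cstdint>
#include <cstddef>

void add_bignum(const uint16_t *number1, const uint16_t *number2,
                uint16_t *result, size_t n)
{
    uint16_t carry = 0;
    for (size_t i = 0; i < n; ++i) {
        uint32_t sum = (uint32_t)number1[i] + number2[i] + carry;
        result[i] = (uint16_t)(sum & 0xFFFF); // low 16 bits are this digit
        carry = (uint16_t)(sum >> 16);        // bit 16 is the carry, 0 or 1
    }
    result[n] = carry; // result must have room for n + 1 digits
}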