I want to know the time complexity of this piece of code, which is a Russian peasant multiplication implementation:
unsigned long long int russian(unsigned long long int a, unsigned long long int b) {
    unsigned long long int res = 0;
    while (b > 0) {
        if (b & 1)
            res = res + a;
        a <<= 1;
        b >>= 1;
    }
    return res % mod;
}
As far as I know, its time complexity is either log₂(b) or log₂(a) (depending upon our choice of a or b). Any expert comment?
The time complexity of the piece of code you supplied is, of course, O(1), because there is an upper bound on how long it can take, and it will never exceed that bound for any input.
Presumably, that's not the answer to the question you actually mean to ask. There are several different things you might be interested in, and they all have different answers.
(Also, since you seem to be trying to do a modular multiply, you really should be reducing all relevant quantities inside the loop, so that you don't overflow and so that you can use - instead of %.)
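For illustration, here is a minimal sketch of that version (not the asker's code: `mod` is passed as a parameter here, and it is assumed to be less than 2^63 so that doubling and adding already-reduced values cannot overflow an unsigned long long):

unsigned long long russian_mod(unsigned long long a, unsigned long long b,
                               unsigned long long mod) {
    unsigned long long res = 0;
    a %= mod;                             // keep every quantity below mod from the start
    while (b > 0) {
        if (b & 1) {
            res += a;
            if (res >= mod) res -= mod;   // subtraction instead of %
        }
        a += a;                           // same as a <<= 1, but kept reduced
        if (a >= mod) a -= mod;
        b >>= 1;
    }
    return res;
}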
You might be interested in having a precise estimate of the wall-clock time. Obtaining this will require gathering some empirical data, but it will probably look something like
A + B·bitlength(b) + C·popcount(b)
(popcount is the number of 1s in the binary expansion) for some constants A, B, and C. However, CPU hardware is rather complicated, and it might be extremely involved to get a good estimate for the third term above, since branch prediction hardware might do some odd things.
And A, B, and C probably aren't even constants; they will depend to some extent on whether this function gets inlined, and on the sort of code surrounding the places where it's used.
Now, you might want a more abstract answer where b can be of arbitrary size, rather than constrained to the size of an unsigned long long, and want to count the number of arithmetic operations. This is very clearly just the bit length of b, or, as the comments indicate, O(lg(b)) (where lg is the log base 2).
Now, you might be interested not just in the arithmetic operations, but in their cost. And you might be interested in a being of arbitrary size rather than constrained to be an unsigned long long. A useful unit of measure would be bit operations: e.g. doing a left shift by 1 on an N-bit number ought to cost O(N) bit operations.
I'm pretty sure the loop works out to O(lg(a)lg(b) + lg(b)^2) bit operations: there are lg(b) iterations, and by the last of them a has been shifted up to roughly lg(a) + lg(b) bits, so each shift and addition costs O(lg(a) + lg(b)) bit operations. (This doesn't include the % operation you do afterwards.)
I have an integer constant, let's say:
expr x = ctx.int_const("x");
What I'm trying to do is apply random constraints to the individual bits of x. However, it turns out you cannot use bit-wise operations with integer sorts, only with bit-vectors. My initial approach, before realizing this, was:
for (int i = 0; i < 32; i++) {
    int mask = 0x00000001 << i;
    if (rand() % 2)
        solver.add((x & mask) == 0);
    else
        solver.add((x & mask) != 0);
}
This of course does not work, as Z3 throws an exception.
After a bit of digging through the API, I found the Z3_mk_int2bv function, and figured I'd give that a try:
for (int i = 0; i < 32; i++) {
    if (rand() % 2)
        solver.add(z3::expr(ctx(), Z3_mk_int2bv(ctx(), 32, v())).extract(i, i) == ctx().bv_val(0, 1));
    else
        solver.add(z3::expr(ctx(), Z3_mk_int2bv(ctx(), 32, v())).extract(i, i) != ctx().bv_val(0, 1));
}
While no exception gets thrown on the above solver.add calls, the solving time suddenly exploded - so much so that I have yet to see how long it actually takes. Adding similar expressions using bit-vectors does not take a major toll on the solver, with the solving time being less than a second as far as I can tell.
I'm wondering what it is about the above expression that could cause the solver performance to degrade so badly, and whether there's a better approach?
int2bv is expensive. There are many reasons for this, but the bottom line is that the solver now has to negotiate between the theory of integers and the theory of bit-vectors, and the heuristics probably don't help much. Notice that to do a proper conversion the solver has to perform repeated divisions, which is quite costly. Furthermore, talking about the bits of a mathematical integer doesn't make much sense to start with: What if it's a negative number? Do you assume some sort of infinite-width 2's complement representation? Or is it some other mapping? All this makes it harder to reason with such conversions. And for a long time int2bv was uninterpreted in z3 for this and similar reasons. You can find many posts regarding this on Stack Overflow, for instance see here: Z3 : Questions About Z3 int2bv?
Your best bet would be to simply use bit-vectors to start with. If you're reasoning about machine arithmetic, why not model everything with bit-vectors to start with?
If you're stuck with the Int type, my recommendation would be to simply stick to the mod function, making sure the second argument is a constant. This might avoid some of the complexity, but without looking at actual problems, it's hard to opine any further.
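As a rough sketch of the bit-vector route (untested, and assuming the z3++ C++ API; x is declared as a 32-bit bit-vector rather than an Int):

#include <z3++.h>
#include <cstdlib>

int main() {
    z3::context ctx;
    z3::solver solver(ctx);
    z3::expr x = ctx.bv_const("x", 32);   // 32-bit bit-vector, so bit operations are native
    for (int i = 0; i < 32; i++) {
        // extract(i, i) yields the 1-bit vector holding bit i of x
        if (rand() % 2)
            solver.add(x.extract(i, i) == ctx.bv_val(0, 1));
        else
            solver.add(x.extract(i, i) != ctx.bv_val(0, 1));
    }
    return solver.check() == z3::sat ? 0 : 1;
}

With the value kept as a bit-vector throughout, no int2bv conversion is needed and the per-bit constraints stay cheap.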
I have a 32 bit floating point number f (known to be positive) that I need to convert to a 32 bit unsigned integer. Its magnitude might be too large to fit. Furthermore, there is downstream computation that requires some headroom. I can compute the maximum acceptable value m as a 32 bit integer. How do I efficiently determine, in C++11 on a constrained 32 bit machine (ARM M4F), whether f <= m mathematically? Note that the types of the two values don't match. The following three approaches each have their issues:
static_cast<uint32_t>(f) <= m: I think this triggers undefined behaviour if f doesn't fit in the 32 bit integer
f <= static_cast<float>(m): if m is too large to be converted exactly, the converted value could be larger than m such that the subsequent comparison will produce the wrong result in certain edge cases
static_cast<double>(f) <= static_cast<double>(m): is mathematically correct, but requires casting to, and working with double, which I'd like to avoid for efficiency reasons
Surely there must be a way to convert an integer to a float directly with specified rounding direction, i.e. guaranteeing the result not to exceed the input in magnitude. I'd prefer a C++11 standard solution, but in the worst case platform intrinsics could qualify as well.
I think your best bet is to be a bit platform specific. 2³² can be represented precisely in floating point. Check if f is too large to fit at all, and then convert to unsigned and check against m.
const float unsigned_limit = 4294967296.0f;
bool ok = false;
if (f < unsigned_limit)
{
    const auto uf = static_cast<unsigned int>(f);
    if (uf <= m)
    {
        ok = true;
    }
}
Not fond of the double comparison, but it's clear.
If f is usually significantly less than m (or usually significantly greater), one can test against float(m)*0.99f (respectively float(m)*1.01f), and then do the exact comparison in the unusual case. That is probably only worth doing if profiling shows that the performance gain is worth the extra complexity.
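A rough sketch of that fast-path/slow-path structure (hypothetical helper name; like the code above, the exact fallback compares the truncated value of f against m, and it assumes f >= 0 and m > 0):

#include <cstdint>

bool fits_with_headroom(float f, uint32_t m) {
    const float lo = static_cast<float>(m) * 0.99f;   // definitely-below threshold
    const float hi = static_cast<float>(m) * 1.01f;   // definitely-above threshold
    if (f < lo) return true;                          // common fast path
    if (f > hi) return false;                         // common fast path
    // Rare borderline case: exact check, as in the answer above.
    if (f >= 4294967296.0f) return false;             // too large for uint32_t at all
    return static_cast<uint32_t>(f) <= m;
}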
In my software I take input values from the user at run time and perform some mathematical operations on them. For simplicity, consider the example below:
int multiply(const int a, const int b)
{
    if (a >= INT_MAX || b >= INT_MAX)
        return 0;
    else
        return a * b;
}
I can check whether the input values are greater than the limits, but how do I check whether the result will be out of limits? It is quite possible that a = INT_MAX - 1 and b = 2. Since the inputs are perfectly valid, the multiplication invokes undefined behaviour, which makes my program meaningless. This means any code executed after it may behave unpredictably and eventually result in a crash. So how do I protect my program in such cases?
This really comes down to what you actually want to do in this case.
For a machine where long or long long (or int64_t) is a 64-bit value, and int is a 32-bit value, you could do (I'm assuming long is 64 bit here):
long x = static_cast<long>(a) * b;
if (x > INT_MAX || x < INT_MIN)
    return 0;
else
    return static_cast<int>(x);
By casting one value to long, the other will have to be converted as well. You can cast both if that makes you happier. The overhead here, above a normal 32-bit multiply, is a couple of clock cycles on modern CPUs, and it's unlikely that you can find a safer solution that is also faster. [You can, in some compilers, add attributes to the if saying that it's unlikely, to encourage branch prediction to "get it right" for the common case of returning x.]
Obviously, this won't work for values whose type is as big as the biggest integer you can deal with (although you could possibly use floating point, but it may still be a bit dodgy, since the precision of float is not sufficient - it could be done with some "safety margin", e.g. comparing against LONG_MAX / 2, if you don't need the entire range of integers). The penalty here is a bit worse, though, especially since transitions between float and integer aren't "pleasant".
Another alternative is to actually test the relevant code with "known invalid values", as long as the rest of the code is "OK" with that. Make sure you test this with the relevant compiler settings, as changing the compiler options will change the behaviour. Note that your code then has to deal with "what do we do when 65536 * 100000 is a negative number", which it didn't expect. Perhaps add something like:
int x = a * b;
if (x < 0) return 0;
[But this only works if you don't expect negative results, of course]
You could also inspect the assembly code generated and understand the architecture of the actual processor [the key here is to understand if "overflow will trap" - which it won't by default in x86, ARM, 68K, 29K. I think MIPS has an option of "trap on overflow"], and determine whether it's likely to cause a problem [1], and add something like
#if (defined(__X86__) || defined(__ARM__))
#error This code needs inspecting for correct behaviour
#endif
return a * b;
One problem with this approach, however, is that even the slightest changes in code, or compiler version may alter the outcome, so it's important to couple this with the testing approach above (and make sure you test the ACTUAL production code, not some hacked up mini-example).
[1] The "undefined behaviour" is undefined to allow C to "work" on processors that have trapping overflows of integer math, as well as the fact that that a * b when it overflows in a signed value is of course hard to determine unless you have a defined math system (two's complement, one's complement, distinct sign bit) - so to avoid "defining" the exact behaviour in these cases, the C standard says "It's undefined". It doesn't mean that it will definitely go bad.
Specifically for the multiplication of a by b the mathematically correct way to detect if it will overflow is to calculate log₂ of both values. If their sum is higher than the log₂ of the highest representable value of the result, then there is overflow.
log₂(a) + log₂(b) < log₂(UINT_MAX)
The difficulty is calculating the log₂ of an integer quickly. For that, there are several bit-twiddling hacks that can be used, like counting bits or counting leading zeros (some processors even have instructions for that). This site has several implementations:
https://graphics.stanford.edu/~seander/bithacks.html#IntegerLogObvious
The simplest implementation could be:
unsigned int log2(unsigned int v)
{
unsigned int r = 0;
while (v >>= 1)
r++;
return r;
}
In your program you then only need to check:
if (log2(a) + log2(b) < MYLOG2UINTMAX)
    return a*b;
else
    printf("Overflow");
The signed case is similar but has to take care of the negative case specifically.
EDIT: My solution is not complete and has an error which makes the test more severe than necessary. The equation works if the log₂ function returns a floating-point value. In the implementation I limited the value to unsigned integers. This means that completely valid multiplications get refused. Why? Because log2(UINT_MAX) is truncated:
log₂(UINT_MAX) = log₂(4294967295) ≈ 31.9999999997, truncated to 31.
We therefore have to change the implementation and replace the constant we compare against with:
#define MYLOG2UINTMAX (CHAR_BIT*sizeof (unsigned int))
You may try this:
if (b > ULONG_MAX / a)   // Need to check a != 0 before this division
    return 0;            // a*b would invoke UB
else
    return a*b;
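Folding in the a != 0 guard mentioned in the comment, the whole check might look like this sketch for unsigned operands (using unsigned long to match the ULONG_MAX limit above; for the signed int in the question, the same idea works against INT_MAX, with extra care for negative operands):

#include <climits>

unsigned long checked_mul(unsigned long a, unsigned long b) {
    if (a != 0 && b > ULONG_MAX / a)
        return 0;        // a*b would not fit in unsigned long
    return a * b;        // safe: the product is representable
}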
In an app I'm profiling, I found that in some scenarios this function is able to take over 10% of total execution time.
I've seen discussion over the years of faster sqrt implementations using sneaky floating-point trickery, but I don't know if such things are outdated on modern CPUs.
The MSVC++ 2008 compiler is being used, for reference... though I'd assume sqrt is not going to add much overhead.
See also here for similar discussion on modf function.
EDIT: for reference, this is one widely-used method, but is it actually much quicker? How many cycles is SQRT anyway these days?
Yes, it is possible even without trickery:
sacrifice accuracy for speed: the sqrt algorithm is iterative, re-implement with fewer iterations.
lookup tables: either just for the start point of the iteration, or combined with interpolation to get you all the way there.
caching: are you always sqrting the same limited set of values? if so, caching can work well. I've found this useful in graphics applications where the same thing is being calculated for lots of shapes the same size, so results can be usefully cached.
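For the caching idea, a minimal sketch (hypothetical helper, using a simple hash map; whether it actually pays off depends entirely on how repetitive the inputs are and on your memory access patterns):

#include <cmath>
#include <unordered_map>

float cached_sqrt(float x) {
    static std::unordered_map<float, float> cache;    // value -> previously computed sqrt
    auto it = cache.find(x);
    if (it != cache.end())
        return it->second;                            // hit: reuse the earlier result
    float r = std::sqrt(x);
    cache.emplace(x, r);
    return r;
}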
Hello from 11 years in the future.
Considering this still gets occasional votes, I thought I'd add a note about performance, which now even more than then is dramatically limited by memory accesses. You absolutely must use a realistic benchmark (ideally, your whole application) when optimising something like this - the memory access patterns of your application will have a dramatic effect on solutions like lookup tables and caches, and just comparing 'cycles' for your optimised version will lead you wildly astray: it is also very difficult to assign program time to individual instructions, and your profiling tool may mislead you here.
On a related note, consider using simd/vectorised instructions for calculating square roots, like _mm512_sqrt_ps or similar, if they suit your use case.
Take a look at section 15.12.3 of intel's optimisation reference manual, which describes approximation methods, with vectorised instructions, which would probably translate pretty well to other architectures too.
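For a flavour of the vectorised route with the more widely available SSE intrinsics (a sketch only; assumes n is a multiple of 4 and that the pointers are 16-byte aligned):

#include <xmmintrin.h>   // SSE

void sqrt_array(const float* in, float* out, int n) {
    for (int i = 0; i < n; i += 4) {
        __m128 v = _mm_load_ps(in + i);        // load 4 floats
        _mm_store_ps(out + i, _mm_sqrt_ps(v)); // 4 square roots in one instruction
    }
}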
There's a great comparison table here:
http://assemblyrequired.crashworks.org/timing-square-root/
Long story short, SSE2's sqrtss is about 2x faster than FPU fsqrt, and an approximation + iteration is about 4x faster than that (8x overall).
Also, if you're trying to take a single-precision sqrt, make sure that's actually what you're getting. I've heard of at least one compiler that would convert the float argument to a double, call double-precision sqrt, then convert back to float.
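A trivial way to make the single-precision intent explicit (so no float-to-double round trip can sneak in):

#include <cmath>

float sqrt_sp(float x) {
    return sqrtf(x);   // single-precision sqrt; std::sqrt(x) on a float also selects the float overload in C++
}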
You're very likely to gain more speed improvements by changing your algorithms than by changing their implementations: Try to call sqrt() less instead of making calls faster. (And if you think this isn't possible - the improvements for sqrt() you mention are just that: improvements of the algorithm used to calculate a square root.)
Since it is used very often, it is likely that your standard library's implementation of sqrt() is nearly optimal for the general case. Unless you have a restricted domain (e.g., if you need less precision) where the algorithm can take some shortcuts, it's very unlikely someone comes up with an implementation that's faster.
Note that, since that function uses 10% of your execution time, even if you manage to come up with an implementation that only takes 75% of the time of std::sqrt(), this will still only bring your execution time down by 2.5%. For most applications users wouldn't even notice this, except if they used a stopwatch to measure.
How accurate do you need your sqrt to be? You can get reasonable approximations very quickly: see Quake3's excellent inverse square root function for inspiration (note that the code is GPL'ed, so you may not want to integrate it directly).
Don't know if you fixed this, but I've read about it before, and it seems that the fastest thing to do is replace the sqrt function with an inline assembly version;
you can see a description of a load of alternatives here.
The best is this snippet of magic:
double inline __declspec (naked) __fastcall sqrt(double n)
{
    _asm fld qword ptr [esp+4]
    _asm fsqrt
    _asm ret 8
}
It's about 4.7x faster than the standard sqrt call with the same precision.
Here is a fast way with a lookup table of only 8KB. The error is ~0.5% of the result. You can easily enlarge the table, thus reducing the error. It runs about 5 times faster than the regular sqrt().
// LUT for fast sqrt of floats. The table consists of 2 parts: half for sqrt(X) and half for sqrt(2X).
const int nBitsForSQRTprecision = 11;                       // Use only the 11 most significant bits of the float's 23-bit mantissa. Using 15 bits instead produces less error but takes more space in memory.
const int nUnusedBits = 23 - nBitsForSQRTprecision;         // Number of mantissa bits we disregard
const int tableSize = (1 << (nBitsForSQRTprecision + 1));   // 2^nBits * 2, because we have 2 halves of the table.
static short sqrtTab[tableSize];
static unsigned char is_sqrttab_initialized = FALSE;        // Becomes TRUE once initialized

// Table of precalculated sqrt() for future fast calculation. Approximates the exact value with an error of about 0.5%.
// Note: To access the bits of a float in C quickly we must misuse pointers.
// More info in: http://en.wikipedia.org/wiki/Single_precision
void build_fsqrt_table(void){
    unsigned short i;
    float f;
    UINT32 *fi = (UINT32*)&f;

    if (is_sqrttab_initialized)
        return;

    const int halfTableSize = (tableSize >> 1);
    for (i = 0; i < halfTableSize; i++){
        *fi = 0;
        *fi = (i << nUnusedBits) | (127 << 23);   // Build a float with the bit pattern i as mantissa, and an exponent of 0, stored as 127

        // Take the square root then strip the first 'nBitsForSQRTprecision' bits of the mantissa into the table
        f = sqrtf(f);
        sqrtTab[i] = (short)((*fi & 0x7fffff) >> nUnusedBits);

        // Repeat the process, this time with an exponent of 1, stored as 128
        *fi = 0;
        *fi = (i << nUnusedBits) | (128 << 23);
        f = sqrtf(f);
        sqrtTab[i + halfTableSize] = (short)((*fi & 0x7fffff) >> nUnusedBits);
    }
    is_sqrttab_initialized = TRUE;
}

// Calculation of a square root. Divide the exponent of the float by 2 and sqrt() its mantissa using the precalculated table.
float fast_float_sqrt(float n){
    if (n <= 0.f)
        return 0.f;                        // On 0 or negative, return 0.

    UINT32 *num = (UINT32*)&n;
    short e;                               // Exponent
    e = (*num >> 23) - 127;                // In 'float' the exponent is stored with 127 added.
    *num &= 0x7fffff;                      // Leave only the mantissa

    // If the exponent is odd we have to look it up in the second half of the lookup table, so we set the high bit.
    const int halfTableSize = (tableSize >> 1);
    const int secondHalphTableIdBit = halfTableSize << nUnusedBits;
    if (e & 0x01)
        *num |= secondHalphTableIdBit;
    e >>= 1;                               // Divide the exponent by two (note that in C the shift operators are sign-preserving for signed operands)

    // Do the table lookup based on the truncated mantissa, then reconstruct the result back into a float
    *num = ((sqrtTab[*num >> nUnusedBits]) << nUnusedBits) | ((e + 127) << 23);
    return n;
}
I'd like to implement a big int class in C++ as a programming exercise—a class that can handle numbers bigger than a long int. I know that there are several open source implementations out there already, but I'd like to write my own. I'm trying to get a feel for what the right approach is.
I understand that the general strategy is get the number as a string, and then break it up into smaller numbers (single digits for example), and place them in an array. At this point it should be relatively simple to implement the various comparison operators. My main concern is how I would implement things like addition and multiplication.
I'm looking for a general approach and advice as opposed to actual working code.
A fun challenge. :)
I assume that you want integers of arbitrary length. I suggest the following approach:
Consider the binary nature of the datatype "int". Think about using simple binary operations to emulate what the circuits in your CPU do when they add things. If you are interested in more depth, consider reading this Wikipedia article on half-adders and full-adders. You'll be doing something similar to that, and you could go down to as low a level as that - but being lazy, I thought I'd just forgo that and find an even simpler solution.
But before going into any algorithmic details about adding, subtracting, multiplying, let's find some data structure. A simple way, is of course, to store things in a std::vector.
template< class BaseType >
class BigInt
{
    typedef BaseType BT;
protected:
    std::vector< BaseType > value_;
};
You might want to consider whether you want to make the vector a fixed size, and whether to preallocate it. The reason is that for various operations, you will have to go through each element of the vector - O(n). You might want to know offhand how complex an operation is going to be, and a fixed n does just that.
But now to some algorithms on operating on the numbers. You could do it on a logic-level, but we'll use that magic CPU power to calculate results. But what we'll take over from the logic-illustration of Half- and FullAdders is the way it deals with carries. As an example, consider how you'd implement the += operator. For each number in BigInt<>::value_, you'd add those and see if the result produces some form of carry. We won't be doing it bit-wise, but rely on the nature of our BaseType (be it long or int or short or whatever): it overflows.
Surely, if you add two numbers, the result must be greater than the greater one of those numbers, right? If it's not, then the result overflowed.
template< class BaseType >
BigInt< BaseType >& BigInt< BaseType >::operator += (BigInt< BaseType > const& operand)
{
    BT count, carry = 0;
    for (count = 0; count < std::max(value_.size(), operand.value_.size()); count++)
    {
        BT op0 = count < value_.size() ? value_.at(count) : 0,
           op1 = count < operand.value_.size() ? operand.value_.at(count) : 0;
        BT digits_result = op0 + op1 + carry;
        if (digits_result - carry < std::max(op0, op1))
        {
            BT carry_old = carry;
            carry = digits_result;
            digits_result = (op0 + op1 + carry) >> sizeof(BT)*8; // NOTE [1]
        }
        else carry = 0;
    }
    return *this;
}
// NOTE 1: I did not test this code. And I am not sure if this will work; if it does
// not, then you must restrict BaseType to be the second biggest type
// available, i.e. a 32-bit int when you have a 64-bit long. Then use
// a temporary or a cast to the mightier type and retrieve the upper bits.
// Or you do it bitwise. ;-)
The other arithmetic operations go analogously. Heck, you could even use the STL functors std::plus and std::minus, std::multiplies and std::divides, ..., but mind the carry. :) You can also implement multiplication and division by using your plus and minus operators, but that's very slow, because it would recalculate results you already calculated in prior calls to plus and minus in each iteration. There are a lot of good algorithms out there for this simple task; use Wikipedia or the web.
And of course, you should implement standard operators such as operator<< (just shift each value in value_ to the left by n bits, starting at value_.size()-1... oh, and remember the carry :), and operator< - you can even optimize a little here, checking the rough number of digits with size() first. And so on. Then make your class useful by befriending std::ostream operator<<.
Hope this approach is helpful!
Things to consider for a big int class:
Mathematical operators: +, -, /, *, %. Don't forget that your class may be on either side of the operator, that the operators can be chained, and that one of the operands could be an int, float, double, etc.
I/O operators: >>, <<. This is where you figure out how to properly create your class from user input, and how to format it for output as well.
Conversions/Casts: Figure out what types/classes your big int class should be convertible to, and how to properly handle the conversion. A quick list would include double and float, and may include int (with proper bounds checking) and complex (assuming it can handle the range).
There's a complete section on this: [The Art of Computer Programming, vol.2: Seminumerical Algorithms, section 4.3 Multiple Precision Arithmetic, pp. 265-318 (ed.3)]. You may find other interesting material in Chapter 4, Arithmetic.
If you really don't want to look at another implementation, have you considered what it is you are out to learn? There are innumerable mistakes to be made and uncovering those is instructive and also dangerous. There are also challenges in identifying important computational economies and having appropriate storage structures for avoiding serious performance problems.
A Challenge Question for you: How do you intend to test your implementation, and how do you propose to demonstrate that its arithmetic is correct?
You might want another implementation to test against (without looking at how it does it), but it will take more than that to be able to generalize without expecting an excruciating level of testing. Don't forget to consider failure modes (out-of-memory problems, out of stack, running too long, etc.).
Have fun!
Addition would probably have to be done with the standard linear-time algorithm,
but for multiplication you could try the Karatsuba algorithm: http://en.wikipedia.org/wiki/Karatsuba_algorithm
Once you have the digits of the number in an array, you can do addition and multiplication exactly as you would do them longhand.
Don't forget that you don't need to restrict yourself to 0-9 as digits, i.e. use bytes as digits (0-255) and you can still do longhand arithmetic the same as you would for decimal digits. You could even use an array of long.
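To make that concrete, a minimal sketch of longhand addition with bytes as digits (base 256, least significant digit first):

#include <cstdint>
#include <vector>

std::vector<uint8_t> add(const std::vector<uint8_t>& a, const std::vector<uint8_t>& b) {
    std::vector<uint8_t> sum;
    unsigned carry = 0;
    for (size_t i = 0; i < a.size() || i < b.size() || carry; ++i) {
        unsigned d = carry;
        if (i < a.size()) d += a[i];
        if (i < b.size()) d += b[i];
        sum.push_back(static_cast<uint8_t>(d & 0xFF));   // keep the low "digit"
        carry = d >> 8;                                   // carry into the next column
    }
    return sum;
}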
I'm not convinced using a string is the right way to go - though I've never written such code myself, I think that using an array of a base numeric type might be a better solution. The idea is that you'd simply extend what you've already got the same way the CPU extends a single bit into an integer.
For example, if you have a structure
typedef struct {
    int high, low;
} BiggerInt;
You can then manually perform native operations on each of the "digits" (high and low, in this case), being mindful of overflow conditions:
BiggerInt add( const BiggerInt *lhs, const BiggerInt *rhs ) {
    BiggerInt ret;

    /* Ideally, you'd want a better way to check for overflow conditions */
    if ( rhs->high > INT_MAX - lhs->high ) {
        /* With a variable-length (a real) BigInt, you'd allocate some more room here */
    }

    ret.high = lhs->high + rhs->high;

    if ( rhs->low <= INT_MAX - lhs->low ) {
        /* No overflow */
        ret.low = lhs->low + rhs->low;
    }
    else {
        /* Overflow */
        ret.high += 1;
        ret.low = lhs->low - ( INT_MAX - rhs->low ); /* Right? */
    }

    return ret;
}
It's a bit of a simplistic example, but it should be fairly obvious how to extend to a structure that had a variable number of whatever base numeric class you're using.
Use the algorithms you learned in 1st through 4th grade.
Start with the ones column, then the tens, and so forth.
Like others said, do it the old-fashioned longhand way, but stay away from doing it all in base 10. I'd suggest doing it all in base 65536 and storing things in an array of longs.
If your target architecture supports BCD (binary coded decimal) representation of numbers, you can get some hardware support for the longhand multiplication/addition that you need to do. Getting the compiler to emit BCD instructions is something you'll have to read up on...
The Motorola 68K series chips had this. Not that I'm bitter or anything.
My start would be to have an arbitrarily sized array of integers, using 31 bits of each and the 32nd as an overflow bit.
The starter op would be ADD, and then MAKE-NEGATIVE, using 2's complement. After that, subtraction flows trivially, and once you have add/sub, everything else is doable.
There are probably more sophisticated approaches. But this would be the naive approach from digital logic.
Could try implementing something like this:
http://www.docjar.org/html/api/java/math/BigInteger.java.html
You'd only need 4 bits for a single digit 0-9, so an int value would allow up to 8 digits each. I decided I'd stick with an array of chars, so I use double the memory, but for me it's only used once.
Also, storing all the digits in a single int over-complicates things, and if anything it may even slow it down.
I don't have any speed tests, but looking at the Java version of BigInteger it seems like it's doing an awful lot of work.
For me I do the below
//Number = 100,000.00, Number Digits = 32, Decimal Digits = 2.
BigDecimal *decimal = new BigDecimal("100000.00", 32, 2);
decimal += "1000.99";
cout << decimal->GetValue(0x1 | 0x2) << endl; //Format and show decimals.
//Prints: 101,000.99
The computer hardware provides facility of storing integers and doing basic arithmetic over them; generally this is limited to integers in a range (e.g. up to 2^{64}-1). But larger integers can be supported via programs; below is one such method.
Using Positional Numeral System (e.g. the popular base-10 numeral system), any arbitrarily large integer can be represented as a sequence of digits in base B. So, such integers can be stored as an array of 32-bit integers, where each array-element is a digit in base B=2^{32}.
We already know how to represent integers using numeral-system with base B=10, and also how to perform basic arithmetic (add, subtract, multiply, divide etc) within this system. The algorithms for doing these operations are sometimes known as Schoolbook algorithms. We can apply (with some adjustments) these Schoolbook algorithms to any base B, and so can implement the same operations for our large integers in base B.
To apply these algorithms for any base B, we will need to understand them further and handle concerns like:
what is the range of various intermediate values produced during these algorithms.
what is the maximum carry produced by the iterative addition and multiplication (a bound for this is sketched just after this list).
how to estimate the next quotient-digit in long-division.
(Of course, there can be alternate algorithms for doing these operations).
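To sketch the second point: with digits in base B, one multiply-accumulate step of schoolbook multiplication computes x·y + c + r where every operand is at most B-1, and (B-1)^2 + (B-1) + (B-1) = B^2 - 1, so the intermediate always fits in a double-width digit and the carry passed on (its high half) is again at most B-1. With B = 2^{32}, a 64-bit intermediate is therefore sufficient.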
Some algorithm/implementation details can be found here (initial chapters), here (written by me) and here.
Subtract 48 (the ASCII code of '0') from each character of your integer string to get the individual digits of the large number,
then perform the basic mathematical operations on those digits.
Otherwise, I can provide a complete solution.