I have found an algorithm for modular exponentiation. The following pseudocode is taken from Wikipedia, page Modular exponentiation, section Right-to-left binary method.
The full pseudocode is
function modular_pow(base, exponent, modulus)
    Assert :: (modulus - 1) * (modulus - 1) does not overflow base
    result := 1
    base := base mod modulus
    while exponent > 0
        if (exponent mod 2 == 1)
            result := (result * base) mod modulus
        exponent := exponent >> 1
        base := (base * base) mod modulus
    return result
I don't understand what this line of pseudocode means: Assert :: (modulus - 1) * (modulus - 1) does not overflow base. What does this line mean, and how would it best be programmed in C++?
In most programming languages, numbers can only be stored with limited precision or within a limited range.
For example, a C++ integer will often be a 32-bit signed int, capable of storing at most 2^31 - 1 as a value.
If you multiply two numbers and the mathematical result would be greater than 2^31 - 1, you do not get the result you were expecting: it has overflowed.
Assert is (roughly speaking) a way to check preconditions; "this must be true to proceed". In C++ you'll code it using the assert macro, or your own hand-rolled assertion system.
'does not overflow' means that the result of the multiplication must fit into whatever integer type base is; since it's a product of two potentially large values, overflow is quite possible. Signed integer overflow in C++ is undefined behaviour, so it's wise to guard against it! There are plenty of resources out there that explain integer overflow, such as the Wikipedia article on the subject.
To do the check in C++, a nice simple approach is to compute the intermediate result in a larger integer type and check that it fits in the destination type. For example, if base is int32_t, use int64_t and check that the result is no greater than static_cast<int64_t>(std::numeric_limits<int32_t>::max()):
const int64_t intermediate = (static_cast<int64_t>(modulus) - 1) * (static_cast<int64_t>(modulus) - 1);
assert(intermediate <= static_cast<int64_t>(std::numeric_limits<int32_t>::max()));
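Putting the pieces together, the whole pseudocode might be rendered in C++ like this. This is only a sketch: it keeps everything in uint32_t, so the assert is exactly the precondition discussed above, guaranteeing that neither product below can wrap.

```cpp
#include <cassert>
#include <cstdint>
#include <limits>

// Right-to-left binary modular exponentiation, following the Wikipedia
// pseudocode. With (modulus - 1)^2 <= UINT32_MAX, both products below
// stay within uint32_t, because every factor is already reduced mod modulus.
uint32_t modular_pow(uint32_t base, uint32_t exponent, uint32_t modulus) {
    assert(modulus > 0);
    const uint64_t m = modulus;  // widen once, only for the precondition check
    assert((m - 1) * (m - 1) <= std::numeric_limits<uint32_t>::max());
    uint32_t result = 1;
    base %= modulus;
    while (exponent > 0) {
        if (exponent % 2 == 1)
            result = (result * base) % modulus;  // both factors < modulus
        exponent >>= 1;
        base = (base * base) % modulus;          // safe by the precondition
    }
    return result;
}
```

Note that with 32-bit types the assert restricts modulus to at most 65536; the usual way to lift that limit is to widen the intermediate products to uint64_t instead.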
Related
How can we divide 2^128 by an odd number (an unsigned 64-bit integer), taking the floor, without using an arbitrary-precision arithmetic library?
The problem is that even with gcc, 2^128 cannot be expressed.
So I'm considering creating a 192-bit integer type.
But I have no idea how to do that (especially the subtraction part).
I want the result of the floor to be an unsigned number.
For any odd d > 1, UINT128_MAX / d equals floor(2^128 / d).
This is because 2^128 / d must have a remainder: the only divisors of 2^128 are powers of two (including 1), so an odd d > 1 cannot divide it evenly. Therefore 2^128 / d and (2^128 - 1) / d have the same integral quotient, and UINT128_MAX is 2^128 - 1.
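On GCC and Clang this identity can be used directly via the compilers' unsigned __int128 extension; a sketch, assuming that (non-standard) type is available:

```cpp
#include <cstdint>

// floor(2^128 / d) for odd d > 1: since no odd d > 1 divides 2^128,
// floor(2^128 / d) == (2^128 - 1) / d == UINT128_MAX / d.
unsigned __int128 floor_pow128_div(uint64_t d) {
    const unsigned __int128 uint128_max = ~(unsigned __int128)0;  // 2^128 - 1
    return uint128_max / d;
}
```

For d = 3, for instance, both 64-bit halves of the result come out as 0x5555555555555555, the binary expansion of (2^128 - 1)/3.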
Given a non-negative integer c, I need an efficient algorithm to find the largest integer x such that
x*(x-1)/2 <= c
Equivalently, I need an efficient and reliably accurate algorithm to compute:
x = floor((1 + sqrt(1 + 8*c))/2) (1)
For the sake of definiteness I tagged this question C++, so the answer should be a function written in that language. You can assume that c is an unsigned 32-bit int.
Also, if you can prove that (1) (or an equivalent expression involving floating-point arithmetic) always gives the right result, that's a valid answer too, since floating-point on modern processors can be faster than integer algorithms.
If you're willing to assume IEEE doubles with correct rounding for all operations including square root, then the expression that you wrote (plus a cast to double) gives the right answer on all inputs.
Here's an informal proof. Since c is a 32-bit unsigned integer converted to a floating-point type with a 53-bit significand, 1 + 8*(double)c is exact, and sqrt(1 + 8*(double)c) is correctly rounded. 1 + sqrt(1 + 8*(double)c) is accurate to within one ulp: the sqrt term is less than 2**((32 + 3)/2) = 2**17.5, which implies that its unit in the last place is less than 1. Finally, (1 + sqrt(1 + 8*(double)c))/2 is accurate to within one ulp, since division by 2 is exact.
The last piece of business is the floor. The problem cases here are when (1 + sqrt(1 + 8*(double)c))/2 is rounded up to an integer. This happens if and only if sqrt(...) rounds up to an odd integer. Since the argument of sqrt is an integer, the worst cases look like sqrt(z**2 - 1) for positive odd integers z, and we bound
z - sqrt(z**2 - 1) = z * (1 - sqrt(1 - 1/z**2)) >= 1/(2*z)
by Taylor expansion. Since z is less than 2**17.5, the gap to the nearest integer is at least 1/2**18.5 on a result of magnitude less than 2**17.5, which means that this error cannot result from a correctly rounded sqrt.
Adopting Yakk's simplification, we can write
(uint32_t)(0.5 + sqrt(0.25 + 2.0*c))
without further checking.
If we start with the quadratic formula, we quickly reach x = sqrt(1/4 + 2c) + 1/2, rounded down (equivalently, round sqrt(1/4 + 2c) to the nearest integer, rounding up at exactly 1/2).
Now, if you do that calculation in floating point, there can be inaccuracies.
There are two approaches to deal with these inaccuracies. The first would be to carefully determine how big they are and whether the calculated value is close enough to a half for them to matter. If they don't matter, simply return the value. If they do, we can still bound the answer to one of two values, test those two values in integer math, and return.
However, we can do away with that careful bit, and note that sqrt(1/4 + 2c) computed in a double will have an error of less than 0.5 when c fits in 32 bits. (We cannot make this guarantee with float, as by 2^31 a float can no longer represent the +0.5 step without rounding.)
In essence, we use the quadratic formula to reduce the answer to two possibilities, and then test those two.
#include <cassert>
#include <cmath>
#include <cstdint>

uint64_t eval(uint64_t x) {
    return x * (x - 1) / 2;
}

unsigned solve(unsigned c) {
    double test = std::sqrt(0.25 + 2. * c);
    if (eval(test + 1.) <= c)
        return test + 1.;
    assert(eval(test) <= c);
    return test;
}
Note that converting a positive double to an integral type rounds towards 0. You can insert floors if you want.
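A cheap way to gain confidence in this two-candidate approach is to check it against the definition over a range of inputs. A self-contained spot check under the same assumptions (the functions are re-declared here, with hypothetical names, so it compiles on its own); it is evidence, not a proof:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

static uint64_t tri(uint64_t x) { return x * (x - 1) / 2; }

// Same scheme as above: the floating-point sqrt narrows the answer down
// to two consecutive integers, and exact integer math picks between them.
unsigned tri_root(unsigned c) {
    uint64_t t = static_cast<uint64_t>(std::sqrt(0.25 + 2.0 * c));
    return (tri(t + 1) <= c) ? static_cast<unsigned>(t + 1)
                             : static_cast<unsigned>(t);
}

// Verify that tri_root(c) is the largest x with x*(x-1)/2 <= c.
void spot_check(unsigned lo, unsigned hi) {
    for (unsigned c = lo; c <= hi; ++c) {
        uint64_t x = tri_root(c);
        assert(tri(x) <= c && tri(x + 1) > c);
    }
}
```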
This may be a bit tangential to your question, but what caught my attention is the specific formula: you are trying to find the triangular root of T(n-1) (where T(n) is the nth triangular number).
I.e.:
T(n) = n * (n + 1) / 2
and
T(n) - n = T(n-1) = n * (n - 1) / 2
From the nifty trick described here, for T(n) we have:
n = int(sqrt(2 * c))
Looking for n such that T(n-1) ≤ c in this case doesn't change the definition of n, for the same reason as in the original question.
Computationally, this saves a few operations, so it's theoretically faster than the exact solution (1). In reality, it's probably about the same.
Neither this solution nor the one presented by David is as "exact" as your (1), though.
[Plot: floor((1 + sqrt(1 + 8*c))/2) (blue) vs int(sqrt(2 * c)) (red) vs exact (white line)]
[Plot: floor((1 + sqrt(1 + 8*c))/2) (blue) vs int(sqrt(0.25 + 2 * c) + 0.5) (red) vs exact (white line)]
My real point is that triangular numbers are a fun set of numbers connected to squares, Pascal's triangle, Fibonacci numbers, et al.
As such there are loads of identities around them which might be used to rearrange the problem in a way that didn't require a square root.
Of particular interest may be that T(n) + T(n-1) = n^2
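That identity is easy to confirm numerically; a throwaway check (function names are mine), which also records why it holds:

```cpp
#include <cstdint>

// T(n) = n*(n+1)/2. The identity T(n) + T(n-1) == n^2 holds for n >= 1,
// since n*(n+1)/2 + n*(n-1)/2 = n*((n+1)+(n-1))/2 = n*n.
uint64_t T(uint64_t n) { return n * (n + 1) / 2; }

bool identity_holds(uint64_t n) { return T(n) + T(n - 1) == n * n; }
```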
I'm assuming you know that you're working with a triangular number; if you didn't realize that, searching for triangular roots yields a few questions on the same topic, such as this one.
I tried to solve this problem: given N, K and M, find the maximum integer T such that N*(K^T) <= M. N, K and M can have values up to 10^18, so long long is sufficient.
I tried to solve it by iterating on T:
int T = 0;
long long Kpow = 1;
while (1)
{
    long long prod = N * Kpow;
    if (prod > M)
        break;
    T++;
    Kpow = Kpow * K;
}
But since N*Kpow may exceed the range of long long, the product would seem to need some big-integer type. However, I found some other code which handles this case "smartly":
long long prod = N * Kpow;
if (prod < 0)
    break;
I have also always seen that, on overflow, the value of the variable becomes negative. Is that always the case, or can overflow sometimes produce a positive value too?
From the point of view of the language, the behaviour of signed integer overflow is undefined. Which means anything could happen: the value can be negative, it can be unchanged, the program can crash, or it can order pizza online.
What will most likely happen in practice depends on the processor architecture on which you're running - so you'd have to consult the platform specs to know.
But I'd guess you can't guarantee overflow to be negative. As a contrived example:
signed char c = 127;
c += 255;
std::cout << (int)c << '\n';
This happens to print 126 on x86. But again, it could actually do anything.
No. The value of the variable is not always negative in case of overflow.
With signed integers, C11dr §3.7.1 3 says of undefined behavior: "An example of undefined behavior is the behavior on integer overflow." So there is no test you can do after the overflow that is certain to work across compilers and platforms.
Detect potential overflow before it can happen.
int T = 0;
long long Kpow = 1;
long long Kpow_Max = LLONG_MAX / K;
long long prod_Max = LLONG_MAX / N;
while (1)
{
    if (Kpow > prod_Max) Handle_Overflow();
    long long prod = N * Kpow;
    if (prod > M)
        break;
    T++;
    if (Kpow > Kpow_Max) Handle_Overflow();
    Kpow = Kpow * K;
}
Couldn't this problem be converted to K^T <= M / N (with integer division, since K^T is itself an integer)?
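Dividing through by N does work, because K^T is an integer: N*K^T <= M is equivalent to K^T <= M / N under integer (floor) division. That removes the N*Kpow product entirely, leaving only the Kpow*K step to guard. A sketch of that variant (function name and preconditions are mine):

```cpp
#include <climits>

// Largest T >= 0 with N * K^T <= M. Assumes N >= 1, K >= 2, N <= M.
int max_power(long long N, long long K, long long M) {
    const long long limit = M / N;            // K^T must stay <= limit
    const long long kpow_max = LLONG_MAX / K; // guard for kpow * K
    long long kpow = 1;                       // K^0
    int T = 0;
    while (kpow <= kpow_max && kpow * K <= limit) {
        kpow *= K;
        ++T;
    }
    return T;
}
```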
As far as overflow detection goes, addition and subtraction are normally performed as if the numbers were unsigned, with the overflow flag set based on signed math and the carry/borrow flag set based on unsigned math.

For multiplication, the low-order half of the result is the same for signed and unsigned multiplies (this is why the ARM CPU only distinguishes signed and unsigned multiplies for 64-bit results from 32-bit operands). Overflow occurs if the product is too large to fit in the register that receives it, like a 32-bit multiply that results in a 39-bit product that is supposed to go into a 32-bit register.

For division, overflow can occur if the divisor is zero or if the quotient is too large to fit in the register that receives it, for example a 64-bit dividend divided by a 32-bit divisor resulting in a 40-bit quotient.

For multiply and divide, it doesn't matter whether the operands are signed; what matters is only whether the size of the result will fit in the register that receives it.
Just as in any other situation with signed integers of any length: overflow makes the number negative if and only if the specific bit that overflows into the sign position is set.
In other words, if doubling the length of your variable would leave the bit that lands in your current sign position clear, you can quite possibly end up with an erroneous result that is positive.
This question already has answers here:
What is the purpose of the div() library function?
(6 answers)
Closed 3 years ago.
There is a function called div in C and C++ (in stdlib.h / <cstdlib>):
div_t div(int numer, int denom);
typedef struct _div_t
{
    int quot;
    int rem;
} div_t;
But C and C++ have the / and % operators.
My question is: when the / and % operators exist, is the div function useful?
Yes, it is: it calculates the quotient and remainder in one operation.
Aside from that, the same behaviour can be achieved with / and % (and a decent optimizer will optimize them into a single division anyway).
To sum it up: if you care about squeezing out the last bits of performance, this may be your function of choice, especially if the optimizer on your platform is not so advanced; this is often the case on embedded platforms. Otherwise, use whichever way you find more readable.
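A minimal usage sketch of std::div from <cstdlib> (the wrapper function is just for illustration):

```cpp
#include <cstdlib>
#include <iostream>

// One library call produces both halves of the division.
void show(int n, int d) {
    std::div_t r = std::div(n, d);
    std::cout << n << " / " << d << " = " << r.quot
              << ", remainder " << r.rem << '\n';
}
```

For example, show(17, 5) prints "17 / 5 = 3, remainder 2".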
The div() function returns a structure which contains the quotient and remainder of the division of the first parameter (the numerator) by the second (the denominator). There are four variants:
div_t div(int, int)
ldiv_t ldiv(long, long)
lldiv_t lldiv(long long, long long)
imaxdiv_t imaxdiv(intmax_t, intmax_t) (intmax_t represents the biggest integer type available on the system)
The div_t structure looks like this:
typedef struct
{
    int quot; /* Quotient. */
    int rem;  /* Remainder. */
} div_t;
The implementation simply uses the / and % operators, so it's not exactly a very complicated or necessary function, but it is part of the C standard (as defined by ISO 9899:201x).
See the implementation in GNU libc:
/* Return the `div_t' representation of NUMER over DENOM. */
div_t
div (numer, denom)
     int numer, denom;
{
  div_t result;

  result.quot = numer / denom;
  result.rem = numer % denom;

  /* The ANSI standard says that |QUOT| <= |NUMER / DENOM|, where
     NUMER / DENOM is to be computed in infinite precision.  In
     other words, we should always truncate the quotient towards
     zero, never -infinity.  Machine division and remainder may
     work either way when one or both of NUMER or DENOM is
     negative.  If only one is negative and QUOT has been
     truncated towards -infinity, REM will have the same sign as
     DENOM and the opposite sign of NUMER; if both are negative
     and QUOT has been truncated towards -infinity, REM will be
     positive (will have the opposite sign of NUMER).  These are
     considered `wrong'.  If both NUMER and DENOM are positive,
     RESULT will always be positive.  This all boils down to: if
     NUMER >= 0, but REM < 0, we got the wrong answer.  In that
     case, to get the right answer, add 1 to QUOT and subtract
     DENOM from REM.  */

  if (numer >= 0 && result.rem < 0)
    {
      ++result.quot;
      result.rem -= denom;
    }

  return result;
}
The semantics of div() are different from the semantics of % and /, which is important in some cases.
That is why the following code is in the implementation shown in psYchotic's answer:
if (numer >= 0 && result.rem < 0)
{
    ++result.quot;
    result.rem -= denom;
}
Pre-C99, % was permitted to return a negative remainder even for a non-negative numerator, whereas div() truncates toward zero, so for a non-negative numerator its remainder is always non-negative.
Check the WikiPedia entry, particularly "div always rounds towards 0, unlike ordinary integer division in C, where rounding for negative numbers is implementation-dependent."
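Under C99/C++11 rules both / and div() truncate toward zero, which is easiest to see with a negative numerator; a small demonstration:

```cpp
#include <cassert>
#include <cstdlib>

// div() truncates toward zero, so the remainder takes the sign of the
// numerator, and quot * denom + rem == numer always holds.
void truncation_demo() {
    std::div_t r = std::div(-7, 2);
    assert(r.quot == -3);              // toward zero, not -4
    assert(r.rem == -1);               // sign of the numerator
    assert(r.quot * 2 + r.rem == -7);  // the defining invariant
}
```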
div() filled a pre-C99 need: portability
Pre-C99, the rounding direction of the quotient of a / b with a negative operand was implementation-dependent. With div(), the rounding direction is not optional but specified to be toward 0, so div() provided uniform, portable division. A secondary use was the potential efficiency when code needed both the quotient and remainder.
With C99 and later, div() and / specify the same rounding direction, and with better compilers optimizing nearby a/b and a%b code, the need has diminished.
This was the compelling reason for div() and it explains the absence of udiv_t udiv(unsigned numer, unsigned denom) in the C spec: The issues of implementation dependent results of a/b with negative operands are non-existent for unsigned even in pre-C99.
Probably because on many processors the div instruction produces both values, and you can't always count on the compiler to recognize that adjacent / and % operators on the same inputs could be coalesced into one operation.
It costs less time if you need both values.
The CPU calculates both the remainder and the quotient when performing a division.
If you use "/" once and "%" once, and the compiler does not merge them, the division is performed twice.
(Forgive my poor English; I'm not a native speaker.)
This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
Best way to detect integer overflow in C/C++
If I have an expression x + y (in C or C++) where x and y are both of type uint64_t and the addition overflows, how do I detect how much it overflowed by (the carry), place that in another variable, then compute the remainder?
The remainder will already be stored in the sum x + y, assuming you are using unsigned integers: unsigned integer overflow causes a wrap-around (signed integer overflow is undefined). See the standards reference from Pascal in the comments.
The overflow can only be 1 bit: if you add two 64-bit numbers, there cannot be more than one carry bit, so you just have to detect the overflow condition.
For how to detect overflow, there was a previous question on that topic: best way to detect integer overflow.
For z = x + y, z stores the remainder, and the overflow can only be 1 bit, which is easy to detect. If you were dealing with signed integers, there's an overflow if x and y have the same sign but z has the opposite one; you cannot overflow if x and y have different signs. For unsigned integers the test is even simpler: overflow occurred if and only if z < x (equivalently, z < y).
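For the unsigned case in the question this boils down to a single comparison, because unsigned arithmetic wraps with well-defined behaviour. A sketch (the struct and function names are mine):

```cpp
#include <cstdint>

// Full 64-bit addition: 'low' is the wrapped sum (the remainder mod 2^64)
// and 'carry' is the single overflow bit.
struct Sum128 {
    uint64_t low;
    uint64_t carry;
};

Sum128 add64(uint64_t x, uint64_t y) {
    uint64_t low = x + y;                // wraps modulo 2^64 on overflow
    uint64_t carry = (low < x) ? 1 : 0;  // wrapped iff the sum fell below x
    return {low, carry};
}
```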
The approach in C and C++ can be quite different, because in C++ you can have operator overloading work for you: wrap the integer you want to protect in some kind of class, for which you overload the necessary operators. In C, you would have to wrap the integer in a structure (to carry the remainder as well as the result) and call some function to do the heavy lifting.
Other than that, the approach in the two languages is the same: depending on the operation you want to perform (adding, in your example) you have to figure out the worst that could happen and handle it.
In the case of adding, it's quite simple: the sum will exceed some maximum value M exactly when M minus one operand is less than the other operand. In that case you can calculate the remainder, the part that's too big: if ((M - O1) < O2) R = O2 - (M - O1) (e.g. if M is 100, O1 is 80 and O2 is 30, then 30 - (100 - 80) = 10, which is the remainder).
The case of subtraction is equally simple: if your first operand is smaller than the second, the remainder is the second minus the first (if (O1 < O2) { Rem = O2 - O1; Result = 0; } else { Rem = 0; Result = O1 - O2; }).
It's multiplication that's a bit more difficult: your safest bet is to do a binary multiplication of the values and check that the result doesn't exceed the number of bits you have. Binary multiplication is a long multiplication, just like you would do if you were doing a decimal multiplication by hand on paper, so, for example, 6 * 5 is:

    0110
  x 0101
  ======
    0110   (0110 << 0, because bit 0 of 0101 is 1)
   00000   (bit 1 is 0)
  011000   (0110 << 2, because bit 2 is 1)
 0000000   (bit 3 is 0)
  ======
  011110 = 30

If you had a four-bit integer, you'd have an overflow of one bit here (i.e. bit 4 is 1, bit 5 is 0, so only bit 4 counts as overflow).
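In practice you rarely need to do the long multiplication by hand: GCC and Clang provide __builtin_mul_overflow, which reports whether the full product fit in the destination. A sketch assuming one of those compilers (the builtin is an extension, not standard C++):

```cpp
#include <cstdint>

// Returns true and stores x * y in *out if the product fits in 32 bits;
// returns false if it would overflow.
bool checked_mul(uint32_t x, uint32_t y, uint32_t *out) {
    return !__builtin_mul_overflow(x, y, out);
}
```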
For division you only really need to care about division by zero, most of the time; the rest will be handled by your CPU.
HTH
rlc