Mod function fails in python for large numbers - python-2.7

This Python code:
import math
for x in range(20, 50):
    print(x, math.factorial(x), math.pow(2, x), math.factorial(x) % math.pow(2, x))
calculates fine up to x=22, but for x>22 the mod is always 0.
Wolfram Alpha says the results for x>22 are nonzero.
For example, when x=23 we get 6815744.
I guess this problem comes from how Python actually calculates the mod function, but I was wondering if anyone actually knew.

You are running into floating point limitations; math.pow() returns a floating point number, so both operands are coerced to floats. For x = 23, math.factorial(x) returns an integer larger than what a float can model:
>>> math.factorial(23)
25852016738884976640000
>>> float(math.factorial(23))
2.585201673888498e+22
The right-hand operand is a much smaller floating point number (only 7 digits); because of that difference in exponents, the low-order digits of the factorial are simply not represented once it is converted to a float, so the modulus comes out as 0.
Use ** to stick to integers:
for x in range(20, 50):
    print(x, math.factorial(x), 2 ** x, math.factorial(x) % (2 ** x))
Integer operations are limited only by how much memory is available; for x = 23 the correct value is calculated, and it continues to work correctly all the way up to x = 49:
>>> x = 23
>>> print(x, math.factorial(x), 2 ** x, math.factorial(x) % (2 ** x))
23 25852016738884976640000 8388608 6815744
>>> x = 49
>>> print(x, math.factorial(x), 2 ** x, math.factorial(x) % (2 ** x))
49 608281864034267560872252163321295376887552831379210240000000000 562949953421312 492581209243648
Note that even for smaller floating point modulus calculations, you really should be using the math.fmod() function, for reasons explained in the documentation. It too fails for this case, however, again because you are reaching beyond the limits of floating point math:
>>> print(x, math.factorial(x), math.pow(2, x), math.fmod(math.factorial(x), math.pow(2, x)))
23 25852016738884976640000 8388608.0 0.0
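The difference the documentation refers to shows up when the operands have mismatched signs and magnitudes; a small illustrative sketch:
import math

# % gives a result with the sign of the divisor; here the true value
# 1e100 - 1e-100 is not representable as a float and rounds to 1e+100
print(-1e-100 % 1e100)             # 1e+100
# fmod() keeps the sign of the dividend and returns the tiny value exactly
print(math.fmod(-1e-100, 1e100))   # -1e-100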

Yes, you are correct: for large numbers the modulus gives wrong results, especially with factorials.
For example:
import math
def comb(n,r):
    res = math.factorial(n)/(math.factorial(n-r)*math.factorial(r))
    return(float(res))
sum1 = 0
num = 888
for r in range(0,num+1):
    sum1 += comb(num,r)
print(sum1 % 1000000)
gives the wrong answer 252480; the correct answer is 789056.
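For comparison, keeping everything in Python's arbitrary-precision integers (i.e. dropping the float() conversion, which is what discards the low-order digits) does give 789056; a minimal sketch:
import math

def comb_exact(n, r):
    # integer division is exact here because binomial coefficients are integers
    return math.factorial(n) // (math.factorial(n - r) * math.factorial(r))

num = 888
total = sum(comb_exact(num, r) for r in range(num + 1))
print(total % 1000000)       # 789056
print(pow(2, num, 1000000))  # 789056 as well: the binomial coefficients sum to 2**num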

Is there a value of type `double`, `K`, such that `K * K == 3.0`?

Is there a value of type double (IEEE 64-bit float / binary64), K, such that K * K == 3.0? (The irrational number is of course "square root of 3")
I tried:
static constexpr double Sqrt3 = 1.732050807568877293527446341505872366942805253810380628055806;
static_assert(Sqrt3 * Sqrt3 == 3.0);
but the static assert fails.
(I'm guessing neither the next higher nor next lower floating-point representable number square to 3.0 after rounding? Or is the parser of the floating point literal being stupid? Or is it doable in IEEE standard but fast math optimizations are messing it up?)
I think the digits are right:
$ python
>>> N = 1732050807568877293527446341505872366942805253810380628055806
>>> N * N
2999999999999999999999999999999999999999999999999999999999996\
607078976886330406910974461358291614910225958586655450309636
Update
I've discovered that:
static_assert(Sqrt3 * Sqrt3 < 3.0); // pass
static_assert(Sqrt3 * Sqrt3 > 2.999999999999999); // pass
static_assert(Sqrt3 * Sqrt3 > 2.9999999999999999); // fail
So the literal must produce the next lower value.
I guess I need to check the next higher value. Could bit-dump the representation maybe and then increment the last bit of the mantissa.
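That bit-increment check can be sketched in Python with the struct module (math.nextafter in Python 3.9+ does the same job):
import struct

def next_up(x):
    # reinterpret the (positive) double as a 64-bit integer and add 1,
    # which bumps the last bit of the mantissa
    (bits,) = struct.unpack("<q", struct.pack("<d", x))
    return struct.unpack("<d", struct.pack("<q", bits + 1))[0]

k = 1.7320508075688772
print(next_up(k) * next_up(k) == 3.0)   # False: the next value up squares to 3.0000000000000004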
Update 2
For posterity: I wound up going with this for the Sqrt3 constant and the test:
static constexpr double Sqrt3 = 1.7320508075688772;
static_assert(0x1.BB67AE8584CAAP+0 == 1.7320508075688772);
static_assert(Sqrt3 * Sqrt3 == 2.9999999999999996);
The answer is no; there is no such K.
The closest binary64 value to the actual square root of 3 is equal to 7800463371553962 × 2^-52. Its square is:
60847228810955004221158677897444 × 2^-104
This value is not exactly representable. It falls between (3 - 2^-51) and 3, which are respectively equal to
60847228810955002264642499117056 × 2^-104
and
60847228810955011271841753858048 × 2^-104
As you can see, K * K is much closer to 3 - 2^-51 than it is to 3. So IEEE 754 requires the result of the operation K * K to yield 3 - 2^-51, not 3. (The compiler might convert K to an extended-precision format for the calculation, but the result will still be 3 - 2^-51 after conversion back to binary64.)
Furthermore, if we go to the next representable value after K in the binary64 format, we will find that its square is closest to 3 + 2^-51, which is the next representable value after 3.
This result should not be too surprising; in general, incrementing a number by 1 ulp will increment its square by roughly 2 ulps, so you have about a 50% chance, given some value x, that there is a K with the same precision as x such that K * K == x.
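This is easy to check empirically with exact rational arithmetic; a minimal Python sketch (math.nextafter requires Python 3.9+):
from fractions import Fraction
import math

k = math.sqrt(3)                      # nearest binary64 value to sqrt(3)
k_up = math.nextafter(k, math.inf)    # next representable double above k

for cand in (k, k_up):
    exact_square = Fraction(cand) ** 2   # the square, computed exactly as a rational
    rounded = float(exact_square)        # rounded back to the nearest double
    print(cand.hex(), rounded, rounded == 3.0)
Neither the nearest double nor its upward neighbour rounds back to exactly 3.0, matching the analysis above.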
The C standard does not dictate the default rounding mode. While it is typically round-to-nearest, ties-to-even, it could be round-upward, and some implementations support changing the mode. In such case, squaring 1.732050807568877193176604123436845839023590087890625 while rounding upward produces exactly 3.
#include <fenv.h>
#include <math.h>
#include <stdio.h>
#pragma STDC FENV_ACCESS ON
int main(void)
{
volatile double x = 1.732050807568877193176604123436845839023590087890625;
fesetround(FE_UPWARD);
printf("%.99g\n", x*x); // Prints “3”.
}
x is declared volatile to prevent the compiler from computing x*x at compile-time with a different rounding mode. Some compilers do not support #pragma STDC FENV_ACCESS but may support fesetround once the #pragma line is removed.
Testing with Python is valid, I think, since both use the IEEE-754 representation for doubles, along with the same rules for operations on them.
The closest possible double to the square root of 3 is slightly low.
>>> Sqrt3 = 3**0.5
>>> Sqrt3*Sqrt3
2.9999999999999996
The next available value is too high.
>>> import numpy as np
>>> Sqrt3p = np.nextafter(Sqrt3,999)
>>> Sqrt3p*Sqrt3p
3.0000000000000004
If you could split the difference, you'd have it.
>>> Sqrt3*Sqrt3p
3.0
In the Ruby language, the Float class uses "the native architecture's double-precision floating point representation" and it has methods named prev_float and next_float that let you iterate through different possible floats using the smallest possible steps. Using this, I was able to do a simple test and see that there is no double (at least on x86_64 Linux) that meets your criterion. The Ruby interpreter is written in C, so I think my results should be applicable to the C double type.
Here is the Ruby code:
x = Math.sqrt(3)
4.times { x = x.prev_float }
9.times do
puts "%.20f squared is %.20f" % [x, x * x]
puts "Success!" if x * x == 3
x = x.next_float
end
And the output:
1.73205080756887630500 squared is 2.99999999999999644729
1.73205080756887652704 squared is 2.99999999999999733546
1.73205080756887674909 squared is 2.99999999999999822364
1.73205080756887697113 squared is 2.99999999999999866773
1.73205080756887719318 squared is 2.99999999999999955591
1.73205080756887741522 squared is 3.00000000000000044409
1.73205080756887763727 squared is 3.00000000000000133227
1.73205080756887785931 squared is 3.00000000000000177636
1.73205080756887808136 squared is 3.00000000000000266454
Is there a value of type double, K, such that K * K == 3.0?
Yes.
K = sqrt(n); and K * K == n may be true, even when √n is irrational.
Note that K, the result of sqrt(n), as a double, is a rational number.
Various rounding modes: see Eric's answer.
K * K rounds to n
Example: for n = 11, 14 and 17, the square root squared comes back as exactly n.
for (int i = 10; i < 20; i++) {
double x = sqrt(i);
double y = x * x;
printf("%2d %.25g\n", i, y);
}
10 10.00000000000000177635684
11 11
12 11.99999999999999822364316
13 12.99999999999999822364316
14 14
15 15.00000000000000177635684
16 16
17 17
18 17.99999999999999644728632
19 19.00000000000000355271368
Different precision
Rather than the 53 bits of a common double, say the FP math was done with 24 bits. For n = 3, 5 and 10, the square root squared comes back as exactly n.
for (int i = 2; i < 11; i++) {
float x = sqrtf(i);
printf("%2d %.25g\n", i, x*x);
}
2 1.99999988079071044921875
3 3
4 4
5 5
6 6.000000476837158203125
7 6.999999523162841796875
8 7.999999523162841796875
9 9
10 10
Or say the FP math was done with 64 bits. For n = 5, 6 and 10, the square root squared comes back as exactly n.
for (int i = 2; i < 11; i++) {
long double x = sqrtl(i);
printf("%2d %.35Lg\n", i, x*x);
}
2 1.9999999999999999998915797827514496
3 3.0000000000000000002168404344971009
4 4
5 5
6 6
7 6.9999999999999999995663191310057982
8 7.9999999999999999995663191310057982
9 9
10 10
With various precisions (note that C does not specify a fixed precision), K * K == 3.0 is possible.
FLT_EVAL_METHOD == 2
When FLT_EVAL_METHOD == 2, intermediate calculations may be done at higher precision, thus affecting the product of k*k.
(Have yet to come up with a good simple example.)
sqrt(3) is irrational, which means that there is no rational number k such that k*k == 3. A double can only represent rational numbers; therefore, there is no double k such that k*k == 3.
If you can accept a number that is close to satisfying k*k == 3, then you can use std::numeric_limits (from <limits>) to check whether you're within some minimal interval around 3. It may look like:
assert( std::abs(k*k - 3.) <= std::abs(k*k + 3.) * std::numeric_limits<double>::epsilon() * X);
Epsilon is the smallest difference from one that a double can represent. We scale it by the sum of the two values being compared in order to bring its magnitude in line with the numbers we're checking. X is a scaling factor that lets you adjust the precision you accept.
If this is a theoretical question: no. If it's a practical question: yes, up to some level of precision.
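The same relative-tolerance idea exists in Python as math.isclose, where rel_tol plays roughly the role of epsilon() * X above; a small sketch:
import math

k = math.sqrt(3)
print(k * k == 3.0)                              # False: k * k is 2.9999999999999996
print(math.isclose(k * k, 3.0, rel_tol=1e-15))   # True: k * k is within one ulp of 3.0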

How to get a floating point in Python? [duplicate]

This question already has answers here:
How can I force division to be floating point? Division keeps rounding down to 0?
(11 answers)
Closed 4 years ago.
In the following code, I always get the result rounded down to an integer, but I would like to have the result of the division as a float, i.e. 12/5 = 2.4, not the 2 that I get from the program. I am using Python 2.7.
"""Given division of two numbers, the result will print out """
try:
    divident = int(raw_input("Enter the divident: "))
    divisor = int(raw_input("Enter the divisor: "))
    print (" %d devided by %d is %f: " % ( divident, divisor, divident / divisor))
except(ValueError, ZeroDivisionError):
    print ("Something went wrong!")
The basic explanation is that in almost all programming languages, dividing two variables of numeric type T returns a value of that type T.
Integer division is performed by the processor as a Euclidean division, returning the quotient (as an integer).
The print format %f will not perform the type conversion for you.
I strongly suggest you read the proposed duplicate question for a further understanding of Python's behaviour.
Example:
12 = (2 * 5) + 2, so 12 / 5 = 2 and 12 % 5 = 2
12 = (1 * 7) + 5, so 12 / 7 = 1 and 12 % 7 = 5
In Python:
Python 2.7.15 (v2.7.15:ca079a3ea3, Apr 30 2018, 16:30:26) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> 12/5
2
>>> 12%5
2
>>> 12/7
1
>>> 12%7
5
If you want to obtain a float, do as https://stackoverflow.com/users/8569905/banghua-zhao proposed:
cast to float and then perform the division. Floating-point division is then used and a float is returned.
As pointed out in a comment below, if the two operands have different types, the computation is performed with the wider type: float takes precedence over integer. In the following examples, one cast to float would therefore be sufficient.
>>> float(12)/float(5)
2.4
Note that the % operator on floats still computes the remainder of the Euclidean division and gives you the result as a float:
>>> float(12)%float(5)
2.0
>>> float(12)%float(7)
5.0
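Another option specific to Python 2.7, sketched here, is to switch / to true division for the whole module with the __future__ import (which is the Python 3 behaviour):
# the __future__ import must be the first statement in the module
from __future__ import division

print(12 / 5)           # 2.4 -- / is now true division
print(12 // 5)          # 2   -- // remains floor division
print(float(12) / 5)    # 2.4 -- without the import, one float operand is enough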
Your divident and divisor are of int type, since you use int() to convert the values from raw_input() into int.
As a result, divident / divisor is also an int. You need to convert to float (for example with float()) before the division.
"""Given division of two numbers, the result will print out """
try:
    divident = int(raw_input("Enter the divident: "))
    divisor = int(raw_input("Enter the divisor: "))
    print (" %d devided by %d is %f: " % ( divident, divisor, float(divident) / float(divisor)))
except(ValueError, ZeroDivisionError):
    print ("Something went wrong!")
Output:
Enter the divident: 12
Enter the divisor: 5
12 devided by 5 is 2.400000:
Note, if your inputs are not integers, consider converting them to float at the beginning:
divident = float(raw_input("Enter the divident: "))
divisor = float(raw_input("Enter the divisor: "))
You must read the input as float instead of int, because the operand types determine the result type.
You should try:
a=float(input('your prompt string'))
b=float(input('your 2nd prompt'))
print(a/b)

C++ Programming the division algorithm

The division algorithm states that given two integers a and d, with d ≠ 0, there exist unique integers q and r such that a = qd + r and 0 ≤ r < |d|, where |d| denotes the absolute value of d. The integer q is the quotient, r is the remainder, d is the divisor, and a is the dividend. Prompt the user for a dividend and divisor and then display the division algorithm's results:
If a = 17 and d = 3, then q = 5 and r = 2, since 17 = 5 * 3 + 2.
If a = 17 and d = -3, then q = -5 and r = 2, since 17 = -5 * -3 + 2.
The C++ operators for integer division do not conform to the division algorithm. Explain in output displayed to the user of the program when to expect results that disagree with the division algorithm. The program should not attempt to resolve this issue.
OK, so I've been trying to figure this out for a few days now and I just can't crack it. The only way I can think of solving this is by using mod to find r or using successive subtraction to find q. However, I'm pretty sure neither of those solutions really counts. Is there some other way to solve this problem?
[Edit] I don't think successive subtraction works because that's just Euclid's algorithm, so I really wouldn't be using this algorithm, and using modulus would be just like using the C++ division operator.
Here's a hint. For positive numbers, everything works out ok.
The C++ expression q = 17/3 results in q == 5.
But for negative numbers:
The expression q = -17/3 results in q == -5
With the way the question is worded, I'm pretty sure you are supposed to use mod to find r and the division operator to find q. It says straight up "The C++ operators for integer division do not conform to the division algorithm. Explain in output displayed to the user of the program when to expect results that disagree with the division algorithm. The program should not attempt to resolve this issue." This means, don't over-think it, and instead just demonstrate how the operators don't conform to the algorithm without trying to fix anything.
Suppose your code naively uses C++ operators to calculate q and r as follows:
int q = a / d;
int r = a % d;
You then get wrong (in this case, wrong just means they don't "conform" to your algorithm) values for both q and r in the following two cases:
a = -17, d = 3
The code will result in:
q = -5, r = -2
This satisfies a = qd + r (-5 * 3 + -2 = -17), but it does not conform to the division algorithm because r is negative; the algorithm requires q = -6 and r = 1.
a = -17, d = -3
The code will result in:
q = 5, r = -2
Again this does not conform to the division algorithm, because of the rule that 0 <= r < |d|. In other words, r must be non-negative; here the algorithm requires q = 6 and r = 1.
In the other cases, i.e. where both a and d are positive or when a is positive and d is negative, the C++ operators will correctly "conform" to your algorithm.
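To see the pattern concretely, here is a small Python sketch that emulates C++'s truncating division (Python's own // floors toward negative infinity, so the C++ behaviour is reproduced by hand) and checks each sign combination against the division algorithm:
def cpp_divmod(a, d):
    # emulate C++ integer division: the quotient truncates toward zero,
    # and the remainder satisfies (a / d) * d + a % d == a
    q = abs(a) // abs(d)
    if (a < 0) != (d < 0):
        q = -q
    return q, a - q * d

for a, d in [(17, 3), (17, -3), (-17, 3), (-17, -3)]:
    q, r = cpp_divmod(a, d)
    ok = (a == q * d + r) and (0 <= r < abs(d))
    print(a, d, q, r, "conforms" if ok else "does not conform")
Only the two cases with a negative dividend fail the 0 <= r < |d| requirement.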
The C++ integer division operator conforms to the division algorithm only when the dividend is non-negative (or the remainder is zero). This is because it always rounds toward zero.
As the question doesn't clearly state the problem, I presume you want to output the division like the examples provided, so a simple version would be:
int a = 17;
int d = -3;
cout << "If a = " << a << " and d = " << d << " then q = " << a/d << " and r = " << a%d;

Modulus operator over int

Is it possible that, after taking a number modulo 10^9 + 7, the result might still be out of range?
I was doing this question on CodeChef http://www.codechef.com/problems/FIRESC and was getting a wrong answer; after looking at the author's solution I changed my final answer type from int to long long int and got a correct answer. Why did that happen?
If you perform multiplications like result = (result * x) % MOD where both result and x can be up to MOD - 1, the intermediate expression result * x can be up to (MOD - 1) squared. And for modulo 10^9 + 7, this surely does not fit into a 32-bit integer type. Thus it is calculated incorrectly: basically, you get not result * x, but the same quantity modulo 2^32.
For example, from a mathematical point of view, (100,001 * 100,001) modulo 10^9 + 7 is 199,931, but when calculated in a 32-bit integer, 100,001 * 100,001 becomes 1,410,265,409, and taking it modulo 10^9 + 7 gives 410,265,402.
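The effect is easy to reproduce; a small Python sketch that mimics a 32-bit unsigned intermediate by masking (Python's own integers never overflow, so the wraparound has to be applied by hand):
MOD = 10**9 + 7
x = 100001

exact = (x * x) % MOD                    # 199931
wrapped = ((x * x) & 0xFFFFFFFF) % MOD   # 410265402: the product wrapped to 1410265409 first
print(exact, wrapped)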

Fast Exponentiation when only k digits are required - continued

Where I need help...
What I want to do now is translate this solution, which calculates the mantissa of a number, to C++:
n^m = exp10(m log10(n)) = exp(q (m log(n)/q)) where q = log(10)
Finding the first n digits from the result can be done like this:
"the first K digits of exp10(x) = the first K digits of exp10(frac(x))
where frac(x) = the fractional part of x = x - floor(x)."
My attempts (sparked by the math and this code) failed...:
unsigned long long getPrefix(long double pow /*exponent*/, long double length /*length of prefix*/)
{
    long double dummy; //unused but necessary for modf
    long double q = log(10);
    unsigned long long temp = floor(pow(10.0, exp(q * modf( (pow * log(2)/q), &dummy) + length - 1)));
    return temp;
}
If anyone out there can correctly implement this solution, I need your help!!
EDIT
Example output from my attempts:
n: 2
m: 0
n^m: 1
Calculated mantissa: 1.16334
n: 2
m: 1
n^m: 2
Calculated mantissa: 2.32667
n: 2
m: 2
n^m: 4
Calculated mantissa: 4.65335
n: 2
m: 98
n^m: 3.16913e+29
Calculated mantissa: 8.0022
n: 2
m: 99
n^m: 6.33825e+29
Calculated mantissa: 2.16596
I'd avoid pow for this. It's notoriously hard to implement correctly. There are lots of SO questions where people got burned by a bad pow implementation in their standard library.
You can also save yourself a good deal of pain by working in the natural base instead of base 10. You'll get code that looks like this:
long double foo = m * logl(n);
foo = fmodl(foo, logl(10.0)) + some_epsilon;
sprintf(some_string, "%.9Lf", expl(foo));
/* boring string parsing code here */
to compute the appropriate analogue of m log(n). Notice that the largest m * logl(n) that can arise is just a little bigger than 2e10. When you divide that by 2^64 and round up to the nearest power of two, you see that an ulp of foo is 2^-29 at worst. This means, in particular, that you cannot get more than 8 digits out of this method using long doubles, even with a perfect implementation.
some_epsilon will be the smallest long double that makes expl(foo) always exceed the mathematically correct result; I haven't computed it exactly, but it should be on the order of 1e-9.
In light of the precision difficulties here, I might suggest using a library like MPFR instead of long doubles. You may also be able to get something to work using a double-double trick and quad-precision exp, log, and fmod.
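For comparison, the base-10 recipe quoted in the question is easy to prototype in Python; this is only a double-precision sketch, so the last digit or two can be off for large exponents, which is exactly the precision limit discussed above:
import math

def leading_digits(n, m, k):
    # first k digits of n**m, via the fractional part of m*log10(n)
    x = m * math.log10(n)
    frac = x - math.floor(x)
    return int(10 ** (frac + k - 1))

# sanity check against exact integer arithmetic
print(leading_digits(2, 99, 8))   # 63382530
print(str(2 ** 99)[:8])           # 63382530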