fmodl - Modulus in long double - c++

#include <iostream>
#include <cmath>
using namespace std;

unsigned long long modExp(unsigned long long b, unsigned long long e, unsigned long long m)
{
    unsigned long long remainder;
    unsigned long long x = 1;
    while (e != 0)
    {
        remainder = e % 2;
        e = e / 2;
        // These lines
        if (remainder == 1)
            x = (unsigned long long)fmodl((long double)x * (long double)b, (long double)m);
        b = (unsigned long long)fmodl((long double)b * (long double)b, (long double)m);
    }
    return x;
}
int main()
{
    unsigned long long lastTen = 0, netSum = 0;
    for (int i = 1; i < 1001; i++)
    {
        lastTen = modExp(i, i, 10000000000ULL);
        netSum += lastTen;
        netSum %= 10000000000ULL;
        cout << lastTen << endl;
    }
    cout << netSum % 10000000000ULL << endl;
    cout << sizeof(long double) << endl;
    return 0;
}
This is my program to compute the last ten digits of the sum of a series. It uses modular exponentiation (exponentiation by squaring) to compute the last digits. It works well for a modulus of 10^9, but when I go to 10^10 it collapses.
So, in order to work with a larger type, I converted the numbers to be multiplied to long double, multiplied them (which again yields a long double), and took the modulus of that product, which should give the answer correctly. But I did not get the right answer; it produces the same wrong result.
My reasoning behind this was as follows:
an unsigned long long is 8 bytes; since I am taking the modulus, I get a large 10-digit number, and the product of two 10-digit numbers would not fit in an unsigned long long, so it would wrap around;
therefore I convert the unsigned long long values to long double (which is 12 bytes); since it has more space, it should be large enough to hold the 20-digit product of two 10-digit numbers.
Can anyone say what the flaw in this logic is?

The common long double implementation cannot represent all 20-digit decimal numbers exactly.
The characteristics of long double are not completely determined by the C++ standard, and you do not state what implementation you are using.
One common implementation of long double uses a 64-bit significand. Although it may be stored in twelve bytes, it uses only ten; of those 80 bits, 16 are used for the sign and exponent, leaving 64 for the significand (including an explicit leading bit).
A 64-bit significand can represent integers without error up to 2^64, which is about 1.845*10^19. Thus, it cannot represent all 20-digit numbers exactly.
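If your compiler supports 128-bit integers (GCC and Clang do, via the non-standard __int128 extension), they sidestep the problem entirely. A minimal sketch of the multiply-then-reduce step done exactly, assuming that extension is available:

#include <iostream>

// Multiply two 64-bit values modulo m through a 128-bit intermediate,
// so the full 20-digit product is held exactly before the reduction.
unsigned long long mulmod(unsigned long long a, unsigned long long b, unsigned long long m)
{
    return (unsigned long long)((unsigned __int128)a * b % m);
}

int main()
{
    // (10^10 - 1)^2 = 10^20 - 2*10^10 + 1, so the result mod 10^10 is 1.
    // Routed through a long double with a 64-bit significand instead,
    // the 20-digit product rounds and the answer comes out wrong.
    std::cout << mulmod(9999999999ULL, 9999999999ULL, 10000000000ULL) << "\n";
    return 0;
}

Replacing the two fmodl lines in modExp with calls like this keeps every intermediate exact.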

Related

Why conversion from unsigned long long to double can lead to data loss?

When I compile this trivial piece of code via Microsoft's VC 2008:
double maxDistance(unsigned long long* a, unsigned long long* b, int n)
{
    double maxD = 0, currentD = 0;
    for (int i = 0; i < n; ++i)
    {
        currentD = b[i] - a[i];
        if (currentD > maxD)
        {
            maxD = currentD;
        }
    }
    return maxD;
}
The compiler gives me:
warning C4244: conversion from 'unsigned long long' to 'double', possible loss of data, on the line
currentD = b[i] - a[i];
I know that it's better to rewrite the code somehow; I use double to account for possible negative values of the difference. But I'm just curious: why in the world can a conversion from unsigned long long to double lead to data loss, if unsigned long long's range is 0 to 18,446,744,073,709,551,615 and double's is +/- 1.7E+/-308?
An IEEE double-precision floating point number has 53 bits of mantissa. This means that (most) integers greater than 2^53 can't be stored exactly in a double.
Example program (this is for GCC; use %I64u for MSVC):

#include <stdio.h>

int main() {
    unsigned long long ull;
    ull = (1ULL << 53) - 1;
    printf("%llu %f\n", ull, (double)ull);
    ull = (1ULL << 53) + 1;
    printf("%llu %f\n", ull, (double)ull);
    return 0;
}
Output:
9007199254740991 9007199254740991.000000
9007199254740993 9007199254740992.000000
A double supports a larger range of possible values, but cannot represent all values in that range. Some of the values it cannot represent are integral values that a long or a long long can represent.
Trying to assign a value that a floating point variable cannot represent means the result is some approximation - a value that is close, but not exactly equal. That represents a potential data loss (depending on what value is being assigned).
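One possible rewrite of the loop (a sketch, converting each operand to double before subtracting, so a negative difference stays negative instead of wrapping around as an unsigned value):

double maxDistance(const unsigned long long* a, const unsigned long long* b, int n)
{
    double maxD = 0;
    for (int i = 0; i < n; ++i)
    {
        // Converting each operand separately turns a negative difference
        // into a negative double rather than a huge wrapped unsigned value;
        // each conversion can still round once values exceed 2^53.
        double currentD = (double)b[i] - (double)a[i];
        if (currentD > maxD)
        {
            maxD = currentD;
        }
    }
    return maxD;
}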

Why can't I use a long long int type ? c++

I try
long long int l = 42343254325322343224;
but to no avail. Why does it tell me "integer constant is too long"? I am using the long long int type, which should be able to hold more than 19 digits. Am I doing something wrong here, or is there a special secret I do not know of just yet?
Because, on my x86_64 system, it's more than what 64 bits can represent:

// 42343254325322343224
// 2^64 for an 8-byte long long int is 18446744073709551616
// (2^64 - 1 is the maximum unsigned representable value)
std::cout << sizeof(long long int); // 8

You shouldn't confuse the number of digits with the number of bits necessary to represent a number: 42343254325322343224 needs ceil(log2(42343254325322343224)) = 66 bits, two more than even an unsigned 64-bit type provides.
Take a look at Boost.Multiprecision.
It defines templates and classes to handle larger numbers.
Here is the example from the Boost tutorial:
#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

using namespace boost::multiprecision;

int main()
{
    int128_t v = 1;
    // Do some fixed precision arithmetic:
    for (unsigned i = 1; i <= 20; ++i)
        v *= i;
    std::cout << v << std::endl; // prints 20!

    // Repeat at arbitrary precision:
    cpp_int u = 1;
    for (unsigned i = 1; i <= 100; ++i)
        u *= i;
    std::cout << u << std::endl; // prints 100!
    return 0;
}
It seems that the value of the integer literal exceeds the maximum value of type long long int.
Try the following program to determine the maximum values of types long long int and unsigned long long int:
#include <iostream>
#include <limits>

int main()
{
    std::cout << std::numeric_limits<long long int>::max() << std::endl;
    std::cout << std::numeric_limits<unsigned long long int>::max() << std::endl;
    return 0;
}
I got the following results at www.ideone.com:
9223372036854775807
18446744073709551615
You can compare them with the value you specified:
42343254325322343224
Take into account that, in the general case, there is no need to specify the suffix ll for an integer decimal literal, even one so big that it can be stored only in type long long int: the compiler itself will determine the most appropriate type (int, long int, or long long int) for an integral decimal literal.
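A small illustration of that rule (the types chosen are platform-dependent; this assumes 32-bit int and long, as on common 32-bit and Windows targets):

auto a = 100000;      // fits in int, so the literal's type is int
auto b = 10000000000; // does not fit in a 32-bit int or long, so long long here
// 42343254325322343224 fits in none of the standard integer types,
// so that literal is ill-formed no matter which suffix you attach.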

conversion from any base to base 10 c++

I found two ways of converting from any base to base 10. The first one is the usual method we learn in college, e.g. 521 (base 15) ---> (5*15^2)+(2*15^1)+(1*15^0) = 1125+30+1 = 1156 (base 10). My problem is that I applied both methods to the number 1023456789ABCDE (base 15) but got different results. Google Code Jam accepts only the value generated by the second method for this particular number; for all other cases both methods generate the same result. What is the big deal with this special number? Can anybody suggest anything?
#include <iostream>
#include <math.h>
using namespace std;

int main()
{
    // number in base 15 is 1023456789ABCDE
    int value[15] = {1, 0, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14};
    int base = 15;
    unsigned long long sum = 0;
    for (int i = 0; i < 15; i++)
    {
        sum += (pow(base, i) * value[14 - i]);
    }
    cout << sum << endl;
    // this prints 29480883458974408

    sum = 0;
    for (int i = 0; i < 15; i++)
    {
        sum = (sum * base) + value[i];
    }
    cout << sum << endl;
    // this prints 29480883458974409
    return 0;
}
Consider using std::stoll to convert a string into a long long.
It lets you choose the base to use; here is an example for your number with base 15:

#include <iostream>
#include <string>

int main()
{
    std::string s = "1023456789ABCDE";
    long long n = std::stoll(s, 0, 15);
    std::cout << s << " in base 15: " << n << std::endl;
    // -> 1023456789ABCDE in base 15: 29480883458974409
}
pow(base, i) uses floating point, so you lose some precision on some numbers.
You exceeded double precision.
The return value of pow() is a double, which is precise for at least DBL_DIG significant decimal digits. DBL_DIG is at least 10 and is typically 15 for IEEE 754 double-precision binary.
The desired number 29480883458974409 is 17 digits, so some calculation error should be expected.
In particular, sum += pow(base, i) * value[14 - i] is done as long long = long long + (double * long long), which results in a long long = double assignment. The nearest double to 29480883458974409 is 29480883458974408. So it is not an imprecise value from pow() that causes the issue here, but an imprecise sum from the addition.
@Mooing Duck in a comment references code that avoids pow() and its double limitation. Following is a slight variant:
unsigned long long ullongpow(unsigned value, unsigned exp) {
    unsigned long long result = !!value; // 0 if value is 0, otherwise 1
    while (exp-- > 0) {
        result *= value;
    }
    return result;
}
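With that helper, the first loop from the question stays in pure integer arithmetic; a sketch of the change:

unsigned long long sum = 0;
for (int i = 0; i < 15; i++)
{
    // ullongpow keeps everything in 64-bit integers, so nothing rounds:
    sum += ullongpow(base, i) * value[14 - i];
}
cout << sum << endl;
// now prints 29480883458974409, matching the second loop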

Output a 15 digit number

I have a program that is supposed to find the sum of all the numbers between the 1st and 75th terms of the Fibonacci sequence that are divisible by three and add them together. I got the program working properly; the only problem I am having is displaying such a large number. I am told that the answer should be 15 digits. I have tried long long, long double, and unsigned long long int, and none of those produce the right output (they produce a negative number).
code:
#include <iostream>
#include <cstdlib>

long fibNum(int kth, int nth);

int main()
{
    int kTerm;
    int nTerm;
    kTerm = 76;
    nTerm = 3;
    std::cout << fibNum(kTerm, nTerm) << std::endl;
    system("pause");
    return 0;
}

long fibNum(int kth, int nth)
{
    int term[100];
    long firstTerm;
    long secondTerm;
    long exactValue;
    int i;
    term[1] = 1;
    term[2] = 1;
    exactValue = 0;
    do
    {
        firstTerm = term[nth - 1];
        secondTerm = term[nth - 2];
        term[nth] = (firstTerm + secondTerm);
        nth++;
    }
    while (nth < kth);
    for (i = 1; i < kth; i++)
    {
        if (term[i] % 3 == 0)
        {
            term[i] = term[i];
        }
        else
            term[i] = 0;
        exactValue = term[i] + exactValue;
    }
    return exactValue;
}
I found out that the problem has to do with the array: the array cannot store the 47th term, which is 10 digits. Now I have no idea what to do.
Type long long is guaranteed to be at least 64 bits (and is exactly 64 bits on every implementation I've seen). Its maximum value, LLONG_MAX, is at least 2^63 - 1, or 9223372036854775807, which is 19 decimal digits -- so long long is more than big enough to represent 15-digit numbers.
Just use type long long consistently. In your code, you have one variable of type long double, which has far more range than long long but may have less precision (which could make it impossible to determine whether a given number is a multiple of 3).
You could also use unsigned long long, whose upper bound is at least 2^64 - 1, but either long long or unsigned long long should be more than wide enough for your purposes.
Displaying a long long value in C++ is straightforward:
long long num = some_value;
std::cout << "num = " << num << "\n";
Or if you prefer printf for some reason, use the "%lld" format for long long, "%llu" for unsigned long long.
(For integers too wide to fit in 64 bits, there are software packages that handle arbitrarily large integers; the most prominent is GNU's GMP. But you don't need it for 15-digit integers.)
What you can do is take char s[15] and int i = 14, k, and then run a while loop until n != 0, where n holds the value to print. In the body of the while loop:

k = n % 10;
s[i] = k + 48; // 48 is the ASCII code of '0'
n = n / 10;
i--;
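For completeness, a minimal runnable version of that digit-by-digit idea (the buffer size and the sample value are illustrative):

#include <iostream>

int main()
{
    long long n = 426547842461739LL; // a sample 15-digit value
    char s[16];                      // 15 digits plus a terminator
    int i = 14;
    s[15] = '\0';
    while (n != 0)
    {
        s[i--] = (char)(n % 10 + 48); // 48 is the ASCII code of '0'
        n /= 10;
    }
    std::cout << &s[i + 1] << std::endl;
    return 0;
}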
The array cannot store the 47th term which is 10 digits.
This indicates that your architecture has a type long with just 32 bits, which is common on 32-bit architectures. 32 bits cover 9-digit numbers and the low 10-digit numbers: to be precise, up to 2,147,483,647 for long and 4,294,967,295 for unsigned long.
Just change your long types to long long or unsigned long long, including the return type of fibNum. That easily covers 18 digits.
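A minimal sketch of that fix (the structure is adapted from the question, with everything widened to long long):

#include <iostream>

// Every term and the running sum are 64-bit, so the 10-digit 47th term
// and the 15-digit final sum both fit without overflow.
long long fibSumOfMultiplesOf3(int kth)
{
    long long term[100] = {0, 1, 1}; // term[1] and term[2] seed the sequence
    long long exactValue = 0;
    for (int n = 3; n < kth; ++n)
        term[n] = term[n - 1] + term[n - 2];
    for (int i = 1; i < kth; ++i)
        if (term[i] % 3 == 0)
            exactValue += term[i];
    return exactValue;
}

int main()
{
    std::cout << fibSumOfMultiplesOf3(76) << std::endl;
    return 0;
}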

Checking if input file value is bigger than 10^9

So basically, I have something like this -
Input file with 2 integers.
Code, something like this -
#include <iostream>
#include <fstream>
using namespace std;

int main() {
    unsigned long long n, k;
    ifstream input_file("file.txt");
    input_file >> n >> k;
    if (n >= 10^9 || k >= 10^9) {
        cout << "0" << endl;
    }
    return 0;
}
So, is there any chance to check if either of these two integers is bigger than 10^9? Basically, if I assign those integers to an unsigned long long, and if they are bigger than 10^9, do they automatically turn into some random value that fits inside the unsigned long long? Am I right, which would mean there is no way to check it, or am I missing something?
I'm bad at counting zeroes; that's the machine's job. What about 1e9 instead of the bit operation 10^9?
On most platforms, an unsigned long long will be able to store 10^9 with no problem. You just need to say:
if (n >= 1000000000ull)
If an unsigned long long is 64 bits, for example, which is common, you can store values up to 2^64 - 1.
Read into a string:
std::string s;
input_file >> s;
and check whether it's longer than 9 characters. The smallest 10-digit number, 1000000000 (1 and nine 0's), is exactly 10^9, so any value with more than 9 digits is at least 10^9.
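Putting that together, a minimal sketch (assuming the inputs carry no sign or leading zeros):

#include <iostream>
#include <fstream>
#include <string>

int main() {
    std::ifstream input_file("file.txt");
    std::string n, k;
    input_file >> n >> k;
    // Any number with 10 or more digits is >= 10^9 = 1000000000.
    if (n.size() > 9 || k.size() > 9) {
        std::cout << "0" << std::endl;
    }
    return 0;
}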
For 10^9 you need to write 1000000000LL. In C++, ^ is the bitwise XOR operator, not exponentiation. You also need the LL to ensure that the literal constant is interpreted as long long rather than just int.
if (n >= 1000000000LL || k >= 1000000000LL)
{
    ...
}
Of course, if the user enters a value which is too large to be represented by a long long (greater than 2^63 - 1, typically), then you have a bigger problem.