I am writing a function in which I have to calculate factorials of numbers and do operations on them. The return value of the function should be long long, so I think it would be better to do all operations in long long format. If I am wrong, please correct me.
The tgamma() function by itself returns the correct value in scientific notation. But the value returned by tgamma() is sometimes 1 less than the actual answer when it is cast to long long.
#include <cmath>
#include <iostream>
int main()
{
std::cout<<"11!:"<<tgamma(12)<<std::endl;
std::cout<<"12!"<<tgamma(13)<<std::endl;
std::cout<<"13!"<<tgamma(14)<<std::endl;
std::cout<<"14!"<<tgamma(15)<<std::endl;
std::cout<<"15!"<<tgamma(16)<<std::endl;
std::cout<<"16!"<<tgamma(17)<<std::endl;
std::cout<<"********************************"<<std::endl;
std::cout<<"11!:"<<(long long)tgamma(12)<<std::endl;
std::cout<<"12!"<<(long long)tgamma(13)<<std::endl;
std::cout<<"13!"<<(long long)tgamma(14)<<std::endl;
std::cout<<"14!"<<(long long)tgamma(15)<<std::endl;
std::cout<<"15!"<<(long long)tgamma(16)<<std::endl;
std::cout<<"16!"<<(long long)tgamma(17)<<std::endl;
return 0;
}
I am getting the following output:
11!:3.99168e+07
12!4.79002e+08
13!6.22702e+09
14!8.71783e+10
15!1.30767e+12
16!2.09228e+13
********************************
11!:39916800
12!479001599
13!6227020799
14!87178291199
15!1307674367999
16!20922789888000
The actual value of 15! according to this site is 1307674368000, but when I cast tgamma(16) to long long, I get only 1307674367999. The thing is, this discrepancy only appears for some numbers; the cast answer for 16! is correct: 20922789888000.
This function is for a competitive programming problem which is currently going on, so I can't paste the function and the solution I am developing here.
I would roll my own factorial function but I want to reduce the number of characters in my program to get bonus points.
Any tips on how to detect this discrepancy in typecasted value and correct it? Or maybe some other function that I can use?
Obviously, unless we have a very unusual implementation, not all long long values can be exactly represented as double. Therefore tgamma cannot return doubles such that casting to long long always produces the exact value: there are simply more long long values than there are double values within the long long range.
If you want exact long long factorial, you should implement it yourself.
On top of this, if you want precision, convert the double to long long not as (long long)x, but as (long long)round(x), or as (long long)(x + 0.5) assuming x is positive.
Casting from a floating point type to an integral type truncates. Try (long long)roundl(tgammal(xxx)) to get rid of the integer truncation error. This also uses long double, so it may give you more digits.
#include <math.h>
#include <iostream>
int main(){
std::cout<<"11!:"<<(long long)roundl(tgammal(12))<<std::endl;
std::cout<<"12!"<<(long long)roundl(tgammal(13))<<std::endl;
std::cout<<"13!"<<(long long)roundl(tgammal(14))<<std::endl;
std::cout<<"14!"<<(long long)roundl(tgammal(15))<<std::endl;
std::cout<<"15!"<<(long long)roundl(tgammal(16))<<std::endl;
std::cout<<"16!"<<(long long)roundl(tgammal(17))<<std::endl;
std::cout<<"********************************"<<std::endl;
std::cout<<"11!:"<<(long long)roundl(tgammal(12))<<std::endl;
std::cout<<"12!"<<(long long)roundl(tgammal(13))<<std::endl;
std::cout<<"13!"<<(long long)roundl(tgammal(14))<<std::endl;
std::cout<<"14!"<<(long long)roundl(tgammal(15))<<std::endl;
std::cout<<"15!"<<(long long)roundl(tgammal(16))<<std::endl;
std::cout<<"16!"<<(long long)roundl(tgammal(17))<<std::endl;
return 0;
}
Gives:
11!:39916800
12!479001600
13!6227020800
14!87178291200
15!1307674368000
16!20922789888000
********************************
11!:39916800
12!479001600
13!6227020800
14!87178291200
15!1307674368000
16!20922789888000
My output is coming out wrong. I guess I'm wrong with the casting. Please help me out.
#include <cmath>
#include <iostream>
using namespace std;
int main(){
int n; cin>>n;
unsigned long long int a,s; cin>>a;
s=(2*pow(10,n)+a);
cout<<s;
}
But when I give a large n, like 17 or 18, then my output s does not come out as expected.
E.g.: when n=17 and a=67576676767676788, then s=267576676767676800, which ideally should be 2*10^17 + 67576676767676788 = 267576676767676788.
First you have to understand what is going on.
To be able to use std::pow, the compiler silently converts the integer arguments to double, and the returned value is a double too.
Note that double has only about 16 significant digits (in decimal representation).
When you do the assignment, a conversion from double to unsigned long long int is silently performed.
If unsigned long long int has 64 bits, the largest power of 10 it can hold is 10^19 (its maximum value is about 1.8 * 10^19).
Now if you want to exceed this limitation you should use an external library; GMP is quite nice.
If a limitation to the range of unsigned long long int is acceptable, just implement your own power function, as sketched below.
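For example, a minimal sketch of that approach (pow10ull is an illustrative name, not a library function); everything stays in unsigned long long, so no rounding can occur for n up to 19:
#include <iostream>

// Integer power of 10 computed entirely in unsigned long long.
// Valid for n <= 19; 10^20 would overflow 64 bits.
unsigned long long pow10ull(unsigned n) {
    unsigned long long p = 1;
    for (unsigned i = 0; i < n; ++i)
        p *= 10;
    return p;
}

int main() {
    unsigned n;
    unsigned long long a;
    std::cin >> n >> a;
    // n=17, a=67576676767676788 now gives 267576676767676788 exactly.
    std::cout << 2 * pow10ull(n) + a << '\n';
}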
Is it possible to define 999e999 value without using the char type?
I've tried defining it even with unsigned long long, but the compiler keeps giving me constant too big error.
Thanks in advance.
Is it possible to define 999e999 value without using the char type?
No, that's not possible using intrinsic C++ data types. That number is far too big to be held in an unsigned long long.
A long double would let you store an approximation of it on architectures where long double is the 80-bit extended format, since that format supports base-10 exponents up to 4932 (999e999 is about 10^1002).
What can be achieved with your current CPU architecture can be explored using the std::numeric_limits facilities like this:
#include <iostream>
#include <limits>
int main() {
std::cout<< "max_exponent10: " << std::numeric_limits<long double>::max_exponent10 << std::endl;
}
Output:
max_exponent10: 4932
You have to use a 3rd party library (like GMP) or write your own algorithms to deal with big numbers like that.
In most (if not all) implementations, that constant is just too big to be represented as an unsigned long long, and also as a long double where that type is only 64 bits (though some implementations may just evaluate it as floating point infinity).
You may be interested in std::numeric_limits<T>::infinity() (for float, double or long double) or std::numeric_limits<T>::max() instead.
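For instance, a small sketch of querying those limits:
#include <iostream>
#include <limits>

int main() {
    std::cout << std::numeric_limits<double>::max() << '\n'        // ~1.79769e+308
              << std::numeric_limits<double>::infinity() << '\n';  // inf
}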
I've tried defining it even with unsigned long long, but the compiler keeps giving me constant too big error.
Of course it does. A long long is typically 64 bits long, which gives you log10(2^64) ≅ 19 decimal digits of precision. 999e999 = 999 * 10^999 ≅ 10^1002, so it is about 1000 decimal digits long, or more than 3300 bits long. So 999e999 isn't just too big for a long long, it's too big by an enormous margin.
Is it possible to define 999e999 value without using the char type?
Sure. You could define an integer-like type based on an array of some sort of integers, like long long; a minimal sketch follows. You'd still need to write a set of operators to work with your new giant type, though. Also, most of the time when you're working with numbers that large, you don't need an exact representation, which is why floating point types like float and double are useful.
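Here is a minimal sketch of that array-of-integers idea, just to make it concrete (BigInt, mulSmall and print are made-up names, and only multiplication by a small factor is implemented); it builds 999e999 as 999 * 10^999 in base-10^9 limbs:
#include <cstdint>
#include <iostream>
#include <vector>

// Little-endian base-10^9 limbs: value = sum of limbs[i] * 10^(9*i).
struct BigInt {
    std::vector<std::uint32_t> limbs{0};
};

// Multiply in place by a factor small enough that limb * factor + carry
// fits in 64 bits.
void mulSmall(BigInt& n, std::uint32_t factor) {
    std::uint64_t carry = 0;
    for (auto& limb : n.limbs) {
        std::uint64_t cur = std::uint64_t(limb) * factor + carry;
        limb  = std::uint32_t(cur % 1000000000);
        carry = cur / 1000000000;
    }
    while (carry) {
        n.limbs.push_back(std::uint32_t(carry % 1000000000));
        carry /= 1000000000;
    }
}

void print(const BigInt& n) {
    std::cout << n.limbs.back();
    for (auto it = n.limbs.rbegin() + 1; it != n.limbs.rend(); ++it) {
        std::cout.width(9);   // inner limbs are zero-padded to 9 digits
        std::cout.fill('0');
        std::cout << *it;
    }
    std::cout << '\n';
}

int main() {
    BigInt n;
    n.limbs[0] = 999;
    for (int i = 0; i < 999; ++i)
        mulSmall(n, 10);      // 999 * 10^999, i.e. 999e999
    print(n);
}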
I have two values p=19, q=14. I want to calculate p^q using the power function pow(p, q).
Here is my code:
long long p=19,q=14;
cout<<pow(p,q);
The correct answer is: 799006685782884121 but my code gives me 799006685782884096 which is incorrect.
I have also tried doing these calculations using unsigned long long instead of long long, but this didn't help.
The pow function is defined as:
double pow(double x, double y);
This means that it takes floating point arguments and returns a floating point result. Due to the nature of floating point numbers, some numbers cannot be exactly represented. The result you're getting is probably the closest match possible.
Note also that you're doing two (probably lossy) conversions:
converting the arguments from long long to double, and
converting the result of the function from double to long long.
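If you need the exact integer result, one workaround is a hand-written integer power using exponentiation by squaring, so no floating point is involved at all. A sketch (ipow is an illustrative name; the caller must ensure the result fits in 64 bits, which 19^14 does):
#include <cstdint>
#include <iostream>

// Exact integer power by repeated squaring.
std::uint64_t ipow(std::uint64_t base, unsigned exp) {
    std::uint64_t result = 1;
    while (exp) {
        if (exp & 1) result *= base;
        exp >>= 1;
        if (exp) base *= base;  // skip the final, unneeded squaring
    }
    return result;
}

int main() {
    std::cout << ipow(19, 14) << '\n';  // prints 799006685782884121 exactly
}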
We know that -2*4^31 + 1 = -9.223.372.036.854.775.807 is the lowest value you can store in long long, as stated here: What range of values can integer types store in C++.
So I have this operation:
#include <iostream>
unsigned long long pow(unsigned a, unsigned b) {
unsigned long long p = 1;
for (unsigned i = 0; i < b; i++)
p *= a;
return p;
}
int main()
{
long long nr = -pow(4, 31) + 5 -pow(4,31);
std::cout << nr << std::endl;
}
Why does it show -9.223.372.036.854.775.808 instead of -9.223.372.036.854.775.803? I'm using Visual Studio 2015.
This is a really nasty little problem which has three(!) causes.
Firstly, there is the problem that floating point arithmetic is approximate. If the compiler picks a pow function returning float or double, then 4^31 is so large that 5 is less than 1 ULP (unit of least precision), so adding it does nothing (in other words, 4.0^31 + 5 == 4.0^31). Multiplying by -2 can be done without loss, and the result can be stored in a long long without loss, as the wrong answer: -9.223.372.036.854.775.808.
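A quick way to observe this first cause in isolation, assuming std::pow computes this power of two exactly (common implementations do):
#include <cmath>
#include <iostream>

int main() {
    double big = std::pow(4.0, 31);  // 2^62; 1 ULP here is 2^10 = 1024
    std::cout << std::boolalpha << (big + 5 == big) << '\n';  // prints: true
}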
Secondly, a standard header may include other standard headers, but is not required to. Evidently, Visual Studio's version of <iostream> includes <math.h> (which declares pow in the global namespace), but Code::Blocks' version doesn't.
Thirdly, the OP's pow function is not selected because he passes the arguments 4 and 31, which are both of type int, while the declared function has parameters of type unsigned. Since C++11 there are lots of overloads (or a function template) of std::pow; these all return float or double (unless one of the arguments is of type long double, which doesn't apply here).
Thus an overload of std::pow will be a better match ... with a double return value, and we get floating point rounding.
Moral of the story: Don't write functions with the same name as standard library functions, unless you really know what you are doing!
Visual Studio has defined pow(double, int), which only requires a conversion of one argument, whereas your pow(unsigned, unsigned) requires conversion of both arguments unless you use pow(4U, 31U). Overload resolution in C++ is based on the inputs, not the result type.
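A sketch of how the unsigned-suffixed call changes the outcome, with the OP's function in scope:
#include <cmath>
#include <iostream>

unsigned long long pow(unsigned a, unsigned b) {  // the OP's overload
    unsigned long long p = 1;
    for (unsigned i = 0; i < b; ++i) p *= a;
    return p;
}

int main() {
    // 4U and 31U are unsigned, so this overload is an exact match and
    // beats every candidate that would need an argument conversion.
    std::cout << pow(4U, 31U) << '\n';  // 4611686018427387904, i.e. 4^31 exactly
}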
The lowest long long value can be obtained through numeric_limits. For long long it is:
auto lowest_ll = std::numeric_limits<long long>::lowest();
which results in:
-9223372036854775808
The pow() function that gets called is not yours hence the observed results. Change the name of the function.
The only possible explanation for the -9.223.372.036.854.775.808 result is the use of the pow function from the standard library returning a double value. In that case, the 5 is below the precision of the double computation, and the result is exactly -2^63, which converted to a long long gives 0x8000000000000000, or -9.223.372.036.854.775.808.
If you use your function returning an unsigned long long, you get a warning saying that you apply unary minus to an unsigned type and the result is still unsigned. So the whole operation is executed as unsigned long long and gives, without overflow, 0x8000000000000005 as an unsigned value. When you convert that to a signed value, the result is implementation-defined, but all compilers I know simply reinterpret the same representation as a signed integer, which gives -9.223.372.036.854.775.803.
But it would be simple to get the expected result without any warning by just using:
long long nr = -1 * pow(4, 31) + 5 - pow(4,31);
In addition, the unsigned arithmetic here wraps around in a well-defined way with no overflow, and the final conversion to long long, while implementation-defined, gives the expected -9.223.372.036.854.775.803 on common platforms, provided unsigned long long is at least 64 bits.
Your first call to pow is using the C standard library's function, which operates on floating-point values. Try giving your pow function a unique name:
unsigned long long my_pow(unsigned a, unsigned b) {
unsigned long long p = 1;
for (unsigned i = 0; i < b; i++)
p *= a;
return p;
}
int main()
{
long long nr = -my_pow(4, 31) + 5 - my_pow(4, 31);
std::cout << nr << std::endl;
}
This code reports an error: "unary minus operator applied to unsigned type, result still unsigned". So, essentially, your original code called a floating-point function, negated the value, and applied some integer arithmetic to it, for which it did not have enough precision to give the answer you were looking for (at 19 digits of precision!). To get the answer you're looking for, change the signature to:
long long my_pow(unsigned a, unsigned b);
This worked for me in MSVC++ 2013. As stated in other answers, you're getting the floating-point pow because your function expects unsigned, and receives signed integer constants. Adding U to your integers invokes your version of pow.
When writing some C++ code, I suddenly realised that my numbers are incorrectly cast from double to unsigned long long.
To be specific, I use the following code:
#define _CRT_SECURE_NO_WARNINGS
#include <iostream>
#include <limits>
using namespace std;
int main()
{
unsigned long long ull = numeric_limits<unsigned long long>::max();
double d = static_cast<double>(ull);
unsigned long long ull2 = static_cast<unsigned long long>(d);
cout << ull << endl << d << endl << ull2 << endl;
return 0;
}
Ideone live example.
When this code is executed on my computer, I have the following output:
18446744073709551615
1.84467e+019
9223372036854775808
Press any key to continue . . .
I expected the first and third numbers to be exactly the same (just like on Ideone) because I was sure that long double took 10 bytes and stored the mantissa in 8 of them. I would understand if the third number were truncated compared to the first one, in case I'm wrong about the floating-point format. But here the values differ by a factor of two!
So, the main question is: why? And how can I predict such situations?
Some details: I use Visual Studio 2013 on Windows 7, compile for x86, and sizeof(long double) == 8 for my system.
18446744073709551615 is not exactly representable in double (in IEEE 754). This is not unexpected, as a 64-bit floating point type obviously cannot represent all integers that are representable in 64 bits.
According to the C++ Standard, it is implementation-defined whether the next-highest or next-lowest double value is used. Apparently on your system, it selects the next highest value, which seems to be 1.8446744073709552e19. You could confirm this by outputting the double with more digits of precision.
Note that this is larger than the original number.
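A quick sketch of that confirmation using std::setprecision:
#include <iomanip>
#include <iostream>

int main() {
    unsigned long long ull = 18446744073709551615ULL;  // ULLONG_MAX
    double d = static_cast<double>(ull);
    std::cout << std::setprecision(20) << d << '\n';   // 18446744073709551616
}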
When you convert this double to integer, the behaviour is covered by [conv.fpint]/1:
A prvalue of a floating point type can be converted to a prvalue of an integer type. The conversion truncates; that is, the fractional part is discarded. The behavior is undefined if the truncated value cannot be represented in the destination type.
So this code potentially causes undefined behaviour. When undefined behaviour has occurred, anything can happen, including (but not limited to) bogus output.
The question was originally posted with long double, rather than double. On my gcc, the long double case behaves correctly, but on OP's MSVC it gave the same error. This could be explained by gcc using 80-bit long double, but MSVC using 64-bit long double.
It's due to the double approximation of long long. Near the upper limit of the range, around 1.8 * 10^19, adjacent doubles are about 2000 apart (1 ULP is 2^11 there); as you try to convert values around that limit, the rounding overflows the range. Try converting a value 10000 lower instead :)
BTW, on Cygwin, the third printed value is zero.
The problem is surprisingly simple. This is what is happening in your case:
18446744073709551615, when converted to a double, is rounded up to the nearest number that the floating point type can represent. (The closest representable number is larger.)
When that's converted back to an unsigned long long, it's larger than max(). Formally, the behaviour of converting this back to an unsigned long long is undefined but what appears to be happening in your case is a wrap around.
The observed significantly smaller number is the result of this.
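To "predict such situations", as the question asks, one option is a range check before converting, so the cast never reaches the undefined case. A sketch (fitsInUll is a made-up name; it assumes unsigned long long is 64 bits, so the boundary 2^64 is exactly representable as a double):
#include <iostream>

// Conservative check: finite, not negative, strictly below 2^64.
// NaN and infinity both fail the comparisons, so they are rejected too.
bool fitsInUll(double d) {
    return d >= 0.0 && d < 18446744073709551616.0;  // 2^64 exactly
}

int main() {
    double d = 1.8446744073709552e19;  // double rounded up from ULLONG_MAX
    if (fitsInUll(d))
        std::cout << static_cast<unsigned long long>(d) << '\n';
    else
        std::cout << "out of range for unsigned long long\n";  // this branch runs
}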