How does std::cout work?
The following code doesn't pass certain test cases for a question on HackerEarth.com:
double n, a, b;
while (t--) {
    cin >> n >> a >> b;
    long long x = round(b * n * 1.0 / (a + b));
    cout << ((a * x * x) + b * (n - x) * (n - x)) << endl;
}
while the following one passes all of them:
double n, a, b;
while (t--) {
    cin >> n >> a >> b;
    long long x = round(b * n * 1.0 / (a + b));
    long long ans = (a * x * x) + b * (n - x) * (n - x);
    cout << ans << endl;
}
Why is it that the test cases pass only when I first store the calculated value in a variable? Does the value change when it is printed directly to the console?
I am a newbie to C++.
The output format "chosen" by cout (or any std::ostream) depends on the type which is being outputted (specifically, it depends on the implementation of operator<< for that type).
Let's look at what types are at play in both cases.
In your first example, you are outputting a double, since a, b, and n are doubles. Even though x is a long long, the whole expression evaluates to a double due to implicit conversions.
In the second example, you are outputting a long long, since that is the type of ans. Note that your calculation of ans may be truncated, since it is being computed as a double (for the reasons explained above) but stored in a long long.
Without knowing the details of the test cases you are talking about, one difference in the output is that a double will likely be printed in decimal or scientific notation (e.g. 1.23 or 6.22702e+09), whereas a long long (or any integral type) will be printed as a whole number (note: there are ways to change this behavior, which I'm omitting here for simplicity).
Related
I have less experience with C++ and I code mainly in Python. While solving a programming challenge online, there was a part of the code where I had to multiply two numbers and reduce the result with a mod:
v = (u*node) % 100000
where u and node are int values in the range 1 - 100000. Due to time limit issues I wrote my code in C++. Here is what I wrote:
long long v = (u * node) % 100000;
While submitting I got a runtime error on all the test cases. I downloaded the failed test cases and ran them on my local computer, and I got the correct output.
After seeing the editorial, I changed that line to something like this:
long long v = u;
v = (v*node) % 100000;
and submitted. I passed all the test cases. Can anyone please explain what the difference is between those two lines?
Variable data types:
int u
int node
Because u and node are both ints, this expression,
(u * node)
produces an int result. If it overflows—meaning that the result is too large to fit in an int—too bad. Signed integer overflow is undefined behavior, and all bets are off. Chances are, it'll do something like wrap around, but it could also format your hard disk.
When you make u a long long int, then the same expression produces a long long int result. The node multiplicand gets implicitly promoted to a long long int (int to long long int is a widening conversion, so it is always safe), and then these two long long int values get multiplied. This operation won't overflow, so you avoid undefined behavior and get the correct result.
You could have also written the code with an explicit cast to avoid the declaration of a new variable:
(static_cast<long long int>(u) * node)
Note that it doesn't matter which value you promote, the result will be the same because the other value will get implicitly promoted, as described above:
(u * static_cast<long long int>(node))
On the other hand, this won't work:
static_cast<long long int>(u * node)
because it only widens the result of the multiplication operation, after the multiplication has been performed. If that multiplication overflowed the int type, then it is already too late.
It is the same reason that this doesn't work—the promotion to long long happens after the result is evaluated as an int:
long long v = (u * node)
Please can anyone explain whats the difference between those two lines..
The first line actually means:
long long v = (long long) (int * int % int);
So first you multiply int by int (overflow, result truncated to int), then take the mod, then widen the int to long long.
The second line actually means:
long long v = (long long) int;
v = long long * int % int;
So first the int is widened to long long, then the long long is multiplied by the int (no overflow), then the mod is taken, and the result is assigned to the long long.
You were probably running into an overflow in the first case (both u and node being ints). When variables are multiplied, the intermediate result has the type of the widest operand involved (after the usual arithmetic conversions among int, long long, float, double, etc.). So if you multiply two big ints, the intermediate result is also an int and can overflow.
Your modification works because the temporary result is stored in a long long, which does not overflow in your examples.
We know that -2*4^31 + 1 = -9.223.372.036.854.775.807 is the lowest value you can store in a long long, as stated here: What range of values can integer types store in C++.
So I have this operation:
#include <iostream>

unsigned long long pow(unsigned a, unsigned b) {
    unsigned long long p = 1;
    for (unsigned i = 0; i < b; i++)
        p *= a;
    return p;
}

int main()
{
    long long nr = -pow(4, 31) + 5 - pow(4, 31);
    std::cout << nr << std::endl;
}
Why does it show -9.223.372.036.854.775.808 instead of -9.223.372.036.854.775.803? I'm using Visual Studio 2015.
This is a really nasty little problem which has three(!) causes.
Firstly, there is the problem that floating-point arithmetic is approximate. If the compiler picks a pow function returning float or double, then 4^31 is so large that 5 is less than 1 ULP (unit of least precision), so adding it will do nothing (in other words, 4.0^31 + 5 == 4.0^31). Multiplying by -2 can be done without loss, and the result can be stored in a long long without loss, as the wrong answer: -9.223.372.036.854.775.808.
Secondly, a standard header may include other standard headers, but is not required to. Evidently, Visual Studio's version of <iostream> includes <math.h> (which declares pow in the global namespace), but Code::Blocks' version doesn't.
Thirdly, the OP's pow function is not selected because the arguments 4 and 31 are both of type int, and the declared function has arguments of type unsigned. Since C++11, there are lots of overloads (or a function template) of std::pow. These all return float or double (unless one of the arguments is of type long double - which doesn't apply here).
Thus an overload of std::pow is a better match ... with a double return value, and we get floating-point rounding.
Moral of the story: Don't write functions with the same name as standard library functions, unless you really know what you are doing!
Visual Studio has defined pow(double, int), which requires a conversion of only one argument, whereas your pow(unsigned, unsigned) requires conversion of both arguments unless you call pow(4U, 31U). Overload resolution in C++ is based on the arguments - not the result type.
The lowest long long value can be obtained through numeric_limits. For long long it is:
auto lowest_ll = std::numeric_limits<long long>::lowest();
which results in:
-9223372036854775808
The pow() function that gets called is not yours hence the observed results. Change the name of the function.
The only possible explanation for the -9.223.372.036.854.775.808 result is the use of the pow function from the standard library returning a double value. In that case, the 5 will be below the precision of the double computation, and the result will be exactly -2^63, which converted to a long long gives 0x8000000000000000, or -9.223.372.036.854.775.808.
If you use your function returning an unsigned long long, you get a warning saying that you are applying unary minus to an unsigned type and the result is still unsigned. So the whole operation is executed as unsigned long long and, without overflow, gives 0x8000000000000005 as an unsigned value. When you cast it to a signed value, the result is implementation-defined, but all compilers I know simply reuse the same representation as a signed integer, which gives -9.223.372.036.854.775.803.
But it would be simple to make the computation as signed long long without any warning by just using:
long long nr = -1 * pow(4, 31) + 5 - pow(4,31);
In addition, you have neither an undefined cast nor overflow here, so the result is perfectly defined per the standard, provided unsigned long long is at least 64 bits.
Your first call to pow is using the C standard library's function, which operates on floating points. Try giving your pow function a unique name:
#include <iostream>

unsigned long long my_pow(unsigned a, unsigned b) {
    unsigned long long p = 1;
    for (unsigned i = 0; i < b; i++)
        p *= a;
    return p;
}

int main()
{
    long long nr = -my_pow(4, 31) + 5 - my_pow(4, 31);
    std::cout << nr << std::endl;
}
This code produces a warning: "unary minus operator applied to unsigned type, result still unsigned". So, essentially, your original code called a floating-point function, negated the value, and applied some integer arithmetic to it, for which it did not have enough precision to give the answer you were looking for (at 19 digits of precision!). To get the answer you're looking for, change the signature to:
long long my_pow(unsigned a, unsigned b);
This worked for me in MSVC++ 2013. As stated in other answers, you're getting the floating-point pow because your function expects unsigned values but receives signed integer constants. Adding U to your integer literals invokes your version of pow.
I am writing a function in which I have to calculate factorials of numbers and do operations on them. The return value of the function should be long long, so I think it would be better to do all operations in long long format. If I am wrong, please correct me.
The tgamma() function by itself returns the correct value in scientific notation. But the value returned by tgamma() is sometimes 1 less than the actual answer when it is typecast to long long.
#include <math.h>
#include <iostream>

int main()
{
    std::cout << "11!:" << tgamma(12) << std::endl;
    std::cout << "12!" << tgamma(13) << std::endl;
    std::cout << "13!" << tgamma(14) << std::endl;
    std::cout << "14!" << tgamma(15) << std::endl;
    std::cout << "15!" << tgamma(16) << std::endl;
    std::cout << "16!" << tgamma(17) << std::endl;
    std::cout << "********************************" << std::endl;
    std::cout << "11!:" << (long long)tgamma(12) << std::endl;
    std::cout << "12!" << (long long)tgamma(13) << std::endl;
    std::cout << "13!" << (long long)tgamma(14) << std::endl;
    std::cout << "14!" << (long long)tgamma(15) << std::endl;
    std::cout << "15!" << (long long)tgamma(16) << std::endl;
    std::cout << "16!" << (long long)tgamma(17) << std::endl;
    return 0;
}
I am getting the following output:
11!:3.99168e+07
12!4.79002e+08
13!6.22702e+09
14!8.71783e+10
15!1.30767e+12
16!2.09228e+13
********************************
11!:39916800
12!479001599
13!6227020799
14!87178291199
15!1307674367999
16!20922789888000
The actual value of 15! according to this site is 1307674368000 but when I typecast tgamma(16) to long long, I get only 1307674367999. The thing is this discrepancy only appears for some numbers. The typecasted answer for 16! is correct - 20922789888000.
This function is for a competitive programming problem which is currently going on, so I can't paste the function and the solution I am developing to it here.
I would roll my own factorial function but I want to reduce the number of characters in my program to get bonus points.
Any tips on how to detect this discrepancy in typecasted value and correct it? Or maybe some other function that I can use?
Obviously, unless we have a very unusual implementation, not all long long numbers can be exactly represented as doubles. Therefore, tgamma cannot return double values such that casting to long long always produces the exact value: there are simply more long long values than double values within the long long range.
If you want exact long long factorial, you should implement it yourself.
On top of this, if you want precision, you convert a double to a long long not as (long long)x, but as (long long)round(x), or (long long)(x + 0.5) assuming x is positive.
Casting from a floating-point type to an integral type truncates. Try (long long) roundl(tgammal(xxx)) to get rid of the integer truncation error. This also uses long doubles, so it may give you more digits.
#include <math.h>
#include <iostream>

int main(){
    std::cout << "11!:" << (long long)roundl(tgammal(12)) << std::endl;
    std::cout << "12!" << (long long)roundl(tgammal(13)) << std::endl;
    std::cout << "13!" << (long long)roundl(tgammal(14)) << std::endl;
    std::cout << "14!" << (long long)roundl(tgammal(15)) << std::endl;
    std::cout << "15!" << (long long)roundl(tgammal(16)) << std::endl;
    std::cout << "16!" << (long long)roundl(tgammal(17)) << std::endl;
    std::cout << "********************************" << std::endl;
    std::cout << "11!:" << (long long)roundl(tgammal(12)) << std::endl;
    std::cout << "12!" << (long long)roundl(tgammal(13)) << std::endl;
    std::cout << "13!" << (long long)roundl(tgammal(14)) << std::endl;
    std::cout << "14!" << (long long)roundl(tgammal(15)) << std::endl;
    std::cout << "15!" << (long long)roundl(tgammal(16)) << std::endl;
    std::cout << "16!" << (long long)roundl(tgammal(17)) << std::endl;
    return 0;
}
Gives:
11!:39916800
12!479001600
13!6227020800
14!87178291200
15!1307674368000
16!20922789888000
********************************
11!:39916800
12!479001600
13!6227020800
14!87178291200
15!1307674368000
16!20922789888000
When writing some C++ code, I suddenly realised that my numbers are incorrectly cast from double to unsigned long long.
To be specific, I use the following code:
#define _CRT_SECURE_NO_WARNINGS
#include <iostream>
#include <limits>
using namespace std;

int main()
{
    unsigned long long ull = numeric_limits<unsigned long long>::max();
    double d = static_cast<double>(ull);
    unsigned long long ull2 = static_cast<unsigned long long>(d);
    cout << ull << endl << d << endl << ull2 << endl;
    return 0;
}
Ideone live example.
When this code is executed on my computer, I have the following output:
18446744073709551615
1.84467e+019
9223372036854775808
Press any key to continue . . .
I expected the first and third numbers to be exactly the same (just like on Ideone) because I was sure that long double took 10 bytes, and stored the mantissa in 8 of them. I would understand if the third number were truncated compared to first one - just for the case I'm wrong with the floating-point numbers format. But here the values are twice different!
So, the main question is: why? And how can I predict such situations?
Some details: I use Visual Studio 2013 on Windows 7, compile for x86, and sizeof(long double) == 8 for my system.
18446744073709551615 is not exactly representable in double (in IEEE 754). This is not unexpected, as a 64-bit floating-point type obviously cannot represent all integers that are representable in 64 bits.
According to the C++ standard, it is implementation-defined whether the next-highest or next-lowest double value is used. Apparently, your system selects the next-highest value, which seems to be 1.8446744073709552e19. You could confirm this by outputting the double with more digits of precision.
Note that this is larger than the original number.
When you convert this double to integer, the behaviour is covered by [conv.fpint]/1:
A prvalue of a floating point type can be converted to a prvalue of an integer type. The conversion truncates; that is, the fractional part is discarded. The behavior is undefined if the truncated value cannot be represented in the destination type.
So this code potentially causes undefined behaviour. When undefined behaviour has occurred, anything can happen, including (but not limited to) bogus output.
The question was originally posted with long double, rather than double. On my gcc, the long double case behaves correctly, but on OP's MSVC it gave the same error. This could be explained by gcc using 80-bit long double, but MSVC using 64-bit long double.
It's due to the double approximation of long long values. Near 10^19, one ULP of a double is 2048, so when you try to convert values around the upper limit of the unsigned long long range, the rounded double can land outside that range. Try converting a value 10000 lower instead :)
BTW, on Cygwin, the third printed value is zero.
The problem is surprisingly simple. This is what is happening in your case:
18446744073709551615, when converted to a double, is rounded up to the nearest number that the floating-point type can represent (the closest representable number is larger).
When that's converted back to an unsigned long long, it's larger than max(). Formally, the behaviour of converting this back to an unsigned long long is undefined but what appears to be happening in your case is a wrap around.
The observed significantly smaller number is the result of this.
I am a programming newbie. I needed a simple function to convert any number with a decimal point, X.YZ, into XYZ. I did it by multiplying the number by 10 enough times and using double-to-int conversion.
#include <iostream>

int main()
{
    std::cout << "Number: " << std::endl;
    double a;
    // the uninitialized b was pointed out; it's not the issue
    long b = 0;
    std::cin >> a;
    while (b != a)
    {
        a *= 10;
        b = a;
    }
    std::cout << a << std::endl;
    return 0;
}
This works maybe 90 percent of the time. For some numbers, like 132.54, the program runs forever. It processes 132.547 (which should need more memory than 132.54) just fine.
So my question is: why does it not work 100 percent of the time for numbers within the range of long int? Why 132.54 and similar numbers?
I am using Code::Blocks and the GNU GCC compiler.
Many decimal floating point numbers cannot be exactly represented in binary. You only get a close approximation.
If 132.54 is represented as 132.539999999999999, you will never get a match.
Print the values in the loop, and you will see what happens.
The problem is that most decimal values cannot be represented exactly as floating-point values. So having a decimal value that only has a couple of digits doesn't guarantee that multiplying by ten enough times will produce a floating-point value with no fractional part. To see this, display the value of a each time through the loop. There's lots of noise down in the low bits.
Your problem is that you never initialize b and therefore have undefined behaviour.
You should do this:
long b = 0;
Now you can go compare b with something else and get good behaviour.
Also, comparing a floating-point value with an integral type should be done against an appropriate epsilon value; for this loop, the "not yet equal" condition becomes:
while (fabs(an_int - a_float) > eps)
Instead of reading it as a double, read it as a string and parse it. You won't run into floating-point precision problems that way.
long b;
Here you define b. From this point on, the variable contains a garbage value - basically whatever happened to be in the memory when it was allocated. After that, you use this variable in a condition:
while(b!=a)
This leads to undefined behaviour, which basically means that anything can happen, including the possibility that the app will appear to work (if you are lucky), depending on the garbage value in b.
To avoid this, you need to initialize b with some value, for example long b = 0;.