long long value in Visual Studio - c++

We know that -2*4^31 + 1 = -9.223.372.036.854.775.807 is the lowest value you can store in a long long, as stated here: What range of values can integer types store in C++.
So I have this operation:
#include <iostream>

unsigned long long pow(unsigned a, unsigned b) {
    unsigned long long p = 1;
    for (unsigned i = 0; i < b; i++)
        p *= a;
    return p;
}

int main()
{
    long long nr = -pow(4, 31) + 5 - pow(4, 31);
    std::cout << nr << std::endl;
}
Why does it show -9.223.372.036.854.775.808 instead of -9.223.372.036.854.775.803? I'm using Visual Studio 2015.

This is a really nasty little problem which has three(!) causes.
Firstly there is a problem that floating point arithmetic is approximate. If the compiler picks a pow function returning float or double, then 4**31 is so large that 5 is less than 1ULP (unit of least precision), so adding it will do nothing (in other words, 4.0**31+5 == 4.0**31). Multiplying by -2 can be done without loss, and the result can be stored in a long long without loss as the wrong answer: -9.223.372.036.854.775.808.
Secondly, a standard header may include other standard headers, but is not required to. Evidently, Visual Studio's version of <iostream> includes <math.h> (which declares pow in the global namespace), but Code::Blocks' version doesn't.
Thirdly, the OP's pow function is not selected because he passes arguments 4, and 31, which are both of type int, and the declared function has arguments of type unsigned. Since C++11, there are lots of overloads (or a function template) of std::pow. These all return float or double (unless one of the arguments is of type long double - which doesn't apply here).
Thus an overload of std::pow will be a better match ... with a double return value, and we get floating-point rounding.
Moral of the story: Don't write functions with the same name as standard library functions, unless you really know what you are doing!
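A minimal sketch of the floating-point rounding described in the first point (assuming an IEEE-754 double and a 64-bit long long):

#include <cmath>
#include <iostream>

int main()
{
    // std::pow works in double here: 4.0^31 == 2^62, and at that magnitude
    // 1 ULP is 1024, so adding 5 changes nothing; the sum is exactly -2^63.
    double d = -std::pow(4.0, 31) + 5 - std::pow(4.0, 31);
    std::cout << static_cast<long long>(d) << std::endl; // -9223372036854775808
}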

Visual Studio has defined pow(double, int), which only requires a conversion of one argument, whereas your pow(unsigned, unsigned) requires conversion of both arguments unless you use pow(4U, 31U). Overloading resolution in C++ is based on the inputs - not the result type.

The lowest long long value can be obtained through numeric_limits. For long long it is:
auto lowest_ll = std::numeric_limits<long long>::lowest();
which results in:
-9223372036854775808
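For reference, a small self-contained sketch printing both ends of the range:

#include <iostream>
#include <limits>

int main()
{
    std::cout << std::numeric_limits<long long>::lowest() << "\n"; // -9223372036854775808
    std::cout << std::numeric_limits<long long>::max() << "\n";    //  9223372036854775807
}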
The pow() function that gets called is not yours, hence the observed result. Change the name of your function.

The only possible explanation for the -9.223.372.036.854.775.808 result is the use of the pow function from the standard library, which returns a double value. In that case, the 5 will be below the precision of the double computation, and the result will be exactly -2^63, which, converted to a long long, gives 0x8000000000000000, i.e. -9.223.372.036.854.775.808.
If you use your function returning an unsigned long long, you get a warning saying that you apply unary minus to an unsigned type and still get an unsigned long long. So the whole operation is executed as unsigned long long and gives, without overflow, 0x8000000000000005 as an unsigned value. When you convert that to a signed type that cannot represent it, the result is implementation-defined, but all compilers I know simply reuse the same bit pattern, which gives -9.223.372.036.854.775.803.
But you could avoid the warning, while still doing the whole computation in unsigned long long, by just writing:
long long nr = -1 * pow(4, 31) + 5 - pow(4,31);
In addition, you have no overflow here, so the unsigned result is perfectly defined per the standard, provided unsigned long long is at least 64 bits; only the final conversion to long long is implementation-defined, as noted above.

Your first call to pow is using the C standard library's function, which operates on floating-point values. Try giving your pow function a unique name:
#include <iostream>

unsigned long long my_pow(unsigned a, unsigned b) {
    unsigned long long p = 1;
    for (unsigned i = 0; i < b; i++)
        p *= a;
    return p;
}

int main()
{
    long long nr = -my_pow(4, 31) + 5 - my_pow(4, 31);
    std::cout << nr << std::endl;
}
This code triggers a warning: "unary minus operator applied to unsigned type, result still unsigned". So, essentially, your original code called a floating-point function, negated the value, and applied some integer arithmetic to it, and the floating-point type did not have enough precision to give the answer you were looking for (at 19 digits of precision!). To get the answer you're looking for, change the signature to:
long long my_pow(unsigned a, unsigned b);
This worked for me in MSVC++ 2013. As stated in other answers, you're getting the floating-point pow because your function expects unsigned, and receives signed integer constants. Adding U to your integers invokes your version of pow.
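Putting the rename and the signed return type together, a minimal sketch (assuming a 64-bit long long):

#include <iostream>

// Renamed and returning a signed type, so the whole expression stays in
// signed long long arithmetic.
long long my_pow(unsigned a, unsigned b) {
    long long p = 1;
    for (unsigned i = 0; i < b; i++)
        p *= a;
    return p;
}

int main()
{
    long long nr = -my_pow(4, 31) + 5 - my_pow(4, 31);
    std::cout << nr << std::endl; // -9223372036854775803
}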

Related

What data type is used to store intermediate calculations while executing a program in C++?

I was trying to do the following calculations but found that they do not yield the correct result.
My doubt is: when my computer does the calculation a*b, what data type is used to temporarily store the result before the modulus is taken? How is that data type decided?
Please also let me know the source of this information.
#include <iostream>
using namespace std;

int main()
{
    long long int a = 1000000000000000000; // 18 zeroes
    long long int b = 1000000000000000000;
    long long int c = 1000000007;
    long long int d = (a * b) % c;
    cout << a << "\n" << b << "\n" << c << "\n" << d;
}
Edit1: This code also gives incorrect output
#include <iostream>
using namespace std;

int main()
{
    int a = 1000000000; // 9 zeroes
    int b = 1000000000;
    long long int c = 1000000007;
    long long int d = a * b % c;
    cout << a << "\n" << b << "\n" << c << "\n" << d;
}
How is the data type in which it stores the result decided?
The rules are fairly complicated and convoluted in general, but in this particular case it's simple: a*b is of type long long, and since a*b overflows, the program has undefined behavior.
You can use the equivalent formula to compute the correct result (without overflowing):
(a * b) % c == ((a % c) * (b % c)) % c
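A sketch of that rearrangement applied to the original values:

#include <iostream>
using namespace std;

int main()
{
    long long a = 1000000000000000000; // 18 zeroes
    long long b = 1000000000000000000;
    long long c = 1000000007;
    // Reduce both factors first: each remainder is below c (about 2^30),
    // so their product stays far below the long long limit.
    long long d = ((a % c) * (b % c)) % c;
    cout << d << "\n";
}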
Could you also suggest how to decide for mixed data types, and post your source of information?
Of some interest: https://en.cppreference.com/w/cpp/language/implicit_conversion The standard rules are unfortunately even more complicated.
As some suggestions:
never mix unsigned and signed types.
pay attention that types smaller than int will be promoted to int or unsigned int.
for a type T equal to or larger than int, T op T will have type T. This is what you should be aiming for in your expressions (i.e. have both operands of the same type, either int, long, or long long).
avoid unsigned types. Unfortunately that's impossible with the current Standard Library design (std::size_t, sigh).
avoid long, as its width differs between current major compilers and platforms.
if you care about the width of the integer data type, then avoid int, long, long long and such, and always use fixed-width integer types (std::int32_t, std::int64_t, etc.; see the sketch after this list). Completely ignore that technically those types are optional.
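For instance, a sketch of the Edit1 snippet rewritten with fixed-width types (these values fit comfortably in 64 bits, unlike the 18-zero case):

#include <cstdint>
#include <iostream>

int main()
{
    // Both operands are explicitly 64-bit, so the product cannot silently
    // overflow a 32-bit int as in the Edit1 snippet.
    std::int64_t a = 1000000000; // 9 zeroes
    std::int64_t b = 1000000000;
    std::int64_t d = a * b % 1000000007;
    std::cout << d << "\n";
}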
My understanding is that long long has to be at least 64 bits wide, but each 1000000000000000000 is a 60-bit number, so a*b would yield a result that exceeds any integer representation the compiler supports. Perhaps you were thinking that the 1000000000000000000 was binary?

Why does implicit conversion of int to long long int give unexpected answer in C++?

I read that conversion from int to long long int is a promotion and hence thought that there shouldn't be any issue, as there is no loss of data, unlike the reverse conversion.
But when I multiply two ints of large value and store the result in a long long int, it shows me a negative number.
Eg:
int a = 1000000, b = 1000000;
long long int c = a * b;
cout << c;
The above code gives me a negative value. Can someone explain why?
a*b is still of type int. Once it's evaluated, the result is then converted to long long int. At that point it's too late to avoid overflow. Convert one of your values to long long int before performing the multiplication. Try this:
#include <iostream>

int main()
{
    int a = 1000000, b = 1000000;
    long long int c = static_cast<long long int>(a) * b;
    std::cout << c;
    return 0;
}
The multiplication is happening as an int, which overflows, giving undefined behaviour (in practice the value simply wraps around, which is very common; your combination of compiler and settings may even guarantee it), and after that the result is converted to long long.
I think you want to do the conversion on one of the arguments before multiplication, so that the multiplication is performed using long longs:
long long c = static_cast<long long>(a)*b;
In this way, b will be converted to long long before the multiplication takes place, the whole operation will be performed safely, and you get the desired result.
Because multiplying two ints will result in another int that comes with all the overflow problems attached. This int is then (after the fact) promoted to a long long int which still means it's not what you want.
Promote at least one of the operands to have the other promoted and get the result you want.
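Another way to promote one of the operands, shown here only as a sketch (the 1LL factor is a conversion trick not mentioned in the answers above):

#include <iostream>

int main()
{
    int a = 1000000, b = 1000000;
    // The 1LL factor converts the left-hand operand to long long, so the
    // whole product is evaluated in 64-bit arithmetic.
    long long c = 1LL * a * b;
    std::cout << c << "\n"; // 1000000000000
}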

How to get negative remainder with remainder operator on size_t?

Consider the following code sample:
#include <iostream>
#include <string>

int main()
{
    std::string str("someString"); // length 10
    int num = -11;
    std::cout << num % str.length() << std::endl;
}
Running this code on http://cpp.sh, I get 5 as a result, while I was expecting it to be -1.
I know that this happens because the type of str.length() is size_t, which is an implementation-dependent unsigned type, and because of the implicit type conversions that happen with binary operators, which cause num to be converted from a signed int to an unsigned size_t (more here);
this causes the negative value to become a positive one and messes up the result of the operation.
One could think of addressing the problem with an explicit cast to int:
num % (int)str.length()
This might work but it's not guaranteed, for instance in the case of a string with length larger than the maximum value of int. One could reduce the risk using a larger type, like long long, but what if size_t is unsigned long long? Same problem.
How would you address this problem in a portable and robust way?
Since C++11, you can just cast the result of length to std::string::difference_type.
To address "But what if the size is too big?":
That won't happen on 64 bit platforms and even if you are on a smaller one: When was the last time you actually had a string that took up more than half of total RAM? Unless you are doing really specific stuff (which you would know), using the difference_type is just fine; quit fighting ghosts.
Alternatively, just use int64_t, that's certainly big enough. (Though maybe looping over one on some 32 bit processors is slower than int32_t, I don't know. Won't matter for that single modulus operation though.)
(Fun fact: even some prominent committee members consider littering the standard library with unsigned types a mistake; for reference, see this panel at 9:50, 42:40, 1:02:50.)
Pre-C++11, the sign of % with negative operands was implementation-defined; for well-defined behavior, use std::div plus one of the casts described above.
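A minimal sketch of the difference_type approach described above:

#include <iostream>
#include <string>

int main()
{
    std::string str("someString"); // length 10
    int num = -11;
    // Cast the unsigned length to the signed difference_type so the
    // remainder is computed in signed arithmetic.
    auto len = static_cast<std::string::difference_type>(str.length());
    std::cout << num % len << std::endl; // -1
}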
We know that
-a % b == -(a % b)
So you could write something like this:
#include <cstdlib> // for std::llabs

template<typename T, typename T2>
constexpr T safeModulo(T a, T2 b)
{
    return (a >= 0 ? 1 : -1) * static_cast<T>(std::llabs(a) % b);
}
This won't overflow in 99.98% of the cases, because consider this
safeModulo(num, str.length());
If std::size_t is implemented as an unsigned long long, then T2 -> unsigned long long and T -> int.
As pointed out in the comments, using std::llabs instead of std::abs is important, because if a is the smallest possible value of int, removing the sign will overflow. Promoting a to a long long just before won't result in this problem, as long long has a larger range of values.
Now static_cast<int>(std::llabs(a) % b) will always produce a value whose magnitude is no larger than that of a, so casting it to int will never overflow/underflow. Even if the value gets converted to an unsigned long long for the % operation, it doesn't matter, because it is already non-negative after std::llabs(a), so the value is unchanged (i.e. it didn't overflow/underflow).
Because of the property stated above, if a is negative, multiply the result by -1 and you get the correct result.
The only case where it results in undefined behavior is when a is std::numeric_limits<long long>::min(), as removing the sign overflows a, resulting in undefined behavior. There is probably another way to implement the function, I'll think about it.

Cannot understand the difference between these two code samples

I wanted to write a program that computes the number of zones made by n lines.
The first example is my code, and the second is my friend's code. I think they are trying to do the same thing, but for the case n=65535 my code gives me the wrong answer. Where is the problem in my code?
my code:
#include <iostream>
using namespace std;

int main()
{
    int n;
    cin >> n;
    unsigned long long ans;
    ans = (n * (n + 1) / 2) + 1;
    cout << ans << endl;
    return 0;
}
my friend's code:
#include <iostream>
using namespace std;

int main(void) {
    double n, sum;
    cin >> n;
    sum = n * (n + 1) / 2 + 1;
    cout << (long)sum << endl;
    return 0;
}
In your code:
int n;
ans = (n*(n + 1) / 2) + 1;
All values in the calculation are ints: n is declared as int, and plain integer constants are ints as well. Therefore the result of this calculation will also be an int. The fact that you later assign this result to a long long variable doesn't change this.
Now the result of the multiplication 65535*65536 does not fit in a 32-bit signed int, so you get a nonsense answer. Fix your program by making n a 64-bit long long.
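A sketch of the corrected program along those lines (only n's type changes):

#include <iostream>
using namespace std;

int main()
{
    long long n; // 64-bit, so n * (n + 1) no longer overflows for n = 65535
    cin >> n;
    unsigned long long ans = (n * (n + 1) / 2) + 1;
    cout << ans << endl;
    return 0;
}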
As #Dithermaster suggests, the problem here is probably one of integer overflow.
As it stands right now, your code doesn't actually make much sense. In particular, since you've defined n as an int, and all the integer literals in the expression: (n*(n + 1) / 2) + 1 are also small enough to fit in an int, the calculation will be carried out on ints, and then (after the calculation is complete) the result will be converted to long long and assigned to ans (because you've defined ans as a long long).
What you almost certainly want is to carry out the entire calculation on long long to avoid overflow. The most obvious way to do this would be to define n as a long long instead of an int.
Your friend has avoided this by defining n as a double. This works up to a point--a typical implementation of double has a 53-bit significand, so it can be used as (essentially) a 53-bit integer type. That's obviously quite a bit more than the 16 bits that's mandated for an int, but equally obviously less than the 64 bits mandated for a long long.
There's also no point in supporting n being negative, so you could consider defining n and ans as unsigned long long instead.

Why can't I divide a large number by a negative number C++

There's no real need for a solution to this, I just want to know why.
Let's take two numbers:
#include <iostream>
using namespace std;

int main()
{
    unsigned long long int a = 17446744073709551615;
    signed long long int b = -30000000003;
    signed int c;
    c = a / b;
    cout << "\n\n\n" << c << endl;
}
Now, lately the answer I've been getting is zero. The size of my long long is 8 bytes, so more than enough to take it with the unsigned label. The c variable should also be big enough to handle the answer. (It should be -581 558 136, according to Google.) So...
Edit: I'd like to point out that on my machine...
Using numeric_limits, a falls well within the maximum of 18446744073709551615 and b falls well within the minimum limit of -9223372036854775808.
You have a number of implicit conversions happening, most of them unnecessary.
unsigned long long int a = 17446744073709551615;
An unsuffixed decimal integer literal is of type int, long int, or long long int; it's never of an unsigned type. That particular value almost certainly exceeds the maximum value of a long long int (2^63 - 1). Unless your compiler has a signed integer type wider than 64 bits, that makes your program ill-formed.
Add a ULL suffix to ensure that the literal is of the correct type:
unsigned long long int a = 17446744073709551615ULL;
The value happens to be between 2^63 - 1 and 2^64 - 1, so it fits in a 64-bit unsigned type but not in a 64-bit signed type.
(Actually just the U would suffice, but it doesn't hurt to be explicit.)
signed long long int b = -30000000003;
This shouldn't be a problem. 30000000003 is of some signed integer type; if your compiler supports long long, which is at least 64 bits wide, there's no overflow. Still, as long as you need a suffix on the value of a, it wouldn't hurt to be explicit:
signed long long int b = -30000000003LL;
Now we have:
signed int c;
c = a/b;
Dividing an unsigned long long by a signed long long causes the signed operand to be converted to unsigned long long. In this case, the value being converted is negative, so it's converted to a large positive value. Converting -30000000003 to unsigned long long yields 18446744043709551613. Dividing 17446744073709551615 by 18446744043709551613 yields zero.
Unless your compiler supports integers wider than 64 bits (most don't), you won't be able to directly divide 17446744073709551615 by -30000000003 and get a mathematically correct answer, since there's no integer type that can represent both values. All arithmetic operators (other than the shift operators) require operands of the same type, with implicit conversions applied as necessary.
In this particular case, you can divide 17446744073709551615ULL by 30000000003ULL and then account for the sign. (Check the language rules for division of negative integers.)
If you really need to do this in general, you can resort to floating-point (which means you'll probably lose some precision) or use some arbitrary width integer arithmetic package like GMP.
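A sketch of that sign-adjusted division (assuming a 64-bit unsigned long long):

#include <iostream>

int main()
{
    unsigned long long a = 17446744073709551615ULL;
    long long b = -30000000003LL;

    // Compute |b| in unsigned arithmetic (safe even for the most negative value),
    // divide the magnitudes, then restore the sign; integer division truncates
    // toward zero.
    unsigned long long bmag = (b < 0)
        ? 0ULL - static_cast<unsigned long long>(b)
        : static_cast<unsigned long long>(b);
    unsigned long long mag = a / bmag;
    long long c = (b < 0) ? -static_cast<long long>(mag)
                          : static_cast<long long>(mag);
    std::cout << c << std::endl; // -581558135
}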
b is getting treated as an unsigned number which is larger than a. Hence you are getting the answer as 0.
Try using it as
c = a / std::llabs(b);   // a is already unsigned; std::llabs needs <cstdlib>
if (b < 0)               // only b can be negative here
    return -c;
return c;