I am trying to do this division:
#include <bits/stdc++.h>
using namespace std;
int main(){
    int A = -2147483648;
    int B = -1;
    int C = A/B;
    // this is not working
    cout << C << endl;
    // nor is this working
    cout << A/B << endl;
    // But this is working
    cout << -2147483648/-1 << endl; // printing the result 2147483648
}
I am confused about why this is happening. Please explain.
Assuming the int type is 32 bits and uses two's complement representation, the first two cases exhibit undefined behavior, because both -2147483648 and -1 fit in an int but the quotient 2147483648 does not.
In the third case, the expression -2147483648/-1 contains the integer literal 2147483648 (the minus sign is not part of the literal; negation is applied afterwards). That literal gets the first type in which its value fits, which here is long int. The rest of the calculation is then carried out in that wider type, so no overflow or undefined behavior occurs.
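To make the type of the literal visible, here is a small sketch (assuming a typical 64-bit platform, where the literal ends up as long or long long depending on the data model):

#include <type_traits>

int main() {
    // 2147483648 does not fit in int, so the literal takes the next wider type that can hold it.
    static_assert(std::is_same<decltype(2147483648), long>::value ||
                  std::is_same<decltype(2147483648), long long>::value,
                  "the literal is wider than int");
    auto c = -2147483648 / -1;   // evaluated in that wider type: 2147483648, well defined
    (void)c;                     // silence the unused-variable warning
}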
You can change the data type to long long.
long long A = -2147483648;
long long B = -1;
long long C = A/B;
If you need a fractional result, use double instead of long long.
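For example, a fractional version of the same computation might look like this (just a sketch, not the original poster's code):

double A = -2147483648.0;
double B = -1;
double C = A / B;   // 2147483648.0; no integer overflow is involved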
So I want to turn an unsigned integer (a fairly large one, often above half of the unsigned integer limit) into a double that shows how far between 0 and the unsigned integer limit it is. The problem is that dividing it by the unsigned integer limit always returns 0. Example:
#include <iostream>
#include <cstdint>

int main()
{
    uint64_t a = 11446744073709551615u;
    double b = a / 18446744073709551615u;
    std::cout << b;
}
This always returns 0. Is there an alternative method or a way to fix this one?
If it means anything, I'm using GCC with the -O3 optimisation flag.
You have to make the division happen in double by converting one of the operands, for example like this:
double b = static_cast<double>(a) / 18446744073709551615;
or
double b = a / 18446744073709551615.0;
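Putting it together, a minimal compilable sketch (reusing the value of a from the question) could be:

#include <iostream>
#include <cstdint>

int main() {
    uint64_t a = 11446744073709551615u;
    // Converting one operand to double makes the division happen in floating point.
    double b = static_cast<double>(a) / 18446744073709551615.0;
    std::cout << b << '\n';   // roughly 0.62 instead of 0
}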
We know that -2*4^31 + 1 = -9.223.372.036.854.775.807, the lowest value you can store in a long long, as stated here: What range of values can integer types store in C++.
So I have this operation:
#include <iostream>

unsigned long long pow(unsigned a, unsigned b) {
    unsigned long long p = 1;
    for (unsigned i = 0; i < b; i++)
        p *= a;
    return p;
}

int main()
{
    long long nr = -pow(4, 31) + 5 - pow(4, 31);
    std::cout << nr << std::endl;
}
Why does it show -9.223.372.036.854.775.808 instead of -9.223.372.036.854.775.803? I'm using Visual Studio 2015.
This is a really nasty little problem which has three(!) causes.
Firstly, there is the problem that floating-point arithmetic is approximate. If the compiler picks a pow function returning float or double, then 4**31 is so large that 5 is less than 1 ULP (unit of least precision), so adding it does nothing (in other words, 4.0**31 + 5 == 4.0**31). Multiplying by -2 can be done without loss, and the result can be stored in a long long without loss, as the wrong answer: -9.223.372.036.854.775.808.
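A quick way to see that precision point in isolation (a sketch, independent of the question's code):

#include <iostream>
#include <cmath>

int main() {
    double big = std::pow(4.0, 31);           // 4^31 == 2^62, exactly representable in a double
    std::cout << (big + 5 == big) << '\n';    // prints 1: adding 5 is below 1 ULP, so it is lost
    long long nr = static_cast<long long>(-2 * big);
    std::cout << nr << '\n';                  // -9223372036854775808, the wrong answer from the question
}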
Secondly, a standard header may include other standard headers, but is not required to. Evidently, Visual Studio's version of <iostream> includes <math.h> (which declares pow in the global namespace), but Code::Blocks' version doesn't.
Thirdly, the OP's pow function is not selected because he passes the arguments 4 and 31, which are both of type int, while the declared function takes arguments of type unsigned. Since C++11, there are lots of overloads (or a function template) of std::pow. These all return float or double (unless one of the arguments is of type long double, which doesn't apply here).
Thus an overload of std::pow will be a better match ... with a double return value, and we get floating-point rounding.
Moral of the story: Don't write functions with the same name as standard library functions, unless you really know what you are doing!
Visual Studio has defined pow(double, int), which only requires a conversion of one argument, whereas your pow(unsigned, unsigned) requires conversion of both arguments unless you call it as pow(4U, 31U). Overload resolution in C++ is based on the arguments - not the result type.
The lowest long long value can be obtained through numeric_limits. For long long it is:
auto lowest_ll = std::numeric_limits<long long>::lowest();
which results in:
-9223372036854775808
The pow() function that gets called is not yours, hence the observed results. Change the name of the function.
The only possible explanation for the -9.223.372.036.854.775.808 result is the use of the pow function from the standard library returning a double value. In that case, the 5 will be below the precision of the double computation, and the result will be exactly -2^63, which converted to a long long gives 0x8000000000000000, or -9.223.372.036.854.775.808.
If you use your function returning an unsigned long long, you get a warning saying that you apply unary minus to an unsigned type and still get an unsigned long long. So the whole operation is executed as unsigned long long and gives, without overflow, 0x8000000000000005 as an unsigned value. When you convert that to a signed value, the result is implementation-defined, but all compilers I know simply keep the same bit representation, which gives -9.223.372.036.854.775.803.
But it would be simple to do the computation without any warning by just using:
long long nr = -1 * pow(4, 31) + 5 - pow(4,31);
In addition, you have neither an undefined conversion nor overflow here, so the result is perfectly defined per the standard, provided unsigned long long is at least 64 bits.
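To illustrate the unsigned arithmetic, a small sketch (the question's pow is renamed here only to avoid the clash with std::pow that caused the problem in the first place):

#include <iostream>

unsigned long long pow_u(unsigned a, unsigned b) {   // same body as the question's pow
    unsigned long long p = 1;
    for (unsigned i = 0; i < b; i++)
        p *= a;
    return p;
}

int main() {
    // Every step is carried out modulo 2^64, so nothing overflows.
    unsigned long long u = -pow_u(4, 31) + 5 - pow_u(4, 31);
    std::cout << std::hex << u << '\n';   // 8000000000000005
}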
Your first call to pow is using the C standard library's function, which operates on floating-point values. Try giving your pow function a unique name:
#include <iostream>

unsigned long long my_pow(unsigned a, unsigned b) {
    unsigned long long p = 1;
    for (unsigned i = 0; i < b; i++)
        p *= a;
    return p;
}

int main()
{
    long long nr = -my_pow(4, 31) + 5 - my_pow(4, 31);
    std::cout << nr << std::endl;
}
This code produces a warning: "unary minus operator applied to unsigned type, result still unsigned". So, essentially, your original code called a floating-point function, negated the value, and applied some integer arithmetic to it, for which it did not have enough precision to give the answer you were looking for (at 19 digits of precision!). To get the answer you're looking for, change the signature to:
long long my_pow(unsigned a, unsigned b);
This worked for me in MSVC++ 2013. As stated in other answers, you're getting the floating-point pow because your function expects unsigned arguments and receives signed integer constants. Adding U to your integer constants invokes your version of pow.
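For example, a sketch that combines both suggestions (a signed return type and unsigned arguments; my_pow is the renamed function from the answer above):

#include <iostream>

long long my_pow(unsigned a, unsigned b) {
    long long p = 1;                 // signed 64-bit accumulator; 4^31 still fits
    for (unsigned i = 0; i < b; i++)
        p *= a;
    return p;
}

int main() {
    long long nr = -my_pow(4U, 31U) + 5 - my_pow(4U, 31U);
    std::cout << nr << std::endl;    // -9223372036854775803
}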
I am trying to represent some large numbers in C++. In the code below, if I try only to print s, the compiler does not complain. But if I try to do the multiplication and store it in t, the compiler says "integer overflow in expression"...
I tried making t an unsigned long long, but the compiler still complains. Is there any way of doing this multiplication without having any overflow?
int main ()
{
    long long int s = 320718425168;
    long long int t = 4684688*68461;   // 4684688*68461 = 320718425168
    return 0;
}
The literals used as the factors of the product are of type int, which cannot represent the product.
Cast one of the factors to long long first.
long long t = (long long)4684688 * 68461;
Or use the corresponding literal suffix ll or LL to change the literal's type, i.e.
long long t = 4684688LL * 68461;
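A complete sketch showing both variants:

#include <iostream>

int main() {
    long long s = 320718425168LL;                 // the expected product
    long long t1 = (long long)4684688 * 68461;    // cast one factor before multiplying
    long long t2 = 4684688LL * 68461;             // or give one literal a 64-bit type
    std::cout << (t1 == s) << ' ' << (t2 == s) << '\n';   // prints 1 1
}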
I wanted to write a program that computes the number of zones made by n lines.
The first example is my code, and the second is my friend's code. I think they are doing the same thing, but for the case n = 65535 my code gives the wrong answer. Where is the problem in my code?
my code:
#include <iostream>
using namespace std;

int main()
{
    int n;
    cin >> n;
    unsigned long long ans;
    ans = (n*(n + 1) / 2) + 1;
    cout << ans << endl;
    return 0;
}
my friend's code:
#include <iostream>
using namespace std;

int main(void){
    double n, sum;
    cin >> n;
    sum = n*(n+1)/2 + 1;
    cout << (long)sum << endl;
    return 0;
}
In your code:
int n;
ans = (n*(n + 1) / 2) + 1;
All values in the calculation are ints: n is declared as int, and plain integer constants are ints as well. Therefore the result of this calculation will also be an int. The fact that you later assign this result to a long long variable doesn't change this.
Now the result of the multiplication 65535*65536 does not fit in a 32-bit signed int, so you get a nonsense answer. Fix your program by making n a 64-bit long long.
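A minimal fixed version along those lines (only the declaration of n changes, so the whole expression is evaluated in 64-bit arithmetic):

#include <iostream>
using namespace std;

int main()
{
    long long n;
    cin >> n;
    unsigned long long ans;
    ans = (n * (n + 1) / 2) + 1;
    cout << ans << endl;   // for n = 65535 this prints 2147450881
    return 0;
}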
As #Dithermaster suggests, the problem here is probably one of integer overflow.
As it stands right now, your code doesn't actually make much sense. In particular, since you've defined n as an int, and all the integer literals in the expression: (n*(n + 1) / 2) + 1 are also small enough to fit in an int, the calculation will be carried out on ints, and then (after the calculation is complete) the result will be converted to long long and assigned to ans (because you've defined ans as a long long).
What you almost certainly want is to carry out the entire calculation on long long to avoid overflow. The most obvious way to do this would be to define n as a long long instead of an int.
Your friend has avoided this by defining n as a double. This works up to a point - a typical implementation of double has a 53-bit significand, so it can be used as (essentially) a 53-bit integer type. That's obviously quite a bit more than the 16 bits that are mandated for an int, but equally obviously less than the 64 bits mandated for a long long.
There's also no point in supporting n being negative, so you could consider defining n and ans as unsigned long long instead.
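And as a quick sketch of the 53-bit limit mentioned above - the point at which the double approach would start to lose exact integers:

#include <iostream>
#include <cstdint>

int main() {
    std::uint64_t big = (1ULL << 53) + 1;            // needs 54 bits
    double d = static_cast<double>(big);             // rounds to the nearest representable double
    std::cout << (static_cast<std::uint64_t>(d) == big) << '\n';   // prints 0: the low bit was lost
}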