Cannot understand the difference between these two code samples - c++

I wanted to write a program that computes the number of zones made by n lines.
The first example is my code, and the second is my friend's code. I think they are trying to do the same thing, but for the case n=65535 my code gives me the wrong answer. Where is the problem in my code?
my code:
#include<iostream>
using namespace std;
int main()
{
    int n;
    cin >> n;
    unsigned long long ans;
    ans = (n*(n + 1) / 2) + 1;
    cout << ans << endl;
    return 0;
}
my friend's code:
#include <iostream>
using namespace std;
int main(void){
    double n,sum;
    cin>>n;
    sum=n*(n+1)/2+1;
    cout<<(long)sum<<endl;
    return 0;
}

In your code:
int n;
ans = (n*(n + 1) / 2) + 1;
All values in the calculation are ints: n is declared as int, and plain integer constants are ints as well. Therefore the result of this calculation will also be an int. The fact that you later assign this result to a long long variable doesn't change this.
Now the result of the multiplication 65535*65536 does not fit in a 32-bit signed int, so you get a nonsense answer. Fix your program by making n a 64-bit long long.

As @Dithermaster suggests, the problem here is probably one of integer overflow.
As it stands right now, your code doesn't actually make much sense. In particular, since you've defined n as an int, and all the integer literals in the expression: (n*(n + 1) / 2) + 1 are also small enough to fit in an int, the calculation will be carried out on ints, and then (after the calculation is complete) the result will be converted to long long and assigned to ans (because you've defined ans as a long long).
What you almost certainly want is to carry out the entire calculation on long long to avoid overflow. The most obvious way to do this would be to define n as a long long instead of an int.
Your friend has avoided this by defining n as a double. This works up to a point--a typical implementation of double has a 53-bit significand, so it can be used as (essentially) a 53-bit integer type. That's obviously quite a bit more than the minimum 16 bits mandated for an int, but equally obviously less than the 64 bits mandated for a long long.
There's also no point in supporting n being negative, so you could consider defining n and ans as unsigned long long instead.
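For example, a minimal corrected sketch along those lines (the whole calculation carried out in unsigned long long); for n = 65535 it prints 2147450881:
#include <iostream>
using namespace std;

int main()
{
    unsigned long long n;                         // 64-bit, so the intermediate product no longer overflows
    cin >> n;
    unsigned long long ans = n * (n + 1) / 2 + 1; // every operand here is already unsigned long long
    cout << ans << endl;
    return 0;
}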

Related

long long int bit representation, C++ [duplicate]

I want to use the following code in my program but gcc won't allow me to left shift my 1 beyond 31.
sizeof(long int) displays 8, so doesn't that mean I can left shift till 63?
#include <iostream>
using namespace std;
int main(){
    long int x;
    x=(~0 & ~(1<<63));
    cout<<x<<endl;
    return 0;
}
Compiling outputs the following warning:
warning: left shift count >= width of type [enabled by default]
 x=(~0 & ~(1<<63));
           ^
and the output is -1. Had I left shifted by 31 bits instead, I would get 2147483647, as expected for an int.
I am expecting all bits except the MSB to be turned on thus displaying the maximum value the datatype can hold.
Although your x is of type long int, the 1 is not. 1 is an int, so 1<<63 is indeed undefined.
Try (static_cast<long int>(1) << 63), or 1L << 63 as suggested by Wojtek.
You can't use 1 (int by default) to shift it beyond the int boundaries.
There's an easier way to get the "all bits except the MSB turned on" for a specific datatype
#include <iostream>
#include <limits>
using namespace std;
int main(){
    unsigned long int max = std::numeric_limits<unsigned long int>::max();
    unsigned long int max_without_MSB = max >> 1;
    cout<< max_without_MSB <<endl;
    return 0;
}
note the unsigned type. Without numeric_limits:
#include <iostream>
using namespace std;
int main() {
    long int max = -1;
    unsigned long int max_without_MSB = ((unsigned long int)max) >> 1;
    cout << max_without_MSB << endl;
    return 0;
}
Your title is misleading; a long can shift beyond 31 bits if a long is indeed that big. However your code shifts 1, which is an int.
In C++, the type of an expression is determined by the expression itself. An expression XXXXX has the same type regardless; if you later go double foo = XXXXX; it doesn't mean XXXXX is a double - it means a conversion happens from whatever XXXXX was, to double.
If you want to left-shift a long, then do that explicitly, e.g. 1L << 32, or ((long)1) << 32. Note that the size of long varies between platforms, so if you don't want your code to break when run on a different system then you'll have to take further measures, such as using fixed-width types, or shifting by CHAR_BIT * sizeof(long) - 1.
There is another issue with your intended code: 1L << 63 causes undefined behaviour if long is 64 bits or less. This is because of signed integer overflow; left-shift is defined the same as repeated multiplication by two, so attempting to "shift into the sign bit" causes an overflow.
To fix this, use unsigned types where it is fine to shift into the MSB, e.g. 1ul << 63.
Technically there is another issue in that ~0 doesn't do what you want if you are not on a 2's complement system, but these days it's pretty safe to ignore that case.
Looking at your overall intention with long x = ~0 & ~(1 << 63), a shorter way to write this is:
long x = LONG_MAX;
which is defined by <climits>. If you wanted 64-bit on all platforms then
int64_t x = INT64_MAX;
NB. If you do not intend to work with negative values then use unsigned long x and uint64_t respectively.
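For example, a minimal sketch of those alternatives (the printed values assume a platform where long is 64 bits; LONG_MAX itself is platform-dependent):
#include <climits>   // LONG_MAX
#include <cstdint>   // int64_t, INT64_MAX, uint64_t, UINT64_MAX
#include <iostream>

int main()
{
    long x = LONG_MAX;              // all value bits set, sign bit clear
    int64_t y = INT64_MAX;          // same idea, but guaranteed to be 64 bits everywhere
    uint64_t z = UINT64_MAX >> 1;   // "all bits except the MSB" for an unsigned 64-bit type
    std::cout << x << '\n' << y << '\n' << z << '\n';
}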
First let me state a few things about the shift, which is the source of your problem:
There is no guarantee that long int is actually 64 bit wide.
The most generic way I can think of is using std::numeric_limits:
static_cast<long int>(1) << (std::numeric_limits<long int>::digits - 1);
Now you can even make that a constexpr templated function:
template <typename Integer>
constexpr Integer foo()
{
    return static_cast<Integer>(1) << (std::numeric_limits<Integer>::digits - 1);
}
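For example, a minimal self-contained usage sketch of that foo template (it repeats the definition above; the printed values assume a platform where long int is 64 bits wide, i.e. 63 value bits plus a sign bit):
#include <iostream>
#include <limits>

template <typename Integer>
constexpr Integer foo()
{
    // sets only the highest value bit; for signed types the sign bit is not touched
    return static_cast<Integer>(1) << (std::numeric_limits<Integer>::digits - 1);
}

int main()
{
    std::cout << foo<long int>() << '\n';          // 4611686018427387904, i.e. 2^62 (63 value bits)
    std::cout << foo<unsigned long int>() << '\n'; // 9223372036854775808, i.e. 2^63 (64 value bits)
}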
So replacing the shift with static_cast<long int>(1) << (std::numeric_limits<long int>::digits - 1) will fix your issue, however there is a far better way:
std::numeric_limits includes a bunch of useful stuff, including:
std::numeric_limits<T>::max(); // the maximum value T can hold
std::numeric_limits<T>::min(); // the minimum value T can hold
std::numeric_limits<T>::digits; // the number of value (non-sign) binary digits
std::numeric_limits<T>::is_signed; // true for signed types (note: a data member, not a function)
See cppreference.com for a complete list. You should prefer the facilities provided by the standard library, because it will most likely have fewer mistakes and other developers immediately know it.
An integer literal in C and C++ has type int by default unless a suffix says otherwise.
Here you have to cast (or use a suffix on) the 1 so that it is a long int; otherwise it is an int and the shift is done at int width.

What data type is used to store intermediate calculations while executing a program in C++?

I was trying to do the following calculations but found out that the calculations do not yield the correct result.
My doubt is: when my computer does the calculation a*b, what data type is used to store the result of the calculation temporarily before doing the modulus? How is the data type in which it stores the result decided?
Please do let me know about the source of the information.
#include <iostream>
using namespace std;
int main()
{
    long long int a=1000000000000000000; // 18 zeroes
    long long int b=1000000000000000000;
    long long int c=1000000007;
    long long int d=(a*b)%c;
    cout<<a<<"\n"<<b<<"\n"<<c<<"\n"<<d;
}
Edit1: This code also gives incorrect output
#include <iostream>
using namespace std;
int main()
{
    int a=1000000000; // 9 zeroes
    int b=1000000000;
    long long int c=1000000007;
    long long int d=a*b%c;
    cout<<a<<"\n"<<b<<"\n"<<c<<"\n"<<d;
}
How is the data type in which it stores the result decided?
The rules are fairly complicated and convoluted in general, but in this particular case it's simple. a*b is of type long long, and since a*b overflows, the program has undefined behavior.
You can use the equivalent formula to compute the correct result (without overflowing):
(a * b) % c == ((a % c) * (b % c)) % c
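For the values in the question this is safe: a % c and b % c are each less than c (about 10^9), so their product still fits comfortably in a long long. A minimal sketch:
#include <iostream>
using namespace std;

int main()
{
    long long int a = 1000000000000000000; // 18 zeroes
    long long int b = 1000000000000000000;
    long long int c = 1000000007;
    // reduce both factors first so that the product fits in long long
    long long int d = ((a % c) * (b % c)) % c;
    cout << d << "\n";
}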
Could you also suggest on how to decide for mixed data types and post
about your source of information
Of some interest: https://en.cppreference.com/w/cpp/language/implicit_conversion The standard rules are unfortunately even more complicated.
As some suggestions:
never mix unsigned and signed.
pay attention that types smaller than int will be promoted to int or unsigned int.
for a type T equal to or larger than int, T op T will have type T. This is what you should be aiming for in your expressions (i.e. have both operands of the same type, either int, long, or long long).
avoid unsigned types. Unfortunately that's impossible with the current Standard Library design (std::size_t, sigh)
avoid long, as its width differs between current major compilers and platforms
if you care about the width of the integer data type then avoid int, long, long long and such, and always use fixed width integer types (std::int32_t, std::int64_t, etc.) - see the short sketch after this list. Completely ignore that technically those types are optional.
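As a short sketch of those suggestions applied to the code from Edit1 (fixed-width types, both operands of every operation the same type):
#include <cstdint>
#include <iostream>

int main()
{
    // fixed-width types make the operand widths explicit on every platform
    std::int64_t a = 1000000000; // 9 zeroes
    std::int64_t b = 1000000000;
    std::int64_t c = 1000000007;
    // both operands of * and % are std::int64_t, so there are no surprise promotions;
    // a * b == 10^18, which still fits in a signed 64-bit integer
    std::int64_t d = a * b % c;
    std::cout << d << "\n";
}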
My understanding is that long long has to be able to accommodate at least 64 bits, but each 1000000000000000000 is a 60-bit number, so a*b would yield a result that exceeds any integer representation the compiler supports. Perhaps you were thinking that the 1000000000000000000 was binary?

please explain this weird behaviour c++ multiplying numbers then mod

I have little experience with C++ and I code mainly in Python. While solving a programming challenge online, there was a part of the code where I had to multiply two numbers and reduce the result with mod.
v = (u*node) % 100000
where u and node are int values in the range 1 - 100000. Due to time limit issues I wrote my code in C++. Here is what I wrote:
long long v = (u * node) % 100000;
When submitting I got a runtime error in all the test cases. I downloaded the failed test cases, ran them on my local computer, and was getting perfect output.
After seeing the editorial, I changed that line to something like this:
long long v = u;
v = (v*node) % 100000;
and submitted. I passed all the test cases. Please can anyone explain what the difference is between those two lines?
variable data types -
int u
int node
Because u and node are both ints, this expression,
(u * node)
produces an int result. If it overflows—meaning that the result is too large to fit in an int—too bad. Signed integer overflow is undefined behavior, and all bets are off. Chances are, it'll do something like wrap around, but it could also format your hard disk.
When you make u a long long int, then the same expression produces a long long int result. The node multiplicand gets implicitly promoted to a long long int (int to long long int is a widening conversion, so it is always safe), and then these two long long int values get multiplied. This operation won't overflow, so you avoid undefined behavior and get the correct result.
You could have also written the code with an explicit cast to avoid the declaration of a new variable:
(static_cast<long long int>(u) * node)
Note that it doesn't matter which value you promote, the result will be the same because the other value will get implicitly promoted, as described above:
(u * static_cast<long long int>(node))
On the other hand, this won't work:
static_cast<long long int>(u * node)
because it only widens the result of the multiplication operation, after the multiplication has been performed. If that multiplication overflowed the int type, then it is already too late.
It is the same reason that this doesn't work—the promotion to long long happens after the result is evaluated as an int:
long long v = (u * node)
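To make the difference concrete, here is a small self-contained sketch; the values 90000 are made up for illustration and chosen so that u * node overflows a 32-bit int:
#include <iostream>

int main()
{
    int u = 90000;
    int node = 90000;

    // int * int: 90000 * 90000 = 8,100,000,000 does not fit in a 32-bit int,
    // so this would be undefined behavior before % is ever applied:
    // long long bad = (u * node) % 100000;

    // widen one operand first: the other is promoted too, and the product fits in long long
    long long v = (static_cast<long long>(u) * node) % 100000;
    std::cout << v << "\n";   // prints 0, because 8,100,000,000 % 100,000 == 0
}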
Please can anyone explain what the difference is between those two lines?
The first line actually means:
long long v = (long long) (int * int % int);
so first you multiply int by int (which overflows), take the mod as an int, and only then extend the int result to long long.
The next lines actually mean:
long long v = (long long) int;
v = long long * int % int;
so first the int is extended to long long, the long long is multiplied by the int (no overflow), the mod is taken, and the result is assigned to the long long.
Probably you were running into an overflow in the first case (both u and node being ints). This happens because when multiplying variables, the compiler keeps the result in a temporary that has the type of the "widest" operand type (int, float, double, etc.) in the multiplication. So, if you are multiplying two big ints, the result can overflow.
Your modification works because the temporary result is stored in a long long, which does not overflow in your examples.

long long value in Visual Studio

We know that -2*4^31 + 1 = -9.223.372.036.854.775.807, the lowest value you can store in long long, as stated here: What range of values can integer types store in C++.
So I have this operation:
#include <iostream>
unsigned long long pow(unsigned a, unsigned b) {
    unsigned long long p = 1;
    for (unsigned i = 0; i < b; i++)
        p *= a;
    return p;
}
int main()
{
    long long nr = -pow(4, 31) + 5 -pow(4,31);
    std::cout << nr << std::endl;
}
Why does it show -9.223.372.036.854.775.808 instead of -9.223.372.036.854.775.803? I'm using Visual Studio 2015.
This is a really nasty little problem which has three(!) causes.
Firstly there is a problem that floating point arithmetic is approximate. If the compiler picks a pow function returning float or double, then 4**31 is so large that 5 is less than 1ULP (unit of least precision), so adding it will do nothing (in other words, 4.0**31+5 == 4.0**31). Multiplying by -2 can be done without loss, and the result can be stored in a long long without loss as the wrong answer: -9.223.372.036.854.775.808.
Secondly, a standard header may include other standard headers, but is not required to. Evidently, Visual Studio's version of <iostream> includes <math.h> (which declares pow in the global namespace), but Code::Blocks' version doesn't.
Thirdly, the OP's pow function is not selected because he passes arguments 4, and 31, which are both of type int, and the declared function has arguments of type unsigned. Since C++11, there are lots of overloads (or a function template) of std::pow. These all return float or double (unless one of the arguments is of type long double - which doesn't apply here).
Thus an overload of std::pow will be a better match ... with a double return value, and we get floating point rounding.
Moral of the story: Don't write functions with the same name as standard library functions, unless you really know what you are doing!
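To illustrate the first point, a tiny sketch (4^31 = 2^62 is exactly representable as a double, but adjacent doubles at that magnitude are 1024 apart, so adding 5 is lost):
#include <cmath>
#include <iostream>

int main()
{
    double big = std::ldexp(1.0, 62);  // 2^62 == 4^31, exactly representable in a double
    std::cout << std::boolalpha
              << (big + 5 == big) << "\n";                 // true: 5 is far less than half a ULP here
    std::cout << std::nextafter(big, 1e300) - big << "\n"; // 1024, the spacing (ULP) between doubles at 2^62
}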
Visual Studio has defined pow(double, int), which only requires a conversion of one argument, whereas your pow(unsigned, unsigned) requires conversion of both arguments unless you use pow(4U, 31U). Overloading resolution in C++ is based on the inputs - not the result type.
The lowest long long value can be obtained through numeric_limits. For long long it is:
auto lowest_ll = std::numeric_limits<long long>::lowest();
which results in:
-9223372036854775808
The pow() function that gets called is not yours hence the observed results. Change the name of the function.
The only possible explanation for the -9.223.372.036.854.775.808 result is the use of the pow function from the standard library returning a double value. In that case, the 5 will be below the precision of the double computation, and the result will be exactly -2^63, which converted to a long long gives 0x8000000000000000 or -9.223.372.036.854.775.808.
If you use your function returning an unsigned long long, you get a warning saying that you apply unary minus to an unsigned type and still get an unsigned long long. So the whole operation should be executed as unsigned long long and should give, without overflow, 0x8000000000000005 as an unsigned value. When you convert it to a signed value, the result is implementation-defined, but all compilers I know simply reuse the same bit representation, which gives -9.223.372.036.854.775.803.
But it would be simple to make the computation as signed long long without any warning by just using:
long long nr = -1 * pow(4, 31) + 5 - pow(4,31);
In addition, you have neither an undefined cast nor an overflow here, so the result is perfectly defined per the standard, provided unsigned long long is at least 64 bits.
Your first call to pow is using the C standard library's function, which operates on floating points. Try giving your pow function a unique name:
#include <iostream>
unsigned long long my_pow(unsigned a, unsigned b) {
    unsigned long long p = 1;
    for (unsigned i = 0; i < b; i++)
        p *= a;
    return p;
}
int main()
{
    long long nr = -my_pow(4, 31) + 5 - my_pow(4, 31);
    std::cout << nr << std::endl;
}
This code reports a warning: "unary minus operator applied to unsigned type, result still unsigned". So, essentially, your original code called a floating point function, negated the value, and applied some integer arithmetic to it, for which it did not have enough precision to give the answer you were looking for (at 19 digits of precision!). To get the answer you're looking for, change the signature to:
long long my_pow(unsigned a, unsigned b);
This worked for me in MSVC++ 2013. As stated in other answers, you're getting the floating-point pow because your function expects unsigned, and receives signed integer constants. Adding U to your integers invokes your version of pow.