C++ did wrong mathematics?

I compiled this basic calculation in C++ but got a different answer than when I did it on a calculator. How is this possible even though I declared "ans" as "long long int"?
#include <iostream>
#include <cstdlib>
#include <vector>
#include <string>
#include <unordered_map>
#include <utility>
using namespace std;

int main() {
    long long int ans = 1 - 1000000000 + 1 - 1000000000 + 1 - 1000000000;
    cout << ans; // -2999999997
    return 0;
}
The expected answer is in the comment next to cout<<ans;, but what the compiler returned is: 1294967299.
Please let me know where I went wrong.

A more modern C++ compiler will tell you what the problem is, like mine did:
warning: integer overflow in expression of type ‘int’ results in
‘1294967299’
These numbers are too big for the default size of ints on your C++ implementation. You must tell your C++ compiler that your numbers are long longs:
long long int ans=1-1000000000LL+1-1000000000LL+1-1000000000LL;
All that long long int does is tell your C++ compiler that the result of the mathematical expression is a long long int. Your problem is that you must tell your C++ compiler that the numbers it uses to compute the expression are also long longs.

The compiler takes the operands as ints and evaluates the expression in int, and only then stores the result in a long long. So the overflow happens before the value is ever stored in the long long.
To do it right, put LL at the end of each number.
Like this: 1LL instead of 1.

You went wrong by not enabling compiler warnings. Assuming you're using g++ or clang++, use -Wall and you will be enlightened.

Related

Writing "enum: int64_t" value to std::ostringstream truncates it to int

This code behaves in an unexpected way with MSVC compiler (v141 toolset, /std:c++17):
#include <iostream>
#include <limits>
#include <sstream>
#include <stdint.h>
int main() {
std::ostringstream ss;
enum Enum : int64_t {muchos_digitos = std::numeric_limits<int64_t>::max() - 1000};
ss << muchos_digitos;
std::cout << ss.str();
return 0;
}
Specifically, it prints "-1001". It was only after much head scratching and enabling the /W4 warning level that I discovered the cause:
warning C4305: 'argument': truncation from 'main::Enum' to 'int'
But why does it happen? Indeed, the debugger confirms that int overload is called instead of long long, but why? And how can I circumvent this in generic code? I could cast muchos_digitos to int64_t, but I receive the value as typename T. I can figure out that it's an enum, but how can I know that it's a strongly typed enum, and can I find out its underlying type? I don't think it's directly possible...
The output is correct under GCC, but I need the code to work with all three of GCC, clang and MSVC.
P. S. It was a mistake that /W4 was not set for my project in the first place. I recommend everyone use this level with MSVC and -pedantic-errors with GCC / clang; it really saves you time with bizarre errors and surprising behavior when you notice it at compile time as you write the code.
This was confirmed to be a bug by the Microsoft team and it's now fixed in VS 2019: https://developercommunity.visualstudio.com/content/problem/475488/wrong-ostringstreamoperator-overload-selected-for.html

Change all the int type to another type using #define for large program

I want to change all the int types in my program to support arbitrary-precision integers. I chose to use GMP.
I am wondering whether it is possible to do a #define to replace all int with mpz_class.
I started with a small program:
#include <iostream>
#define int long long int
using namespace std;
int main(){
// .... code
}
The compiler is already complaining that main has to return int.
Is it possible to add exception to #define? or this is a really bad idea to do so?
Redefining a keyword is prohibited if you include any standard headers. Here, you included <iostream>, so your program is ill-formed.
Otherwise, knock yourself out! Wait, no, don't, because this would still be really silly.
Instead, refactor your code to use some new type called, say, my_integer (but with a much better name):
typedef int my_integer;
Then, when you want to change from int to mpz_class, you just change the definition of my_integer:
typedef mpz_class my_integer;
Use main without int, like this:
#include <iostream>
#define int long long int
using namespace std;
main(){
// .... code
}
The simple answer: although it is technically possible, you are not allowed to #define any of the reserved identifiers.

Same code but different output in Netbeans C/C++ Project

I came across this weird situation in Netbeans C/C++. Here is the situation:
In my project explorer, under Source Files, I have main.c and problem3.c
In main.c
#include <stdio.h>
#include <stdlib.h>
// long BigNumber(){
// return 600851475143;
// }
int main(int argc, char* argv[]) {
    printf("%lu", BigNumber());
    return EXIT_SUCCESS;
}
In problem3.c
long BigNumber(){
return 600851475143;
}
My case is, when I use BigNumber() from problem3.c, it will output 403282979527, which is incorrect. But if I use BigNumber() from main.c, it will print 600851475143.
Can anyone explain the magic behind? Is it because of the platform, or tools such as make? I'm using Windows 7 32-bit, NetBeans 7.3.1, with MinGW.
This is actually overflow: 32-bit Windows follows the ILP32 or 4/4/4 model, where int, long and pointer are all 32 bits (4 bytes) wide, and the number you are storing is larger than 32 bits, signed or unsigned. The fact that it works at all in the first case is just a coincidence. Note also that main.c never sees a declaration of BigNumber(), so the compiler implicitly assumes it returns int; moving the function to the other file "brings out" this latent problem. gcc even warns of overflow here.
You have a few options, but a simple one is to use int64_t instead of long (this is why all those intNN_t types exist, after all!). You should also use an LL suffix on the literal to inform the compiler that it is a long long literal, change your printf format to "%lld" instead of "%lu" (the value is signed and long long), and declare the function in main.c or a shared header so the compiler knows its return type.
The fix altogether:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int64_t BigNumber() {
    return 600851475143LL;
}

int main(int argc, char* argv[]) {
    printf("%lld", (long long)BigNumber());
    return 0;
}
You should be able to safely move this function, as it is now well defined.

Why gcc in 64 bit ubuntu doesn't detect the following array overflow?

So my code is
#include <stdio.h>
#include <string.h>
int main()
{
const char *a = "123456789abcdef";
char b[10];
int i = 0;
while ((b[i] = a[i]) != '\0')
    ++i;
printf("%s, %zu\n", b, strlen(b));
return 0;
}
The code has an array overflow on array b, but when I compile it with gcc (version 4.6.3) on my system (64-bit Ubuntu 12.04 LTS), it succeeds. The output of this program is 123456789abcdef, 15 and it returns 0, meaning the program exits normally. I don't know whether it's my compiler's problem or my system's; can anyone tell me?
P.S. It seems to only appear on 64-bit Linux with gcc. Is this a bug?
Array accesses are not checked in C. If you overflow a buffer like this, the result is undefined behavior. It is the programmer's responsibility to guard against this, not the compiler's.
There are tools though to assist in checking for invalid memory access. Like Valgrind for doing so at runtime, and Clang's static analyzer for compile-time checking.

How do I fix an "ambiguous" function call?

I'm working on a C++ program for class, and my compiler is complaining about an "ambiguous" function call. I suspect that this is because there are several functions defined with different parameters.
How can I tell the compiler which one I want? Aside from a case-specific fix, is there a general rule, such as typecasting, which might solve these kinds of problems?
Edit:
In my case, I tried calling abs() inside of a cout statement, passing in two doubles.
cout << "Amount is:" << abs(amountOrdered-amountPaid);
Edit2:
I'm including these three headers:
#include <iostream>
#include <fstream>
#include <iomanip>
using namespace std;
Edit3:
I've finished the program without this code, but in the interest of following through with this question, I've reproduced the problem. The verbatim error is:
Call to 'abs' is ambiguous.
The compiler offers three versions of abs, each taking a different datatype as a parameter.
What's happened is that you've included <cstdlib> (indirectly, since it's pulled in by <iostream>) along with using namespace std;. That header declares overloads of abs() in namespace std that take and return long and long long. Plus, there's the one in the global namespace (taking and returning int) that comes from <stdlib.h>.
To fix: well, the abs() that takes double is in <cmath>, and that will actually give you the answer you want!
The abs function included by <cstdlib> is overloaded for int and long and long long. Since you give a double as the argument, the compiler does not have an exact fit, so it tries to convert the double to a type that abs accepts, but it does not know if it should try to convert it to int, long, or long long, hence it's ambiguous.
But you probably really want the abs that takes a double and returns a double. For this you need to include <cmath>. Since the double argument matches exactly, the compiler will not complain.
It seems that <cstdlib> gets included indirectly when you include the other headers; otherwise the compiler would have given error: ‘abs’ was not declared in this scope or something similar.
Try using fabs defined in <cmath>. It takes float, double and long double as arguments. abs is defined both in <cmath> and <cstdlib>. The difference is abs(int), abs(long) and abs(long long) are defined in <cstdlib> while other versions are defined in <cmath>.
Not sure why this isn't calling the int version of abs, but you could try casting the expression (amountOrdered - amountPaid) to int, i.e.
cout << "Amount is: " << abs((int)(amountOrdered - amountPaid));