I came across a weird situation in NetBeans C/C++. In my project explorer, under Source Files, I have main.c and problem3.c.
In main.c:
#include <stdio.h>
#include <stdlib.h>

// long BigNumber(){
//     return 600851475143;
// }

int main(int argc, char* argv[]) {
    printf("%lu", BigNumber());
    return (EXIT_SUCCESS);
}
In problem3.c:
long BigNumber(){
    return 600851475143;
}
My case: when I call BigNumber() from problem3.c, it outputs 403282979527, which is incorrect. But if I define BigNumber() in main.c instead, it prints 600851475143.
Can anyone explain the magic behind this? Is it because of the platform, or of tools such as make? I'm using Windows 7 32-bit, NetBeans 7.3.1, with MinGW.
This is actually overflow: 32-bit Windows follows the ILP32 (4/4/4) model, in which int, long, and pointers are all 32 bits (4 bytes) wide, and the number you are storing needs more than 32 bits, signed or unsigned. The fact that it works at all in the first case is just a coincidence. Moving the function to the other file most likely "brings out" other behavior, for example an implicit declaration: with no prototype in scope, the compiler assumes BigNumber() returns int, which truncates the result. gcc even warns of overflow here.
You have a few options, but a simple one is to use int64_t instead of long (this is why all those intxx_t types exist, after all!). You should also put an LL suffix on the literal to inform the compiler that it is a long long literal, and change your printf to use "%lld" instead of "%lu" (long long again, and signed to match the return type; the fully portable way to print an int64_t is the PRId64 macro from <inttypes.h>).
The fix altogether:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int64_t BigNumber() {
    return 600851475143LL;
}

int main(int argc, char* argv[]) {
    printf("%lld", BigNumber());
    return 0;
}
You should be able to safely move this function, as it is now well defined.
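One caveat when the definition lives in problem3.c: main.c also needs a prototype in scope, or an older C compiler will silently assume BigNumber() returns int and truncate the value, as described above. A minimal sketch, with problem3.h as an assumed header name:

/* problem3.h -- hypothetical header declaring the function for its callers */
#ifndef PROBLEM3_H
#define PROBLEM3_H

#include <stdint.h>

int64_t BigNumber(void);

#endif

With #include "problem3.h" at the top of main.c (and of problem3.c, so the definition is checked against the declaration), the function can be moved freely.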
Related
I compiled this basic calculation in C++ but got a wrong answer compared to doing it on a calculator. How is this possible even though I declared ans as long long int?
#include <iostream>
#include <cstdlib>
#include <vector>
#include <string>
#include <unordered_map>
#include <utility>
using namespace std;

int main() {
    long long int ans = 1 - 1000000000 + 1 - 1000000000 + 1 - 1000000000;
    cout << ans; // -2999999997
    return 0;
}
The expected answer is in the comment next to cout << ans;, but what the compiler returned is: 1294967299.
Please let me know where I went wrong.
A more modern C++ compiler will tell you what the problem is, like mine did:
warning: integer overflow in expression of type 'int' results in '1294967299'
These numbers are too big for the default size of int in your C++ implementation. You must tell your C++ compiler that your numbers are long longs:
long long int ans = 1 - 1000000000LL + 1 - 1000000000LL + 1 - 1000000000LL;
All that long long int does is tell your C++ compiler that the result of the mathematical expression is stored in a long long int. Your problem is that you must also tell the compiler that the numbers it uses to compute the expression are long longs.
The compiler takes the literals as ints, evaluates the whole expression in int, and only then stores the result in a long long, so the overflow happens before the store into the long long.
To do it right, put LL at the end of each number, like this: 1LL instead of 1.
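A minimal sketch contrasting the two evaluations (variable names are mine; the wrapped value is what a typical platform with 32-bit int produces, though signed overflow is formally undefined behavior):

#include <iostream>

int main() {
    // Evaluated entirely in int: wraps around before the assignment.
    long long bad = 1 - 1000000000 + 1 - 1000000000 + 1 - 1000000000;
    // LL suffixes make the whole expression evaluate in long long.
    long long good = 1 - 1000000000LL + 1 - 1000000000LL + 1 - 1000000000LL;
    std::cout << bad << "\n";   // typically 1294967299
    std::cout << good << "\n";  // -2999999997
    return 0;
}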
You went wrong by not enabling compiler warnings. Assuming you're using g++ or clang++, use -Wall and you will be enlightened.
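For example, assuming the source file is named overflow.cpp:

g++ -Wall overflow.cpp

This emits the integer-overflow warning quoted above at compile time, before you ever run the program.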
We have a very large C++ codebase that we would like to compile with gcc's _FORTIFY_SOURCE=2 option to improve security and reduce the risk of buffer overflows. The problem is that when we compile the system with _FORTIFY_SOURCE, the binary sizes increase drastically (from a total of 4 GB to over 25 GB). This causes issues when we need to deploy the code, because it takes 5x as long to zip it up and deploy it.
In an attempt to figure out what was going on, I made a simple test program that does a bunch of string copies with strcpy (one of the functions _FORTIFY_SOURCE is supposed to enhance) and compiled it both with and without _FORTIFY_SOURCE.
#include <cstring>
#include <iostream>
using namespace std;

int main()
{
    char buf1[100];
    char buf2[100];
    char buf3[100];
    char buf4[100];
    char buf5[100];
    char buf6[100];
    char buf7[100];
    char buf8[100];
    char buf9[100];
    char buf10[100];
    strcpy(buf1, "this is a string");
    strcpy(buf2, "this is a string");
    strcpy(buf3, "this is a string");
    strcpy(buf4, "this is a string");
    strcpy(buf5, "this is a string");
    strcpy(buf6, "this is a string");
    strcpy(buf7, "this is a string");
    strcpy(buf8, "this is a string");
    strcpy(buf9, "this is a string");
    strcpy(buf10, "this is a string");
}
Compilation:
g++ -o main -O3 fortify_test.cpp
and
g++ -o main -D_FORTIFY_SOURCE=2 -O3 fortify_test.cpp
I discovered that using _FORTIFY_SOURCE on this simple example had no noticeable impact on binary size (the resulting binary was 8.4K with and without fortifying the source).
When there's no noticeable impact on a simple example, I wouldn't expect to see such a drastic size increase in more complex ones. What could _FORTIFY_SOURCE possibly be doing to increase our binary sizes so drastically?
Your example is actually not a very good one, because there is no fortifiable code in it. Code fortification is not magical, and the compiler can only do it under specific conditions.
Let's take a sample of code with two functions, one that the compiler can fortify (because it can determine the buffer's size from the code itself) and one that it cannot (because that information is missing):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Fortifiable: the compiler knows buffer is 256 bytes. */
int f_protected(char *in)
{
    char buffer[256];
    memcpy(buffer, in, strlen(in) + 1); /* +1 copies the terminating NUL */
    printf("Hello %s !\n", buffer);
    return 0;
}

/* Not fortifiable: the buffer size is only known at run time. */
int f_not_protected(char *in, int sz)
{
    char buffer[sz];
    memcpy(buffer, in, sz);
    printf("Hello %s !\n", buffer);
    return 0;
}

int main(int argc, char **argv, char **envp)
{
    if (argc < 2) {
        printf("Usage: %s <some string>\n", argv[0]);
        exit(EXIT_SUCCESS);
    }
    f_protected(argv[1]);
    f_not_protected(argv[1], strlen(argv[1]) + 1); /* +1 for the NUL */
    return 0;
}
There's an amazing online tool for comparing compiled code at https://godbolt.org/, where you can compare both compiled versions of this sample.
As you will be able to see in the ASM output, the fortified version of this function performs more checks than the unfortified one, and the extra ASM code does increase the file size.
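If you want to reproduce the comparison locally, something like this should work (fortify_sample.c is my name for the snippet above; _FORTIFY_SOURCE only takes effect when optimization is enabled):

gcc -O2 -S fortify_sample.c -o plain.s
gcc -O2 -D_FORTIFY_SOURCE=2 -S fortify_sample.c -o fortified.s

Diffing the two .s files shows the extra checking code, e.g. the call to __memcpy_chk in f_protected.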
However, it's hard to think of a case where fortification alone would increase the code size that much. Is it possible that you're not stripping debug info?
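A quick way to check (mybinary is a placeholder name): compare the code-and-data size reported by size with the file size on disk, or strip a copy and see how much the file shrinks:

size mybinary
strip -o mybinary.stripped mybinary && ls -l mybinary mybinary.stripped

If the stripped copy is a fraction of the original, debug info, not fortification, is what is inflating the binaries.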
I had a typo (|| instead of |) and noticed that such code fails to compile with GCC but compiles with Visual C++.
I know that the second parameter of std::ifstream's constructor is an int, so theoretically a bool has to be converted implicitly to an int. So why does it fail?
Example inducing the error (I just used some ints instead of the flags):
#include <fstream>

int main(int argc, char* argv[]) {
    std::ifstream("foo", 2 | 3 || 4);
}
std::ifstream's constructor takes as its second argument a std::ios_base::openmode, which is typedefed from an implementation-defined type:
typedef /*implementation defined*/ openmode;
It seems Visual C++ uses a plain integer type, while GCC's libstdc++ uses an enumeration, which is why your code fails on GCC: neither the bool produced by || nor a plain int will implicitly convert to the enum.
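A sketch of why an enum-based openmode rejects ||; the type and operator below are stand-ins I made up, not libstdc++'s actual internals:

#include <iostream>

// Hypothetical enum-based openmode.
enum openmode { in = 1, out = 2, binary = 4 };

// Bitwise OR is overloaded to keep the result in the enum type.
openmode operator|(openmode a, openmode b) {
    return openmode(int(a) | int(b));
}

void open_file(openmode m) { std::cout << int(m) << "\n"; }

int main() {
    open_file(in | out);     // OK: operator| returns openmode
    // open_file(in || out); // error: || yields bool, which won't convert to openmode
    return 0;
}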
How do I configure the VS 2012 compiler to make the int type 2 bytes instead of 4 bytes?
I have tried:
#include <iostream>
#include <cstdlib>
#include <stdint.h>

int main(int argc, char* argv[])
{
    typedef __int16 int16_t;
    int16_t x = 5;
    std::cout << "Size of integer number = " << sizeof(x) << " Bytes\n";
    system("pause");
    return 0;
}
Is this what configuring the compiler means? I think the answer to my question is not a piece of code but a change to some setting in VS. Am I right?
You can't, and if you could you would break compatibility with every library you are using, so the result wouldn't work anyway.
Use the 16-bit compiler, if they still ship it, or int16_t.
So my code is:
#include <stdio.h>
#include <string.h>

int main()
{
    const char *a = "123456789abcdef";
    char b[10];
    int i = 0;
    while ((b[i] = a[i]) != '\0')
        ++i;
    printf("%s, %d\n", b, strlen(b));
    return 0;
}
The code has an array overflow on array b, but when I compile it with gcc (version 4.6.3) on my system (64-bit Ubuntu 12.04 LTS), it succeeds. The output of this program is 123456789abcdef, 15, and it returns 0, meaning the program exits normally. I don't know whether it's my compiler's problem or my system's; can anyone tell me?
P.S. It seems to happen only on 64-bit Linux with gcc. Is this a bug?
Array accesses are not checked in C. If you overflow a buffer like this, the result is undefined behavior. It is the programmer's responsibility to guard against this, not the compiler's.
There are tools, though, to assist in checking for invalid memory access, like Valgrind for runtime checking and Clang's static analyzer for compile-time checking.
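For instance, AddressSanitizer catches this particular stack overflow at run time (overflow.c is my name for the file; -fsanitize=address requires a reasonably recent gcc or clang):

gcc -g -fsanitize=address overflow.c -o overflow
./overflow

Instead of appearing to work, the program aborts with a stack-buffer-overflow report pointing at the write past the end of b.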