We have a very large C++ codebase that we would like to compile with gcc's "FORTIFY_SOURCE=2" option to improve security and reduce the risk of buffer overflows. The problem is that when we compile the system with FORTIFY_SOURCE, the binary sizes increase drastically (from a total of 4 GB to over 25 GB). This causes issues when we need to deploy the code, because it takes 5x as long to zip it up and deploy it.
In an attempt to figure out what was going on, I made a simple test program that does a bunch of string copies with strcpy (one of the functions FORTIFY_SOURCE is supposed to enhance) and compiled it both with and without "FORTIFY_SOURCE".
#include <cstring>
#include <iostream>
using namespace std;
int main()
{
    char buf1[100];
    char buf2[100];
    char buf3[100];
    char buf4[100];
    char buf5[100];
    char buf6[100];
    char buf7[100];
    char buf8[100];
    char buf9[100];
    char buf10[100];
    strcpy(buf1, "this is a string");
    strcpy(buf2, "this is a string");
    strcpy(buf3, "this is a string");
    strcpy(buf4, "this is a string");
    strcpy(buf5, "this is a string");
    strcpy(buf6, "this is a string");
    strcpy(buf7, "this is a string");
    strcpy(buf8, "this is a string");
    strcpy(buf9, "this is a string");
    strcpy(buf10, "this is a string");
}
Compilation:
g++ -o main -O3 fortify_test.cpp
and
g++ -o main -D_FORTIFY_SOURCE=2 -O3 fortify_test.cpp
I discovered that using "FORTIFY_SOURCE" on a simple example had no noticeable impact on binary size (the resulting binary was 8.4K with and without fortifying the source.)
When there's no noticeable impact with a simple example, I wouldn't expect to see such a drastic size increase in more complex examples. What could FORTIFY_SOURCE possibly be doing to increase our binary sizes so drastically?
Your example is actually not a very good one, because there's no fortifiable code in it. Code fortification is not magic, and the compiler can only apply it under some specific conditions.
Let's take a sample of code with 2 functions: one can be fortified by the compiler (because from the code itself it can determine the maximum size of the buffer), the other cannot (because that same information is missing):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <limits.h>
#include <errno.h>
int f_protected(char *in)
{
    char buffer[256];
    memcpy(buffer, in, strlen(in));
    printf("Hello %s !\n", buffer);
    return 0;
}

int f_not_protected(char *in, int sz)
{
    char buffer[sz];
    memcpy(buffer, in, strlen(in));
    printf("Hello %s !\n", buffer);
    return 0;
}

int main (int argc, char **argv, char **envp)
{
    if (argc < 2) {
        printf("Usage: %s <some string>\n", argv[0]);
        exit(EXIT_SUCCESS);
    }
    f_protected(argv[1]);
    f_not_protected(argv[1], strlen(argv[1]));
    return 0;
}
There's an amazing online tool that lets you compare compiled code at https://godbolt.org/
You can actually compare both compiled versions of this sample there.
As you will see in the ASM output, the fortified version of the first function performs more checks than the unfortified one, requiring extra ASM code and so increasing file size.
However, it's hard to think of a case where it would increase code size that much. Is it possible that you're not stripping debug info?
I'm a C++ newbie and have been writing a C++ program, but it finally breaks at the point of calling a library function from ctime.
The error shows info like this:
malloc(): memory corruption
AFAIK, this error (memory corruption) should result from operating on an out-of-bounds memory address. And the print format represents YYYY-MM-DD-HH-MM, which, as listed here, shows that the length should be definitively less than 100.
Additional info:
- The program is compiled with flags: "-O3 -g -Wall -Wextra -Werror -std=c++17"
- Compiler: g++ 7.4.0
- System: WSL Ubuntu-18
NOTE: This code DOES NOT compile and DOES NOT REPRODUCE the problem; see updates below.
/** class file **/
#include <sys/wait.h>
#include <sys/types.h>
#include <unistd.h>
#include <errno.h>
#include <cstdlib>
#include <iostream>
#include <sstream>
#include <string>
#include <ios>
#include <fcntl.h>
#include <algorithm>
#include <cctype>
#include <ctime>
#include <limits>
#include "cache-proxy.hpp"
static int PROXY_CONFIG = 0;
void get_timestamp(char *buffer, int len);
std::string get_cwd(void);
CacheProxy::CacheProxy(__attribute__((unused)) const std::string& node)
{
    curr_dir = fs::get_cwd();
    Logger::get().info("curr_dir " + curr_dir);
    proxy_path = "/usr/sbin/squid";
    std::string squid("squid");
    char buff[200];
    get_timestamp(buff, 200); // error pops
    std::string proxy_config_path;
    /** plenty of code following, but commented out **/
}
void ~CacheProxy(){}
void get_timestamp(char *buffer, int len)
{
    time_t raw_time;
    struct tm *time_info;
    time(&raw_time);
    time_info = std::localtime(&raw_time);
    std::strftime(buffer, len, "%F-%H-%M", time_info);
    return;
}
// originally from other files, moved into this file for convenience
std::string get_cwd(void)
{
    char path[PATH_MAX];
    std::string retval;
    if (getcwd(path, sizeof(path)) != NULL) {
        retval = std::string(path);
    } else {
        Logger::get().err("current_path", errno);
    }
    return retval;
}
/** header file **/
#pragma once
#include <string>
class CacheProxy:
{
private:
    int server_pid;
    std::string proxy_path;
    std::string curr_dir;
    std::string squid_pid_path;
    ;
public:
    CacheProxy(const std::string&);
    ~CacheProxy() override;
};
/** main file **/
int main(){
    Node node(); // the parameter is never used in the CacheProxy constructor though
    CacheProxy proxy(node); // error pops
    proxy.init();
}
Thanks for any advice or thoughts.
Updates:
The code has been updated as above, and there are three major files. The code shows the exact same sequence of logic as my original codebase, leaving out irrelevant code (I commented it out when I ran into the errors), but please forgive me for giving out such rough code.
Basically the error pops up during object initialization, and I currently assume the problem is in either get_cwd or localtime.
Please indicate if you need more information, though I think the other code is indeed non-relevant.
Updates Dec 21:
After commenting out different parts of the original code, I managed to locate the failing part but could not fix the bug. The opinions from the comments are indeed true that the memory corruption error must have originated somewhere beforehand; however, what I did to fix this problem is somewhat different from the other answers, since I use setcap for my program and cannot use valgrind in that scenario.
I used another tool called ASan (AddressSanitizer) to do the memory check. It was really easy to find out where the memory corruption originated with this tool, and it gives a comprehensive analysis when the error occurs at runtime. I added support for it in the compiler flags and found that the main problem in my case was the memory allocation for the string variables in the CacheProxy class.
So far, it has turned into another problem, namely "why are there indirect memory leaks originating from allocating memory for string objects when the constructor of this class is called", which I will not expand on here in this question.
But it is a really good lesson for me that memory problems have various types and causes; you cannot just stare at the source code to solve a problem that is not an "index out of bounds" or "illegal address access" (segfault) problem. Many tools are really handy and specialized in dealing with these things, so go and grab your tools.
Any crash inside malloc or free is probably caused by an earlier heap corruption.
Your memory is probably corrupted earlier.
If you're using Linux, try running your program under valgrind. Valgrind can help you track down this kind of error.
The 'obvious fixes' referred to by David are:
#include <iostream>
#include <ctime>
#include <cstdio>
void get_timestamp(char *buffer, int len)
{
    time_t raw_time;
    struct tm *time_info;
    time(&raw_time);
    time_info = localtime(&raw_time); // the line of code that breaks
    strftime(buffer, len, "%F-%H-%M", time_info);
    return;
}

int main() {
    char buff[100];
    get_timestamp(buff, 100);
    std::cout << std::string(buff);
    return 0;
}
Some of this code may seem foreign to you, since I make 3DS homebrew programs for fun, but it's essentially the same with some extra lines of code. I'm trying to read a file called about.txt in a separate folder. I made it work when I put it in the same folder, but I lost that file, and then my partner said he wanted it in Scratch3ds-master\assets\english\text and not in Scratch3ds-master\source. I keep getting the error I coded in. I'm new to Stack Overflow, so this might be too much code, but here it is:
#include <3ds.h>   // libctru: gfxInitDefault(), consoleInit() -- assumed; not shown in the original snippet
#include <cstdio>  // printf
#include <cstdlib> // exit
#include <fstream>
#include <string>
#include <iostream>

int main()
{
    // Initialize the services
    gfxInitDefault();
    consoleInit(GFX_TOP, NULL);

    int version_major = 0;
    int version_minor = 0;
    int version_patch = 2;
    printf("This is the placeholder for Scratch3ds\n\n");

    std::ifstream about_file;
    about_file.open("../assets/english/text/about.txt");
    if (about_file.fail())
    {
        std::cerr << "file has failed to load\n";
        exit(1);
    }
Chances are that you're using devkitPro packages. And chances are that the devkitPro team provides an equivalent of the NDS 'ARGV protocol' for 3DS programming. In which case, if you use
int main(int argc, char* argv[]);
you should have the full path to your executable in argv[0] if argc is non-zero.
https://devkitpro.org/wiki/Homebrew_Menu might help.
Your program has no a priori knowledge of what sort of arguments main() should receive, and in your question you're using a main function that receives no arguments at all.
The established standard for C/C++ programming is that main() receives an array of C strings (typically named argv, for argument values) and the number of valid entries in that array (typically named argc, for argument count). If you replace your original code with
#include <fstream>
#include <string>
#include <iostream>
int main(int argc, char* argv[])
{
// Initialize the services
// ... more code follows
then you're able to tell whether you received arguments by testing argc > 0, and you'll be able to get those argument values with argv[i].
With homebrew development, it is unlikely that you can pass arguments such as --force or --directory=/boot as on typical command-line tools, but there is one thing that is still useful: the very first entry in argv is supposed to be the full path of the running program. So you're welcome to try
std::cerr << ((argc > 0) ? argv[0] : "<no arguments>");
and see what you get.
I have to run this code for my class. Most of what we use is Java; I don't really know C++, but the code I have to run is C++, so I'm finding it difficult to debug or know what's going wrong. To compile it, I'm using a Unix virtual machine. I've compiled it and have the a.out file in my directory. When I run the a.out file it says "segmentation fault". I've read that this means it's trying to access something it can't, but I don't know what that would be. Is it a problem with the code they gave us, or could it be something like a setting on my machine?
#include <iostream>
#include <fstream>
#include <iterator>
#include <vector>
#include <algorithm>
#include <stdio.h>
using namespace std;

int main(int argc, char *argv[])
{
    int N;
    sscanf(argv[1], "%d", &N);
    vector<double> data(N);
    for (unsigned int i = 0; i < N; i++) {
        data[i] = rand() / (RAND_MAX + 1.0);
    }
    sort(data.begin(), data.end());
    copy(data.begin(), data.end(), ostream_iterator<double>(cout, "\n"));
}
This seems to be a matter of how you invoke the compiled binary. Say the executable is a.out, you should execute the program as
./a.out 42
as in this snippet
sscanf(argv[1], "%d", &N);
the size of the std::vector is parsed from the command-line arguments. If you don't pass any arguments, argv has only one element (the executable name), and argv[1] is an out-of-bounds access, yielding undefined behavior. Note that you can use the argc variable to do some rudimentary error handling up front:
int N = 42; // some sensible default value
if (argc == 2)
    sscanf(argv[1], "%d", &N);
This still won't protect you from trouble if the given argument is not parsable as an integer, but if you want to get into this, consider using a library for parsing command line options.
I came across this weird situation in Netbeans C/C++. Here is the situation:
In my project explorer, under Source Files, I have main.c and problem3.c
In main.c
#include <stdio.h>
#include <stdlib.h>
// long BigNumber(){
// return 600851475143;
// }
int main(int argc, char* argv[]) {
    printf("%lu", BigNumber() );
    return (EXIT_SUCCESS);
}
In problem3.c
long BigNumber(){
    return 600851475143;
}
My case is: when I use BigNumber() from problem3.c, it outputs 403282979527, which is incorrect. But if I use BigNumber() from main.c, it prints 600851475143.
Can anyone explain the magic behind this? Is it because of the platform, or of tools such as make? I'm using Windows 7 32-bit, NetBeans 7.3.1, with MinGW.
This is actually overflow: 32-bit Windows follows the ILP32 (4/4/4) model, where int, long, and pointers are all 32 bits (4 bytes), and the number you are storing is larger than 32 bits, signed or not. The fact that it works at all in the first case is just a coincidence. Likely the linking step caused by moving the function to the other file "brings out" some other behavior that causes the problem you are seeing. gcc even warns of the overflow here.
You have a few options, but a simple one is to use int64_t instead of long (this is why all those intNN_t types exist, after all!). You should also use an LL suffix on the literal to inform the compiler that it is a long long literal, and change your printf to use "%lld" instead of "%lu" (long long again).
The fix altogether:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h> // for int64_t

int64_t BigNumber() {
    return 600851475143LL;
}

int main(int argc, char* argv[]) {
    printf("%lld", (long long)BigNumber());
    return 0;
}
You should be able to safely move this function, as it is now well defined.
I am trying to include a function from another file inside a "main" file. I'm following this paradigm:
http://www.learncpp.com/cpp-tutorial/18-programs-with-multiple-files/
Here is my main file, digispark.cpp:
#include <iostream>
using namespace std;
int send(int argc, char **argv);
int main()
{
    char* on;
    *on = '1';
    char* off;
    *off = '0';
    send(1,&on);
    return 0;
}
And here is my send.cpp:
#include <stdio.h>
#include <iostream>
#include <string.h>
#if defined WIN
#include <lusb0_usb.h> // this is libusb, see http://libusb.sourceforge.net/
#else
#include <usb.h> // this is libusb, see http://libusb.sourceforge.net/
#endif
// I've simplified the contents of send for my debugging and your aid, but the
// complicated arguments are a part of the function that will eventually need
// to be here.
int send (int argc, char **argv)
{
    std::cout << "Hello";
    return 0;
}
I'm compiling on Ubuntu 12.10 using the g++ compiler like so:
g++ digispark.cpp send.cpp -o digispark
It compiles successfully.
However, when I run the program, "Hello" does not come up. Therefore I don't believe the function is being called at all. What am I doing wrong? Any help would be great! Thanks!
EDIT:
How I dealt with the issue:
int send(int argc, char **argv);
int main()
{
    char* on[4];
    on[0] = (char*)"send";
    on[1] = (char*)"1";
    char* off[4];
    off[0] = (char*)"send";
    off[1] = (char*)"0";
    send(2,on);
    return 0;
}
For those of you who were confused as to why I insisted on doing this: as I said before, the send function was already built to accept char** argv (or char* argv[]). My point was to mimic that in my main function.
It would have been much more difficult to rewrite the function that actually goes inside send to take a different type of argument than to just pass in what it wanted. Thanks everyone!
So if this helps anyone trying something similar, feel free to use it!
Your problem is not the one you think it is. It's here:
char* on;
*on = '1';
You declared a char pointer, but did not initialize it. Then you dereferenced it. Bang, you're dead. This is what is known as Undefined Behavior. Once you invoke U.B., anything can happen. If you're lucky, it's a crash. But I guess you weren't lucky this time.
Look, if you want to start storing things in memory, you have to allocate that memory first. The best way, as hetepeperfan said, is to just use std::string and let that class take care of all the allocating/deallocating for you. But if for some reason you think you have to use C-style strings and pointers, then try this:
char on[128]; // or however much room you think you'll need. Don't know? Maybe you shoulda used std::string ...
*on = '1';
*(on+1) = '\0'; // if you're using C-strings, better null terminate.
char off[128];
*off = '0';
*(off+1) = '\0';
char *args[] = { on, off }; // send() wants a char**, so build an array of pointers
send(2, args);
OK, I think you're trying to do something like the following. I tried to make it a bit more in the style of C++ and avoid the use of pointers, since they should not be necessary in the code that you showed.
digispark.cpp
#include <string>
#include "send.h"

int main (int argc, char** argv){
    std::string on = "1";
    std::string off = "0";
    send(on);
    send(off);
    return 0;
}
send.cpp
#include <iostream>
#include <string>
void send(const std::string& s) {
    std::cout << s << std::endl;
}
send.h
#pragma once
#include <string>

void send(const std::string& s);