I'm profiling the code of a game I wrote, and I'm wondering how it is possible that the following snippet causes a heap increase of 4 KB (I'm profiling with Heapshot Analysis in Xcode) every time it is executed:
u8 WorldManager::versionOfMap(FILE *file)
{
    char magic[4];
    u8 version;
    fread(magic, 4, 1, file); // <-- this is the line
    fread(&version, 1, 1, file);
    fseek(file, 0, SEEK_SET);
    return version;
}
According to the profiler, the highlighted line allocates 4.00 KB of memory with a malloc every time the function is called, memory which is never released. The same thing seems to happen with other calls to fread around the code, but this was the most blatant one.
Is there anything trivial I'm missing? Is it something internal I shouldn't care about?
Just as a note: I'm profiling it on an iPhone and it's compiled as release (-O2).
If what you're describing is really happening and your code has no bugs elsewhere, then I think it is a bug in the implementation.
More likely, I think, is the possibility that you don't close the file. Stdio streams use buffering by default if the device is non-interactive, and the buffer is allocated either when the file is opened or when I/O is first performed. While only one buffer should be allocated per stream, you can certainly leak it by forgetting to close the file; closing the file frees the buffer. Don't forget to check the value returned by fclose.
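If it helps to see where that buffer's lifetime ends, here is a minimal sketch of the pattern I mean; the calling code, the instance name, and the file name are made up for illustration:

// Hypothetical caller: the stdio buffer allocated on the first fread()
// belongs to the FILE object and is released by fclose().
FILE *file = fopen("map.dat", "rb");   // "map.dat" is just an example name
if (file != NULL)
{
    u8 version = worldManager.versionOfMap(file);   // worldManager is illustrative
    // ... use version ...
    if (fclose(file) != 0)             // this also frees the internal buffer
    {
        // report the error; the stream is unusable either way
    }
}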
Supposing, for the sake of argument, that you are correctly closing the file, there are a couple of other nits in your code which won't be causing this problem, but I'll mention them anyway.
First, your fread call reads one object of size 4. You actually have 4 objects of size 1; in other words, the numeric arguments to fread are swapped. This makes a difference only in the meaning of the return value (important in the case of a partial read).
Second, while hard-coding a size of 1 is correct for char and u8 (in C, sizeof(char) is 1 by definition), it's probably better stylistically to use sizeof(u8) in the second call to fread.
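To make the distinction concrete, here is what the two argument orders return (a sketch, not taken from your code):

char magic[4];
// 4 objects of size 1: the return value is the number of bytes read (0..4),
// so a short read of, say, 3 bytes is visible to the caller.
size_t n = fread(magic, 1, sizeof(magic), file);
// 1 object of size 4: the return value is 0 or 1, so a 3-byte partial read
// looks exactly like reading nothing at all.
size_t m = fread(magic, sizeof(magic), 1, file);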
If the idea that this really is a memory leak is a correct interpretation (and there aren't any bugs elsewhere) then you may be able to work around the problem by turning off the stdio buffering for this particular file:
bool WorldManager::versionOfMap(FILE *file, bool *is_first_file_io, u8 *version)
{
    char magic[4];
    bool ok = false;

    if (*is_first_file_io)
    {
        // we ignore failure of this call
        setvbuf(file, NULL, _IONBF, 0);
        *is_first_file_io = false;
    }

    if (sizeof(magic) == fread(magic, 1, sizeof(magic), file)
        && 1 == fread(version, sizeof(*version), 1, file))
    {
        ok = true;
    }

    if (-1 == fseek(file, 0L, SEEK_SET))
    {
        return false;
    }
    else
    {
        return ok && 0 == memcmp(magic, EXPECTED_MAGIC, sizeof(magic));
    }
}
Even if we're going with the hypothesis that this really is a bug, and the leak is real, it is well worth condensing your code to the smallest possible example that still demonstrates the problem. If doing that reveals the true bug, you win. Otherwise, you will need the minimal example to report the bug in the implementation.
I'm fairly new to C and I'm reading a book about software vulnerabilities. I came across this buffer overflow sample; the book mentions that it can cause a buffer overflow, and I am trying to determine how that is the case.
int handle_query_string(char *query_string)
{
    struct keyval *qstring_values, *ent;
    char buf[1024];

    if(!query_string) {
        return 0;
    }

    qstring_values = split_keyvalue_pairs(query_string);
    if((ent = find_entry(qstring_values, "mode")) != NULL) {
        sprintf(buf, "MODE=%s", ent->value);
        putenv(buf);
    }
}
I am paying close attention to this block of code because this appears to be where the buffer overflow is caused.
if((ent = find_entry(qstring_values, "mode")) != NULL)
{
    sprintf(buf, "MODE=%s", ent->value);
    putenv(buf);
}
I think this is it: because buf is only 1024 bytes and ent->value can be longer than 1024, this call may overflow.
sprintf(buf, "MODE=%s", ent->value);
But it depends on the implementation of split_keyvalue_pairs(query_string), and on whether that function already checks and limits the value (which I doubt).
klutt provided a good fix for the problem in a previous answer, so I'll try to go a bit more specific and in-depth on the exact nature of the overflow in your code.
char buf[1024];
This line allocates 1024 bytes on the stack, referred to by the name buf. The big problem here is that it is on the stack. If you dynamically allocate using malloc (or my favorite, calloc), it will be on the heap. The location doesn't by itself prevent or fix an overflow, but it can change the effect. Right above this space on the stack (give or take some bytes) sits the return address of the function, and an overflow can change it, causing the program to jump somewhere else when it returns.
sprintf(buf, "MODE=%s", ent->value);
This line is what actually performs the overflow. sprintf = "string print formatted": the destination is a string (char *), and you are printing a formatted string into it. It doesn't care about the destination's length; it just takes the starting memory address of the destination string and keeps writing until it is finished. If more than 1024 bytes need to be written (in this case, including the terminating '\0'), it will go past the end of your buffer and overflow into other parts of memory. The solution is to use the function snprintf instead. The "n" tells you that it limits the amount written to the destination, avoiding the overflow.
The ultimate thing to understand is that a "buffer" does not actually exist as far as the machine is concerned. It is simply a concept we use to organize an area of memory; the computer has no idea what a buffer is, where it starts, or where it ends. So when writing, the computer doesn't care whether it is inside or outside the buffer, and it doesn't know where to stop. We need to tell it very explicitly how far it is allowed to write, or it will just keep writing.
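As a concrete sketch of the safer snprintf call mentioned above (still using the original 1024-byte buffer; the truncation check is my addition, not part of the original code):

char buf[1024];
// snprintf writes at most sizeof(buf) bytes, including the terminating '\0'.
// Its return value is the length the full string would have needed, so a
// value >= sizeof(buf) tells you the output was cut off.
int needed = snprintf(buf, sizeof(buf), "MODE=%s", ent->value);
if (needed < 0 || (size_t)needed >= sizeof(buf)) {
    // handle the truncation/error instead of silently using a shortened value
}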
A very big thing here is that you passed a pointer to a local variable to putenv. The buffer ceases to exist when handle_query_string returns; after that, the environment entry points at garbage. Note that putenv requires the string passed to it to remain unchanged (and alive) for the rest of the program. From the documentation for putenv (emphasis mine):
int putenv(char *string);
The putenv() function adds or changes the value of environment variables. The argument string is of the form name=value. If name does not already exist in the environment, then string is added to the environment. If name does exist, then the value of name in the environment is changed to value. The string pointed to by string becomes part of the environment, so altering the string changes the environment.
This can be corrected by using dynamic allocation: char *buf = malloc(1024) instead of char buf[1024].
Another thing is that sprintf(buf, "MODE=%s", ent->value); might overflow. That would happen if the string ent->value is too long. A solution there is to use snprintf instead.
snprintf(buf, sizeof buf, "MODE=%s", ent->value);
This prevents the overflow, but it might still cause problems: if ent->value is too big to fit in buf, then buf will, for obvious reasons, not contain the full string. (Note that sizeof buf only gives the buffer size for the original array buf[1024]; with a pointer obtained from malloc you would pass the allocated size instead.)
Here is a way to rectify both issues:
int handle_query_string(char *query_string)
{
    struct keyval *qstring_values, *ent;
    char *buf = NULL;

    if(!query_string)
        return 0;

    qstring_values = split_keyvalue_pairs(query_string);
    if((ent = find_entry(qstring_values, "mode")) != NULL)
    {
        // Make sure that the buffer is big enough instead of using
        // a fixed size. The +5 on size is for "MODE=" and +1 is
        // for the string terminator
        const char format_string[] = "MODE=%s";
        const size_t size = strlen(ent->value) + 5 + 1;
        buf = malloc(size);

        // Always check malloc for failure or chase nasty bugs
        if(!buf) exit(EXIT_FAILURE);

        sprintf(buf, format_string, ent->value);
        putenv(buf);
    }
}
Since we're using malloc the allocation will remain after the function exits. And for the same reason, we make sure that the buffer is big enough beforehand, and thus, using snprintf instead of sprintf is not necessary.
Theoretically, this has a memory leak unless you use free on all strings you have allocated, but in reality, not freeing before exiting main is very rarely a problem. Might be good to know though.
It can also be good to know that even though this code now is fairly protected, it's still not thread safe. The content of query_string and thus also ent->value may be altered. Your code does not show it, but it seems highly likely that find_entry returns a pointer that points somewhere in query_string. This can of course also be solved, but it can get complicated.
In the GNU/Linux man pages, the read function is described with the following synopsis:
ssize_t read(int fd, void *buf, size_t count);
I would like to use this function to read data from a socket or a serial port. If the count is greater than one, the pointer supplied in the function argument will point to the last byte that was read from the port in the memory so pointer decrement is necessary for bringing the pointer to the first byte of data. This is dangerous because using it in a language like C++ with its dynamic memory allocation of containers based on their size and space needs could corrupt data at the point of return from read() function. I thought of using a C-style array instead of a pointer. Is this the correct approach? If not, what is the correct way to do this? The programming language I'm using is C++.
EDIT:
The code that caused the described situation is as follows:
The QSerialPort class was used to configure and open the port with the following parameters:
Baudrate of 115200
8 data bits
No parity
One stop bit
No flow control
and for the reading part, as far as this question is concerned, the read is performed exactly like this:
A std::vector containing a number of structs defined this way:
struct DataMember
{
    QString name;
    size_t count;
    char *buff;
};
then, within a while loop that runs until the end of the mentioned std::vector is reached, a read() is performed based on the count member of each struct, and the data is stored in the same struct's buff:
ssize_t nbytes = read(port->handle(), v.at(i).buff, v.at(i).count);
and then the data is printed on the console. In my test case as long as the data is one byte the value printed is correct but for more than one byte the value displayed is the last value that was read from the port plus some garbage values. I don't know why this is happening. Note that the correct result is obtained when the char *buff is changed to char buff[count].
If the count is greater than one, the pointer supplied in the function argument will point to the last byte that was read from the port in the memory
No. The pointer is passed to the read() function by value, so it is completely and utterly impossible for its value to be any different after the call than it was before, regardless of the count.
so pointer decrement is necessary for bringing the pointer to the first byte of data.
The pointer already points to the first byte of data. No decrement is necessary.
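A tiny sketch of what actually happens to the pointer (fd is assumed to be an already-open descriptor; the names are illustrative):

char buf[16];
char *p = buf;
ssize_t n = read(fd, p, sizeof(buf));
// p is unchanged here: read() only received a copy of the pointer value.
// It filled buf[0] .. buf[n-1], so buf[0] is the first byte received.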
This is dangerous because using it in a language like C++ with its dynamic memory allocation of containers based on their size and space needs could corrupt data at the point of return from read() function.
This is all nonsense based on an impossibility.
You are mistaken about all this.
In my test case as long as the data is one byte the value printed is correct but for more than one byte the value displayed is the last value that was read from the port plus some garbage values.
From the read(2) manpage:
On success, the number of bytes read is returned (zero indicates end of file),
and the file position is advanced by this number. It is not an error if this number is
smaller than the number of bytes requested; this may happen for example because fewer
bytes are actually available right now (maybe because we were close to end-of-file, or
because we are reading from a pipe, or from a terminal), or because read() was interrupted
by a signal. On error, -1 is returned, and errno is set appropriately. In this case it
is left unspecified whether the file position (if any) changes.
In the case of pipes, sockets, and character devices (which includes serial ports) with a blocking file descriptor (the default), read will, in practice, not wait for the full count. In your case, read() blocks until a byte comes in on the serial port and then returns. That is why in the output the first byte is correct and the rest is garbage (uninitialized memory). You have to add a loop around the read() that repeats until count bytes have been read, if you need the full count.
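A compact sketch of such a loop (fd, buf, and count are placeholders; error handling is kept deliberately thin):

size_t total = 0;
while (total < count) {
    ssize_t n = read(fd, buf + total, count - total);
    if (n == 0)
        break;                 // EOF / peer closed the connection
    if (n < 0) {
        if (errno == EINTR)
            continue;          // interrupted by a signal, just retry
        break;                 // real error: inspect errno
    }
    total += (size_t)n;
}
// 'total' now holds how many of the requested 'count' bytes really arrived.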
I don't know why this is happening.
But I know. char * is just a pointer, and that pointer needs to be initialized to point at valid memory before you can use it. Without doing so you're invoking undefined behavior, and anything might happen.
Instead of the size_t count; and char *buff members you should just use a std::vector<char>: before making the read call, resize it to the number of bytes you want to read, then take the address of the first element of that vector and pass that to read:
struct fnord {
    std::string name;
    std::vector<char> data;
};
and use it like this; note that using read requires some additional work to properly deal with signal and error conditions.
size_t readsomething(int fd, size_t count, fnord &f)
{
    // make sure the vector actually contains count elements we can write into
    f.data.resize(count);

    size_t rbytes = 0;
    ssize_t rv;
    do {
        rv = read(fd, &f.data[rbytes], count - rbytes);
        if( !rv ) {
            // End of File / Stream
            break;
        }
        if( 0 > rv ) {
            if( EINTR == errno ) {
                // signal interrupted read... restart
                continue;
            }
            if( EAGAIN == errno
                || EWOULDBLOCK == errno ) {
                // file / socket is in nonblocking mode and
                // no more data is available.
                break;
            }
            // some critical error happened. Deal with it!
            break;
        }
        rbytes += rv;
    } while(rbytes < count);

    return rbytes;
}
Looking at your first paragraph of gibberish:
If the count is greater than one, the pointer supplied in the function argument will point to the last byte that was read from the port in the memory
What makes you think so? This is not how it works. Most likely you passed some invalid pointer that wasn't properly initialized. Anything can happen.
so pointer decrement is necessary for bringing the pointer to the first byte of data.
Nope. That's not how it works.
This is dangerous because using it in a language like C++ with its dynamic memory allocation of containers based on their size and space needs could corrupt data at the point of return from read() function.
Nope. That's not how it works!
C and C++ are explicit languages. Everything happens in plain sight, and nothing happens without you (the programmer) explicitly requesting it. No memory is allocated without you requesting it to happen. It can be an explicit new, some RAII, automatic storage, or the use of a container, but nothing happens "out of the blue" in C and C++. There's no built-in garbage collection[1] in C or C++. Objects don't move around in memory or resize without you explicitly coding something into your program that makes this happen.
[1]: There are GC libraries you can use, but those will never stomp on anything that can be reached by code that's executing. Essentially, garbage collector libraries for C and C++ are memory leak detectors, which will free memory that can no longer be reached by normal program flow.
I am trying to write a wrapper around the Windows file functions; one of them should read num_bytes bytes of data from the file and return it. For some reason I fail to allocate the memory properly, but I just can't find the reason why:
PBYTE Read(int num_bytes, HANDLE hFile){
    PBYTE bBuffer;
    DWORD new_size = sizeof(BYTE) * num_bytes;

    // after the allocation the debugger already displays a 16 char wide placeholder
    bBuffer = (PBYTE)malloc(new_size);

    OVERLAPPED o = { 0 };
    o.Offset = 0;

    BOOL bReadDone = ReadFile(hFile, (LPVOID)bBuffer, sizeof(BYTE) * num_bytes, NULL, &o);
    return bBuffer;
}
Data gets copied, but the allocated buffer is always too wide and contains extra weird filler characters. Can somebody please explain: what am I doing wrong?
"what am I doing wrong?"
sizeof(BYTE) is 1 so you can remove it everywhere and eliminate the redundant new_size variable.
You tagged your question C++ but used malloc to allocate the buffer. Your design makes the caller responsible for freeing the buffer, which is a poor design approach, and even more so when using malloc/free in a C++ program. A good C++ solution to this quandary would be to return a std::vector.
It is vital that you provide the lpNumberOfBytesRead parameter to ReadFile. Without it you don't know how many bytes were read, and if you don't know how many bytes were read, you can't tell the difference between "extra weird filler characters" and unused memory at the end of the buffer. If the data is characters, then character-oriented output routines (and debugger tools) don't know the difference either, since there is no null terminator at the end of the data that was actually read. You could use the byte count reported through lpNumberOfBytesRead to put in a null terminator, so that you and the debugger don't read beyond the real data.
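Putting both points together, a rough sketch of what such a wrapper could look like; this is not your original code, it reads synchronously at the current file position, and it leaves error reporting to the caller:

#include <windows.h>
#include <vector>

// Sketch only: read up to num_bytes from the current file position.
std::vector<BYTE> Read(HANDLE hFile, DWORD num_bytes)
{
    std::vector<BYTE> buffer(num_bytes);
    DWORD bytes_read = 0;
    if (!ReadFile(hFile, buffer.data(), num_bytes, &bytes_read, NULL))
    {
        buffer.clear();              // real code would report GetLastError()
        return buffer;
    }
    buffer.resize(bytes_read);       // shrink to what was actually read
    return buffer;
}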
I wrote a function, using the C header stdio.h, that returns the content of a file (text or HTML). Can anyone please go through it and tell me whether I have done the memory management efficiently? I would be glad to hear suggestions for improving my code.
char *getFileContent(const char *filePath)
{
    // Prepare read file
    FILE *pReadFile;
    long bufferReadSize;
    char *bufferReadFileHtml;
    size_t readFileSize;
    char readFilePath[50];

    sprintf_s(readFilePath, "%s", filePath);

    pReadFile = fopen(readFilePath, "rb");
    if (pReadFile != NULL)
    {
        // Get file size.
        fseek(pReadFile, 0, SEEK_END);
        bufferReadSize = ftell(pReadFile);
        rewind(pReadFile);

        // Allocate RAM to contain the whole file:
        bufferReadFileHtml = (char*) malloc(sizeof(char) * bufferReadSize);
        if (bufferReadFileHtml != NULL)
        {
            // Copy the file into the buffer:
            readFileSize = fread(bufferReadFileHtml, sizeof(char), bufferReadSize, pReadFile);
            if (readFileSize == bufferReadSize)
            {
                return bufferReadFileHtml;
            } else {
                char errorBuffer[50];
                sprintf_s(errorBuffer, "Error! Buffer overflow for file: %s", readFilePath);
            }
        } else {
            char errorBuffer[50];
            sprintf_s(errorBuffer, "Error! Insufficient RAM for file: %s", readFilePath);
        }

        fclose(pReadFile);
        free(bufferReadFileHtml);
    } else {
        char errorBuffer[50];
        sprintf_s(errorBuffer, "Error! Unable to open file: %s", readFilePath);
    }
}
This looks like a C program, not a C++ program. While it will compile using most C++ compilers, it doesn't take advantage of any C++ features (e.g. new/new[], delete/delete[], explicit casting, stream operators, strings, nullptr etc.)
Your code almost looks like a safe C function, although I think sprintf_s is a Microsoft-only function, and thus probably won't compile using GCC, Clang, Intel, etc. as it isn't a part of the standard.
Your function should also return a value at all times. Turn compiler warnings on to catch these kinds of things; they make debugging a lot easier :)
There's not much to be said without knowing how you will be using the buffer you've created. Here are some possible considerations:
1) When you are reading the file into a buffer, your processor is doing nothing else for this program. It might be better to read and begin analyzing the already read portion in parallel.
2) If you need really fast-efficient-low memory file IO, consider converting your program to a state machine and forget about the buffer altogether.
3) If you don't really have a very demanding application, you are killing yourself by writing in C. C#, Python, and almost any other language have better string manipulation libraries.
Btw, you should use snprintf for portability and safety, as others have pointed out.
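For instance, the first copy in the function could use the standard snprintf (readFilePath and filePath are the names from the code above):

// snprintf is standard C (available in C++ via <cstdio>), so it also builds
// with GCC/Clang, and it cannot write past the destination buffer.
snprintf(readFilePath, sizeof(readFilePath), "%s", filePath);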
I'm currently unpacking one of Blizzard's .mpq files for reading.
For accessing the unpacked char buffer, I'm using a boost::interprocess::stream::memorybuffer.
Because .mpq files have a chunked structure always beginning with a version header (usually 12 bytes, see http://wiki.devklog.net/index.php?title=The_MoPaQ_Archive_Format#2.2_Archive_Header), the char* array representation seems to truncate at the first \0, even though the file size (roughly 1.6 MB) remains constant and (probably) fully allocated.
The result is a streambuffer with an effective length of 4 ('REVM', and byte no. 5 is \0). When attempting to read further, an exception is thrown. Here is an example:
// (somewhere in the code)
{
    MPQFile curAdt(FilePath);
    size_t size = curAdt.getSize(); // roughly 1.6 mb
    bufferstream memorybuf((char*)curAdt.getBuffer(), curAdt.getSize());
    // bufferstream.m_buf.m_buffer is now 'REVM\0' (Debugger says so),
    // but internal length field still at 1.6 mb
}

//////////////////////////////////////////////////////////////////////////////
// wrapper around a file of the mpq_archive of libmpq
MPQFile::MPQFile(const char* filename) // I apologize for my inconsistent naming convention :P
{
    for(ArchiveSet::iterator i = gOpenArchives.begin(); i != gOpenArchives.end(); ++i)
    {
        // gOpenArchives points to MPQArchive, wrapper around the mpq_archive, has mpq_archive * mpq_a as member
        mpq_archive &mpq_a = (*i)->mpq_a;

        // if file exists in that archive, tested via hash table in file, not important here, scroll down if you want
        mpq_hash hash = (*i)->GetHashEntry(filename);
        uint32 blockindex = hash.blockindex;
        if ((blockindex == 0xFFFFFFFF) || (blockindex == 0)) {
            continue; // file not found
        }

        uint32 fileno = blockindex;

        // Found!
        size = libmpq_file_info(&mpq_a, LIBMPQ_FILE_UNCOMPRESSED_SIZE, fileno);

        // HACK: in patch.mpq some files don't want to open and give 1 for filesize
        if (size <= 1) {
            eof = true;
            buffer = 0;
            return;
        }

        buffer = new char[size]; // note: size is 1.6 mb at this time

        // Now here comes the tricky part... if I step over the libmpq_file_getdata
        // function, I'll get my truncated char array, which I absolutely don't want^^
        libmpq_file_getdata(&mpq_a, hash, fileno, (unsigned char*)buffer);
        return;
    }
}
Maybe someone could help me. I'm really new to STL and Boost programming and also fairly inexperienced in C++ :P I hope to get a practical answer (please don't suggest rewriting libmpq and the underlying zlib architecture^^).
The MPQFile class and the underlying decompression methods are actually taken from a working project, so the mistake is either somewhere in my use of the buffer with the streambuffer class, or something internal to char array arithmetic that I haven't a clue about.
By the way, what is the difference between using signed and unsigned chars as data buffers? Does it have anything to do with my problem? (You might notice that in the code, char* and unsigned char* are used somewhat interchangeably as function arguments.)
If you need more infos, feel free to ask :)
How are you determining that your char* array is being 'truncated', as you call it? If you're printing it or viewing it in a debugger, it will look truncated because it will be treated like a string, which is terminated by \0. The data in 'buffer', however (assuming libmpq_file_getdata() does what it's supposed to do), will contain the whole file or data chunk or whatever.
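One way to convince yourself of that is to dump the start of the buffer as raw bytes instead of treating it as a string (a sketch; buffer and size are the members from your MPQFile):

#include <cstddef>
#include <cstdio>

// Print the first few bytes in hex: embedded '\0' bytes don't stop this output
// the way they stop printf("%s", ...) or a string view in the debugger.
void dumpPrefix(const char *buffer, std::size_t size, std::size_t limit = 16)
{
    for (std::size_t i = 0; i < size && i < limit; ++i)
        std::printf("%02x ", static_cast<unsigned char>(buffer[i]));
    std::printf("\n");
}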
Sorry, I messed up a bit with these terms (not memorybuffer, actually; streambuffer is meant, as in the code).
Yeah, you were right... I had a mistake in my exception handling. Right after that first bit of code comes this:
// check if the file has been opened
//if (!mpf.is_open())
pair<char*, size_t> temp = memorybuf.buffer();
if(temp.first)
    throw AdtException(ADT_PARSEERR_EFILE); // Can't open the File
Notice the missing ! before temp.first. I was surprised by the exception thrown, looked at the streambuffer's internal buffer, and was confused by its length (C# background :P).
Sorry for that, it's working as expected now.