Valgrind claiming I am using malloc when using new - c++

Running Valgrind against an existing codebase, I am getting a lot of "Mismatched free() / delete / delete[]" errors. Many of them are repeats of one problem: Valgrind claims that at line XXX a delete operation is used, whereas at line YYY a malloc operation is used. However, when I open the file it complains about and navigate to the indicated line numbers, I find that the memory was not allocated with malloc but with new. The allocated object is a standard ifstream, and neither new[] nor delete[] is being used.
I'm running Valgrind 3.5. Does anyone have any idea what is happening? I cannot see how this can be a real error, but I've seen some people claim that Valgrind doesn't turn up many false positives, so I want to have some confidence that this is a false positive before suppressing it.

You don't provide a sample program, so this is a crystal-ball guess.
Your program provides a custom operator new but is missing the matching operator delete. The following sample program produces the same error message you are seeing:
#include <new>
#include <cstdlib>

/*
 * Sample program that provides `operator new`, but not `operator delete`.
 */

// minimal version of new for demonstration purpose only
void* operator new(size_t numBytes) {
    return malloc(numBytes);
}

int main() {
    int *p = new int;
    delete p;
}
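If the underlying issue really is a missing operator delete, a minimal sketch of the fix is to provide the matching malloc-based operator delete so allocation and deallocation stay consistent (C++11 noexcept shown; older compilers can use throw() or omit it). Valgrind then pairs the malloc inside operator new with the free inside operator delete instead of reporting a mismatch:

#include <new>
#include <cstdlib>

// Same malloc-based operator new as before.
void* operator new(std::size_t numBytes) {
    return std::malloc(numBytes);
}

// Matching operator delete: releases the memory with free(),
// so Valgrind sees a consistent malloc/free pair.
void operator delete(void* ptr) noexcept {
    std::free(ptr);
}

int main() {
    int* p = new int;
    delete p;   // no mismatch reported now
}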

Related

Using malloc instead of new causes free(): invalid pointer error

For some reason compiling the following code with gcc and running the binary it produces on Ubuntu gives a free(): invalid pointer error:
#include <stdlib.h>
#include <fstream>
#include <string>
#include <iostream>
#include <sstream>
#include <ios>
#include <new>

struct arr_double_size {
    double *array;
    int size;
};

struct Record {
    int ID;
    std::string str1;
    std::string str2;
    int num;
    struct arr_double_size values;
};

struct Record_array {
    struct Record *array;
    int size;
};

void Copy_Read(void) {
    std::ifstream file{"in_file"};
    std::ofstream new_file{"out_file"};
    std::string line;
    while (std::getline(file, line)) {
        new_file << line << std::endl;
    }
    file.close();
    new_file.close();
}

int main(void) {
    Copy_Read();

    struct Record rec;
    struct arr_double_size values;
    values.size = 1;
    values.array = (double *)malloc(1 * sizeof(double));
    values.array[0] = 72.12;
    rec.ID = 2718;
    rec.str1 = "Test1";
    rec.str2 = "Test2";
    rec.num = 1;
    rec.values = values;

    struct Record_array record_list;
    record_list.size = 1;
    record_list.array = (struct Record *)malloc(1 * sizeof(struct Record));
    record_list.array[0] = rec;

    return 0;
}
The contents of in_file are:
TEST TEST TEST
Strangely, commenting out the call in main to Copy_Read solves the problem, as does replacing the calls to malloc with calls to new. Running the program with gdb shows that the error occurs when attempting to assign rec to record_list.array[0]. Why does this occur? I have tried to give a minimal example here; previous, expanded versions of this code resulted in segmentation faults instead of the free(): invalid pointer error. I am aware that this is horrible code which should never be used in a serious program (I should be using the standard library vector and new), but there seems to be something about the difference between malloc and new that I do not understand and that is not well documented (in resources accessible to beginners, anyway), and it is the source of the problems with this code.
I do not understand the difference between malloc and new
Then you need to read much more about C++, e.g. a good C++ programming book and a good C++ reference site. Yes, C++ is a very difficult language (you'll need years of work to master it). Later, you could dive into the C++11 standard n3337 (or some more recent C++ standard). You certainly need to understand precisely the role of constructors and destructors (and explaining that takes many pages, much more than what you can reasonably expect in any StackOverflow answer).
You need the code of your constructors to be executed (and that is done by new but not by malloc), and later the destructors should also be executed, before releasing the memory. Destructors are called by delete (and in many other cases), but of course not by free. Read also about the rule of five and about RAII.
You should, when possible, prefer to use smart pointers. Sometimes (e.g. for circular references) they are not enough.
Be afraid of undefined behavior.
The valgrind tool is useful to hunt memory-related bugs. You should also compile with all warnings and debug info, so g++ -Wall -Wextra -g with GCC. You may also want to use static source code analysis tools such as clang-analyzer or Frama-C. Using them may require a lot of expertise; there is no silver bullet.
Your struct Record_array is wrong: prefer to use std::vector<Record>. Read much more about standard C++ containers.
The constructor of your Record will call the constructor of str1 and of str2 (that is, the constructor of std::string applied to two different locations). If you don't call that Record constructor, str1 and str2 stay in some undefined state (so you have undefined behavior as soon as you use them).
A major difference between malloc & free (for C) and new and delete (for C++) is the way constructors and destructors are involved. Of course, malloc & free are ignoring them, but not new & delete. Failure of memory allocation (e.g. when virtual memory is exhausted) is also handled differently.
PS. In practice, you should never use malloc in C++ code, except (in rare cases only) when defining your own operator new, because malloc does not call C++ constructors (but new does). Also, please understand that C and C++ are different programming languages, and malloc is for C, not C++. Many C++ standard library implementations use malloc in their implementation of the standard ::operator new and free in their ::operator delete.
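For illustration, here is a minimal sketch (under the assumption that the asker's data really fits this shape) of the same main rewritten with the standard containers recommended above, so constructors and destructors run automatically:

#include <string>
#include <vector>

// Same data as in the question, but using std::vector so that
// constructors and destructors are handled automatically.
struct Record {
    int ID;
    std::string str1;
    std::string str2;
    int num;
    std::vector<double> values;
};

int main() {
    Record rec;
    rec.ID = 2718;
    rec.str1 = "Test1";
    rec.str2 = "Test2";
    rec.num = 1;
    rec.values.push_back(72.12);

    std::vector<Record> record_list;
    record_list.push_back(rec);   // Record is properly constructed and copied

    return 0;   // everything is destroyed and freed correctly here
}

No manual malloc/free or new/delete is needed; the vectors and strings clean up after themselves.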

C++: delete this; return x;

Hey, I am curious about some C++ behaviour, as the code I am working on would benefit greatly in terms of simplicity if this behaviour is consistent. Basically, the idea is for a specific function inside my object A to compute a complex calculation and return a float, but just before returning the float, to occasionally call delete this.
1
Here is a code example of the functionality I am trying to verify is consistent.
#include <iostream>
#include <stdio.h>
#include <cstdlib>

using namespace std;

struct A
{
    float test(float a) { delete this; return a; }
};

int main()
{
    A *a = new A();
    cout << a->test(1.f) << endl;
    cout << "deleted?" << endl;
    cout << a->test(1.f) << endl;
}
The output becomes:
1.0
deleted?
*** Error in `./test': double free or corruption (fasttop): 0x0105d008 *** Aborted (core dumped)
I think this means the object was deleted correctly (what is left in memory? An uncallable skeleton of A? A typed pointer? A null pointer?), but I am not sure whether I am right about that. If so, is this behaviour going to be consistent (my functions will only be returning native types such as floats)?
2
Additionally I am curious as to why this doesn't seem to work:
struct A
{
    float test(float a) { delete this; return a; }
};

int main()
{
    A a;
    cout << a.test(1.f) << endl;
}
This compiles but throws the following error before returning anything.
*** Error in `./test': free(): invalid pointer: 0xbe9e4c64 *** Aborted (core dumped)
NOTE: Please don't reply with a long list of explanations as to why this is bad coding/etiquette or whatever; I don't care, I am simply interested in the possibilities.
It is safe for a member function to call delete this; if you know that the object was allocated using scalar new and that nothing else will use the object afterward.
In your first example, after the first call to a->test(1.f), a becomes a "dangling pointer". You invoke Undefined Behavior when you dereference it to call test a second time.
In your second example, the delete this; statement is Undefined Behavior because the object was not created using new.
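As a hedged illustration only, here is a minimal sketch of the pattern used within those constraints: the object is created with scalar new, the member function finishes all work that touches members before delete this, and neither the object nor the pointer is used afterward (Task and run are made-up names):

#include <iostream>

struct Task {
    float run(float a) {
        float result = a * 2.0f;   // finish all work that needs members first
        delete this;               // object must have been created with scalar new
        return result;             // only locals are touched after delete this
    }
};

int main() {
    Task* t = new Task();          // heap allocation, required for delete this
    std::cout << t->run(1.0f) << std::endl;
    t = nullptr;                   // never use t (or *t) again
}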
The behavior is undefined, but in a typical modern implementation the practical "possibilities" of accessing deallocated memory include (but are not limited to):
1. delete releases memory at the run-time library (RTL) level, but does not return it to the OS. That is, OS-level memory protection is not engaged and the OS continues to see that memory as allocated. However, internal RTL data stored in freed memory blocks clobbers your data. The result: access through the pointer does not cause your code to crash, but the data looks meaningless (clobbered).
2. Same as 1, but the internal RTL data happens not to overlap your critical data. The code does not crash and continues to work "as if" everything is "fine".
3. delete releases memory to the OS. OS-level memory protection is engaged. Any attempt to access through the pointer causes an immediate crash.
Your examples proceed in accordance with the second scenario, i.e. the data stored in the object appears to remain untouched even after you free the memory.
The crashes you observe in your code happen because the RTL detects a double-free attempt (or an attempt to free non-dynamic memory, as in the second example), which is somewhat beside the point in the context of your question.

Run time error while using realloc: "_CrtIsValidHeapPointer(pUserData), dbgheap.c"

The following code is written in C++ but uses realloc from stdlib.h because I don't know much about std::vector.
Anyway, I get this weird run-time error: "_CrtIsValidHeapPointer(pUserData), dbgheap.c".
If you would like to see the whole method or code please let me know.
I have 2 classes, Student and Grades. Student contains
char _name[21];
char _id[6];
int _numOfGrades;
int* _grades;
float _avg;
and Grades simply contains
Student* _students;
int _numOfStudents;
While the following works:
_grades = (int *)realloc(_grades, (sizeof(int) * (_numOfGrades + 1)));
this one triggers that weird run-time error:
_students = (Student *)realloc(_students, (sizeof(Student) * (_numOfStudents + 1)));
Both _grades and _students are initially created with new with no problem at all. The problem only appears when trying to realloc _students.
Any input will be welcome.
You cannot mix allocators—if you allocate memory with operator new[], you must deallocate it with operator delete[]. You cannot use free(), realloc(), or any other memory allocator (e.g. Windows' GlobalFree()/LocalFree()/HeapFree() functions).
realloc() can only reallocate memory regions which were allocated with the malloc() family of functions (malloc(), calloc(), and realloc()). Attempting to realloc any other memory block is undefined behavior—in this case, you got lucky and the C runtime was able to catch your error, but if you were unlucky, you might silently corrupt memory and then later crash at some random point in an "impossible" state.
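If the goal is simply a growable array of Student objects, a minimal sketch using std::vector avoids mixing allocators altogether, since the container manages its own memory; the field names here are assumptions based on the question (if realloc must be used, the other option is to allocate with malloc from the start):

#include <string>
#include <vector>

// Hypothetical reworking of the question's classes using std::vector,
// so no manual new/realloc mixing is needed.
struct Student {
    std::string name;
    std::string id;
    std::vector<int> grades;
    float avg = 0.0f;
};

struct Grades {
    std::vector<Student> students;   // grows safely with push_back
};

int main() {
    Grades g;
    Student s;
    s.name = "Alice";
    s.id = "12345";
    s.grades.push_back(90);          // the vector reallocates correctly on its own
    g.students.push_back(s);
}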

deleting an array the wrong way [duplicate]

Possible Duplicate:
How could pairing new[] with delete possibly lead to memory leak only?
I was always told that it's not safe to call delete on an array allocated with new[]. You should always pair new with delete and new[] with delete[].
So I was surprised to discover that the following code compiles and runs ok, in both Debug and Release mode under VS2008.
class CBlah
{
public:
    CBlah() : m_i(0) {}
private:
    int m_i;
};

int _tmain(int argc, _TCHAR* argv[])
{
    for(;;)
    {
        CBlah * p = new CBlah[1000]; // with []
        delete p;                    // no []
    }
    return 0;
}
It took me a while to figure out why this works at all, and I think it's just luck and some undefined behaviour.
BUT... it made me wonder... why doesn't Visual Studio pick this up, at least in the Debug memory manager? Is it because there's lots of code out there that makes this mistake and they don't want to break it, or do they feel it's not the job of the Debug memory manager to catch this kind of mistake?
Any thoughts? Is this kind of misuse common?
It will certainly compile OK, because at compile time there is no information in the pointer that says whether it points to an array or to a single object. For example:
int x;
int* p;
cin >> x;
if (x == 0)
    p = new int;
else
    p = new int[10];
delete p; // correct or not? :)
Now, about running OK. This is called undefined behavior in C++; that is, there is no guarantee what will happen: everything can run OK, you can get a segfault, you can get just wrong behavior, or your computer may decide to call 911. UB <=> no guarantee.
It's undefined behavior and everything is fair in love, war and undefined behavior...:)
According to MSDN, it translates delete to delete[] when trying to delete an array (see there, for instance), though you should get a warning after compiling.
The reason the Debug Memory Manager does not pick up on this error is probably because it is not implemented at the level of new/delete, but at the level of the memory manager that gets invoked by new/delete to allocate the required memory.
At that point, the distinction between array new and scalar new is gone.
You can read these SO answers and links about delete and delete[]: About delete, operator delete, delete[], ...
I don't know what makes you think it "works ok". It compiles and completes without crashing. That does not mean necessarily there was no leak or heap corruption. Also if you got away with it this time, it doesn't necessarily make it a safe thing to do.
Sometimes even a buffer overwrite is something you will "get away with" because the bytes you have written to were not used (maybe they are padding for alignment). Still you should not go around doing it.
Incidentally new T[1] is a form of new[] and still requires a delete[] even though in this instance there is only one element.
Interesting point.
Once I did a code review and tried to convince the programmers to fix a new[]/delete mismatch.
I argued with "Item 5" from Effective C++ by Scott Meyers. However, they countered with "What do you want, it works well!" and proved that there was no memory leak.
However, it worked only with POD types. It looks like MS tries to fix the mismatch, as pointed out by Raveline.
What would happen if you added a destructor?
#include <iostream>

class CBlah
{
    static int instCnt;
public:
    CBlah() : m_i(0) { ++instCnt; }
    ~CBlah()
    {
        std::cout << "d-tor on " << instCnt << " instance." << std::endl;
        --instCnt;
    }
private:
    int m_i;
};

int CBlah::instCnt = 0;

int main()
{
    //for(;;)
    {
        CBlah * p = new CBlah[10]; // with []
        delete p;                  // no []
    }
    return 0;
}
Whatever silly "intelligence" fix is added to VS, the code is not portable.
Remember that "works properly" is within the universe of "undefined behavior". It is quite possible for a particular version of a particular compiler to implement this in such a way that it works for all intents and purposes. The important thing to remember is that this is not guaranteed and you can't really ever be sure it's working 100%, and you can't know that it will work with the next version of the compiler. It's also not portable, since another compiler might work in a different fashion.
This works because the particular C++ runtime library it was linked with uses the same heap for both operator new and operator new[]. Many do, but some don't, which is why the practice is not recommended.
The other big difference is that if CBlah had a non-trivial destructor, the delete p; would only call it for the first object in the array, whereas delete[] p; is sure to call it for all the objects.
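To illustrate that last point, here is a small hedged sketch (Tracked is a made-up class) showing that the matching delete[] runs the destructor for every element of the array:

#include <iostream>

struct Tracked {
    static int dtorCalls;
    ~Tracked() { ++dtorCalls; }   // count destructor invocations
};
int Tracked::dtorCalls = 0;

int main() {
    Tracked* p = new Tracked[10];
    delete[] p;   // matching form: the destructor runs for all 10 elements
    std::cout << "destructor calls: " << Tracked::dtorCalls << std::endl;   // prints 10
}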

What happens to class members when malloc is used instead of new?

I'm studying for a final exam and I stumbled upon a curious question that was part of the exam our teacher gave last year to some poor souls. The question goes something like this:
Is the following program correct, or not? If it is, write down what the program outputs. If it's not, write down why.
The program:
#include <iostream.h>

class cls
{
    int x;
public:
    cls() { x = 23; }
    int get_x() { return x; }
};

int main()
{
    cls *p1, *p2;
    p1 = new cls;
    p2 = (cls*)malloc(sizeof(cls));
    int x = p1->get_x() + p2->get_x();
    cout << x;
    return 0;
}
My first instinct was to answer with "the program is not correct, as new should be used instead of malloc". However, after compiling the program and seeing it output 23, I realize that that answer might not be correct.
The problem is that I was expecting p2->get_x() to return some arbitrary number (whatever happened to be in that spot of memory when malloc was called). However, it returned 0. I'm not sure whether this is a coincidence or whether class members are initialized to 0 when the object is malloc-ed.
Is this behavior (p2->x being 0 after malloc) the default? Should I have expected this?
What would your answer to my teacher's question be? (besides forgetting to #include <stdlib.h> for malloc :P)
Is this behavior (p2->x being 0 after malloc) the default? Should I have expected this?
No, p2->x can be anything after the call to malloc. It just happens to be 0 in your test environment.
What would your answer to my teacher's question be? (besides forgetting to #include <stdlib.h> for malloc :P)
What everyone has told you: new combines the call to get memory from the free store with a call to the object's constructor. malloc only does half of that.
Fixing it: while the sample program is wrong, it isn't always wrong to use malloc with classes. It is perfectly valid in a shared-memory situation, for example; you just have to add an in-place (placement) call to new:
p2=(cls*)malloc(sizeof(cls));
new(p2) cls;
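A more complete sketch of that pattern (a generic illustration, not the exam program itself) also needs <new> for placement new and an explicit destructor call before the memory is freed:

#include <cstdlib>
#include <new>
#include <iostream>

class cls {
    int x;
public:
    cls() { x = 23; }
    int get_x() { return x; }
};

int main() {
    // Allocate raw memory, then construct the object in place.
    cls* p2 = static_cast<cls*>(std::malloc(sizeof(cls)));
    if (!p2) return 1;                  // minimal error handling for the sketch
    new (p2) cls;                       // placement new runs the constructor
    std::cout << p2->get_x() << '\n';   // now safely prints 23

    p2->~cls();                         // the destructor must be called explicitly
    std::free(p2);                      // free the raw memory; never delete it
}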
new calls the constructor; malloc will not. So your object will be in an unknown state.
The actual behaviour is unknown, because new acts essentially like malloc plus a constructor call.
In your code, the second part is missing; hence it could appear to work in one case and not in another, but you can't say exactly.
Why can't 0 be an arbitrary number too? Are you running in Debug mode? What compiler?
VC++ pre-fills newly allocated memory with a string of 0xCC byte values (in debug mode, of course), so you would not have obtained a zero for an answer if you were using it.
malloc makes no guarantee to zero out the memory it allocates, and the result of the program is undefined.
Otherwise, there are many other things that keep this program from being correct C++: cout is in namespace std, malloc needs to be included through #include <cstdlib>, and iostream.h isn't standard-compliant either.