In general, what could cause a double free in a program that does not contain any dynamic memory allocation?
To be more precise, none of my code uses dynamic allocation. I'm using the STL, but it's much more likely to be something I did wrong than for it to be a broken implementation of G++/glibc/the STL.
I've searched around trying to find an answer to this, but I wasn't able to find any example of this error being generated without any dynamic memory allocations.
I'd love to share the code that was generating this error, but I'm not permitted to release it and I don't know how to reduce the problem to something small enough to be given here. I'll do my best to describe the gist of what my code was doing.
The error was being thrown when leaving a function, and the stack trace showed that it was coming from the destructor of a std::vector<std::set<std::string>>. Some number of elements in the vector were being initialized by emplace_back(). In a last ditch attempt, I changed it to push_back({{}}) and the problem went away. The problem could also be avoided by setting the environment variable MALLOC_CHECK_=2. By my understanding, that environment variable should have caused glibc to abort with more information rather than cause the error to go away.
This question is only being asked to serve my curiosity, so I'll settle for a shot in the dark answer. The best I have been able to come up with is that it was a compiler bug, but it's always my fault.
In general, what could cause a double free in a program that does not contain any dynamic memory allocation?
Normally this happens when you make a copy of a type which dynamically allocates memory but doesn't follow the rule of three:
struct Type
{
Type() : ptr(new int(3)) { }
~Type() { delete ptr; }
// no copy constructor is defined
// no copy assign operator is defined
private:
int * ptr;
};
void func()
{
{
std::vector<Type> objs;
Type t; // allocates ptr
objs.push_back(t); // makes a copy of t; now t.ptr and objs[0].ptr point to the same memory location
// when this scope finishes, t will be destroyed, its destructor will be called and it will try to delete ptr;
// objs go out of scope, elements in objs will be destroyed, their destructors are called, and delete ptr; will be executed again. That's double free on same pointer.
}
}
I extracted a presentable example showcasing the fault I made that led to the "double free or corruption" runtime error. Note that the struct doesn't explicitly use any dynamic memory allocation; however, internally std::vector does (as its contents can grow to accommodate more items). Therefore, this issue was a bit hard to diagnose, as it doesn't violate the rule of three.
#include <vector>
#include <string.h>
typedef struct message {
std::vector<int> options;
void push(int o) { this->options.push_back(o); }
} message;
int main( int argc, const char* argv[] )
{
message m;
m.push(1);
m.push(2);
message m_copy;
memcpy(&m_copy, &m, sizeof(m));
//m_copy = m; // This is the correct method for copying object instances, it calls the default assignment operator generated 'behind the scenes' by the compiler
}
When main() returns m_copy is destroyed, which calls the std::vector destructor. This tries to delete memory that was already freed when the m object was destroyed.
Ironically, I was actually using memcpy to try and achieve a 'deep copy'. This is where the fault lies in my case. By using the assignment operator, the contents of message.options are copied into newly allocated memory, whereas memcpy only copies the vector object's own members (e.g. its size field and buffer pointer), leaving both objects pointing at the same heap buffer. See Will memcpy or memmove cause problems copying classes?. Obviously this also applies to structs with non-fundamental-typed members (as is the case here).
Maybe you also copied a std::vector incorrectly and saw the same behavior, maybe you didn't. In the end, it was entirely my fault :).
Related
Probably this question was already asked, but I couldn't find it. Please redirect me if you saw something.
Question :
what is the benefit of using :
myClass* pointer;
over
myClass* pointer = new(myClass);
From reading on other topics, I understand that the first option allocates a space on the stack and makes the pointer point to it while the second allocates a space on the heap and make a pointer point to it.
But I read also that the second option is tedious because you have to deallocate the space with delete.
So why would one ever use the second option?
I am kind of a noob so please explain in details.
edit
#include <iostream>
using namespace std;
class Dog
{
public:
void bark()
{
cout << "wouf!!!" << endl;
}
};
int main()
{
Dog* myDog = new(Dog);
myDog->bark();
delete myDog;
return 0;
}
and
#include <iostream>
using namespace std;
class Dog
{
public:
void bark()
{
cout << "wouf!!!" << endl;
}
};
int main()
{
Dog* myDog;
myDog->bark();
return 0;
}
both compile and give me "wouf!!!". So why should I use the "new" keyword?
I understand that the first option allocates a space on the stack and
makes the pointer point to it while the second allocates a space on
the heap and make a pointer point to it.
The above is incorrect -- the first option allocates space for the pointer itself on the stack, but doesn't allocate space for any object for the pointer to point to. That is, the pointer isn't pointing to anything in particular, and thus isn't useful to use (unless/until you set the pointer to point to something).
In particular, it's only pure blind luck that this code appears to "work" at all:
Dog* myDog;
myDog->bark(); // ERROR, calls a method on an invalid pointer!
... the above code is invoking undefined behavior, and in an ideal world it would simply crash, since you are calling a method on an invalid pointer. But C++ compilers typically prefer maximizing efficiency over handling programmer errors gracefully, so they typically don't put in a check for invalid-pointers, and since your bark() method doesn't actually use any data from the Dog object, it is able to execute without any obvious crashing. Try making your bark() method virtual, OTOH, and you will probably see a crash from the above code.
the second allocates a space on the heap and make a pointer point to
it.
That is correct.
But I read also that the second option is tedious because you have to
deallocate the space with delete.
Not only tedious, but error-prone -- it's very easy (in a non-trivial program) to end up with a code path where you forgot to call delete, and then you have a memory leak. Or, alternatively, you could end up calling delete twice on the same pointer, and then you have undefined behavior and likely crashing or data corruption. Neither mistake is much fun to debug.
So why would one ever use the second option.
Traditionally you'd use dynamic allocation when you need the object to remain valid for longer than the scope of the calling code -- for example, if you needed the object to stick around even after the function you created the object in has returned. Contrast that with a stack allocation:
myClass someStackObject;
... in which someStackObject is guaranteed to be destroyed when the calling function returns, which is usually a good thing -- but not if you need someStackObject to remain in existence even after your function has returned.
These days, most people would avoid using raw/C-style pointers entirely, since they are so dangerously error-prone. The modern C++ way to allocate an object on the heap would look like this:
std::shared_ptr<myClass> pointer = std::make_shared<myClass>();
... and this is preferred because it gives you a heap-allocated myClass object whose pointed-to-object will continue to live for as long as there is at least one std::shared_ptr pointing to it (good), but also will automagically be deleted the moment there are no std::shared_ptrs pointing to it (even better, since that means no memory leak and no need to explicitly call delete, which means no potential double-deletes).
I am trying to append a blank object to a list using the push_back method.
main.cpp
vector<FacialMemory> facial_memory;
printf("2\n");
// Add people face memories based on number of sections
for (int i = 0; i < QuadrantDict::getMaxFaceAreas(); i++)
{
printf("i %d\n", i);
FacialMemory n_fm;
facial_memory.push_back(n_fm); // NOTE: Breaks here
}
At the push_back method call, the program crashes with a segmentation fault. I have looked around at similar questions and they point to the solution I have here. I have also tried to pass FacialMemory() into the push_back call but still the same issue.
The FacialMemory class is defined as such:
FacialMemory.h
class FacialMemory
{
private:
vector<FaceData> face_memory;
public:
FacialMemory();
~FacialMemory();
void pushData(FaceData face);
bool isEmpty();
vector<FaceData> getFaces();
FaceData getRecent();
};
Constructor and destructor
FacialMemory::FacialMemory()
{
}
FacialMemory::~FacialMemory()
{
delete[] & face_memory;
}
When you push_back an item into a vector, the item is copied. Sometimes this triggers more work as the vector is resized: its current contents are copied, and the now-copied elements that used to belong to the vector are destroyed. Destruction invokes the destructor.
Unfortunately, FacialMemory's destructor contains a fatal error:
FacialMemory::~FacialMemory()
{
delete[] & face_memory; <<== right here
}
It tries to delete[] data that was not allocated by new[], and whatever is managing the program's memory throws a fit because the expected book-keeping structures that keep track of dynamically allocated storage (memory allocated with new or new[]) were not found or were not correct.
Further, face_memory is a std::vector, an object designed to look after its memory for you. You can create, copy, resize, and delete a vector without any intervention in most cases. The most notable counter case is a vector of pointers where you may have to release the pointed-at data when removing the pointer from the vector.
The solution is to do nothing in the FacialMemory class destructor. In fact, the Rule of Zero recommends that you not have a destructor at all because FacialMemory has no members or resources that require special handling. The compiler will generate the destructor for you with approximately zero chance of a mistake being made.
While reading the link for the Rule of Zero, pay attention to the Rules of Three and Five because they handle the cases where a class does require special handling and outline the minimum handling you should provide.
One reason a segmentation fault occurs is accessing memory that is invalid, and in your program you are deallocating memory (the delete keyword in your destructor) that was never allocated with the new keyword.
Please refer to vector-push-back
Try adding a meaningful copy constructor to the FacialMemory class.
I have tried some interesting code(at least for me !). Here it is.
#include <iostream>
struct myStruct{
int one;
/*Destructor: Program crashes if the below code uncommented*/
/*
~myStruct(){
std::cout<<"des\n";
}
*/
};
struct finalStruct {
int noOfChars;
int noOfStructs;
union {
myStruct *structPtr;
char *charPtr;
}U;
};
int main(){
finalStruct obj;
obj.noOfChars = 2;
obj.noOfStructs = 1;
int bytesToAllocate = sizeof(char)*obj.noOfChars
+ sizeof(myStruct)*obj.noOfStructs;
obj.U.charPtr = new char[bytesToAllocate];
/*Now both the pointers charPtr and structPtr points to same location*/
delete []obj.U.structPtr;
}
I have allocated memory via charPtr and deleted it via structPtr. It crashes when I add a destructor to myStruct; otherwise there are no issues.
What exactly happens here? As I understand it, delete[] will call the destructor as many times as the count given to new[]. Why does it not crash when there is no destructor in myStruct?
First off, storing one member of a union and then reading another in the way you're doing it is Undefined Behaviour, plain and simple. It's just wrong and anything could happen.
That aside, it's quite likely the type pun you're attempting with the union actually works (but remember it's not guaranteed). If that's the case, the following happens:
You allocate an array of bytesToAllocate objects of type char and store the address in the unionised pointer.
Then, you call delete[] on the unionised pointer typed as myStruct*. Which means that it assumes it's an array of myStruct objects, and it will invoke the destructor on each of these objects. However, the array does not contain any myStruct objects, it contains char objects. In fact, the size in bytes of the array is not even a multiple of the size of myStruct! The delete implementation must be thoroughly confused. It probably interprets the first sizeof(myStruct) bytes as one myStruct object and calls the destructor in those bytes. Then, there's less than sizeof(myStruct) bytes left, but there are still some left, so the destructor is called on those incomplete bytes, reaches beyond the array, and hilarity ensues.
Of course, since this is just UB, my guess at the behaviour above could be way off. Plain and simple, you've confused it, so it acts confused.
delete does two things: it calls the destructor and it deallocates the memory.
You allocated the data as one type, but delete it pretending it is another type.
You shouldn't do that. There are many strange things one can do in C/C++; take a look at the IOCCC for more inspiration :-)
A struct in C++ without any member functions and holding only plain old data is itself a POD. No constructor or destructor is run when it is created or deleted -- not even compiler-generated ones -- for performance reasons.
A struct with a user-defined copy-assignment operator, virtual function, or destructor is internally a little more complicated: it may carry a table of member function pointers.
If you allocate the memory block as chars, this table is never initialized. When you then delete the block through the non-POD type, the destructor is called first, and since the destructor's function pointer is uninitialized, execution jumps to some arbitrary block of memory, treating it as the function. That's why it crashes.
It works because myStruct does not have a destructor. [Edit: I now see that you tried that, and it does crash. The interesting question would be why it crashes with that dtor, since the dtor does not access any memory of the object.]
As others said, the second job of delete[] besides potentially calling the elements' dtors (which doesn't happen here, as described) is to free the memory.
That works perfectly in your implementation because typically free-store implementations just allocate a block of memory for the purpose, whose size is kept in a book-keeping location in that very memory. The size (once allocated) is type independent, i.e. it is not derived from the pointer type on free. Cf. How does delete[] "know" the size of the operand array?. The malloc-like, type-agnostic allocator returns the chunk of memory and is happy.
Note that, of course, what you do is bogus; don't do that at home, don't publish it, make people sign non-liability agreements, don't use it in nuclear facilities, and always return int from main().
The problem is that obj.U.structPtr points to a struct, which can have a constructor and destructor.
delete also requires the correct type; otherwise it cannot call the destructor.
So it is illegal to create a char array with new and delete it as a struct pointer.
It would be okay if you used malloc and free, which call no constructors or destructors.
Say I have an AbstractBaseClass and a ConcreteSubclass.
The following code creates the ConcreteSubclass and then disposes of it perfectly fine and without memory leaks:
ConcreteSubclass *test = new ConcreteSubclass(args...);
delete test;
However, as soon as I push this pointer into a vector, I get a memory leak:
std::vector<AbstractBaseClass*> arr;
arr.push_back(new ConcreteSubclass(args...));
delete arr[0];
I get fewer memory leaks with delete arr[0]; than with no delete at all, but still some memory gets leaked.
I did plenty of looking online and this seems to be a well understood problem - I'm deleting the pointer to the memory but not the actual memory itself. So I tried a basic dereference... delete *arr[0]; but that just gives a compiler error. I also tried a whole load of other references and dereferences but each just gives either a compiler error or program crash. I'm just not hitting on the right solution.
Now, I can use a shared_ptr to get the job done without a leak just fine (Boost as I don't have C++11 available to me):
std::vector<boost::shared_ptr<AbstractBaseClass>> arr2;
arr2.push_back(boost::shared_ptr<AbstractBaseClass>(new ConcreteSubclass(args...)));
but I can't get the manual method to work. It's not really important - it's perfectly easy to just use Boost, but I really want to know what I'm doing wrong so I can learn from it rather than just move without finding out my error.
I tried tracing through Boost's shared_ptr templates, but I keep getting lost because each function has so many overloads and I'm finding it very difficult to follow which branch to take each time.
I think it's this one:
template<class Y>
explicit shared_ptr( Y * p ): px( p ), pn() // Y must be complete
{
boost::detail::sp_pointer_construct( this, p, pn );
}
But I keep ending up at checked_array_delete, which can't be right as that uses delete[]. I need to get down to just
template<class T> inline void checked_delete(T * x)
{
// intentionally complex - simplification causes regressions
typedef char type_must_be_complete[ sizeof(T)? 1: -1 ];
(void) sizeof(type_must_be_complete);
delete x;
}
And that's just calling delete, nothing special. Trying to follow through all of the reference and dereference operators was a disaster from the start. So basically I'm stuck, and I couldn't figure out how Boost made it work whilst my code didn't. So back to the start, how can I re-write this code to make the delete not leak?
std::vector<AbstractBaseClass*> arr;
arr.push_back(new ConcreteSubclass(args...));
delete arr[0];
Thank you.
shared_ptr stores a "deleter", a helper function [1] that casts the pointer back to its original type (ConcreteSubclass*) before calling delete. You haven't done this.
If AbstractBaseClass has a virtual destructor, then calling delete arr[0]; is fine and works just as well as delete (ConcreteSubclass*)arr[0];. If it doesn't, then deletion through a base subobject is undefined behavior, and can cause far worse things to happen than memory leaks.
Rule of thumb: Every abstract base class should have a user-declared (explicitly defaulted is ok) destructor that is
virtual
OR
modified by protected: accessibility
or both.
[1] You've found the implementation; it is checked_delete. But it is instantiated with the return type from new -- checked_delete<ConcreteSubclass>(ConcreteSubclass* x). So it is using ordinary delete, but on a pointer of type ConcreteSubclass, and that makes it possible for the compiler to find the right destructor even without the help of virtual dispatch.
I'm trying to answer some past paper questions that I've been given for exam practice but not really sure on these two, any help be greatly appreciated. (Typed code up from image, think it's all right).
Q1: Identify the memory leaks in the C++ code below and explain how to fix them. [9 marks]
#include <string>
class Logger {
public:
static Logger &get_instance () {
static Logger *instance = NULL;
if (!instance){
instance = new Logger();
}
return *instance;
}
void log (std::string const &str){
// ..log string
}
private:
Logger(){
}
Logger(Logger const&) {
}
Logger& operator= (Logger const &) {
}
~Logger() {
}
};
int main(int argcv, char *argv[]){
int *v1 = new int[10];
int *v2 = new int[20];
Logger::get_instance() . log ("Program Started");
// .. do something
delete v1;
delete v2;
return 0;
}
My answer is that if main never finishes executing due to an early return or an exception being thrown that the deletes will never run causing the memory to never be freed.
I've been doing some reading and I believe an auto_ptr would solve the problems? Would this be as simple as changing the lines to the following?
auto_ptr<int> v1 = new int[10];
auto_ptr<int> v2 = new int[20];
v1.release();
delete v1;
Q2: Why do virtual members require more memory than objects of a class without virtual members?
A: Because each virtual member requires a pointer to be stored also in a vtable requiring more space. Although this equates to very little increase in space.
Q1: Note that v1 and v2 are int pointers that refer to an array of 10 and 20, respectively. The delete operator does not match - ie, since it is an array, it should be
delete[] v1;
delete[] v2;
so that the whole array is freed. Remember to always match new[] with delete[], and new with delete.
I believe you're already correct on Q2. The vtable and corresponding pointers that must be kept track of do increase the memory consumption.
Just to summarize:
the shown program has undefined behavior due to using the incorrect form of delete, so talking about leaks for that execution is immaterial
if the previous was fixed, leaks would come from:
new Logger(); // always
the other two new uses, if a subsequent new throws, or the string ctor throws, or the ... part in log throws.
to fix v1 and v2, auto_ptr is no good, as you allocated with new[]. You could use boost::scoped_array, or better make v an array<int, 10>, or at least a vector<int>. And you absolutely don't use release() followed by a manual delete; leave that to the smart pointer.
fixing instance is interesting. What is presented is called the 'leaky singleton', which is supposed to leak the instance but be omnipresent after creation in case something wants to use it during program exit. If that was not intended, instance should not be created with new, but directly, as a local static or a namespace-scope static.
the question is badly phrased, comparing incompatible things. Assuming it is sanitized, the answer is that instances of a class with virtual members are (very likely) to carry an extra pointer to the VMT. Plus the VMT itself has one entry per virtual member after some general overhead. The latter is indeed insignificant, but the former may be an issue, as a class with 1 byte of state may pick up an 8-byte pointer, and possibly another 7 bytes of padding.
Your first answer is correct as far as it goes, but what the examiner was probably looking for is the freeing of Logger *instance.
In the given code, memory for instance is allocated, but never deallocated.
The second answer looks good.
instance is never deleted, and you need to use operator delete[] in main().
Q1:
A few gotchas:
the singleton pattern is very dangerous; for example, it is not thread safe -- two threads could come in and create two instances, causing a memory leak. Surround it with EnterCriticalSection or some other thread-sync mechanism; even then it is unsafe and not recommended.
the singleton class never releases its memory; a singleton should be ref-counted to really behave properly.
you're using a static variable inside the function, even worse than using a static member for the class.
you allocate with new[] and deallocate without delete[]
I suspect your question is two things:
- free the singleton pointer
- use delete[]
In general, however, process cleanup will take care of the dangling allocations on exit.
Q2:
your second answer is right: virtual members require a vtable, which makes the class larger