Ignoring programming style and design, is it "safe" to call delete on a variable allocated on the stack?
For example:
int nAmount;
delete &nAmount;
or
class sample
{
public:
sample();
~sample() { delete &nAmount;}
int nAmount;
};
No, it is not safe to call delete on a stack-allocated variable. You should only call delete on things created by new.
For each malloc or calloc, there should be exactly one free.
For each new there should be exactly one delete.
For each new[] there should be exactly one delete[].
For each stack allocation, there should be no explicit freeing or deletion. The destructor is called automatically, where applicable.
In general, you cannot mix and match any of these, e.g. no free-ing or delete[]-ing a new object. Doing so results in undefined behavior.
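A minimal sketch of those pairings (the Widget type is just illustrative):

#include <cstdlib>

struct Widget { int value = 0; };

int main()
{
    void* raw = std::malloc(64);   // malloc ...
    std::free(raw);                // ... pairs with free

    Widget* one = new Widget;      // new ...
    delete one;                    // ... pairs with delete

    Widget* many = new Widget[4];  // new[] ...
    delete[] many;                 // ... pairs with delete[]

    Widget local;                  // stack allocation: no explicit freeing;
    (void)local;                   // the destructor runs automatically at scope exit
}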
Well, let's try it:
jeremy@jeremy-desktop:~$ echo 'int main() { int a; delete &a; }' > test.cpp
jeremy@jeremy-desktop:~$ g++ -o test test.cpp
jeremy@jeremy-desktop:~$ ./test
Segmentation fault
So apparently it is not safe at all.
Keep in mind that when you allocate a block of memory using new (or malloc for that matter), the actual block of memory allocated will be larger than what you asked for.
The memory block will also contain some bookkeeping information so that when you free the block, it can easily be put back into the free pool and possibly be coalesced with adjacent free blocks.
When you try to free any memory that you didn't receive from new, that bookkeeping information won't be there, but the system will act as if it is, and the results are going to be unpredictable (usually bad).
Yes, it is undefined behavior: passing to delete anything that did not come from new is UB:
C++ standard, section 3.7.3.2.3:
The value of the first argument supplied to one of the deallocation functions provided in the standard library may be a null pointer value; if so, and if the deallocation function is one supplied in the standard library, the call to the deallocation function has no effect. Otherwise, the value supplied to operator delete(void*) in the standard library shall be one of the values returned by a previous invocation of either operator new(std::size_t) or operator new(std::size_t, const std::nothrow_t&) in the standard library.
The consequences of undefined behavior are, well, undefined. "Nothing happens" is as valid a consequence as anything else. However, it's usually "nothing happens right away": deallocating an invalid memory block may have severe consequences in subsequent calls to the allocator.
After playing a bit with g++ 4.4 on Windows, I got very interesting results:
calling delete on a stack variable doesn't seem to do anything. No errors are thrown, and I can still access the variable without problems after the deletion.
having a class with a method containing delete this successfully deletes the object if it is allocated on the heap, but not if it is allocated on the stack (if it is on the stack, nothing happens).
Nobody can know what happens. This invokes undefined behavior, so literally anything can happen. Don't do this.
No.
Memory allocated with new should be released with the delete operator, and memory allocated with malloc should be released with free.
There is no need to deallocate variables that are allocated on the stack.
An angel loses its wings... You can only call delete on a pointer allocated with new, otherwise you get undefined behavior.
Here the memory is allocated on the stack, so there is no need to delete it explicitly. But if you had allocated it dynamically,
like
int *a = new int();
then you have to do delete a and not delete &a (a itself is a pointer), because the memory is allocated from the free store.
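A complete sketch of that difference (variable names are illustrative):

int main()
{
    int* a = new int(5);  // dynamically allocated from the free store
    delete a;             // correct: delete a, not delete &a (a is already the pointer)

    int b = 5;            // stack allocated: never deleted explicitly,
    (void)b;              // it simply goes away at the end of the scope
    return 0;
}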
You already answered the question yourself. delete must only be used for pointers obtained through new. Doing anything else is plain and simple undefined behaviour.
Therefore there is really no saying what happens; anything from the code working fine, through crashing, to erasing your hard drive is a valid outcome of doing this. So please never do this.
It's UB because you must not call delete on an item that has not been dynamically allocated with new. It's that simple.
Motivation: I have two objects, A and B. I know that A has to be instantiated before B, maybe because B needs information calculated by A. Yet, I want to destruct A before B. Maybe I am writing an integration test, and I want server A to shut-down first. How do I accomplish that?
A a{};
B b{a.port()};
// delete A, how?
Solution: Don't allocate A on the stack. Instead, use std::make_unique and keep a stack-allocated smart pointer to a heap-allocated instance of A. That way is the least messy option, IMO.
auto a = std::make_unique<A>();
B b{a->port()};
// ...
a.reset();
Alternatively, I considered moving the destruction logic out of A's destructor and calling that method explicitly myself. The destructor would then call it only if it has not been called previously.
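A rough sketch of that alternative, using a hypothetical shutdown() method guarded by a flag (names and details are mine, not from the original code):

class A {
public:
    int port() const { return port_; }

    void shutdown() {              // destruction logic moved out of the destructor
        if (shut_down_) return;    // safe to call more than once
        // ... stop the server, release sockets, etc. ...
        shut_down_ = true;
    }

    ~A() { shutdown(); }           // only does the work if it hasn't been done already

private:
    int  port_ = 8080;
    bool shut_down_ = false;
};

// Usage:
//   A a{};
//   B b{a.port()};
//   a.shutdown();   // tear A down before b, with both still on the stack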
Related
I came across an article on new / operator new:
The many faces of operator new in C++
I couldn't understand the following example:
int main(int argc, const char* argv[])
{
char mem[sizeof(int)];
int* iptr2 = new (mem) int;
delete iptr2; // Whoops, segmentation fault!
return 0;
}
Here, the memory for int wasn't allocated using new, hence the segfault for delete.
What exactly does delete not like here? Is there some additional structure hidden when an object is initialized with new, that delete looks for?
EDIT:
I'll take out of comments several responses which helped me to understand the situation better:
As #463035818_is_not_a_number and #HolyBlackCat pointed out, mem was allocated on the stack, while delete tries to free memory on the heap. It's a pretty clear cut error and should lead to a segfault on most architectures.
If mem was allocated on the heap without an appropriate new:
The only way to do it I know would be, say, to allocate an int* on a heap, then reinterpret_cast it to an array of chars and give delete a char pointer. On a couple of architectures I tried this on, it actually works, but leads to a memory leak. In general, the C++ standard makes no guarantees in this case, because doing so would make binding assumption on the underlying architecture.
delete deletes objects from the heap. Your object is on the stack, not on the heap.
new (new T, not the placement-new you used) does two things: allocates heap memory (similar to malloc()), then performs initialization (for classes, calls a constructor).
Placement-new (what you used) performs initialization in existing memory, it doesn't allocate its own.
And delete does two things: calls the destructor (for class types), then frees the heap memory (similar to free()).
Since your object is not on the heap, delete can't delete its memory.
There's no "placement-delete" that only calls the destructor. Instead, we have manual destructor calls:
If you had a class type, you'd do iptr2->MyClass::~MyClass(); to call the destructor. And freeing the memory is then unnecessary since stack memory is automatically deallocated when leaving the current scope.
Also note that you forgot alignas(int) on your char array.
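Putting those pieces together, a corrected version of the example might look like this (MyClass is illustrative):

#include <new>       // placement new
#include <string>

struct MyClass {
    std::string name{"example"};
};

int main()
{
    alignas(MyClass) char mem[sizeof(MyClass)];  // suitably aligned stack buffer

    MyClass* p = new (mem) MyClass;  // placement new: construct only, no allocation
    // ... use *p ...
    p->~MyClass();                   // manual destructor call; do NOT delete p

    return 0;                        // the buffer itself is reclaimed automatically
}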
What exactly does delete not like here?
The fact that it was called for a pointer that did not come from a real new.
Is there some additional structure hidden when an object is initialized with new, that it looks for?
That is a near certainty for your C++ implementation, but it's completely immaterial. This is undefined behavior, full stop. delete is defined only for pointers to objects that were created with a non-placement new operator; anything else is undefined behavior. The "your C++ implementation" part is the important distinction: a different C++ compiler or operating system might produce code that doesn't crash and does nothing at all. Or it may draw a funny face on your monitor screen, or play a tune that you hate on your speakers. That is what "undefined behavior" means. In your case, it happened to mean a crash.
You can only delete what has been allocated via new.
As explained in the article, placement new skips the allocation:
Calling placement new directly skips the first step of object allocation. We don't ask for memory from the OS. Rather, we tell it where there's memory to construct the object in [3]. The following code sample should clarify this:
You cannot delete mem because it has not been allocated via new. mem has automatic storage duration and gets freed when main returns.
Placement new in your code creates an int in already allocated memory. If int had a destructor you would need to call the destructor (but not deallocate the memory). Placing the int in the memory of mem does not change the fact that mem is allocated on the stack.
Actually, the placement new is not that relevant to the issue in the code. This code
int main(int argc, const char* argv[])
{
char mem[sizeof(int)];
int* iptr2 = reinterpret_cast<int*>(mem); // no placement new involved
delete iptr2; // Whoops, undefined behavior!
return 0;
}
has undefined behavior just as your code does.
There are several flavours of new and delete.
The (non-placement) new operator allocates memory by calling the operator new() function. The delete operator frees memory by calling the operator delete() function. They are a pair.
(Confused yet? Read this, or maybe this).
The new[] operator allocates memory by calling the operator new[]() function. The delete[] operator frees memory by calling the operator delete[]() function. They are a different pair.
The placement new operator does not allocate memory and does not call any kind of operator new()-like function. There is no corresponding delete operator or operator delete()-like function.
You cannot mix new of one flavour with delete of a different flavour, it makes no sense and the behaviour is undefined.
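A small sketch (my own, not from the answer) of a class-specific pair, to make the operator-vs-function distinction concrete:

#include <cstdio>
#include <cstdlib>
#include <new>

struct Gadget {
    // The new *operator* obtains memory by calling this operator new() *function* ...
    static void* operator new(std::size_t n) {
        std::printf("Gadget::operator new(%zu)\n", n);
        if (void* p = std::malloc(n)) return p;
        throw std::bad_alloc{};
    }
    // ... and the delete *operator* releases it via this operator delete() function.
    static void operator delete(void* p) noexcept {
        std::printf("Gadget::operator delete\n");
        std::free(p);
    }
};

int main()
{
    Gadget* g = new Gadget;  // operator new() function, then the constructor
    delete g;                // destructor, then the operator delete() function
}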
Considering the following:
tbModelHFrame = new TbModelHeaderFrame(this, storage->getDataBase());
I guess the correct way to delete tbModelHFrame memory will be
delete tbModelHFrame;
Right?
How do I check that the memory was really released?
How do I check that the memory was really released?
You don't.
C++ has no means of telling whether a pointer points to a valid object or a random region in memory. The latter includes a region that was valid at some point, but has been deleted since.
It is up to the developer to organize their code in a way that this cannot happen.
The only guarantee that the language gives you to help you out here is that a delete call never fails. So if you call delete once on the object, you can be reasonably sure that the object was destroyed properly and the memory was released. Just don't attempt to access it again afterwards, or you'll be in trouble.
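One common defensive habit (my suggestion, not part of the answer above) is to null the pointer immediately after deleting, so that an accidental second delete is a harmless no-op:

struct Frame { /* ... */ };

int main()
{
    Frame* frame = new Frame;
    // ... use frame ...
    delete frame;       // destructor runs, memory goes back to the free store
    frame = nullptr;    // a later `delete frame;` now does nothing
}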
Yes, what is allocated with new should be freed with delete.
A way to check that every dynamically allocated block has been freed is to use Valgrind's Memcheck.
Anyway, it is usually safer to use smart pointers (See here).
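For example, a minimal std::unique_ptr version (my sketch, with a simplified stand-in for the real class) removes the need for a manual delete entirely:

#include <memory>

struct TbModelHeaderFrame { /* constructor arguments omitted for brevity */ };

int main()
{
    auto frame = std::make_unique<TbModelHeaderFrame>();
    // ... use frame->... as before ...
}   // released here automatically; Valgrind's Memcheck should report no leak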
According to the delete operator reference:
[...] In all cases, if ptr is a null pointer, the standard library deallocation functions do nothing. If the pointer passed to the standard library deallocation function was not obtained from the corresponding standard library allocation function, the behavior is undefined.
After the standard library deallocation function returns, all pointers referring to any part of the deallocated storage become invalid. Any use of a pointer that became invalid in this manner, even copying the pointer value into another variable, is undefined behavior (until C++14). Indirection through a pointer that became invalid in this manner and passing it to a deallocation function (double-delete) is undefined behavior. Any other use is implementation-defined.
Thus, if the pointer you delete was not obtained from the corresponding allocation function, the behavior is undefined.
The premise of the question is wrong: if delete doesn't release the memory, your heap is corrupted and your application can already do anything, including formatting your hard drive. So you've got bigger problems than a mere delete being a no-op if it comes to that. Don't worry about it: as long as you haven't messed up your heap due to memory errors in your own code, you'll be fine.
In any case, you should not use naked pointers as owning pointers. This is C++, not C.
Use a smart pointer:
QScopedPointer<TbModelHeaderFrame> tbModelHFrame(
new TbModelHeaderFrame(this, storage->getDataBase())
);
...
tbModelHFrame->something(); // do something with it
And that's it. The memory will be released when the pointer goes out of scope. You don't have to worry about it.
The pointer can also be a class member:
class Foo {
QScopedPointer<TbModelHeaderFrame> m_modelHFrame;
...
};
Foo::Foo() :
m_modelHFrame(new TbModelHeaderFrame(this, storage->getDataBase())) {
...
}
or
Foo::Foo() : ... {
m_modelHFrame.reset(new TbModelHeaderFrame(this, storage->getDataBase()));
...
}
Modern C++ code should be designed not to use manual memory management except where absolutely necessary, for well-understood reasons. In most cases, naked pointers and manual memory management in modern C++ are a sign of bad design, not necessity.
TL;DR: Modern C++/Qt code can and should read a bit like Python :)
Possible Duplicate:
How does delete[] “know” the size of the operand array?
How does the delete in C++ know how many memory locations to delete
I know it's a rather simple question, but I am not sure about the difference (if any) between these lines:
double * a = new double[100];
delete[] a;
delete a;
free ((void*)a);
First off, would all of these calls (used each without the others) work the same way and free sizeof(double)*100 bytes?
Which leads me to the second question: how does the program keep track of the size of the allocated memory? For instance, if I pass my a pointer to a function and then delete[] that pointer from within the function, would it free the same amount of memory?
Thanks
The difference, oversimplified, is this:
delete[] a;
Is correct. All others are incorrect, and will exhibit Undefined Behavior.
Now, in reality, on all the compilers I use daily, delete a; will do the right thing every time. But you should still not do it. Undefined Behavior is never correct.
The free call will also probably do the right thing in the real world, but only because the thing you're freeing doesn't have a non-default destructor. If you tried to free something that was a class with a destructor, for example, it definitely wouldn't work -- the destructor would never be called.
That's one of the big differences (not the only difference) between new/delete and malloc/free -- the former call the constructors and destructors, while the latter merely allocate and deallocate space.
Incorporating something #Rob said in his now-deleted post:
The simple rule is this: every new[] requires exactly one delete[]. Every new requires exactly one delete. malloc requires free. No mix-and-match is allowed.
As to the question of how delete[] knows how many elements to delete, please see this response to a previous duplicate question.
Someone correct me, if I'm wrong, but as far as I understand, you use delete when you previously allocated memory with new and delete[] after using new Type[]. And free is used when you have allocated memory using malloc.
See c++ reference on delete and free.
Regarding the array of doubles, the result of all forms is the same -- all the allocated memory is returned to the system. The difference in calling free vs. delete vs. delete[] is:
free only releases the memory (the size of the block allocated for a was stored by the memory manager when new was called)
delete calls the destructor of allocated object before releasing the memory
delete[] calls the destructor of each element in the array before releasing the memory
The difference is important if destructor of the allocated object contains cleanup code which releases other memory or system resource such as file or socket descriptor allocated during the lifetime of an object.
It is a good habit in C++ to always use delete on single instances and delete[] on arrays of objects/primitives, regardless of the content of the destructor.
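A short sketch (illustrative type, not from the answer) that makes the destructor difference visible:

#include <cstdio>

struct Logger {
    ~Logger() { std::puts("~Logger"); }
};

int main()
{
    Logger* arr = new Logger[3];
    delete[] arr;   // prints "~Logger" three times, then releases the block

    Logger* one = new Logger;
    delete one;     // prints "~Logger" once

    // Using plain delete or free() on an array from new[] would run at most one
    // destructor (or none at all) and is undefined behavior.
}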
Someone on IRC claimed that, although allocating with new[] and deleting with delete (not delete[]) is UB, on Linux platforms (no further details about the OS) it would be safe.
Is this true? Is it guaranteed? Is it to do with something in POSIX that specifies that dynamically-allocated blocks should not have metadata at the start?
Or is it just completely untrue?
Yes, I know I shouldn't do it. I never would. I am curious about the veracity of this idea; that's it!
By "safe", I mean: "will not cause behaviour other than were the original allocation performed by new, or were the de-allocation performed by delete[]". This means that we might see 1 "element" destruction or n, but no crashing.
Of course it's not true. That person is mixing up several different concerns:
how does the OS handle allocations/deallocations
correct calls to constructors and destructors
UB means UB
On the first point, I'm sure he's correct. It is common to handle both in the same way on that level: it is simply a request for X bytes, or a request to release the allocation starting at address X. It doesn't really matter if it's an array or not.
On the second point, everything falls apart. new[] calls the constructor for each element in the allocated array. delete calls the destructor for the one element at the specified address. And so, if you allocate an array of objects, and free it with delete, only one element will have its destructor invoked. (This is easy to forget because people invariably test this with arrays of ints, in which case this difference is unnoticeable)
And then there's the third point, the catch-all. It's UB, and that means it's UB. The compiler may make optimizations based on the assumption that your code does not exhibit any undefined behavior. If it does, it may break some of these assumptions, and seemingly unrelated code might break.
Even if it happens to be safe on some environment, don't do it. There's no reason to want to do it.
Even if it did return the right memory to the OS, the destructors wouldn't be called properly.
It's definitely not true for all or even most Linuxes, your IRC friend is talking bollocks.
POSIX has nothing to do with C++. In general, this is unsafe. If it works anywhere, it's because of the compiler and library, not the OS.
This question discusses in great details when exactly mixing new[] and delete looks safe (no observable problems) on Visual C++. I suppose that by "on Linux" you actually mean "with gcc" and I've observed very similar results with gcc on ideone.com.
Please note that this requires:
global operator new() and operator new[]() functions to be implemented identically and
the compiler optimizing away the "prepend with number of elements" allocation overhead
and also only works for types with trivial destructors.
Even with these requirements met there's no guarantee it will work on a specific version of a specific compiler. You'll be much better off simply not doing that - relying on undefined behavior is a very bad idea.
It is definitely not safe as you can simply try out with the following code:
#include<iostream>
class test {
public:
test(){ std::cout << "Constructor" << std::endl; }
~test(){ std::cout << "Destructor" << std::endl; }
};
int main() {
test * t = new test[ 10 ];
delete t;
return 1;
}
Have a look at http://ideone.com/b8BiQ . It fails miserably.
It may work when you do not use classes, but only fundamental types, but even that is not guaranteed.
EDIT: Some explanations for those of you who want to know why this crashes:
new and delete mainly serve as wrappers around malloc(), hence calling free() on a newed pointer is most of the time "safe" (remember to call the destructor), but you should not rely on it. For new[] and delete[] however the situation is more complicated.
When an array of class objects is constructed using new[], each default constructor is called in turn. When you do delete[], each destructor is called. However, each destructor also has to be supplied a this pointer as a hidden parameter. So before calling the destructors, the program has to find the locations of all the objects within the reserved memory, to pass those locations as this pointers to the destructors. So the information needed to reconstruct those locations later has to be stored somewhere.
Now the easiest way would be to have a global map somewhere around, which stores this information for all new[]ed pointers. In that case, if delete is called instead of delete[], only one of the destructors would be called and the entry would not be removed from the map. However, this method is usually not used, because maps are slow and memory management should be as fast as possible.
Hence for libstdc++ a different solution is used. Since only a few bytes are needed as additional information, it is fastest to just over-allocate by those few bytes, store the information at the beginning of the memory, and return the pointer to the memory after the bookkeeping. So if you allocate an array of 10 objects of 10 bytes each, the program will allocate 100+X bytes, where X is the size of the data needed to reconstruct the this pointers.
So in this case it looks something like this
| Bookkeeping | First Object | Second Object | ...
^             ^
|             This is what is returned by new[]
|
This is what is returned by malloc()
So in case you pass the pointer you received from new[] to delete[], it will call all the destructors, then subtract X from the pointer and hand that one to free(). However, if you call delete instead, it will call the destructor for the first object and then immediately pass that pointer to free(), which means free() has just been handed a pointer which was never returned by malloc(), so the result is UB.
Have a look at http://ideone.com/tIiMw to see what gets passed to delete and delete[]. As you can see, the pointer returned from new[] is not the pointer which was allocated inside; 4 is added to it before it is returned to main(). When calling delete[] correctly, the same four is subtracted and we get the correct pointer within delete[]; however, this subtraction is missing when calling delete and we get the wrong pointer.
In case of calling new[] on a fundamental type, the compiler immediately knows that it will not have to call any destructors later, and it just optimizes the bookkeeping away. However, it is definitely allowed to write bookkeeping even for fundamental types, and it is also allowed to add bookkeeping in case you call new.
This bookkeeping in front of the real pointer is actually a very good trick, in case you ever need to write your own memory allocation routines as a replacement for new and delete. There is hardly any limit on what you can store there, so one should never assume that anything returned from new or new[] was actually returned from malloc().
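A toy illustration of that trick (purely hypothetical layout, not how any particular library actually does it; alignment handling is omitted for brevity): over-allocate, stash the element count in front, and hand the caller the address just past it:

#include <cstdlib>
#include <cstring>
#include <new>

// Toy bookkeeping: store the element count in front of the block we hand out.
void* allocate_with_count(std::size_t count, std::size_t elem_size)
{
    std::size_t header = sizeof(std::size_t);
    void* raw = std::malloc(header + count * elem_size);
    if (!raw) throw std::bad_alloc{};
    std::memcpy(raw, &count, sizeof count);            // bookkeeping at the very front
    return static_cast<char*>(raw) + header;           // caller only sees what follows it
}

void deallocate_with_count(void* user_ptr)
{
    char* raw = static_cast<char*>(user_ptr) - sizeof(std::size_t);
    std::size_t count;
    std::memcpy(&count, raw, sizeof count);
    // A real delete[] would run `count` destructors here before freeing.
    std::free(raw);                                    // free the original malloc'ed block
}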
I expect that new[] and delete[] just boil down to malloc() and free() under Linux (gcc, glibc, libstdc++), except that the con(de)structors get called. The same for new and delete except that the con(de)structors get called differently. This means that if his constructors and destructors don't matter, then he can probably get away with it. But why try?
I am working on modifying a relatively large C++ program, where unfortunately it is not always clear whether someone before me used C or C++ syntax (this is in the electrical engineering department at a university, and we EEs are always tempted to use C for everything, and unfortunately in this case, people can actually get away with it).
However, if someone creates an object:
Packet* thePacket = new Packet();
Does it matter whether it is destroyed with delete thePacket; or free(thePacket); ?
I realize that delete calls the destructor while free() does not, but Packet does not have a destructor. I am having a terrible time stuck in a memory management swamp here and I'm thinking this may be one of the many problems.
Yes it does matter.
For memory obtained using new you must use delete.
For memory obtained using malloc you must use free.
new and malloc may use different data structures internally to keep track of what and where they have allocated memory. So in order to free memory, you have to call the corresponding function that knows about those data structures. It is, however, generally a bad idea to mix these two types of memory allocation in a piece of code.
If you call free(), the destructor doesn't get called.
Also, there's no guarantee that new and free operate on the same heap.
You can also override new and delete to operate specially on a particular class. If you do so, but call free() instead of the custom delete, then you miss whatever special behavior you had written into delete. (But you probably wouldn't be asking this question if you had done that, because you'd know what behaviors you were missing..)
Packet has a destructor, even if you haven't explicitly declared one. It has a default destructor. The default destructor probably doesn't actually do much, but you can't count on that being the case. It's up to the compiler what it does.
new and malloc also may have wildly different implementations. For example, delete is always called in a context where it has perfect information about the size of the data structure it's deleting at compile time. free does not have this luxury. It's possible that the allocator that new is using may not store the bytes at the beginning of the memory area stating how many bytes it occupies. This would lead free to do entirely the wrong thing and crash your program when freeing something allocated with new.
Personally, if getting people to do the right thing or fixing the code yourself is completely impossible, I would declare my own global operator new that called malloc so then free would definitely not crash, even though it would still not call the destructor and be generally really ugly.
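Roughly what that global replacement could look like (a sketch under that assumption; free() on such a pointer then at least releases the right block, though it still never runs the destructor):

#include <cstdlib>
#include <new>

// Route all plain new through malloc() so that a stray free() hits the same allocator.
void* operator new(std::size_t size)
{
    if (void* p = std::malloc(size)) return p;
    throw std::bad_alloc{};
}

void operator delete(void* p) noexcept
{
    std::free(p);
}

// operator new[] / operator delete[] would normally be replaced in the same way.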
In short, it is as bad as undefined behavior.
This is quite self-explanatory.
C Standard (§7.20.3.2/2) - "The free function causes the space pointed to by ptr to be deallocated, that is, made available for further allocation. If ptr is a null pointer, no action occurs. Otherwise, if the argument does not match a pointer earlier returned by the calloc, malloc, or realloc function, or if the space has been deallocated by a call to free or realloc, the behavior is undefined."
You are absolutely right, it is NOT correct. As you said yourself, free won't call the destructor. Even if Packet doesn't declare an explicit destructor, it still has an implicitly-defined (compiler-generated) one.
Using free on an object created with new releases only the object's own storage, much like a shallow copy; tearing down anything the object owns NEEDS the destructor.
Also, I'm not sure objects created with new() are on the same memory map as malloc()'d memory. They are not guaranteed to be, I think.
if someone creates an object:
Packet* thePacket = new Packet();
Does it matter whether it is destroyed with delete thePacket; or free(thePacket); ?
Yes, it does matter. free(thePacket) would invoke Undefined Behaviour but delete thePacket would not, and we all know Undefined Behaviour may have disastrous consequences.