It is said that memory allocated by new should be freed by delete, but a modern desktop OS will reclaim the memory even if you don't delete it. So why should we delete the memory allocated by new?
Also, assert is known not to call destructors, and it seems like it's widely used in the STL (at least VS2015's implementation does that). If it's advised to delete the memory allocated by new (classes like string, map and vector use their destructors to free the allocated memory), why do developers still use lots of asserts?
Why should we delete the memory allocated by new?
Because otherwise
the memory is leaked. Not leaking memory is absolutely crucial for long-running software such as servers and daemons, because the leaks will accumulate and consume all available memory.
the destructors of the objects will not be called. The logic of the program may depend on the destructors being called; not calling some destructors may cause non-memory resources to be leaked as well (see the sketch below).
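A hedged sketch of both failure modes, assuming a long-running daemon; the class and file names here are made up for illustration:

#include <cstdio>

// `Connection` stands in for any class whose destructor
// releases a non-memory resource.
class Connection {
public:
    explicit Connection(const char* path) : f_(std::fopen(path, "r")) {}
    ~Connection() { if (f_) std::fclose(f_); }  // releases the OS file handle
private:
    std::FILE* f_;
};

void handleRequest() {
    Connection* c = new Connection("data.txt");
    // ... use c ...
    delete c;  // omit this and every request leaks heap memory *and* an open
               // file handle; a daemon hits the per-process handle limit long
               // before it exits and the OS gets a chance to clean up
}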
Also, assert is known not to call destructors
A failed assert terminates the entire process, so it doesn't really matter much whether the logic of the program remains consistent, nor whether memory or other resources are leaked since the process isn't going to reuse those resources anyway.
and it seems like it's widely used in STL (at least VS2015 does that)
To be accurate, I don't think the standard library is specified to use the assert macro. The only situation where it could use it is if you have undefined behaviour. And if you have UB, then leaked memory is the least of your worries.
If you know that the destructor of the object is trivial, and you know that the object is used throughout the program (so, it's essentially a singleton), then it's quite safe to leak the object on purpose. This does have a drawback that it will be detected by a memory leak sanitizer that you would probably want to use to detect accidental memory leaks.
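For the record, a minimal sketch of such an intentional leak, a common way to dodge static-destruction-order problems (the names are arbitrary):

struct Config { int verbosity = 0; };  // trivially destructible

Config& globalConfig() {
    static Config* instance = new Config();  // intentionally never deleted:
    return *instance;                        // lives until process exit and is
}                                            // reclaimed wholesale by the OS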
It is said that memory allocated by new should be freed by delete, but a modern desktop OS will reclaim the memory even if you don't delete it. So why should we delete the memory allocated by new?
Careful! The OS reclaims the memory only after your program has finished. This is not like garbage collection in Java or C#, which frees memory while the program is running.
If you don't delete (or more precisely, if you don't make sure that delete is called by resource-managing classes like std::unique_ptr, std::string or std::vector), then memory usage will continue to grow until you run out of memory.
Not to mention that destructors will not run, which matters if you have objects of types whose destructors perform more than just releasing memory.
Also, assert is known not to call destructors,
More precisely, assert causes the program to terminate in a way that destructors are not called, unless the corresponding translation unit was compiled with the NDEBUG preprocessor macro defined, in which case the assert does nothing.
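A tiny demonstration of both behaviours; compile once plain and once with -DNDEBUG to see the difference:

#include <cassert>
#include <cstdio>

struct Noisy {
    ~Noisy() { std::puts("destructor ran"); }
};

int main() {
    Noisy n;
    assert(2 + 2 == 5);  // without NDEBUG: a diagnostic is printed and abort()
                         // is called, so ~Noisy() never runs;
                         // with NDEBUG: the assert expands to nothing and
                         // "destructor ran" is printed as usual
}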
and it seems like it's widely used in the STL (at least VS2015's implementation does that).
Yes, the standard-library implementation of Visual C++ 2015 does that a lot. You should also use it liberally in your own code to detect bugs.
The C++ standard itself does not specify how and if asserts should appear in the implementation of a standard-library function. It does specify situations where the behaviour of the program is undefined; those often correspond to an assert in the library implementation (which makes sense, because if the behaviour is undefined anyway, then the implementation is free to do anything, so why not use that liberty in a useful way to give you bug detection?).
If it's advised to delete the memory allocated by new (classes like string, map and vector use their destructors to free the allocated memory), why do developers still use lots of asserts?
Because if an assertion fails, then you want your program to terminate immediately because you have detected a bug. If you have detected a bug, then your program is by definition in an unknown state. Allowing your program to continue in an unknown state by running destructors is possibly dangerous and may compromise system integrity and data consistency.
After all, as I said above, destructors may not only call delete a few times. Destructors close files, flush buffers, write into logs, close network connections, clear the screen, join on a thread or commit or rollback database transactions. Destructors can do a lot of things which can modify and possibly corrupt system resources.
It is a common pattern that applications, in the course of their execution, dynamically create objects that will not be used throughout the whole program execution. If an application creates a lot of such objects of temporary lifetime, it somehow has to manage memory in order not to run out of it. Note that memory is still limited, since operating systems usually do not assign all available memory to an application. Operating systems, especially those driving limited devices like mobile phones, may even kill applications once they put too much pressure on memory.
Hence, you should free the memory of those objects that are not used any more, and C++ offers different storage durations to make this handling easier. Objects with automatic storage duration, which is the default, are destroyed once they run out of scope (i.e. when their enclosing block, e.g. the function in which they are defined, finishes). Static objects remain until the end of normal program execution (if reached), and dynamically allocated objects remain until you call delete.
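A minimal sketch of the three storage durations (the names are arbitrary):

#include <string>

std::string g = "static";        // static storage: destroyed at normal exit

void f() {
    std::string local = "auto";  // automatic storage: destroyed at the
                                 // closing brace of f()
    std::string* dyn = new std::string("dynamic");
    // ... use *dyn ...
    delete dyn;                  // dynamic storage: lives until *you* delete it
}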
Note that in no case will any object survive the end of program execution, as the operating system frees the complete application memory. On normal program termination, the destructors of static objects are called (but not those of dynamically created objects that have not been deleted before). On abnormal program termination, as triggered by a failed assert or by the operating system, no destructors are called at all; you can think of such a program as terminating the way it would if you turned off the power. (std::exit is a middle ground: it destroys static objects but not automatic ones.)
If you don't delete, you introduce a memory leak. Each time the operator is invoked without a matching delete, the process wastes some portion of its address space until it ultimately runs out of memory.
After your program finishes you do not need to care about memory leaks, so in principle this would be fine:
int main() {
    int* x = new int(1);
}
However, that's not how one usually uses memory. Often you need to allocate memory for something that you use only for a short time, and then you want to free that memory when you don't need it anymore. Consider this example:
int main() {
    while (someCondition) {
        Foo* x = new Foo();
        doSomething(*x);
    } // <- already here we do not need x anymore
}
That code will accumulate more and more memory for x even though only a single instance of x is ever used at a time. That's why one should free memory at the latest at the end of the scope where it is needed (once you have left the scope you have no way to free it!). Because forgetting a delete isn't nice, one should make use of RAII whenever possible:
int main() {
    while (someCondition) {
        Foo x;
        doSomething(x);
    } // <- memory for x is freed automatically
}
Related
I understand pointer allocation of memory fully, but deallocation only at a higher level. What I'm most curious about is how C++ keeps track of what memory has already been deallocated.
int* ptr = new int;
cout << ptr;
delete ptr;
cout << ptr;
// still pointing to the same place, however it knows you can't access it or delete it again
*ptr;       // BAD
delete ptr; // BAD
How does C++ know I deallocated that memory? If it just turns it to arbitrary garbage binary numbers, wouldn't I just be reading in that garbage number when I dereference the pointer?
Instead, of course, C++ knows that these are segfaults somehow.
C++ does not track memory for you. It doesn't know, it doesn't care. It is up to you: the programmer. (De)allocation is a request to the underlying OS. Or, more precisely, it is a call into the runtime library (libc++ or possibly some other lib) which may or may not access the OS; that is an implementation detail. Either way, the OS (or some other library) tracks which parts of memory are available to you.
When you try to access memory that the OS did not assign to you, the OS will issue a segfault (technically it is raised by the CPU, assuming it supports memory protection; it's a bit more complicated than that). And this is a good situation: that way the OS tells you, hey, you have a bug in your code. Note that the OS doesn't care whether you use C++, C, Rust or anything else. From the OS's perspective everything is machine code.
However, what is worse is that even after delete the memory may still be owned by your process (remember those libs that track memory?). So accessing such a pointer is undefined behaviour; anything can happen, including seemingly correct execution of the code (that's why such bugs are often hard to find).
If it just turns it to arbitrary garbage binary numbers, wouldn't I just be reading in that garbage number when I dereference the pointer?
Who says it turns into garbage? What really happens to the underlying memory (whether the OS reclaims it, whether it is filled with zeros or some garbage, or maybe nothing at all) is none of your concern. All you need to know is that after delete it is no longer safe to use the pointer, even (or especially) when it looks OK.
How does C++ know I deallocated that memory?
When you use a delete expression, "C++ knows" that you deallocated that memory.
If it just turns it to arbitrary garbage binary numbers
C++ doesn't "turn [deallocated memory] to arbitrary garbage binary numbers". C++ merely makes the memory available for other allocations. Changing the state of that memory may be a side effect of some other part of the program using that memory - which it is now free to do.
wouldn't I just be reading in that garbage number when I dereference the pointer?
When you indirect through the pointer, the behaviour of the program is undefined.
Instead, of course, C++ knows that these are segfaults somehow.
This is where your operating system helpfully stepped in. You did something that did not make sense, and the operating system killed the misbehaving process. This is one of the many things that may but might not happen when the behaviour of the program is undefined.
I take it that you wonder what delete actually does. Here it is:
First of all, it destructs the object. If the object has a destructor, it is called, and does whatever it is programmed to do.
delete then proceeds to deallocate the memory itself. This means that the deallocator function (::operator delete() in most cases in C++) typically takes the memory object, and adds it to its own, internal data structures. I.e. it makes sure that the next call to ::operator new() can find the deallocated memory slab. The next new might then reuse that memory slab for other purposes.
The entire management of memory happens by using data structures that you do not see, or need to know that they exist. How an implementation of ::operator new() and ::operator delete() organizes its internal data is strictly and fully up to the implementation. It doesn't concern you.
What concerns you is, that the language standard defines that any access to a memory object is undefined behavior after you have passed it to the delete operator. Undefined behavior does not mean that the memory needs to vanish magically, or that it becomes inaccessible, or that it is filled with garbage. Usually none of these happens immediately, because making the memory inaccessible or filling it with garbage would require explicit action from the CPU, so implementations don't generally touch what's written in the memory. You are just forbidden to make further accesses, because it's now up to system to use the memory for any other purpose it likes.
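A sketch you can run to watch the reuse described above; note that getting the same address back is common implementation behaviour, not something the standard promises:

#include <iostream>

int main() {
    int* a = new int(42);
    std::cout << static_cast<void*>(a) << '\n';
    delete a;             // the block returns to the allocator's free list

    int* b = new int(7);  // on many implementations this reuses the slab just
    std::cout << static_cast<void*>(b) << '\n';  // freed, so the two printed
    delete b;                                    // addresses often match
}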
C++ still has a strong C inheritance when it comes to memory addressing. And C was invented to build an OS (the first version of Unix), where it makes sense to use well-known register addresses or to perform whatever other low-level operation. That means that when you address memory through a pointer, you as the programmer are supposed to know what lies there, and the language just trusts you.
On common implementations, the language requests chunks of memory from the OS for new dynamic objects, and keeps track of used and unused memory block. The goal is to re-use free blocks for new dynamic objects instead of asking the OS for each and every allocation and de-allocation.
Still, for common implementations, nothing changes in a freshly allocated or deallocated block except the pointers maintaining the list of free blocks. AFAIK, few implementations return memory to the OS before the end of the process. But a free block can later be re-used, and that is why, when a careless programmer tries to access a re-used block of memory that used to contain pointers, a SEGFAULT is not far away: the program could try to use arbitrary memory addresses that may not be mapped for the process.
BTW, the only point required by the standard is that accessing an object past its end of life, specifically here using the pointer after the delete expression, invokes undefined behaviour. Said differently, anything can happen, from an immediate crash to normal results, passing through a later crash or abnormal results in unrelated places of the program...
When using dynamically allocated objects in C++ eg:
TGraph* A = new TGraph(...);
One should always delete these because otherwise the objects might still be in memory when
control is handed back to the parent scope. While I can see why this is true for subscopes and subroutines of a program, does the same count for the main scope?
Am I obliged to delete objects that were dynamically built inside main()? The reason why this seems a bit redundant to me is that when main ends, the program also ends, so there is no need to worry about memory leaks.
Most modern OSes reclaim all the memory they allocated to a program (process).
The OS doesn't really understand whether your program leaked memory; it merely takes back what it allocated.
But there are bigger issues at hand than just the memory loss:
Note that if the destructor of the object that you fail to delete performs some non-trivial operation and your program depends on the side effects it produces, then your program falls prey to undefined behavior [Ref 1]. Once that happens, all bets are off and your program may show any behavior.
Also, an OS usually reclaims the allocated memory but not other kinds of resources, so you might leak those resources indirectly. This may include file descriptors, the persistent state of the program, etc.
Hence, it is a good practice to always deallocate all your allocations by calling delete or delete [] before exiting your program.
[Ref 1]C++03 Standard 3.8 Para 4:
"....if there is no explicit call to the destructor or if a delete-expression (5.3.5) is not used to release the storage, the destructor shall not be implicitly called and any program that depends on the side effects produced by the destructor has undefined behavior."
IMO it is best to always call delete properly:
to make it an automatic habit, making it less likely to forget it when it is really needed
to cover cases when non-memory resources (sockets, file handles, ...) need to be freed - these aren't automatically freed by the OS
to cater for future refactoring when the code in question might be moved out of main scope
Yes, you should call delete, at least because it's best practice. If you have important logic in your destructor, that's one extra reason that you should call delete.
Corrected: If the program depends on logic in the destructor, not calling delete explicitly results in undefined behavior.
The reason why this seems a bit redundant to me is that when main ends, the program also ends, so there is no need to worry about memory leaks.
You're right, but consider this: you create a class object which opens a connection to a remote DB. After your program completes, you should tell the DB "I'm done, I'm going to disconnect", but that won't happen if you don't call delete properly.
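A minimal sketch of that scenario; the db_* functions are hypothetical stand-ins for a real database client library:

#include <cstdio>

struct DbHandle { int id; };
DbHandle* db_connect(const char* url) { std::printf("connect %s\n", url); return new DbHandle{1}; }
void db_disconnect(DbHandle* h)       { std::printf("disconnect\n"); delete h; }

class DbConnection {
public:
    explicit DbConnection(const char* url) : h_(db_connect(url)) {}
    ~DbConnection() { db_disconnect(h_); }  // the polite "I'm done" lives here
private:
    DbHandle* h_;
};

int main() {
    DbConnection* db = new DbConnection("db://example");
    // ... run queries ...
    delete db;  // skip this and the server never hears the disconnect; the OS
                // reclaims local memory but doesn't say goodbye on your behalf
}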
It's best practice to deallocate memory that has been allocated. Keep in mind that heap memory is limited, and just allocating without deallocating while your program is running might starve some other program that needs heap, or the program itself if it's some kind of daemon meant to run for a very long time.
Of course memory will be reclaimed by the operating system at the end of the program's execution.
I see you are using ROOT (CMS guy?). I think ROOT takes care of this and cleans up, doesn't it?
Best practices:
Do not use new, use automatic allocation
When dynamic allocation is necessary, use RAII to ensure automatic cleanup
You should never have to write delete in application code.
Here, why are you calling new for TGraph?
TGraph A(...);
works better: less worries!
I know that stack-allocated resources are released in reverse order of their allocation at the end of a function, as part of RAII. I've been working on a project and I allocate a lot of memory with "new" from a library I'm using, and am testing stuff out. I haven't added a shutdown function as a counterpart to the initialise function that does all the dynamic allocation. When you shut down the program, I'm pretty sure there is no memory leak, as the allocated memory should be reclaimed by the operating system, at least by any modern OS, as explained in this question similar to mine: dynamically allocated memory after program termination.
I'm wondering two things:
1: Is there a particular order in which resources are released in this case? Does it have anything to do with your written code (i.e., the order in which you allocated them), or is it completely up to the OS to do its thing?
2: The reason I haven't made a shutdown function to reverse the initialisation is that I tell myself I'm just testing stuff now, I'll do it later. Is there any risk of doing damage to anything by doing what I'm doing? The worst I can imagine is what was mentioned in the answer to the question I linked, namely that the OS fails to reclaim the memory and you end up with a memory leak even after the program exits.
I've followed the Bullet physics library tutorial and initialise a bunch of code like this:
pSolver = new btSequentialImpulseConstraintSolver;
pOverlappingPairCache = new btDbvtBroadphase();
pCollisionConfig = new btDefaultCollisionConfiguration();
pDispatcher = new btCollisionDispatcher(pCollisionConfig);
pDynamicsWorld = new btDiscreteDynamicsWorld(pDispatcher, pOverlappingPairCache, pSolver, pCollisionConfig);
And never call delete on any of it at the moment because, as I said, I'm just testing.
It depends on the resources. Open files will be closed. Memory will be freed. Destructors will not be called. Temporary files created will not be deleted.
There is no risk of having a memory leak after the program exits.
Because programs can crash, there are many mechanisms in place preventing a process from leaking after it has stopped, and leaking usually isn't that bad.
As a matter of fact, if you have a lot of allocations that you don't delete until the end of the program, it can be faster to just have the kernel clean up after you.
However destructors are not run. This mostly causes temporary files to not be deleted.
Also it makes debugging actual memory leaks more difficult.
I suggest using std::unique_ptr<T> and not leaking in the first place.
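To tie those two points together, a hedged sketch of an RAII owner whose destructor removes a temporary file; leak it and the kernel reclaims the memory at exit, but nobody ever removes the file (the class and file names are made up):

#include <cstdio>
#include <memory>

class ScratchFile {
public:
    explicit ScratchFile(const char* path) : path_(path) {
        if (std::FILE* f = std::fopen(path_, "w")) std::fclose(f);
    }
    ~ScratchFile() { std::remove(path_); }  // the cleanup the kernel won't do
private:
    const char* path_;
};

int main() {
    auto scratch = std::make_unique<ScratchFile>("scratch.tmp");
    // ... use the file ...
}   // the unique_ptr runs ~ScratchFile(), so scratch.tmp is removed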
It depends on how the memory is actually allocated and on your host system.
If you are only working with classes that don't override operator new() AND you are using a modern operating system that guarantees memory resources are released when the process exits, then all dynamically allocated memory should be released when your program terminates. There is no guarantee on the order of memory release (e.g. objects will not necessarily be released in the same order as, or in the reverse order of, their construction). The only real risk in this case is associated with bugs in the host operating system that cause resources of programs/processes to be improperly managed (which is a low risk, but not zero, for user programs in modern Windows or unix OSs).
If you are using any classes that override operator new() (i.e. that change how raw memory is allocated in the process of dynamically constructing an object) then the risk depends on how memory is actually being allocated - and what the requirements are for deallocation. For example, if the operator new() uses global or system-wide resources (e.g. mutexes, semaphores, memory that is shared between processes) then there is a risk that your program will not properly release those resources, and then indirectly cause problems for other programs which use the same resources. In practice, depending on the design of such a class, the needed cleanup might be in a destructor, an operator delete() or some combination of the two - but, however it is done, your program will need to explicitly release such objects (e.g. a delete expression that corresponds to the new expression) to ensure the global resources are properly released.
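A hedged sketch of that second case; the global counter is a stand-in for a genuinely shared resource (a semaphore, a cross-process pool slot, ...) that only operator delete() gives back:

#include <cstdio>
#include <cstdlib>
#include <new>

static int g_outstanding = 0;  // imagine: slots claimed in a shared pool

struct Pooled {
    static void* operator new(std::size_t n) {
        ++g_outstanding;               // "acquire" the shared resource
        if (void* p = std::malloc(n)) return p;
        throw std::bad_alloc();
    }
    static void operator delete(void* p) noexcept {
        --g_outstanding;               // "release" it again
        std::free(p);
    }
};

int main() {
    Pooled* p = new Pooled;
    delete p;  // without this the counter stays at 1; for a real shared pool
               // the slot would stay claimed even after this process exits
    std::printf("outstanding: %d\n", g_outstanding);  // prints 0
}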
One risk is that destructors of your dynamically allocated objects will not be invoked. If your program relies on the destructor doing anything other than release dynamically allocated memory (presumably allocated by the class constructor and managed by other member functions) then the additional clean-up actions will not be performed.
If your program will ever be built and run on a host system that doesn't have a modern OS then there is no guarantee that dynamically allocated memory will be reclaimed.
If code in your program will ever be reused in a larger long-running program (e.g. your main() function is renamed, and then called from another program in a loop) then your code may cause that larger program to have a memory leak.
It's fine, since the operating system (unless it's some exotic or ancient OS) will not leak the memory after the process has ended. Same goes for sockets and file handles; they will be closed at process exit. It's not in good style to not clean up after yourself, but if you don't, there's no harm done to the overall OS environment.
However, in your example, it looks to me like the only memory that you would actually need to release yourself is that of pDynamicsWorld, since the others should be cleaned up by the btDiscreteDynamicsWorld instance. You're passing them as constructor arguments, and I suspect they get destroyed automatically when pDynamicsWorld gets destroyed. You should read the documentation to make sure.
However, it's just not in good style (because it's unsafe) to use delete anymore. So instead of using delete to destroy pDynamicsWorld, you can use a unique_ptr, which you can safely create using the std::make_unique function template:
#include <memory>
// ...
// Allocate everything else with 'new' here, as usual.
// ...
// Except for this one, which doesn't seem to be passed to another
// constructor.
auto pDynamicsWorld = std::make_unique<btDiscreteDynamicsWorld>(
pDispatcher, pOverlappingPairCache, pSolver, pCollisionConfig);
Now, pDispatcher, pOverlappingPairCache, pSolver and pCollisionConfig should be destroyed by pDynamicsWorld automatically, and pDynamicsWorld itself will be destroyed automatically when it goes out of scope because it's a unique_ptr.
But, again: Read the documentation of Bullet Physics to check whether the objects you pass as arguments to the constructors of the Bullet Physics classes actually do get cleaned up automatically or not.
In C++, when you make a new variable on the heap like this:
int* a = new int;
you can tell C++ to reclaim the memory by using delete like this:
delete a;
However, when your program closes, does it automatically free the memory that was allocated with new?
Yes, it is automatically reclaimed, but if you intend to write a huge program that makes use of the heap extensively and never call delete anywhere, you are bound to run out of heap memory quickly, which will crash your program.
Therefore, it is a must to carefully manage your memory and free dynamically allocated data with a matching delete for every new (or delete [] if using new []), as soon as you no longer require the said variable.
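A minimal illustration of that pairing rule:

int main() {
    int* one  = new int(5);
    int* many = new int[100];

    delete    one;   // scalar new   pairs with scalar delete
    delete [] many;  // array  new[] pairs with array  delete[]
    // mixing the forms (e.g. `delete many;`) is undefined behaviour
}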
When the process is terminated the memory is reclaimed back by the OS. Of course this argument shouldn't in any case be used to not perform proper memory management by the program.
Don't let people tell you yes. C++ has no concept of an OS, so to say "yes the OS will clean it up" is no longer talking about C++ but about C++ running on some environment, which may not be yours.
That is, if you dynamically allocate something but never free it, you've leaked. Its lifetime can only end once you call delete/delete[] on it. On some OSes (and almost all desktop OSes), memory will be reclaimed (so other programs may use it). But memory is not the same as resources! The OS can free all the memory it wants; if you have some socket connection to close, some file to finish writing to, etc., the OS might not do it. It's important not to let resources leak. I've heard of some embedded platforms that won't even reclaim the memory you haven't freed, resulting in a leak until the platform is reset.
Instead of dynamically allocating things raw (meaning you're the one that has to explicitly delete it), wrap them into automatically allocated (stack allocated) containers; not doing so is considered bad practice, and makes your code extremely messy.
So don't use new T[N], use std::vector<T> v(N);. The latter won't let a resource leak occur. Don't use new T;, use smart_ptr p(new T);. The smart pointer will track the object and delete it when it's no longer used. This is called Scope-Bound Resource Management (SBRM, also known by the dumber name Resource Acquisition Is Initialization, or RAII).
Note there is no single "smart_ptr". You have to pick which one is best. The current standard includes std::auto_ptr, but it's quite unwieldy. (It cannot be used in standard containers.) Your best bet is to use the smart pointers part of Boost, or TR1 if your compiler supports it. Then you get shared_ptr, arguably the most useful smart pointer, but there are many others.
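A short sketch of both suggestions, written with the std::shared_ptr spelling that later became standard (substitute boost::shared_ptr or std::tr1::shared_ptr for the era this answer describes):

#include <memory>
#include <vector>

struct Foo {};

int main() {
    std::vector<int> v(100);          // instead of new int[100]; frees itself

    std::shared_ptr<Foo> p(new Foo);  // instead of Foo* p = new Foo;
    std::shared_ptr<Foo> q = p;       // reference count is now 2
}   // q, p, then v are destroyed; the Foo is deleted with its last owner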
If every pointer to dynamically allocated memory is in an object that will destruct (i.e., not another object that is dynamically allocated), and that object knows to free the memory, that pointer is guaranteed to be freed. This question shouldn't even be a problem, since you should never be in a position to leak.
No, when the program exits ("closes"), the dynamically allocated memory is left as is.
EDIT:
Reading the other answers, I should be more precise. The destructors of dynamically allocated objects will not run but the memory will be reclaimed anyway by any decent OS.
No, it's your responsibility to free it, with a matching delete:
int* a = new int;
delete a;
This excellent answer by Brian R. Bondy details why it's good practice to free the memory allocated by a.
It is important to explicitly call delete because you may have some code in the destructor that you want to execute, like maybe writing some data to a log file. If you let the OS free your memory for you, your code in your destructor will not be executed.
Most operating systems will deallocate the memory when your program ends. But it is good practice to deallocate it yourself and, like I said above, the OS won't call your destructor.
As for calling delete in general: yes, you always want to call delete, or else you will have a memory leak in your program, which will lead to new allocations failing.
When your process terminates, the OS does regain control of all resources the process was using, including memory. However, that of course will not cause C++'s destructors to necessarily run, so it's not a panacea for failing to explicitly free said resources (though it won't be a problem for int or other types with no-op destructors, of course ;-).