I know that stack-allocated resources are released in reverse order of their allocation at the end of a function, as part of RAII. I've been working on a project where I allocate a lot of memory with "new" through a library I'm using, and I'm just testing things out. I haven't added a shutdown function as a counterpart to the initialise function that does all the dynamic allocation. When the program shuts down, I'm pretty sure there is no memory leak, since the allocated memory should be reclaimed by the operating system, at least on any modern OS, as explained in this question similar to mine: dynamically allocated memory after program termination.
I'm wondering two things:
1: Is there a particular order in which resources are released in this case? Does it depend on your code (i.e., the order in which you allocated them), or is it completely up to the OS to do its thing?
2: The reason I haven't written a shutdown function to reverse the initialisation is that I tell myself I'm just testing now and will do it later. Is there any risk of doing damage by leaving things this way? The worst I can imagine is what was mentioned in the answer to the question I linked: that the OS fails to reclaim the memory and you end up with a memory leak even after the program exits.
I've followed the Bullet physics library tutorial and initialise a bunch of objects like this:
pSolver = new btSequentialImpulseConstraintSolver;
pOverlappingPairCache = new btDbvtBroadphase();
pCollisionConfig = new btDefaultCollisionConfiguration();
pDispatcher = new btCollisionDispatcher(pCollisionConfig);
pDynamicsWorld = new btDiscreteDynamicsWorld(pDispatcher, pOverlappingPairCache, pSolver, pCollisionConfig);
And never call delete on any of it at the moment because, as I said, I'm just testing.
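For reference, here is roughly what the shutdown counterpart I keep postponing would look like, a minimal sketch that deletes everything in reverse order of creation (Bullet's own HelloWorld example does much the same; I'd check the Bullet docs before relying on the exact order):

void shutdownPhysics()
{
    // Reverse order of creation: the world goes first, since it holds
    // pointers to the dispatcher, broadphase, solver and config.
    delete pDynamicsWorld;
    delete pDispatcher;
    delete pCollisionConfig;
    delete pOverlappingPairCache;
    delete pSolver;
}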
It depends on the resources. Open files will be closed. Memory will be freed. Destructors will not be called. Temporary files created will not be deleted.
There is no risk of having a memory leak after the program exits.
Because programs can crash, there are many mechanisms in place that prevent a process from leaking after it has stopped, and leaking at exit usually isn't that bad.
As a matter of fact, if you have a lot of allocations that you don't delete until the end of the program, it can be faster to just have the kernel clean up after you.
However, destructors are not run. This mostly means temporary files will not be deleted.
It also makes debugging actual memory leaks more difficult.
I suggest using std::unique_ptr<T> and not leaking in the first place.
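A minimal sketch of what that looks like, assuming some type Foo of your own:

#include <memory>

struct Foo {};

int main()
{
    auto p = std::make_unique<Foo>();  // no delete needed: freed when p goes out of scope
}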
It depends on how the memory is actually allocated and on your host system.
If you are only working with classes that don't override operator new(), AND you are using a modern operating system that guarantees memory resources are released when the process exits, then all dynamically allocated memory should be released when your program terminates. There is no guarantee on the order of memory release (e.g. objects will not be released in the same order, or in reverse order, of their construction). The only real risk in this case comes from bugs in the host operating system that cause the resources of programs/processes to be improperly managed (a low, but not zero, risk for user programs on modern Windows or Unix OSs).
If you are using any classes that override operator new() (i.e. that change how raw memory is allocated in the process of dynamically constructing an object) then the risk depends on how memory is actually being allocated - and what the requirements are for deallocation. For example, if the operator new() uses global or system-wide resources (e.g. mutexes, semaphores, memory that is shared between processes) then there is a risk that your program will not properly release those resources, and then indirectly cause problems for other programs which use the same resources. In practice, depending on the design of such a class, the needed cleanup might be in a destructor, an operator delete() or some combination of the two - but, however it is done, your program will need to explicitly release such objects (e.g. a delete expression that corresponds to the new expression) to ensure the global resources are properly released.
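As a contrived, hedged illustration (the "global resource" here is simulated with a counter; in real code it might be interprocess shared memory or a named semaphore):

#include <cstddef>
#include <cstdlib>
#include <new>

struct TrackedObject
{
    static int live;  // stands in for a global or system-wide resource

    static void* operator new(std::size_t n)
    {
        void* p = std::malloc(n);
        if (!p) throw std::bad_alloc{};
        ++live;            // "acquire" the shared resource
        return p;
    }
    static void operator delete(void* p) noexcept
    {
        std::free(p);
        --live;            // "release" it; this never runs if the object is leaked
    }
};
int TrackedObject::live = 0;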
One risk is that destructors of your dynamically allocated objects will not be invoked. If your program relies on the destructor doing anything other than release dynamically allocated memory (presumably allocated by the class constructor and managed by other member functions) then the additional clean-up actions will not be performed.
If your program will ever be built and run on a host system that doesn't have a modern OS then there is no guarantee that dynamically allocated memory will be reclaimed.
If code in your program will ever be reused in a larger long-running program (e.g. your main() function is renamed, and then called from another program in a loop) then your code may cause that larger program to have a memory leak.
It's fine, since the operating system (unless it's some exotic or ancient OS) will not leak the memory after the process has ended. The same goes for sockets and file handles; they will be closed at process exit. It's not good style to leave the cleanup undone, but if you do, no harm comes to the overall OS environment.
However, in your example, it looks to me like the only memory that you would actually need to release yourself is that of pDynamicsWorld, since the others should be cleaned up by the btDiscreteDynamicsWorld instance. You're passing them as constructor arguments, and I suspect they get destroyed automatically when pDynamicsWorld gets destroyed. You should read the documentation to make sure.
However, it's simply not good style (because it's error-prone) to use raw delete any more. So instead of using delete to destroy pDynamicsWorld, you can use a unique_ptr, which you can safely create using the std::make_unique function template:
#include <memory>
// ...
// Allocate everything else with 'new' here, as usual.
// ...
// Except for this one, which doesn't seem to be passed to another
// constructor.
auto pDynamicsWorld = std::make_unique<btDiscreteDynamicsWorld>(
    pDispatcher, pOverlappingPairCache, pSolver, pCollisionConfig);
Now, pDispatcher, pOverlappingPairCache, pSolver and pCollisionConfig should be destroyed by pDynamicsWorld automatically, and pDynamicsWorld itself will be destroyed automatically when it goes out of scope because it's a unique_ptr.
But, again: Read the documentation of Bullet Physics to check whether the objects you pass as arguments to the constructors of the Bullet Physics classes actually do get cleaned up automatically or not.
When using dynamically allocated objects in C++, e.g.:
TGraph* A = new TGraph(...);
One should always delete these because otherwise the objects might still be in memory when control is handed back to the parent scope. While I can see why this is true for subscopes and subroutines of a program, does the same count for the main scope?
Am I obliged to delete objects that were dynamically built inside main()? The reason why this seems a bit redundant to me is that when main ends, the program also ends, so there is no need to worry about memory leaks.
Most modern operating systems reclaim all the memory they allocated to a program (process).
The OS doesn't really understand whether your program leaked memory; it merely takes back what it allocated.
But there are bigger issues at hand than just the memory loss:
Note that if the destructor of an object whose delete needs to be called performs some non-trivial operation, and your program depends on the side effects it produces, then your program falls prey to undefined behavior [Ref 1]. Once that happens, all bets are off and your program may show any behavior.
Also, an OS usually reclaims the allocated memory but not other resources, so you might leak those indirectly. This may include file descriptors, the persisted state of the program itself, etc.
Hence, it is a good practice to always deallocate all your allocations by calling delete or delete [] before exiting your program.
[Ref 1] C++03 Standard 3.8 Para 4:
"....if there is no explicit call to the destructor or if a delete-expression (5.3.5) is not used to release the storage, the destructor shall not be implicitly called and any program that depends on the side effects produced by the destructor has undefined behavior."
IMO it is best to always call delete properly:
to make it an automatic habit, making it less likely to forget it when it is really needed
to cover cases when non-memory resources (sockets, file handles, ...) need to be freed - these aren't automatically freed by the OS
to cater for future refactoring when the code in question might be moved out of main scope
Yes, you should call delete, at least because it's best practice. If you have important logic in your destructor, that's one extra reason that you should call delete.
Corrected: If the program depends on logic in the destructor, not calling delete explicitly results in undefined behavior.
The reason why this seems a bit redundant to me is that when main ends, the program also ends, so there is no need to worry about memory leaks.
You're right, but consider this: you create a class object which opens a connection to a remote DB. When your program completes, you should tell the DB "I'm done, I'm going to disconnect", but that won't happen if you don't call delete properly.
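As a sketch (DbConnection and its behaviour are hypothetical):

struct DbConnection
{
    DbConnection()  { /* open the connection, greet the server */ }
    ~DbConnection() { /* send "I'm done, disconnecting" and close */ }
};

int main()
{
    DbConnection* db = new DbConnection;
    // ... work with the database ...
    delete db;  // without this line, the polite disconnect is never sent
}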
It's best practice to deallocate memory that has been allocated. Keep in mind that heap memory is limited, and allocating without deallocating while your program is running might exhaust the heap for some other program, or for the same one (if it's some kind of daemon meant to run for a very long time) that needs heap.
Of course memory will be reclaimed by the operating system at the end of the program's execution.
I see you are using ROOT (CMS guy?). I think ROOT takes care of this and cleans up, doesn't it?
Best practices:
Do not use new, use automatic allocation
When dynamic allocation is necessary, use RAII to ensure automatic cleanup
You should never have to write delete in application code.
Here, why are you calling new for TGraph?
TGraph A(...);
works better: less worries!
It is said that memory allocated by new should be freed by delete, but a modern desktop OS will reclaim the memory even if you don't delete it. So why should we delete memory allocated by new?
Also, assert is known for not calling destructors, and it seems to be widely used in the STL (at least in VS2015). If it's advised to delete memory allocated by new (classes like string, map and vector use their destructors to delete allocated memory), why do developers still use lots of asserts?
Why should we delete the memory allocated by new?
Because otherwise
the memory is leaked. Not leaking memory is absolutely crucial for long running software such as servers and daemons because the leaks will accumulate and consume all available memory.
the destructors of the objects will not be called. The logic of the program may depend on the destructors being called. Not calling some destructors may cause non-memory resources being leaked as well.
Also, assert is known for not calling destructors
A failed assert terminates the entire process, so it doesn't really matter much whether the logic of the program remains consistent, nor whether memory or other resources are leaked since the process isn't going to reuse those resources anyway.
and it seems to be widely used in the STL (at least in VS2015)
To be accurate, I don't think the standard library is specified to use the assert macro. The only situation where it could use it is if you have undefined behaviour. And if you have UB, then leaked memory is the least of your worries.
If you know that the destructor of the object is trivial, and you know that the object is used throughout the program (so, it's essentially a singleton), then it's quite safe to leak the object on purpose. This does have a drawback that it will be detected by a memory leak sanitizer that you would probably want to use to detect accidental memory leaks.
It is said that memory allocated by new should be freed by delete, but a modern desktop OS will reclaim the memory even if you don't delete it. So why should we delete memory allocated by new?
Careful! The OS reclaims the memory only after your program has finished. This is not like garbage collection in Java or C#, which frees memory while the program is running.
If you don't delete (or more precisely, if you don't make sure that delete is called by resource-managing classes like std::unique_ptr, std::string or std::vector), then memory usage will continue to grow until you run out of memory.
Not to mention that destructors will not run, which matters if you have objects of types whose destructors perform more than just releasing memory.
Also, assert is known for not calling destructors,
More precisely, assert causes the program to terminate in a way that destructors are not called, unless the corresponding translation unit was compiled with the NDEBUG preprocessor macro defined, in which case the assert does nothing.
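A tiny sketch of both behaviours (compile with and without -DNDEBUG):

#include <cassert>
#include <cstdio>

struct Noisy
{
    ~Noisy() { std::puts("destructor ran"); }
};

int main()
{
    Noisy n;
    assert(false && "bug detected");  // aborts here: "destructor ran" is never printed
    // With -DNDEBUG the assert vanishes and the destructor runs normally.
}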
and it seems to be widely used in the STL (at least in VS2015).
Yes, the standard-library implementation of Visual C++ 2015 does that a lot. You should also use it liberally in your own code to detect bugs.
The C++ standard itself does not specify how and if asserts should appear in the implementation of a standard-library function. It does specify situations where the behaviour of the program is undefined; those often correspond to an assert in the library implementation (which makes sense, because if the behaviour is undefined anyway, then the implementation is free to do anything, so why not use that liberty in a useful way to give you bug detection?).
If it's advised to delete memory allocated by new (classes like string, map and vector use their destructors to delete allocated memory), why do developers still use lots of asserts?
Because if an assertion fails, then you want your program to terminate immediately because you have detected a bug. If you have detected a bug, then your program is by definition in an unknown state. Allowing your program to continue in an unknown state by running destructors is possibly dangerous and may compromise system integrity and data consistency.
After all, as I said above, destructors may not only call delete a few times. Destructors close files, flush buffers, write into logs, close network connections, clear the screen, join on a thread or commit or rollback database transactions. Destructors can do a lot of things which can modify and possibly corrupt system resources.
It is a common pattern that applications, in the course of their execution, dynamically create objects that will not be used for the whole program run. If an application creates a lot of such objects of temporary lifetime, it somehow has to manage memory in order not to run out of it. Note that memory is still limited, since operating systems usually do not assign all available memory to an application. Operating systems, especially those driving limited devices like mobile phones, may even kill applications once they put too much pressure on memory.
Hence, you should free the memory of objects that are no longer used, and C++ offers storage durations to make this handling easier: objects with automatic storage duration (the default) are destroyed once they go out of scope (i.e. when their enclosing block, e.g. the function in which they are defined, finishes); static objects remain until the end of normal program execution (if reached); and dynamically allocated objects remain until you call delete.
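Side by side, the three storage durations just described:

#include <cstdio>

static int s = 0;         // static storage: lives until normal program termination

void f()
{
    int a = 1;            // automatic storage: destroyed when f() returns
    int* d = new int(2);  // dynamic storage: lives until the delete below
    std::printf("%d %d %d\n", s, a, *d);
    delete d;
}

int main() { f(); }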
Note that no object survives the end of program execution in any way, as the operating system frees the complete application memory. On normal program termination, destructors of static objects are called (but not those of dynamically created objects that were never deleted). On abnormal termination, as triggered by assert/abort or by the operating system, no destructors are called at all (std::exit is a middle ground: it runs static destructors but skips automatic ones); you can think of it as the program ending because the power was turned off.
If you don't delete, you introduce a memory leak: each time the operator is invoked without a matching delete, the process wastes some portion of its address space until it ultimately runs out of memory.
After your program finishes you do not need to care about memory leaks, so in principle this would be fine:
int main(){
    int* x = new int(1);  // never deleted: the OS reclaims it at exit
}
However, that's not how one usually uses memory. Often you need to allocate memory for something that you use only for a short time, and then you want to free that memory when you don't need it any more. Consider this example:
int main(){
    while ( someCondition ) {
        Foo* x = new Foo();
        doSomething(*x);
    } // <- already here we do not need x anymore
}
That code will accumulate more and more memory for x, even though only a single instance of x is in use at any time. That's why one should free memory at the latest at the end of the scope where it is needed (once you have left the scope, you have no way to free it!). And because forgetting a delete isn't nice, one should make use of RAII whenever possible:
int main(){
    while ( someCondition ) {
        Foo x;
        doSomething(x);
    } // <- memory for x is freed automatically
}
This question is as in the title:
Is it possible to produce a memory leak without using any specific allocation means like malloc, new, etc.?
What if I make a linked list inside a function, with a lot of elements in it, and then exit from the function without cleaning the list up? The list will be created without any malloc calls, i.e.
struct list_head {
    struct list_head *next, *prev;
};
Can it be guaranteed that all resources will be freed after exiting from this function? So I can freely execute it a million times and nothing will be leaked?
Subject: If you're not using any particular malloc or new calls, you won't get a heap memory leak. Never. Is that right?
A leak is always connected to a resource. A resource is by definition something that you acquire manually, and that you must release manually. Memory is a prime example, but there are other resources, too (file handles, mutex locks, network connections, etc.).
A leak occurs when you acquire a resource, but subsequently lose the handle to the resource so that nobody can release it. A lesser version of a leak is a "still-reachable" kind of situation where you don't release the resource, but you still have the handle and could release it. That's mostly down to laziness, but a leak by contrast is always a programming error.
Since your code never acquires any resources, it also cannot have any leaks.
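For instance, the list from the question built entirely from automatic (stack) storage, which is presumably what is meant:

struct list_head {
    list_head *next, *prev;
};

void build_and_discard()
{
    list_head a{}, b{};
    a.next = &b;   // a two-element "list", no heap involved
    b.prev = &a;
}                  // both nodes vanish with the stack frame; nothing can leak

int main() { build_and_discard(); }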
The variables you create without malloc or new are located in stack space in memory, so when the function returns, that storage is taken back automatically. On the other hand, memory you allocate with malloc or new is located in heap space; the system doesn't care whether you release that space or not. In this situation, if you don't use free or delete, a memory leak will happen.
Subject: If you're not using any particular malloc or new calls, you won't get a heap memory leak. Never. Is that right?
That assumption is not entirely correct. The problem is that the operating system itself (or other third-party components you have to rely on) can have memory leaks as well. In that case you might not actively call malloc, but call other (operating system) functions which leak internally.
So your assumption depends on how strictly you look at it. You can argue that the OS/third-party implementation is outside your domain; then the assumption is correct. But if you have a well-defined system and requirements that oblige you to guarantee a certain uptime, something like this may have to be considered as well.
So the answer to this question ...
Is it possible to make memory leak without using malloc?
... is:
Yes, it is possible.
malloc() allocates memory from the heap, while space for local variables (string1, string2 and those list_heads) is reserved on the stack, with the layout fixed at compile time.
Actually, any memory allocated to a program (heap or stack) will be reclaimed by the kernel when the process exits (on *nix systems at least).
I would define a memory leak as allocating memory on the heap without freeing it by the time your program exits. This definition actually answers your question.
There are standard functions (like strdup) that allocate memory on the heap; beware of them.
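For example (strdup is POSIX rather than ISO C, but it is the classic case):

#include <string.h>
#include <stdlib.h>

int main(void)
{
    char* copy = strdup("hello");  // allocates with malloc internally
    /* ... use copy ... */
    free(copy);                    // required, or the allocation leaks
    return 0;
}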
Another example of a resource that you can allocate and forget to free:
If you're using OpenGL, and you call glGenBuffers() a million times without the corresponding glDeleteBuffers calls, it's extremely likely that you will run out of VRAM and your graphics driver will start leaking to system memory.
I just had this happen. Fortunately, Visual Studio's memory profiler made it pretty easy to find. It showed up as a large number of allocations made by the external process nvoglv32.dll, which is my NVIDIA OpenGL driver.
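For reference, the paired calls look like this (a sketch only; it assumes an OpenGL context is current and the entry points are loaded):

GLuint buf = 0;
glGenBuffers(1, &buf);     // acquire a buffer object from the driver
// ... upload data, draw ...
glDeleteBuffers(1, &buf);  // release it; skip this a million times and VRAM fills up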
In C++, when you make a new variable on the heap like this:
int* a = new int;
you can tell C++ to reclaim the memory by using delete like this:
delete a;
However, when your program closes, does it automatically free the memory that was allocated with new?
Yes, it is automatically reclaimed, but if you intend to write a huge program that uses the heap extensively and never calls delete anywhere, you are bound to run out of heap memory quickly, at which point allocations start failing and your program crashes.
Therefore, you must carefully manage your memory and free dynamically allocated data with a matching delete for every new (or delete[] for every new[]) as soon as you no longer require the variable in question.
When the process is terminated, the memory is reclaimed by the OS. Of course, this argument should never be used as an excuse to skip proper memory management in the program.
Don't let people tell you yes. C++ has no concept of an OS, so to say "yes the OS will clean it up" is no longer talking about C++ but about C++ running on some environment, which may not be yours.
That is, if you dynamically allocate something but never free it, you've leaked: the object's lifetime can only end once you call delete/delete[] on it. On some OSs (and almost all desktop OSs), memory will be reclaimed at exit, so other programs may use it. But memory is not the same as a resource! The OS can free all the memory it wants; if you have a socket connection to close or a file to finish writing, the OS might not do it. It's important not to let resources leak. I've even heard of embedded platforms that won't reclaim memory you haven't freed, resulting in a leak until the platform is reset.
Instead of dynamically allocating things raw (meaning you're the one who has to explicitly delete them), wrap them in automatically allocated (stack-allocated) containers; not doing so is considered bad practice and makes your code extremely messy.
So don't use new T[N]; use std::vector<T> v(N); instead. The latter won't let a resource leak occur. Don't use new T; either; use smart_ptr p(new T);. The smart pointer will track the object and delete it when it's no longer used. This is called Scope-Bound Resource Management (SBRM), also known by the clumsier name Resource Acquisition Is Initialization, or RAII.
Note there is no single "smart_ptr"; you have to pick the one that fits. The old standard included std::auto_ptr, but it's quite unwieldy (it cannot be used in standard containers). Your best bet used to be the smart pointers from Boost or TR1; since C++11 you get std::unique_ptr and std::shared_ptr in the standard library itself, with shared_ptr arguably the most broadly useful, but there are many others.
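A brief sketch of those replacements, using the standard (post-C++11) smart pointers:

#include <memory>
#include <vector>

struct T {};

int main()
{
    std::vector<T> v(10);                 // instead of new T[10]; frees itself
    auto sole   = std::make_unique<T>();  // instead of new T; sole ownership
    auto shared = std::make_shared<T>();  // shared ownership; freed with the last owner
}   // everything above is released here, in reverse order of construction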
If every pointer to dynamically allocated memory is held in an object that will be destructed (i.e., not in another object that is dynamically allocated), and that object knows to free the memory, then that memory is guaranteed to be freed. The question shouldn't even come up, since you should never be in a position to leak.
No, when the program exits ("closes"), the dynamically allocated memory is left as is.
EDIT:
Reading the other answers, I should be more precise. The destructors of dynamically allocated objects will not run but the memory will be reclaimed anyway by any decent OS.
No, it's your responsibility to free it:
int* a = new int;
delete a;
This excellent answer by Brian R. Bondy details why it's good practice to free the memory that a points to:
It is important to explicitly call delete because you may have some code in the destructor that you want to execute, like maybe writing some data to a log file. If you let the OS free your memory for you, your code in your destructor will not be executed.
Most operating systems will deallocate the memory when your program ends. But it is good practice to deallocate it yourself and, like I said above, the OS won't call your destructor.
As for calling delete in general, yes you always want to call delete, or else you will have a memory leak in your program, which will lead to new allocations failing.
When your process terminates, the OS does regain control of all the resources the process was using, including memory. However, that will not, of course, cause C++ destructors to be run, so it's not a panacea for failing to explicitly free said resources (though it won't be a problem for int or other types with no-op dtors, of course ;-)).