I am studying a book called "C++ How to Program" by Paul Deitel. Chapter 9 discusses classes, and I quote:
The destructor itself does not actually release the object's memory--it performs termination housekeeping before the object's
memory is reclaimed, so the memory may be reused to hold new objects.
So my question is: what does the author mean by "termination housekeeping" and "releasing memory"? How are they different from each other, if they are different at all?
What it means is that the destructor does not release the object's own memory; rather, it is the place where you perform whatever housekeeping needs to be done. For example, if your object owns pointers to other data that it should release, this is the time to delete them. Say you had a pointer named owned that was given something to retain:
MyThing::~MyThing() {
    delete owned;
}
This delete call triggers the destructor of the owned object (if it has one), which starts the whole process again, recursively.
You might also close file-handles, delete temporary files, whatever it is your object should do when tidying up. That might include deleting operating-system GUI elements as well, it really depends on where this code lives.
The destructor is called during the process of releasing memory, but it itself does not release any of its own memory. That action is performed after the destructor finishes.
There are other forms of cleanup besides releasing memory. Sometimes you may need to close a communications channel in the destructor of a class. Or you might release resources used for threading when an instance is destroyed. Or maybe you just modify a pointed-to object.
The destructor is simply the code that runs when the object is destroyed, whether it falls out of scope or is explicitly deleted. There's nothing else to it.
Related
When using dynamically allocated objects in C++ eg:
TGraph* A = new TGraph(...);
One should always delete these, because otherwise the objects might still be in memory when control is handed back to the parent scope. While I can see why this is true for subscopes and subroutines of a program, does the same hold for the main scope?
Am I obliged to delete objects that were dynamically allocated inside main()? The reason this seems a bit redundant to me is that when main ends, the program also ends, so there is no need to worry about memory leaks.
Most modern operating systems always reclaim all the memory they allocated to a program (process) once it exits.
The OS doesn't really understand whether your program leaked memory; it merely takes back what it allocated.
But there are bigger issues at hand than just the memory loss:
Note that if the destructor of the object whose delete needs to be called performs some non-trivial operation, and your program depends on the side effects it produces, then your program falls prey to undefined behavior [Ref 1]. Once that happens, all bets are off and your program may show any behavior.
Also, an OS usually reclaims the allocated memory but not other resources, so you might leak those resources indirectly. This may include resources tied to file descriptors, or other program state, etc.
Hence, it is a good practice to always deallocate all your allocations by calling delete or delete [] before exiting your program.
[Ref 1]C++03 Standard 3.8 Para 4:
"....if there is no explicit call to the destructor or if a delete-expression (5.3.5) is not used to release the storage, the destructor shall not be implicitly called and any program that depends on the side effects produced by the destructor has undefined behavior."
IMO it is best to always call delete properly:
- to make it an automatic habit, making it less likely to forget when it is really needed
- to cover cases when non-memory resources (sockets, file handles, ...) need to be freed - these aren't automatically freed by the OS
- to cater for future refactoring, when the code in question might be moved out of the main scope
Yes, you should call delete, at least because it's best practice. If you have important logic in your destructor, that's one extra reason that you should call delete.
Corrected: If the program depends on logic in the destructor, not calling delete explicitly results in undefined behavior.
The reason why this seems a bit redundant to me is that when main ends, the program also ends, so there is no need to worry about memory leaks.
You're right, but consider this: you create a class object which opens a connection to a remote DB. When your program completes, you should tell the DB "I'm done, I'm going to disconnect", but that won't happen if you don't call delete properly.
It's good practice to deallocate memory that has been allocated. Keep in mind that heap memory is limited, and allocating without deallocating while your program is running might exhaust the heap for some other program, or for the program itself if it is some kind of daemon meant to run for a very long time.
Of course memory will be reclaimed by the operating system at the end of the program's execution.
I see you are using ROOT (CMS guy?). I think ROOT takes care of this and cleans up, doesn't it?
Best practices:
Do not use new, use automatic allocation
When dynamic allocation is necessary, use RAII to ensure automatic cleanup
You should never have to write delete in application code.
Here, why are you calling new for TGraph?
TGraph A(...);
works better: less worries!
I know that stack-allocated resources are released in reverse order of their allocation at the end of a function, as part of RAII. I've been working on a project in which I allocate a lot of memory with "new" from a library I'm using, and I'm testing things out. I haven't added a shutdown function as a counterpart to the initialise function that does all the dynamic allocation. When the program shuts down, I'm pretty sure there is no memory leak, as the allocated memory should be reclaimed by the operating system - at least on any modern OS, as explained in this question similar to mine: dynamically allocated memory after program termination.
I'm wondering two things:
1: Is there a particular order in which resources are released in this case? Does it have anything to do with your code (i.e., the order in which you allocated them), or is it completely up to the OS to do its thing?
2: The reason I haven't made a shutdown function to reverse the initialisation is that I tell myself I'm just testing things now and I'll do it later. Is there any risk of doing damage by doing what I'm doing? The worst I can imagine is what was mentioned in the answer to the question I linked: that the OS fails to reclaim the memory and you end up with a memory leak even after the program exits.
I've followed the Bullet physics library tutorial and initialise a bunch of code like this:
pSolver = new btSequentialImpulseConstraintSolver;
pOverlappingPairCache = new btDbvtBroadphase();
pCollisionConfig = new btDefaultCollisionConfiguration();
pDispatcher = new btCollisionDispatcher(pCollisionConfig);
pDynamicsWorld = new btDiscreteDynamicsWorld(pDispatcher, pOverlappingPairCache, pSolver, pCollisionConfig);
And never call delete on any of it at the moment because, as I said, I'm just testing.
It depends on the resources. Open files will be closed. Memory will be freed. Destructors will not be called. Temporary files created will not be deleted.
There is no risk of having a memory leak after the program exits.
Because programs can crash, there are many mechanisms in place preventing a process from leaking after it has stopped, and leaking at exit usually isn't that bad.
As a matter of fact, if you have a lot of allocations that you don't delete until the end of the program, it can be faster to just have the kernel clean up after you.
However destructors are not run. This mostly causes temporary files to not be deleted.
Also it makes debugging actual memory leaks more difficult.
I suggest using std::unique_ptr<T> and not leaking in the first place.
It depends on how the memory is actually allocated and on your host system.
If you are only working with classes that don't override operator new() AND you are using a modern operating system that guarantees memory resources are released when the process exits, then all dynamically allocated memory should be released when your program terminates. There is no guarantee on the order of memory release (e.g. objects will not be released in the same order, or in reverse order, of their construction). The only real risk in this case is associated with bugs in the host operating system that cause resources of programs/processes to be improperly managed (which is a low risk - but not zero risk - for user programs in modern windows or unix OSs).
If you are using any classes that override operator new() (i.e. that change how raw memory is allocated in the process of dynamically constructing an object) then the risk depends on how memory is actually being allocated - and what the requirements are for deallocation. For example, if the operator new() uses global or system-wide resources (e.g. mutexes, semaphores, memory that is shared between processes) then there is a risk that your program will not properly release those resources, and then indirectly cause problems for other programs which use the same resources. In practice, depending on the design of such a class, the needed cleanup might be in a destructor, an operator delete() or some combination of the two - but, however it is done, your program will need to explicitly release such objects (e.g. a delete expression that corresponds to the new expression) to ensure the global resources are properly released.
One risk is that destructors of your dynamically allocated objects will not be invoked. If your program relies on the destructor doing anything other than release dynamically allocated memory (presumably allocated by the class constructor and managed by other member functions) then the additional clean-up actions will not be performed.
If your program will ever be built and run on a host system that doesn't have a modern OS then there is no guarantee that dynamically allocated memory will be reclaimed.
If code in your program will ever be reused in a larger long-running program (e.g. your main() function is renamed, and then called from another program in a loop) then your code may cause that larger program to have a memory leak.
It's fine, since the operating system (unless it's some exotic or ancient OS) will not leak the memory after the process has ended. Same goes for sockets and file handles; they will be closed at process exit. It's not in good style to not clean up after yourself, but if you don't, there's no harm done to the overall OS environment.
However, in your example, it looks to me like the only memory that you would actually need to release yourself is that of pDynamicsWorld, since the others should be cleaned up by the btDiscreteDynamicsWorld instance. You're passing them as constructor arguments, and I suspect they get destroyed automatically when pDynamicsWorld gets destroyed. You should read the documentation to make sure.
However, it's just not in good style (because it's unsafe) to use delete anymore. So rather than using delete to destroy pDynamicsWorld, you can use a unique_ptr, which you can safely create using the std::make_unique function template:
#include <memory>
// ...
// Allocate everything else with 'new' here, as usual.
// ...
// Except for this one, which doesn't seem to be passed to another
// constructor.
auto pDynamicsWorld = std::make_unique<btDiscreteDynamicsWorld>(
pDispatcher, pOverlappingPairCache, pSolver, pCollisionConfig);
Now, pDispatcher, pOverlappingPairCache, pSolver and pCollisionConfig should be destroyed by pDynamicsWorld automatically, and pDynamicsWorld itself will be destroyed automatically when it goes out of scope because it's a unique_ptr.
But, again: Read the documentation of Bullet Physics to check whether the objects you pass as arguments to the constructors of the Bullet Physics classes actually do get cleaned up automatically or not.
I have been diving into C++ Primer, 5th edition, these days. I found that on pages 452-453 it says that shared_ptr automatically destroys its objects and frees the associated memory.
I don't quite understand. So what's the difference between destroying the objects and freeing the associated memory?
Consider an object of this class:
class foo {
public:
    foo() { a = new double(); }
    ~foo() { delete a; }
private:
    double* a;
};
If you want to clean up after using such an object, it is not sufficient to free the memory of the object itself; you also have to call the destructor so that a is deleted properly.
When you create an object dynamically you first allocate memory for it (which is just some bytes). Then you run the constructor to construct an object instance in those bytes - initialise members, acquire resources, and so on.
When you are finished with the object you have to do this in reverse. First the destructor is run, which frees up any resources owned by the object. Now you are once again left with some bytes that used to contain an object. And then you need to free the bytes and enable the system to reuse them for something else.
Destroying objects and freeing their memory are very closely related actions, analogously to acquiring memory and constructing objects in it.
Let's say you have some paper and you draw something nice on it. You then decide to draw something else: using an eraser you clear what you have drawn, but the paper is still there!
This is the same for objects and memory: objects exist in memory, so you have to acquire memory before constructing them, just as drawings are made on paper. When you don't need the drawing anymore, you erase it (destroy the object); that doesn't mean you can't make a new drawing on the old, still-"acquired" paper. If you're really done with it, you destroy the object and also release its memory, the place where it used to reside. That means you'll need to acquire memory again before you can "draw" again.
Destroying an object is about releasing whatever resources the object acquired during its lifetime; freeing the memory then releases the storage the object itself occupied.
Concretely, it means that the shared pointer calls a deleter function when the reference count reaches 0. By default that deleter performs a delete expression, but you can supply custom deleters too.
It's probably informal terminology that distinguishes the operations performed when creating and deleting an object.
Creation (e.g. with new) allocates memory and then initializes/constructs the object.
Deletion (e.g. with delete) destroys/deinitializes the object and then deallocates the associated memory.
The author certainly uses "destroy" to mean that the destructor is called.
I suppose "destroying" an object could mean just running its cleanup, while "freeing the associated memory" literally means giving the memory back to the allocator. At least, that's what I think the author was trying to get at.
So, if I understand correctly, the point of RAII is to remove the hassle of memory management. That is, you do the deleting in the destructor of the object. That way, when the pointer goes out of scope, you do not have to worry about deleting it. So here is what I don't understand: why not just declare the variable on the stack in the first place?
There are a few things wrong with your understanding:
The point of RAII is to remove the hassle of resource management, not just memory. For example: a file handle that needs to be closed, a mutex that needs to be unlocked, an object that needs to be released, memory that needs to be freed. Basically, if there is something you have to do when you finish using a resource, that's a good case for RAII.
When a raw C++ pointer goes out of scope, it does nothing. I assume you're talking about a smart pointer which is nothing but an object that wraps around a raw pointer. When that object goes out of scope, its destructor is called and the destructor can be used in turn to free the memory that was allocated in the constructor.
It does not make a difference whether the object that needs to be "released" was allocated on the stack or the heap. The point is that you do something in the constructor when you acquire the resource and you do something else in the destructor when you're finished with it.
You cannot declare a database connection or a window or a file on the stack directly. Arguably, letting you treat such resources like stack objects is exactly what RAII gives you; without it, you can't.
The point of RAII is that the destructor will be invoked no matter how you exit the scope.
So whether you exit normally or by throwing an exception, your resource will be freed.
BTW, the "resource" doesn't have to be just memory - it can be a file handle, a database connection etc.