I have a few "global" constructs that are allocated with new and are alive for the entirety of the application's life span.
Should I bother calling delete on those pointers just before the application finishes?
Doesn't all of the application's memory get reclaimed after it closes anyway?
Edit for clarity: I am only talking about not calling delete for lifetime objects that "die" right as the program is closing.
Technically, yes, the memory is reclaimed. But unless you call delete, the destructors of those objects are not run and their side effects are not applied. This might lead to a temporary file not being deleted or a database change not being committed, depending on what those destructors were meant to do.
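To make the side-effect point concrete, here is a minimal sketch (TempFile and the file name are made up for illustration):

#include <cstdio>
#include <string>

// Hypothetical example: the destructor's side effect (removing a scratch
// file) only happens if delete is actually called.
class TempFile {
    std::string path_;
public:
    explicit TempFile(std::string path) : path_(std::move(path)) {
        if (std::FILE* f = std::fopen(path_.c_str(), "w"))  // create the scratch file
            std::fclose(f);
    }
    ~TempFile() {
        std::remove(path_.c_str());  // side effect: remove it again
    }
};

TempFile* g_scratch = new TempFile("app.scratch");

int main() {
    // ... use g_scratch for the whole run ...
    // Without `delete g_scratch;` here, the process memory is still
    // reclaimed by the OS, but ~TempFile() never runs and app.scratch
    // is left behind on disk.
}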
Also, don't forget Murphy. Right now the code managing those objects is used as you describe (the objects persist for the life of the program), but later you might want to reuse the code so that it runs multiple times. Unless it can deal with destroying and recreating the objects properly, it will leak them.
It is always good practice to clean up everything: although the memory is reclaimed, these objects might have other resources allocated (shared memory, semaphores, etc.) that should be cleaned up, probably by the objects' destructors.
If you do not want to call delete use shared pointers to hold these resources, so that they are cleaned up correctly when the application exits.
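A minimal sketch of that idea (Database is just a placeholder type): a smart pointer with static storage duration runs the destructor automatically on a normal exit, with no explicit delete. Note that this only helps on a normal exit, not when the process is killed or abort() is called.

#include <memory>

struct Database {
    // placeholder: imagine the constructor opens a connection and the
    // destructor commits and closes it
    ~Database() { /* commit, close connection */ }
};

std::unique_ptr<Database> g_db;   // or std::shared_ptr<Database>

int main() {
    g_db = std::make_unique<Database>();
    // ... application runs for its whole lifetime ...
}   // g_db's destructor runs during normal static destruction at exit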
How are you testing your application? Not cleaning up might hinder development of a decent test harness. Tests of the application might want a way of spoofing a shutdown and restart.
There is more to cleaning up than simply releasing memory.
No, don't write/debug/maintain code to do something that the OS is already very good at.
Unless there are specific reasons to the contrary (e.g. outstanding transactions to be committed, files to flush, connections to be closed), I don't bother writing code to do something that the OS is going to do anyway. If a dtor does nothing special, why bother calling it?
Many developers put a lot of effort into deleting/destroying/freeing/terminating stuff at app close time - a load of effort to avoid some spurious 'leak report' on app shutdown from a memory manager that is itself about to be destroyed.
I think you're probably right, but I personally would consider it poor coding and bad practice to rely on the system, and would ensure my code always tidied up properly when shutting down.
There is no one right answer. Most of the time, it probably doesn't matter, but there are destructors which do something important beyond just freeing memory (I have one which deletes temporary files), which argues in favor of cleanup; on the other hand, destructing such objects may lead to order-of-destruction issues if the objects are used by destructors of other objects. My general rule is to not destruct, unless the destructor does something more than just free memory, but others may prefer a different set of defaults.
In addition to the destructors not being executed (as sharptooth pointed out), it's also worthwhile to delete global objects to make memory checkers happy. Especially if your code is in a shared library - you don't want to clutter the user's memory checker (say, Valgrind) output just because you didn't delete properly.
...then there are those cases where you definitely don't want the dtors called at all before the OS terminates the process, e.g.:
1) When the dtor does not work properly because it tries to terminate a thread, fails, and blocks on the thread handle or other signal (the perennial 'join/waitFor' deadlock) - the cause of 99% of all household 'my app will not close down cleanly' posts (see the sketch below).
2) When the dtor does not work properly because it's just bad anyway and buried in a library.
3) Where the memory must outlive the process threads else there will be segfaults/AV on close, (eg. pools of buffer objects that threads may well be writing to at close time).
4) Any other 'special cases' where the destruction of the object/s has to be left to the OS.
There are so many 'special cases' that I regard 'cleaning up' shutdown code as the special case.
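For case 1, a minimal sketch of the kind of destructor that can hang an app at close time (Worker is a made-up name):

#include <atomic>
#include <thread>

// Sketch: a destructor that joins a worker thread. If the worker is stuck
// (blocked in a system call, or simply never checks the flag), join() blocks
// forever and the app "will not close down cleanly".
class Worker {
    std::atomic<bool> stop_{false};
    std::thread th_;
public:
    Worker() : th_([this] {
        while (!stop_) {
            // do work; may be blocked in I/O and never look at stop_ again
        }
    }) {}
    ~Worker() {
        stop_ = true;
        th_.join();   // deadlocks if the thread never observes stop_
    }
};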
Related
I'm writing a class that saves the state of the connected components of a graph, supporting dynamic connectivity; each time an edge is removed or added, I have to recalculate the neighbouring components to join or split them.
The only exception that those methods can throw is std::bad_alloc. No other exception will be thrown by any of my dependencies. So, the only possible exceptions are due to lack of memory in methods like std::unordered_set<...>::insert or std::deque<...>::push_back.
This complicates the design of my algorithms a lot, because I have to stage the differences in local data and then commit all modifications based on that cached state inside a well-scoped try-catch block.
Readability is severely decreased and the time to think about and write this exception-safe code increases a lot. Also, memory overcommit makes dealing with these exceptions somewhat pointless.
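For reference, the staging pattern described above usually ends up looking something like this minimal sketch (Components/addEdge are made-up names; the defensive copies are exactly the overhead being complained about):

#include <deque>
#include <unordered_set>

// Sketch of "stage locally, then commit": every call that can allocate (and
// therefore throw bad_alloc) happens on local copies; the members are only
// touched by non-throwing move assignments, so the object is never left
// half-updated.
struct Components {
    std::unordered_set<int> nodes;
    std::deque<int> order;

    void addEdge(int a, int b) {
        std::unordered_set<int> newNodes = nodes;   // may throw std::bad_alloc
        std::deque<int> newOrder = order;           // may throw std::bad_alloc
        newNodes.insert(a);
        newNodes.insert(b);
        newOrder.push_back(a);

        // Commit phase: nothing below throws.
        nodes = std::move(newNodes);
        order = std::move(newOrder);
    }
};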
What do you do in situations like that? Is it really important to ensure exception-safe code given that, if there is a real lack of memory, your code will probably fail anyway, just later, and the program as a whole will as well?
So, in short, is it worth dealing with lack-of-memory exceptions at all, considering that, as one comment points out, the very same exception-throwing mechanism could exhaust memory as well?
As you suggested, trying to handle out-of-memory situations gracefully within a process is somewhere between extremely difficult and impossible, depending on the memory behavior of the OS you are running on. On many OS's (such as Linux when configured with its default settings) an out-of-memory scenario can result in your process being simply killed without any warning or recourse, no matter how carefully your code tries to handle bad_alloc exceptions.
My suggestion is to instead have your application launch a child process (i.e. it can launch itself as a child process, with a special argument to let the child process know that it is the child process and not the parent process). Then the parent process simply waits for the child process to exit, and if it does, it relaunches the child process.
That has the advantage of not only being able to recover from an out-of-memory situation, but also helping you recover from any other problem that might cause your child process to crash or otherwise prematurely exit.
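A rough POSIX-only sketch of that supervisor idea (run_application is a placeholder for the real work; the suggestion above of re-launching the same executable with a special argument is equivalent, this just uses fork() for brevity):

#include <sys/wait.h>
#include <unistd.h>

// Placeholder for the real, possibly crashing, work of the application.
int run_application() { return 0; }

int main() {
    for (;;) {
        pid_t child = fork();
        if (child == 0)
            return run_application();      // child process does the work

        int status = 0;
        waitpid(child, &status, 0);        // parent waits for the child
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            return 0;                      // clean exit: stop relaunching
        // otherwise the child crashed or was killed (e.g. OOM killer):
        // loop around and relaunch it
    }
}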
It is almost impossible to ensure the desired OOM handling at the application level, especially because, as @StaceyGirl mentioned, there is no guarantee you will even be able to throw std::bad_alloc. Instead, it is much more important (and easier) to manage memory allocation. By using memory pools and smart pointer templates you can achieve several advantages:
cleaner code
single place where your memory allocation can fail and thus should be handled
ability to ensure your application has required (or planned) amount of memory
graceful degradation. Since you are decoupling the "allocate next memory chunk to the pool" event from the "give me some memory from the pool" request, at the moment of truth (std::unordered_set<...>::insert etc.) you will be able to handle failure gracefully (not by throwing an exception) and your program will not halt unexpectedly. A sketch of this decoupling follows the list.
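Here is a deliberately simplified sketch of that decoupling (ChunkPool is a made-up name; a real pool would also handle alignment, freeing, and requests that span chunks):

#include <cstddef>
#include <new>
#include <vector>

class ChunkPool {
    std::vector<std::vector<std::byte>> chunks_;
    std::size_t used_ = 0;                              // bytes used in the last chunk
    static constexpr std::size_t kChunkSize = 1 << 20;  // 1 MiB, arbitrary
public:
    // "Allocate next memory chunk to pool": the only place that can fail,
    // and the failure is reported by value and handled in one place.
    bool reserveChunk() noexcept {
        try {
            chunks_.emplace_back(kChunkSize);
            used_ = 0;
            return true;
        } catch (const std::bad_alloc&) {
            return false;
        }
    }

    // "Give me some memory from the pool": never throws, never allocates.
    void* request(std::size_t n) noexcept {
        if (chunks_.empty() || used_ + n > kChunkSize)
            return nullptr;        // caller degrades gracefully instead of aborting
        void* p = chunks_.back().data() + used_;
        used_ += n;
        return p;
    }
};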
On the cppreference page for abort, we have:
Destructors of variables with automatic, thread local and static storage durations are not called. Functions passed to std::atexit() are also not called. Whether open resources such as files are closed is implementation defined.
I'm a bit confused by the terminology and the apparent contradiction between abort "closing" my program and the description of that function, which says that destructors are not called and open resources are possibly not closed. So, is it possible that my program remains running, or that it has some memory leak or resources still open, after a call to abort()?
It's like killing a person. They won't have a chance to pay any outstanding bills, organize their heritage, clean their apartment, etc.
Whether any of this happens is up to their relatives or other third parties.
So, usually things like open files will be closed and no memory will be leaked, because the OS takes care of this (like when the police or relatives empty the apartment). There are some platforms where this won't happen, such as 16-bit Windows or embedded systems, but under modern Windows or Linux systems it will be okay.
However, what definitely won't happen is that destructors are run. This would be like having the to-be-killed person write a last entry into their diary and seal it - only the person themselves knows how to do that, and they can't when you kill them without warning. So if anything important was supposed to happen in a destructor, it can be problematic, but usually not dramatically so - it might be something like the program having created a temporary file somewhere that it would normally delete on exiting, and now it can't, and the file stays.
Still, your program will be closed and not running anymore. It just won't get a chance to clean up things and is therefore depending on the OS to do the right thing and clean up the resources used by it.
Functions passed to std::atexit() are also not called. Whether open resources such as files are closed is implementation defined.
This means the implementation gets to decide what happens. On any common consumer operating system, most objects associated with your process get destroyed when your process exits. That means you won't leak memory that was allocated with new, for example, or open files.
There might be uncommon kinds of objects that aren't freed - for example, if you have a shared memory block, it might stay around in case another process tries to access it. Or if you created a temporary file, intending to delete it later, now the file will stay there because your program isn't around to delete it.
On Unix, calling abort() effectively delivers a SIGABRT signal to your process. The default behavior of the kernel when that signal is delivered is to close down your process, possibly leaving behind a core file, and closing any descriptors. Your process's thread of control is completely removed. Note that this all happens outside any notion of C++ (or any other language). That is why it is considered implementation defined.
If you wish to change the default behavior, you would need to install a signal handler to catch the SIGABRT.
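A minimal POSIX sketch of installing such a handler (note that only async-signal-safe calls are allowed inside it, and abort() still terminates the process after the handler returns):

#include <csignal>
#include <cstdlib>
#include <unistd.h>

// The handler may only use async-signal-safe functions, so write() rather
// than printf() or iostreams.
extern "C" void on_abort(int) {
    const char msg[] = "SIGABRT caught\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);
}

int main() {
    struct sigaction sa {};
    sa.sa_handler = on_abort;
    sigaction(SIGABRT, &sa, nullptr);

    std::abort();   // handler runs, then the process is still terminated
}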
I know that modern Windows versions reclaim memory that was previously acquired with malloc, new and the like, after program termination, but what about COM objects? Should I call obj->Release() on them on program's exit, or will the system do this for me?
My guess is: it depends. For out-of-process COM, I should probably always call Release(), but for in-process COM, I think it really doesn't matter, because the COM objects die at program termination anyway.
If you're in the process itself then yes, you should, as you might not know where the server is and the server could be out of proc. If you're in a DLL it becomes more complicated.
In a DLL you should, UNLESS you receive a DLL_PROCESS_DETACH notification, in which case you should do absolutely nothing and just let the application close. This is because that notification is delivered during process teardown, so it is too late to clean up at that point. The kernel may have already reclaimed the blocks you would call Release on.
Remember as a DLL writer there is nothing you can do if the process exits ungracefully, you can only do what you can within reason to clean up after yourself in a graceful exit.
One easy solution is to use smart COM Pointers everywhere, the ATL and WRL have implementations that work nicely and make it so you don't have to worry about it for the most part. Even if these are stored statically their destructors will be called before process teardown or DLL unload, thus releasing safely at a time when it is safe to do so.
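For example, a WRL-based sketch (IFileOpenDialog is just an arbitrary interface for illustration; COM is assumed to already be initialized with CoInitializeEx):

#include <windows.h>
#include <shobjidl.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// The smart COM pointer releases the interface automatically when it goes
// out of scope (or, for a static/global ComPtr, during static destruction
// before process teardown), so no manual Release() call is needed.
void useDialog() {
    ComPtr<IFileOpenDialog> dialog;
    if (SUCCEEDED(CoCreateInstance(CLSID_FileOpenDialog, nullptr,
                                   CLSCTX_INPROC_SERVER,
                                   IID_PPV_ARGS(&dialog)))) {
        // ... use dialog ...
    }
}   // dialog->Release() happens here automatically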
So the short answer is: you should always call Release if it is safe to do so. However, there are times when it is not safe, and then you should most definitely NOT do anything.
Depending on the implementation of the underlying object, there may or may not be a penalty. The object may have state that persists beyond process shutdown. A persistent lock in a local database is the easiest example that comes to mind.
With that in mind, I say it's better to call Release just in case.
in-process COM object will die with the process
out-of-process reference will be released on timeout
poorly designed servers and clients might remain in a bad state, holding pointers to objects (typically proxies) that are no longer available, unable to see that they are dead and unable to get rid of them
it is always a good idea to release the pointers gracefully
If you don't release COM pointers properly, and the COM activity included marshaling, you are very likely to get exceptions in CoUninitialize, which are both annoying and can end up showing a process crash message to the user.
We've inherited a large legacy application which is structured roughly like this:
class Application
{
    Foo* m_foo;
    Bar* m_bar;
    Baz* m_baz;

public:
    Foo* getFoo() { return m_foo; }
    Bar* getBar() { return m_bar; }
    Baz* getBaz() { return m_baz; }

    void Init()
    {
        m_foo = new Foo();
        m_bar = new Bar();
        m_baz = new Baz();
        // all of them are singletons, which can call each other
        // whenever they please
        // may have internal threads, open files, acquire
        // network resources, etc.
        SomeManager.Init(this);
        SomeOtherManager.Init(this);
        AnotherManager.Init(this);
        SomeManagerWrapper.Init(this);
        ManagerWrapperHelper.Init(this);
    }

    void Work()
    {
        SomeManagerWrapperHelperWhateverController.Start();
        // it will never finish
    }

    // no destructor, no cleanup
};
All managers, once created, stay there for the whole application lifetime. The application does not have close or shutdown methods, and the managers don't have them either. So the complex interdependencies are never dealt with.
The question is: if the objects' lifetime is tightly coupled with the application lifetime, is it accepted practice to not have cleanup at all? Will the operating system (Windows in our case) be able to clean up everything (kill threads, close open file handles, sockets, etc.) once the process ends (by ending it in Task Manager or by calling special functions like ExitProcess, abort, etc.)? What are the possible problems with the above approach?
Or more generic question: are destructors absolutely necessary for global objects (declared outside of main)?
Will the operating system (Windows in our case) be able to clean up everything (kill threads, close open file handles, sockets, etc.) once the process ends (by ending it in Task Manager or by calling special functions like ExitProcess, abort, etc.)? What are the possible problems with the above approach?
As long as your objects aren't initialising any resources not cleaned up by the operating system, then it doesn't make any practical difference whether you explicitly clean up or not, as the OS will mop up for you when your process is terminated.
However, if your objects are creating resources which are not cleaned up by the OS then you've got a problem and need a destructor or some other explicit clean up code somewhere in your app.
Consider if one of those objects creates sessions on some remote service, like a database for example. Of course, the OS doesn't magically know that this has been done or how to clean them up when your process dies, so those sessions would remain open until something kills them (the DBMS itself probably, by enforcing some timeout threshold or other). Perhaps not a problem if your app is a tiny user of resources and you're running on a big infrastructure - but if your app creates and then orphans enough sessions then that resource contention on that remote service might start to become a problem.
if the objects' lifetime is tightly coupled with the application lifetime, is it accepted practice to not have cleanup at all?
That's a matter of subjective debate. My personal preference is to include the explicit cleanup code and make each object I create personally responsible for cleaning up after itself wherever practical. If application-lifetime objects are ever refactored such that they no longer live for the lifetime of the application, I don't have to go back and figure out whether I need to add previously-omitted cleanup. I guess for cleanup I'm saying that I generally prefer to lean towards RAII over the more pragmatic YAGNI.
is it accepted practice to not have cleanup at all
It depends on who you're asking.
Will the operating system (Windows in our case) be able to clean up everything (kill threads, close open file handles, sockets, etc.) once the process ends
Yes, the OS will take back everything. It will reclaim memory, free handles, etc.
What are possible problems with the above approach
One of the possible problems is that if you use a memory leak detector, it will constantly show that you have leaks.
In general, modern operating systems clean up all of a process's resources on exit. But in my opinion it's still good manners to clean up after yourself. (But then I was "raised" on the Amiga, where you had to.)
Sometimes it's forced on you by a spec or just by the behaviour of 'peripherals'. Perhaps you have a lot of data buffered in your app that should really be flushed to disk, or maybe a DB may accumulate 'half-open' connections if they are not explicitly closed.
Other than that, as @cnicutar says, it depends who you ask. I'm firmly in the 'don't bother' camp for the following reasons:
1) It's difficult enough to get apps to work anyway without writing extra shutdown code that is not required.
2) The more code you write, the more bugs there are and the more testing you have to do. You may have to test such code in more than one OS version:(
3) The OS developers have spent a long time ensuring that apps can always be shut down if required, (eg. by Task Manager), without any overall impact on the rest of the system. If some functionality is already there in the OS, why not leverage it?
4) Threads pose a particular problem - they could be in any state. They may be running on a different core than the thread that initiates app close or may be blocked on a system call. While it's very easy for the OS to ensure that all threads are terminated before releasing any memory, closing handles etc, it's very difficult to stop such threads in a safe and reliable manner from user code.
5) Performance-sapping memory-managers are not the only way of detecting leaks. If large objects, (eg. network buffers), are pooled, it's easy to tell if any leak during run-time without relying on 3rd-party memory-managers that issue a leak report on app close. An intensive memory-checker like Valgrind may actually cause system problems by affecting the overall timing.
6) Empirically, every app I've ever written for Windows that has no explicit shutdown code has closed immediately and completely when the user clicks on the 'red cross' border icon. This includes busy, complex IOCP servers running on multicore boxes with thousands of connected clients.
7) Assuming that a reasonable test phase has been done - one that includes load/soak testing - it's not difficult to differentiate an app that is leaking from one that chooses to not free memory that it is using at close time. Colander-apps will show memory/handles/whatever always increasing with run time.
8) Small, occasional leaks that are not obvious are not worth spending a huge amount of time on. Most Windows boxes are restarted every month anyway, (Patch Tuesday).
9) Opaque libraries are often written by developers like me and so will generate spurious 'leak reports' on shutdown anyway.
Designing/writing/debugging/testing shutdown code solely to clean up a memory-report is an expensive luxury I can well do without:)
You should determine that for each object individually. If an object requires special actions to be taken upon cleanup (such as flushing a buffer to disk), this will not happen unless you explicitly take care of it.
If you check this link http://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=107, it says:
"For example, the abort() and exit() library functions are never to be used in an object-oriented environment—even during debugging—because they don't invoke objects' destructors before program termination."
Why does the destructor need to be called when one calls exit? (Since the OS guarantees memory will be reclaimed whenever a program exits, right?)
Destructors can, and often do, perform other operations besides freeing memory and/or resources. They are often used to provide other guarantees, such as that user data is written to a file or that non-process-specific resources are left in a known state. The OS won't do these types of operations on exit.
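A small sketch of that point (Session is made up): the local object's destructor is skipped entirely when std::exit() is called, so whatever it was supposed to persist never happens.

#include <cstdlib>
#include <fstream>

// A destructor that persists user data. std::exit() does not unwind the
// stack, so destructors of automatic (local) objects are never run.
struct Session {
    ~Session() { std::ofstream("session.txt") << "user data"; }
};

int main() {
    Session s;
    std::exit(0);   // ~Session() is NOT called; session.txt is never written
    // return 0;    // this path would run ~Session() and write the file
}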
That being said, any program which relies on these types of actions is fundamentally flawed. Use of exit and abort are not the only ways destructors are avoided; there are many other ways a destructor can be circumvented, for example the user forcibly terminating the process, or a power outage.
I definitely disagree with the use of never in the quoted passage. I can think of at least one situation where you absolutely don't want destructors executing: corrupted memory. At the point you detect corrupted memory you can no longer make any guarantees about the code in your process, destructors included. Code which should write data to a file might delete / corrupt it instead.
The only safe thing to do when memory corruption is detected is to exit the process as fast as possible.
First I would like to advise: "Don't believe anything you read blindly."
Probably, as JaredPar said, the destructor may be doing some logging and closing of OS resource handles. If you call abort or exit, these things will never happen.
But I certainly do not agree with the quote at all. Abort is the best and fastest way of finding programming errors in the development cycle. As a developer you certainly don't put abort blindly everywhere in the code, but only on conditions you know should never happen. You put an abort when you know that you or another programmer have messed up somewhere in the code and it's better to stop than to handle the error. I have gone through a situation where abort really saved my #$$.