Is it really necessary to call delete on this pointer [closed] - c++

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
I have a pointer to a very large static array of objects that have no destructors and don't inherit from any class. The array is allocated at the start of the program and never reallocated or relocated (by design), and it only needs to be destroyed at the very end of the program. Do I really need to call delete on the array, or is it OK to let the OS (Windows) clean up? Calling delete delays program exit by 5-10 seconds; without it, the OS reclaims the memory and the program exits immediately.

In general, it is a bad idea to let the OS clean up.
I assume nothing bad will happen here, but in the future you may reuse this code in another program, and then you will have to change things.
Bottom line: free everything that you allocate.

You should free it, for the following reasons:
it's easy, and takes minimal effort
it's good practice; taking shortcuts too often will bite you somewhere, sometime
it sets an example for other people reading your code
it protects you against issues: trusting the OS to clean up means trusting your application to exit properly. If that doesn't happen, the memory release will be stalled
it protects you from yourself: in the future you might have memory reserved in a hardware device, which might not be freed by the OS. To be clear: not all memory is always freed by the OS.
... these are just some reasons to clean up your mess ;-) ... besides, you might decide to put the stuff in an actual class, and then the cleanup code is already there.
Also note: not releasing memory is one of the most common runtime issues around. For that reason alone it's wise to be strict about it.

No. You don't necessarily have to free the allocation at shutdown if you can assume that your program runs within a modern operating system. If skipping the free significantly reduces the time it takes to shut the program down, it is an option worth considering.
Not freeing the allocation does, however, have the downside that memory leak detection tools will report this intentional leak, which makes them harder to use. As a solution, I would suggest making the leak toggleable, so that you can run a non-leaking build under detection tools and use the leaking build in production.
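One simple way to make it toggleable is a compile-time switch; a minimal sketch, where the array name, element count, and the FULL_CLEANUP macro are mine, not part of the question:

struct Object { int data[64]; };              // stand-in for the real object type

int main()
{
    Object* bigArray = new Object[1000000];   // program-lifetime allocation
    // ... use bigArray for the entire run ...
#ifdef FULL_CLEANUP
    delete[] bigArray;   // build with -DFULL_CLEANUP for runs under a leak detector
#endif
    return 0;            // in production builds, let the OS reclaim the pages
}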

Yes, but not for the reason you think.
Although in general it goes somewhat against "good practice", it is perfectly "OK" to just exit and let the OS clean up, since there is nothing to be done. Objections such as "it's a bad idea, evil things may happen, resources oh the woes, whatever" are not well-founded insofar as what really happens is that your process ceases to exist, and so do the memory pages that the OS had allocated for it, and that's just it. So, as long as you are 100% positive that nothing needs to happen in your destructor, there is no reason to ever deallocate, except to make tools like valgrind (and auditors, if applicable) happy.
However, the fact that operator delete[] takes 5-10 seconds indicates that this is not the case at all. Something does happen there.
On my desktop, deleting millions of objects takes pretty much "zero" time as long as the destructors are not very, very un-trivial. I say "very, very un-trivial" because "trivial destructor" (and "non-trivial destructor") are terms that mean something very specific, and even a non-trivial destructor call that isn't very, very un-trivial is still very fast.
I wrote a quick 15-liner to test: a structure definition with some data members, a default constructor that zero-initializes them, and a trivial destructor. The main function allocates 100 million objects, iterates over the array and updates each element with the value of argc (to prevent the compiler from optimizing the whole thing out), and finally deletes the array.
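Something along these lines; this is a reconstruction, not the original 15-liner, and the member names and element count are mine:

#include <cstddef>

struct Item {
    int a, b, c, d;
    Item() : a(0), b(0), c(0), d(0) {}   // zero-initialize; the destructor stays trivial
};

int main(int argc, char**)
{
    const std::size_t N = 100000000;     // 100 million objects
    Item* items = new Item[N];
    for (std::size_t i = 0; i < N; ++i)
        items[i].a = argc;               // depend on argc so the loop isn't optimized away
    delete[] items;                      // the delete[] whose cost is being measured
    return 0;
}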
Runtime overall: 0.2 seconds give or take 50 millis. The same program but with a non-trivial destructor that conditionally updates a global counter takes 0.3 seconds total. Mind you, that's allocation, construction, and iteration included.
So, obviously you are doing something which is not at all trivial. Lots of nested virtual destructor calls on sub-objects? Deallocations from within the object's destructor? Zeroing out or otherwise touching gigabytes of memory? Closing file handles?
Impossible to tell what, but there must be something that is happening, or it wouldn't take so long.
So... no it's not OK to just quit. Because there's something that's happening, and it won't happen if you skip the delete.

Related

What are the issues with undeleted dynamic instance? [duplicate]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
Is it ever acceptable to have a memory leak in your C or C++ application?
What if you allocate some memory and use it until the very last line of code in your application (for example, in a global object's destructor)? As long as the memory consumption doesn't grow over time, is it OK to trust the OS to free your memory for you when your application terminates (on Windows, Mac, and Linux)? Would you even consider this a real memory leak if the memory was being used continuously until it was freed by the OS?
What if a third party library forced this situation on you? Would you refuse to use that third party library no matter how great it otherwise might be?
I only see one practical disadvantage, and that is that these benign leaks will show up with memory leak detection tools as false positives.
No.
As professionals, the question we should not be asking ourselves is, "Is it ever OK to do this?" but rather "Is there ever a good reason to do this?" And "hunting down that memory leak is a pain" isn't a good reason.
I like to keep things simple. And the simple rule is that my program should have no memory leaks.
That makes my life simple, too. If I detect a memory leak, I eliminate it, rather than run through some elaborate decision tree structure to determine whether it's an "acceptable" memory leak.
It's similar to compiler warnings – will the warning be fatal to my particular application? Maybe not.
But it's ultimately a matter of professional discipline. Tolerating compiler warnings and tolerating memory leaks is a bad habit that will ultimately bite me in the rear.
To take things to an extreme, would it ever be acceptable for a surgeon to leave some piece of operating equipment inside a patient?
Although it is possible that a circumstance could arise where the cost/risk of removing that piece of equipment exceeds the cost/risk of leaving it in, and there could be circumstances where it was harmless, if I saw this question posted on SurgeonOverflow.com and saw any answer other than "no," it would seriously undermine my confidence in the medical profession.
–
If a third party library forced this situation on me, it would lead me to seriously suspect the overall quality of the library in question. It would be as if I test drove a car and found a couple loose washers and nuts in one of the cupholders – it may not be a big deal in itself, but it portrays a lack of commitment to quality, so I would consider alternatives.
I don't consider it to be a memory leak unless the amount of memory being "used" keeps growing. Having some unreleased memory, while not ideal, is not a big problem unless the amount of memory required keeps growing.
Let's get our definitions correct, first. A memory leak is when memory is dynamically allocated, eg with malloc(), and all references to the memory are lost without the corresponding free. An easy way to make one is like this:
#define BLK ((size_t)1024)
while (1) {
    void* vp = malloc(BLK);   /* the previous block's address is lost here */
}
Note that every time around the while(1) loop, 1024 (+ overhead) bytes are allocated and the new address is assigned to vp; there's no remaining pointer to the previously malloc'ed blocks. This program is guaranteed to run until the heap runs out, and there's no way to recover any of the malloc'ed memory. Memory is "leaking" out of the heap, never to be seen again.
What you're describing, though, sounds like this:
int main()
{
    void* vp = malloc(LOTS);
    // Go do something useful
    return 0;
}
You allocate the memory, work with it until the program terminates. This is not a memory leak; it doesn't impair the program, and all the memory will be scavenged up automagically when the program terminates.
Generally, you should avoid memory leaks. First, because like altitude above you and fuel back at the hangar, memory that has leaked and can't be recovered is useless; second, it's a lot easier to code correctly, not leaking memory, at the start than it is to find a memory leak later.
In theory no, in practice it depends.
It really depends on how much data the program is working on, how often the program is run and whether or not it is running constantly.
If I have a quick program that reads a small amount of data, makes a calculation, and exits, a small memory leak will never be noticed. Because the program is not running for very long and only uses a small amount of memory, the leak will be small and freed when the program exits.
On the other hand if I have a program that processes millions of records and runs for a long time, a small memory leak might bring down the machine given enough time.
As for third party libraries that have leaks, if they cause a problem either fix the library or find a better alternative. If it doesn't cause a problem, does it really matter?
Many people seem to be under the impression that once you free memory, it's instantly returned to the operating system and can be used by other programs.
This isn't true. Operating systems commonly manage memory in 4KiB pages. malloc and other sorts of memory management get pages from the OS and sub-manage them as they see fit. It's quite likely that free() will not return pages to the operating system, under the assumption that your program will malloc more memory later.
I'm not saying that free() never returns memory to the operating system. It can happen, particularly if you are freeing large stretches of memory. But there's no guarantee.
The important fact: If you don't free memory that you no longer need, further mallocs are guaranteed to consume even more memory. But if you free first, malloc might re-use the freed memory instead.
What does this mean in practice? It means that if you know your program isn't going to require any more memory from now on (for instance it's in the cleanup phase), freeing memory is not so important. However if the program might allocate more memory later, you should avoid memory leaks - particularly ones that can occur repeatedly.
Also see this comment for more details about why freeing memory just before termination is bad.
A commenter didn't seem to understand that calling free() does not automatically allow other programs to use the freed memory. But that's the entire point of this answer!
So, to convince people, I will demonstrate an example where free() does very little good. To make the math easy to follow, I will pretend that the OS manages memory in 4000 byte pages.
Suppose you allocate ten thousand 100-byte blocks (for simplicity I'll ignore the extra memory that would be required to manage these allocations). This consumes 1MB, or 250 pages. If you then free 9000 of these blocks at random, you're left with just 1000 blocks - but they're scattered all over the place. Statistically, about 5 of the pages will be empty. The other 245 will each have at least one allocated block in them. That amounts to 980KB of memory, that cannot possibly be reclaimed by the operating system - even though you now only have 100KB allocated!
On the other hand, you can now malloc() 9000 more blocks without increasing the amount of memory your program is tying up.
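A sketch of that allocation pattern; the block counts and the use of rand() are mine, and on Linux you could watch the resident size reported in /proc/self/status stay high even after the frees:

#include <cstdlib>

int main()
{
    const int NBLOCKS = 10000;
    const int BLKSIZE = 100;
    void* blocks[NBLOCKS];

    for (int i = 0; i < NBLOCKS; i++)
        blocks[i] = std::malloc(BLKSIZE);   // ~1 MB total, spread over roughly 250 pages

    for (int i = 0; i < NBLOCKS; i++) {
        if (std::rand() % 10 != 0) {        // free roughly 9 of every 10 blocks at random
            std::free(blocks[i]);
            blocks[i] = nullptr;
        }
    }
    // The ~1000 surviving blocks are scattered over most of those pages, so the
    // allocator typically cannot hand the pages back to the OS even though only
    // about 100 KB is still in use.
    return 0;
}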
Even when free() could technically return memory to the OS, it may not do so. free() needs to achieve a balance between operating quickly and saving memory. And besides, a program that has already allocated a lot of memory and then freed it is likely to do so again. A web server needs to handle request after request after request - it makes sense to keep some "slack" memory available so you don't need to ask the OS for memory all the time.
There is nothing conceptually wrong with having the OS clean up after the application is run.
It really depends on the application and how it will be run. Continually occurring leaks in an application that needs to run for weeks have to be taken care of, but a small tool that calculates a result without too high a memory need should not be a problem.
There is a reason why many scripting languages do not garbage collect cyclical references: for their usage patterns, it's not an actual problem and would thus be as much of a waste of resources as the wasted memory.
I believe the answer is no, never allow a memory leak, and I have a few reasons which I haven't seen explicitly stated. There are great technical answers here but I think the real answer hinges on more social/human reasons.
(First, note that as others mentioned, a true leak is when your program, at any point, loses track of memory resources that it has allocated. In C, this happens when you malloc() to a pointer and let that pointer leave scope without doing a free() first.)
The important crux of your decision here is habit. When you code in a language that uses pointers, you're going to use pointers a lot. And pointers are dangerous; they're the easiest way to add all manner of severe problems to your code.
When you're coding, sometimes you're going to be on the ball and sometimes you're going to be tired or mad or worried. During those somewhat distracted times, you're coding more on autopilot. The autopilot effect doesn't differentiate between one-off code and a module in a larger project. During those times, the habits you establish are what will end up in your code base.
So no, never allow memory leaks for the same reason that you should still check your blind spots when changing lanes even if you're the only car on the road at the moment. During times when your active brain is distracted, good habits are all that can save you from disastrous missteps.
Beyond the "habit" issue, pointers are complex and often require a lot of brain power to track mentally. It's best to not "muddy the water" when it comes to your usage of pointers, especially when you're new to programming.
There's a more social aspect too. By proper use of malloc() and free(), anyone who looks at your code will be at ease; you're managing your resources. If you don't, however, they'll immediately suspect a problem.
Maybe you've worked out that the memory leak doesn't hurt anything in this context, but every maintainer of your code will have to work that out in his head too when he reads that piece of code. By using free() you remove the need to even consider the issue.
Finally, programming is writing a mental model of a process to an unambiguous language so that a person and a computer can perfectly understand said process. A vital part of good programming practice is never introducing unnecessary ambiguity.
Smart programming is flexible and generic. Bad programming is ambiguous.
I'm going to give the unpopular but practical answer that it's always wrong to free memory unless doing so will reduce the memory usage of your program. For instance, a program that makes a single allocation, or series of allocations, to load the dataset it will use for its entire lifetime has no need to free anything. In the more common case of a large program with very dynamic memory requirements (think of a web browser), you should obviously free memory you're no longer using as soon as you can (for instance when closing a tab/document/etc.), but there's no reason to free anything when the user clicks "exit", and doing so is actually harmful to the user experience.
Why? Freeing memory requires touching memory. Even if your system's malloc implementation happens not to store metadata adjacent to the allocated memory blocks, you're likely going to be walking recursive structures just to find all the pointers you need to free.
Now, suppose your program has worked with a large volume of data, but hasn't touched most of it for a while (again, web browser is a great example). If the user is running a lot of apps, a good portion of that data has likely been swapped to disk. If you just exit(0) or return from main, it exits instantly. Great user experience. If you go to the trouble of trying to free everything, you may spend 5 seconds or more swapping all the data back in, only to throw it away immediately after that. Waste of user's time. Waste of laptop's battery life. Waste of wear on the hard disk.
This is not just theoretical. Whenever I find myself with too many apps loaded and the disk starts thrashing, I don't even consider clicking "exit". I get to a terminal as fast as I can and type killall -9 ... because I know "exit" will just make it worse.
I think in your situation the answer may be that it's okay. But you definitely need to document that the memory leak is a conscious decision. You don't want a maintenance programmer to come along, slap your code inside a function, and call it a million times. So if you make the decision that a leak is okay you need to document it (IN BIG LETTERS) for whoever may have to work on the program in the future.
If this is a third party library you may be trapped. But definitely document that this leak occurs.
But basically, if the memory leak is a known quantity, like a 512 KB buffer or something, then it is a non-issue. If the memory leak keeps growing, like your memory usage increasing by 512 KB every time you make a library call without it ever being freed, then you may have a problem. If you document it and control the number of times the call is executed, it may be manageable. But then you really need documentation, because while 512 KB isn't much, 512 KB over a million calls is a lot.
Also you need to check your operating system documentation. If this was an embedded device there may be operating systems that don't free all the memory from a program that exits. I'm not sure, maybe this isn't true. But it is worth looking into.
I'm sure that someone can come up with a reason to say Yes, but it won't be me.
Instead of saying no, I'm going to say that this shouldn't be a yes/no question.
There are ways to manage or contain memory leaks, and many systems have them.
There are NASA systems on devices that leave the earth that plan for this. The systems will automatically reboot every so often so that memory leaks will not become fatal to the overall operation. Just an example of containment.
If you allocate memory and use it until the last line of your program, that's not a leak. If you allocate memory and forget about it, even if the amount of memory isn't growing, that's a problem. That allocated but unused memory can cause other programs to run slower or not at all.
I can count on one hand the number of "benign" leaks that I've seen over time.
So the answer is a very qualified yes.
An example: if you have a singleton resource that needs a buffer to store a circular queue or deque, but doesn't know how big the buffer will need to be and can't afford the overhead of locking for every reader, then allocating an exponentially doubling buffer but not freeing the old ones will leak a bounded amount of memory per queue/deque. The benefit is that these speed up every access dramatically and can change the asymptotics of multiprocessor solutions by never risking contention for a lock.
I've seen this approach used to great benefit for things with very clearly fixed counts such as per-CPU work-stealing deques, and to a much lesser degree in the buffer used to hold the singleton /proc/self/maps state in Hans Boehm's conservative garbage collector for C/C++, which is used to detect the root sets, etc.
While technically a leak, both of these cases are bounded in size and in the growable circular work stealing deque case there is a huge performance win in exchange for a bounded factor of 2 increase in the memory usage for the queues.
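A rough sketch of the grow-but-never-free idea; this is simplified and single-writer, the names are mine, and a real work-stealing deque would also need atomics with explicit memory ordering on its indices:

#include <atomic>
#include <cstddef>

// Old buffers are deliberately leaked so that readers still holding a pointer
// to them never touch freed memory. Because the capacity doubles each time,
// all leaked buffers together are smaller than the current one: a bounded,
// intentional "leak" of at most 2x the live data.
class GrowOnlyBuffer {
public:
    explicit GrowOnlyBuffer(std::size_t initial)
        : buf_(new long[initial]), cap_(initial) {}

    void grow()
    {
        long* old = buf_.load();
        long* bigger = new long[cap_ * 2];
        for (std::size_t i = 0; i < cap_; ++i)
            bigger[i] = old[i];          // copy existing elements
        buf_.store(bigger);              // publish the new buffer to readers
        cap_ *= 2;                       // 'old' is never delete[]d, on purpose
    }

private:
    std::atomic<long*> buf_;
    std::size_t        cap_;
};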
If you allocate a bunch of heap at the beginning of your program, and you don't free it when you exit, that is not a memory leak per se. A memory leak is when your program loops over a section of code, and that code allocates heap and then "loses track" of it without freeing it.
In fact, there is no need to make calls to free() or delete right before you exit. When the process exits, all of its memory is reclaimed by the OS (this is certainly the case with POSIX. On other OSes – particularly embedded ones – YMMV).
The only caution I'd have with not freeing the memory at exit time is that if you ever refactor your program so that it, for example, becomes a service that waits for input, does whatever your program does, then loops around to wait for another service call, then what you've coded can turn into a memory leak.
In this sort of question context is everything. Personally I can't stand leaks, and in my code I go to great lengths to fix them if they crop up, but it is not always worth it to fix a leak, and when people are paying me by the hour I have on occasion told them it was not worth my fee for me to fix a leak in their code. Let me give you an example:
I was triaging a project, doing some perf work and fixing a lot of bugs. There was a leak during the application's initialization that I tracked down and fully understood. Fixing it properly would have required a day or so refactoring a piece of otherwise functional code. I could have done something hacky (like stuffing the value into a global and grabbing it at some point when I knew it was no longer in use, to free it), but that would have just caused more confusion for the next person who had to touch the code.
Personally I would not have written the code that way in the first place, but most of us don't get to always work on pristine well designed codebases, and sometimes you have to look at these things pragmatically. The amount of time it would have taken me to fix that 150 byte leak could instead be spent making algorithmic improvements that shaved off megabytes of ram.
Ultimately, I decided that leaking 150 bytes in an app that used around a gig of RAM and ran on a dedicated machine was not worth fixing, so I wrote a comment saying that it was leaked, what needed to be changed in order to fix it, and why it was not worth it at the time.
this is so domain-specific that it's hardly worth answering. use your freaking head.
space shuttle operating system: nope, no memory leaks allowed
rapid development proof-of-concept code: fixing all those memory leaks is a waste of time.
and there is a spectrum of intermediate situations.
the opportunity cost ($$$) of delaying a product release to fix all but the worst memory leaks usually dwarfs any feelings of being "sloppy or unprofessional". Your boss pays you to make him money, not to get warm, fuzzy feelings.
You have to first realize that there's a big difference between a perceived memory leak and an actual memory leak. Very frequently, analysis tools will report many red herrings, labeling something as leaked (memory, or resources such as handles, etc.) where it actually isn't. Often this is due to the analysis tool's architecture. For example, certain analysis tools will report run-time objects as memory leaks because they never see those objects freed. But the deallocation occurs in the runtime's shutdown code, which the analysis tool might not be able to see.
With that said, there will still be times when you will have actual memory leaks that are either very difficult to find or very difficult to fix. So now the question becomes is it ever OK to leave them in the code?
The ideal answer is, "no, never." A more pragmatic answer may be "no, almost never." Very often in real life you have a limited amount of resources and time to resolve an endless list of tasks. When one of the tasks is eliminating memory leaks, the law of diminishing returns very often comes into play. You could eliminate, say, 98% of all memory leaks in an application in a week, but the remaining 2% might take months. In some cases it might even be impossible to eliminate certain leaks because of the application's architecture without a major refactoring of the code. You have to weigh the costs and benefits of eliminating the remaining 2%.
While most answers concentrate on real memory leaks (which are not OK ever, because they are a sign of sloppy coding), this part of the question appears more interesting to me:
What if you allocate some memory and use it until the very last line of code in your application (for example, in a global object's destructor)? As long as the memory consumption doesn't grow over time, is it OK to trust the OS to free your memory for you when your application terminates (on Windows, Mac, and Linux)? Would you even consider this a real memory leak if the memory was being used continuously until it was freed by the OS?
If the associated memory is being used, you cannot free it before the program ends. Whether the freeing is done by the program before exit or by the OS afterwards does not matter, as long as this is documented (so that future changes don't introduce real memory leaks), and as long as there is no C++ destructor or C cleanup function involved in the picture. A file that is never closed might show up as a leaked FILE object, but a missing fclose() might also cause the buffer not to be flushed.
So, back to the original case: it is IMHO perfectly OK in itself, so much so that Valgrind, one of the most powerful leak detectors, will report such leaks only if requested. On Valgrind, when you overwrite a pointer without freeing the memory it pointed to first, that does get considered a memory leak, because it is more likely to happen again and to cause the heap to grow endlessly.
Then there are unfreed memory blocks that are still reachable. One could make sure to free all of them at exit, but that is just a waste of time in itself; the point is whether they could have been freed before. Lowering memory consumption is useful in any case.
Even if you are sure that your 'known' memory leak will not cause havoc, don't do it. At best, it will pave a way for you to make a similar and probably more critical mistake at a different time and place.
For me, asking this is like asking "Can I run the red light at 3 AM when no one is around?" Sure, it may not cause any trouble at the time, but it will provide a lever for you to do the same in rush hour!
No, you should not have leaks that the OS will clean up for you. The reason (not mentioned in the answers above, as far as I could check) is that you never know when your main() will be reused as a function/module in another program. If your main() becomes a frequently-called function in another person's software, that software will have a memory leak that eats memory over time.
KIV
I agree with vfilby – it depends. In Windows, we treat memory leaks as relatively serious bugs. But it very much depends on the component.
For example, memory leaks are not very serious for components that run rarely, and for limited periods of time. These components run, do their work, then exit. When they exit, all their memory is freed implicitly.
However, memory leaks in services or other long-running components (like the shell) are very serious. The reason is that these bugs 'steal' memory over time. The only way to recover it is to restart the components. Most people don't know how to restart a service or the shell, so if their system performance suffers, they just reboot.
So, if you have a leak, evaluate its impact in two ways:
To your software and your user's experience.
To the system (and the user) in terms of being frugal with system resources.
Then weigh the impact of the fix on maintenance and reliability, and the likelihood of causing a regression somewhere else.
Foredecker
I'm surprised to see so many incorrect definitions of what a memory leak actually is. Without a concrete definition, a discussion on whether it's a bad thing or not will go nowhere.
As some commenters have rightly pointed out, a memory leak only happens when memory allocated by a process goes out of scope to the extent that the process is no longer able to reference or delete it.
A process which is grabbing more and more memory is not necessarily leaking. So long as it is able to reference and deallocate that memory, then it remains under the explicit control of the process and has not leaked. The process may well be badly designed, especially in the context of a system where memory is limited, but this is not the same as a leak. Conversely, losing scope of, say, a 32 byte buffer is still a leak, even though the amount of memory leaked is small. If you think this is insignificant, wait until someone wraps an algorithm around your library call and calls it 10,000 times.
I see no reason whatsoever to allow leaks in your own code, however small. Modern programming languages such as C and C++ go to great lengths to help programmers prevent such leaks and there is rarely a good argument not to adopt good programming techniques - especially when coupled with specific language facilities - to prevent leaks.
As regards existing or third party code, where your control over quality or ability to make a change may be highly limited, depending on the severity of the leak, you may be forced to accept or take mitigating action such as restarting your process regularly to reduce the effect of the leak.
It may not be possible to change or replace the existing (leaking) code, and therefore you may be bound to accept it. However, this is not the same as declaring that it's OK.
I guess it's fine if you're writing a program meant to leak memory (i.e. to test the impact of memory leaks on system performance).
It's really not a leak if it's intentional, and it's not a problem unless it's a significant amount of memory or could grow to be a significant amount of memory. It's fairly common not to clean up global allocations during the lifetime of a program. If the leak is in a server or long-running app and grows over time, then it's a problem.
I think you've answered your own question. The biggest drawback is how they interfere with the memory leak detection tools, but I think that drawback is a HUGE one for certain types of applications.
I work with legacy server applications that are supposed to be rock solid but they have leaks and the globals DO get in the way of the memory detection tools. It's a big deal.
In the book "Collapse" by Jared Diamond, the author wonders about what the guy was thinking who cut down the last tree on Easter Island, the tree he would have needed in order to build a canoe to get off the island. I wonder about the day many years ago when that first global was added to our codebase. THAT was the day it should have been caught.
I see the same problem as all scenario questions like this: What happens when the program changes, and suddenly that little memory leak is being called ten million times and the end of your program is in a different place so it does matter? If it's in a library then log a bug with the library maintainers, don't put a leak into your own code.
I'll answer no.
In theory, the operating system will clean up after you if you leave a mess (now that's just rude, but since computers don't have feelings it might be acceptable). But you can't anticipate every possible situation that might occur when your program is run. Therefore (unless you are able to conduct a formal proof of some behaviour), creating memory leaks is just irresponsible and sloppy from a professional point of view.
If a third-party component leaks memory, this is a very strong argument against using it, not only because of the immediate effect but also because it shows that the programmers work sloppily and that this might also impact other metrics. Now, when considering legacy systems this is difficult (consider web browsing components: to my knowledge, they all leak memory), but it should be the norm.
Historically, it did matter on some operating systems under some edge cases. These edge cases could exist in the future.
Here's an example: on SunOS in the Sun 3 era, there was an issue where, if a process used exec (or, more traditionally, fork and then exec), the subsequent new process would inherit the same memory footprint as the parent, and it could not be shrunk. If a parent process allocated half a gig of memory and didn't free it before calling exec, the child process would start out using that same half gig (even though it wasn't allocated). This behavior was best exhibited by SunTools (their default windowing system), which was a memory hog. Every app it spawned was created via fork/exec and inherited SunTools' footprint, quickly filling up swap space.
This was already discussed ad nauseam. Bottom line is that a memory leak is a bug and must be fixed. If a third party library leaks memory, it makes one wonder what else is wrong with it, no? If you were building a car, would you use an engine that is occasionally leaking oil? After all, somebody else made the engine, so it's not your fault and you can't fix it, right?
Generally a memory leak in a stand alone application is not fatal, as it gets cleaned up when the program exits.
What do you do for Server programs that are designed so they don't exit?
If you are the kind of programmer that does not design and implement code where the resources are allocated and released correctly, then I don't want anything to do with you or your code. If you don't care to clean up your leaked memory, what about your locks? Do you leave them hanging out there too? Do you leave little turds of temporary files lying around in various directories?
Leak that memory and let the program clean it up? No. Absolutely not. It's a bad habit, that leads to bugs, bugs, and more bugs.
Clean up after yourself. Yo momma don't work here no more.
As a general rule, if you've got memory leaks that you feel you can't avoid, then you need to think harder about object ownership.
But to your question, my answer in a nutshell is: in production code, yes; during development, no. This might seem backwards, but here's my reasoning:
In the situation you describe, where the memory is held until the end of the program, it's perfectly okay to not release it. Once your process exits, the OS will clean up anyway. In fact, it might make the user's experience better: In a game I've worked on, the programmers thought it would be cleaner to free all the memory before exiting, causing the shutdown of the program to take up to half a minute! A quick change that just called exit() instead made the process disappear immediately, and put the user back to the desktop where he wanted to be.
However, you're right about the debugging tools: They'll throw a fit, and all the false positives might make finding your real memory leaks a pain. And because of that, always write debugging code that frees the memory, and disable it when you ship.

Should I delete big tree collections in C++ at the end of program or leave that to OS? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
This might be a bit of a stupid question: should I call delete on a huge map/set at the end of the program?
Assume the map/set is needed throughout the whole program (the delete is the last line before return) and its size is really huge (> 4 GB). The delete call takes a long time and, from my perspective, does not have any value (the memory cannot be released any sooner). Am I wrong? If so, why?
There is no guarantee in the C and C++ standards of what happens after your program exits, including no guarantee that anything is cleaned up. Some smaller real-time OS's, for example, will not perform automatic cleanup. So, at least in theory, your program should definitely delete everything it news, to fulfil its obligation as a complete and portable program.
It is also possible that someone takes your code and puts a loop around it, so that your tree is now created a million times, and then comes to find you, bringing along the "trusty convincer", aka baseball bat, when they find out WHY it is now running out of memory after 500 iterations.
Of course, like all things, this can be argued many different ways, and it really depends on what you are trying to achieve and why you are writing the program. My compiler project leaks memory like a sieve, because I use exactly the method of memory management that you describe (partly because tracking the lifetime of each dynamically allocated object is quite difficult, and partly because "I can't be bothered"; I'm sure that if someone actually wants a good Pascal compiler, they won't go for my code anyway).
Actually, my compiler project builds many different data structures, some of which are trees, arrays, etc., but basically none of them perform any cleanup afterwards. It is not as simple to fix as the case of building a large tree that needs each node deleted. However, conceptually it all boils down to "do cleanup" or "don't do cleanup", and that in turn comes down to "well, who is going to use/modify the code, and what do you know about the environment it will run in".
As long as your program stays more or less the same, with the memory being used for the entire execution, it doesn't make a real difference.
However, if someone ever tries to take your program and make it into a component of some other program, not releasing the memory could result in a huge memory leak.
So to be on the safe side, and also to be more organized, always free what you allocate. In your case it's very easy so there's no downside - just a potential upside.
If you are using the C++ STL, there is no need for any explicit deletion, because the map/set will manage its heap memory for you automatically. In fact, you cannot call delete on the map/set itself, since it is not a pointer.
For the large objects stored inside the map/set, you can use smart pointers when constructing them, and then you will not need to delete them manually either. Memory leaks may not be a problem for a toy program, but they are unacceptable for real-life programs, since those may run for a long time or even forever.
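For example, a minimal sketch; Big and its size are just placeholders:

#include <map>
#include <memory>

struct Big { char payload[1 << 20]; };          // placeholder for a large object

int main()
{
    std::map<int, std::unique_ptr<Big>> table;
    table[42] = std::make_unique<Big>();        // no matching delete needed anywhere
    // ... use the table for the whole program ...
    return 0;                                   // map and unique_ptrs clean up here
}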

Why do I need to delete[]?

Let's say I have a function like this:
int main()
{
    char* str = new char[10];
    for (int i = 0; i < 5; i++)
    {
        // Do stuff with str
    }
    delete[] str;
    return 0;
}
Why would I need to delete str if I am going to end the program anyways?
I wouldn't care if that memory goes to a land full of unicorns if I am just going to exit, right?
Is it just good practice?
Does it have deeper consequences?
If in fact your question really is "I have this trivial program, is it OK that I don't free a few bytes before it exits?" the answer is yes, that's fine. On any modern operating system that's going to be just fine. And the program is trivial; it's not like you're going to be putting it into a pacemaker or running the braking systems of a Toyota Camry with this thing. If the only customer is you then the only person you can possibly impact by being sloppy is you.
The problem then comes in when you start to generalize to non-trivial cases from the answer to this question asked about a trivial case.
So let's instead ask two questions about some non-trivial cases.
I have a long-running service that allocates and deallocates memory in complex ways, perhaps involving multiple allocators hitting multiple heaps. Shutting down my service in the normal mode is a complicated and time-consuming process that involves ensuring that external state -- files, databases, etc -- are consistently shut down. Should I ensure that every byte of memory that I allocated is deallocated before I shut down?
Yes, and I'll tell you why. One of the worst things that can happen to a long-running service is if it accidentally leaks memory. Even tiny leaks can add up to huge leaks over time. A standard technique for finding and fixing memory leaks is to instrument the allocation heaps so that at shutdown time they log all the resources that were ever allocated without being freed. Unless you like chasing down a lot of false positives and spending a lot of time in the debugger, always free your memory even if doing so is not strictly speaking necessary.
The user is already expecting that shutting the service down might take billions of nanoseconds, so who cares if you cause a little extra pressure on the virtual allocator making sure that everything is cleaned up? This is just the price you pay for big complicated software. And it's not like you're shutting down the service all the time, so again, who cares if it's a few milliseconds slower than it could be?
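A very reduced sketch of that kind of instrumentation; a real tool would also record sizes and call sites, and would be thread-safe:

#include <cstdio>
#include <cstdlib>
#include <new>

static long g_live_allocations = 0;      // a real tool would use an atomic counter

void* operator new(std::size_t n)
{
    ++g_live_allocations;
    if (void* p = std::malloc(n ? n : 1))
        return p;
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept
{
    if (p) {
        --g_live_allocations;
        std::free(p);
    }
}

// Destroyed at normal shutdown: anything still counted here was never freed.
struct LeakReport {
    ~LeakReport() { std::printf("allocations still live at shutdown: %ld\n", g_live_allocations); }
} g_leak_report;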
I have that same long-running service. If I detect that one of my internal data structures is corrupt I wish to "fail fast". The program is in an undefined state, it is likely running with elevated privileges, and I am going to assume that if I detect corrupted state, it is because my service is actively being attacked by hostile parties. The safest thing to do is to shut down the service immediately. I would rather allow the attackers to deny service to the clients than to risk the service staying up and compromising my users' data further. In this emergency shutdown scenario should I make sure that every byte of memory I allocated is freed?
Of course not. The operating system is going to take care of that for you. If your heap is corrupt, the attackers may be hoping that you free memory as part of their exploit. Every millisecond counts. And why would you bother polishing the doorknobs and mopping the kitchen before you drop a tactical nuke on the building?
So the answer to the question "should I free memory before my program exits?" is "it depends on what your program does".
Yes, it is good practice. You should NEVER assume that your OS will take care of your memory deallocation; if you get into that habit, it will screw you later on.
To answer your question, however, upon exiting from the main, the OS frees all memory held by that process, so that includes any threads that you may have spawned or variables allocated. The OS will take care of freeing up that memory for others to use.
Important note: delete's freeing of memory is almost just a side effect. The important thing it does is to destruct the object. With RAII designs, this could mean anything from closing files, freeing OS handles, terminating threads, or deleting temporary files.
Some of these actions would be handled by the OS automatically when your process exits, but not all.
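A made-up example: here the OS will happily reclaim the Logger's memory at exit, but it will never write the file for you if the destructor doesn't run.

#include <fstream>
#include <string>

class Logger {
public:
    void log(const std::string& line) { buffered_ += line; }  // lines are only buffered...
    ~Logger() { std::ofstream("app.log") << buffered_; }      // ...and written out here
private:
    std::string buffered_;
};

int main()
{
    Logger* log = new Logger;
    log->log("run finished\n");
    delete log;   // skip this and app.log is never written; exiting only frees the memory
    return 0;
}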
In your example, there's no reason NOT to call delete. But there's no reason to call new either, so you can sidestep the issue this way:
char str[10];
Or, you can sidestep the delete (and the exception safety issues involved) by using smart pointers...
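For instance, a sketch of the same snippet with std::unique_ptr:

#include <memory>

int main()
{
    std::unique_ptr<char[]> str(new char[10]);
    for (int i = 0; i < 5; i++)
    {
        // Do stuff with str.get()
    }
    return 0;   // the array is delete[]d automatically when str goes out of scope
}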
So, generally you should always be making sure your object's lifetime is properly managed.
But it's not always easy: Workarounds for the static initialization order fiasco often mean that you have no choice but to rely on the OS cleaning up a handful of singleton-type objects for you.
Contrary answer: No, it is a waste of time. A program with a vast amount of allocated data would have to touch nearly every page in order to return all of the allocations to the free list. This wastes CPU time, creates memory pressure for uninteresting data, and possibly even causes the process to swap pages back in from disk. Simply exiting releases all of the memory back to the OS without any further action.
(not that I disagree with the reasons in "Yes", I just think there are arguments both ways)
Your operating system should take care of the memory and clean it up when you exit your program, but it is in general good practice to free any memory you have reserved. Personally, I think it is best to get into the mindset of doing so; while you are writing simple programs, you are most likely doing so to learn.
Either way, the only way to guarantee that the memory is freed up is by doing so yourself.
new and delete are reserved-keyword brothers. They should cooperate with each other within a code block or across the parent object's lifecycle. Whenever the younger brother makes a mess (new), the older brother will want to clean it up (delete). Then the mother (your program) will be happy and proud of them.
I cannot agree more to Eric Lippert's excellent advice:
So the answer to the question "should I free memory before my program exits?" is "it depends on what your program does".
Other answers here have provided arguments for and against both, but the real crux of the matter is what your program does. Consider a more non-trivial example in which the dynamically allocated instance is of a custom class whose destructor performs actions that produce side effects. In such a situation, whether or not the memory leaks is the trivial part; the more important problem is that failing to call delete on such an instance results in undefined behavior for any program that depends on those side effects.
[basic.life] 3.8 Object lifetime
Para 4:
A program may end the lifetime of any object by reusing the storage which the object occupies or by explicitly calling the destructor for an object of a class type with a non-trivial destructor. For an object of a class type with a non-trivial destructor, the program is not required to call the destructor explicitly before the storage which the object occupies is reused or released; however, if there is no explicit call to the destructor or if a delete-expression (5.3.5) is not used to release the storage, the destructor shall not be implicitly called and any program that depends on the side effects produced by the destructor has undefined behavior.
So the answer to your question is as Eric says "depends on what your program does"
It's a fair question, and there are a few things to consider when answering:
some objects have more complex destructors which don't just release memory when they're deleted. They may have other side effects, which you don't want to skip.
It is not guaranteed by the C++ standard that your memory will be released when the process terminates. (Of course, on a modern OS it will be freed, but if you were on some weird OS which didn't do that, you'd have to free your memory properly.)
On the other hand, running destructors at program exit can actually take up quite a lot of time, and if all they do is release memory (which would be released anyway), then yes, it makes a lot of sense to just short-circuit that and exit immediately instead.
Most operating systems will reclaim memory upon process exit. Exceptions may include certain RTOS's, old mobile devices etc.
In an absolute sense your app won't leak memory; however, it's good practice to clean up memory you allocate even if you know it won't cause a real leak. The issue is that leaks are much, much harder to fix later than they are to avoid in the first place. Let's say you decide to move the functionality in your main() into another function: you may end up with a real leak.
It's also bad aesthetics: many developers will look at the unfreed 'str' and feel slight nausea :(
Why would I need to delete str if I am going to end the program anyways?
Because you don't want to be lazy ...
I wouldn't care if that memory goes to a land full of unicorns if I am just going to exit, right?
Nope, I don't care about the land of unicorns either. The Land of Arwen is a different matter; then we could cut their horns off and put them to good use (I've heard it's a good aphrodisiac).
Is it just good practice?
Yes, it is just good practice.
Does it have deeper consequences?
Someone else has to clean up after you. Maybe you like that; I moved out from under my parents' roof many years ago.
Place a while(1) loop around your code without the delete. The complexity of the code does not matter: memory leaks matter in proportion to how long the process runs.
From a debugging perspective, not releasing system resources (file handles, etc.) can cause more significant and harder-to-find bugs ("why can't I write to this file?"). Memory leaks, while important, are typically much easier to diagnose. Bad style will become more of a problem once you start working with threads.
int main()
{
    while (1)
    {
        char* str = new char[10];
        for (int i = 0; i < 5; i++)
        {
            // Do stuff with str
        }
        // no delete[] here: every iteration leaks another 10 bytes
    }
    return 0;   // never reached
}
Another reason that I haven't seen mentioned yet is to keep the output of static and dynamic analyzer tools (e.g. valgrind or Coverity) cleaner and quieter. Clean output with zero memory leaks or zero reported issues means that when a new one pops up it is easier to detect and fix.
You never know how your simple example will be used or evolved. It's better to start as clean and crisp as possible.
Not to mention that if you are going to apply for a job as a C++ programmer, there is a very good chance that you won't get past the interview because of the missing delete. First, programmers usually don't like any leaks (and the interviewer will surely be one of them), and second, most companies (at least all I have worked for) have a "no-leak" policy. Generally, the software you write is supposed to run for quite a while, creating and destroying objects on the go. In such an environment, leaks can lead to disasters...
You have got a lot of answers based on professional experience. Here I give a naive answer, but one that I consider factual.
Summary
3. Does it have deeper consequences?
A: I will answer this in some detail below.
2. Is it just good practice?
A: It is considered good practice. Release the resources/memory you have acquired once you are sure they are no longer used.
1. Why would I need to delete str if I am going to end the program anyway? I wouldn't care if that memory goes to a land full of unicorns if I am just going to exit, right?
A: You may or may not need to; in fact, you decide why. Some explanation follows.
I think it depends. Here are some assumed questions; the term program may mean either an application or a function.
Q: Does it depend on what the program does?
A: If destroying the universe were acceptable, then no. However, the program might not work correctly as expected, and might even fail to complete what it is supposed to do. You might want to think seriously about why you would build a program like this.
Q: Does it depend on how complicated the program is?
A: No. See Explanation.
Q: Does it depend on how stable the program is expected to be?
A: Very much so.
And I consider that it depends on:
What is the universe of the program?
What is the program expected to have done when its work is finished?
How much does the program care about others, and about the universe it lives in?
About the term universe, see Explanation.
In summary, it depends on what you care about.
Explanation
Important: if we define the term program as a function, then its universe is the application. Many details are omitted; as an idea for understanding, this is long enough.
We have all seen the kind of diagram that illustrates the relationship between application software and system software. But to be aware of the scope each layer covers, I suggest picturing it reversed, with the OS as the outermost layer; since we are talking about software only, the hardware layer is omitted.
In that picture, the OS covers the biggest scope: it is the current universe, sometimes called the environment. You may imagine the whole architecture as a stack of such disks, forming a cylinder or a torus (a ball works too, but is harder to imagine). The outermost OS layer is in fact a single body, while the runtime may be single or multiple depending on the implementation.
It is important that the runtime is responsible to both the OS and the applications, but the latter is more critical. The runtime is the universe of the applications; if it is destroyed, all applications running under it are gone.
This is unlike humans on the Earth: we live here, but we are not made of the Earth, and we could still live in another suitable environment if the Earth were destroyed while we weren't on it. However, we can no longer exist when the universe is destroyed, because we not only live in the universe but are also made of it.
As mentioned above, the runtime is also responsible to the OS. Now picture an application sitting directly inside the OS, with no separate runtime layer. This is mostly what a C program in the OS looks like. When the relationship between an application and the OS matches this, it is the same situation as the runtime inside the OS above: the OS is the universe of the applications. The reason the applications here should be responsible to the OS is that the OS might not virtualize their code, or might allow itself to be crashed. If the OS always prevents them from doing so, then it is self-responsible, no matter what the applications do. But think about drivers: they are one scenario where the OS must allow itself to be crashed, since that kind of application is treated as part of the OS.
Finally, consider the case where the application itself is the universe. Sometimes we call this kind of application an operating system. If an OS never allows custom code to be loaded and run, then it does everything itself. Even if it does, once the OS itself terminates, the memory goes nowhere but back to the hardware; any deallocation that may be necessary has to happen before it terminates.
So, how much does your program care about others? How much does it care about its universe? And what is it expected to have done when its work is finished? It depends on what you care about.
TECHNICALLY, a programmer shouldn't rely on the OS to do anything.
The OS isn't required to reclaim lost memory in this fashion.
If you do write the code that deletes all your dynamically allocated memory, then you are future proofing the code and letting others use it in a larger project.
Source: Allocation and GC Myths (PostScript alert!)
Allocation Myth 4: Non-garbage-collected programs should always deallocate all memory they allocate.
The Truth: Omitted deallocations in frequently executed code cause growing leaks. They are rarely acceptable. But programs that retain most allocated memory until program exit often perform better without any intervening deallocation. Malloc is much easier to implement if there is no free.
In most cases, deallocating memory just before program exit is pointless. The OS will reclaim it anyway. Free will touch and page in the dead objects; the OS won't.
Consequence: Be careful with "leak detectors" that count allocations. Some "leaks" are good!
I think it's a very poor practice to use malloc/new without calling free/delete.
If the memory's going to get reclaimed anyway, what harm can there be from explicitly deallocating when you need to?
Maybe if the OS "reclaims" memory faster than free does, you'll see increased performance; but this technique won't help you with any program that must remain running for a long period of time.
Having said that, I'd recommend you use free/delete.
If you get into this habit, who's to say that you won't one day accidentally apply this approach somewhere it matters?
One should always deallocate resources after one is done with them, be it file handles, memory, or mutexes. Having that habit means one will not make that sort of mistake when building servers. Some servers are expected to run 24x7; in those cases, a leak of any sort means that your server will eventually run out of that resource and hang or crash in some way. For a short utility program, yeah, a leak isn't that bad, but for any server, any leak is death. Do yourself a favor and clean up after yourself. It's a good habit.
Think about your class 'A' needing to be destructed. If you don't call 'delete' on 'a', that destructor won't get called. Usually, that won't really matter if the process ends anyway. But what if the destructor has to release, e.g., objects in a database? Flush a cache to a logfile? Write a memory cache back to disk? You see, it's not just 'good practice' to delete objects; in some situations it is required.
Instead of talking about this specific example, I will talk about the general case: it is important to explicitly call delete to deallocate memory because (in the case of C++) you may have code in the destructor that you want to execute, such as writing some data to a log file or sending a shutdown signal to some other process. If you let the OS free your memory for you, the code in your destructor will not be executed.
On the other hand, most operating systems will deallocate the memory when your program ends. But it is good practice to deallocate it yourself and, as in the destructor example above, the OS won't call your destructor, which can create undesirable behavior in certain cases!
I personally consider it bad practice to rely on the OS to free your memory (even though it will do so). The reason is that if you later have to integrate your code into a larger program, you will spend hours tracking down and fixing memory leaks!
So clean your room before leaving!
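To make the destructor point concrete, here is a minimal sketch (the class name and file name are made up for illustration) of a destructor with a side effect that is simply lost if the object is never deleted:
#include <fstream>
#include <string>
#include <vector>

// Hypothetical cache that buffers log lines in memory and writes
// them out in its destructor.
class LogCache {
public:
    void add(const std::string& line) { lines_.push_back(line); }

    ~LogCache() {
        std::ofstream out("session.log", std::ios::app);
        for (const auto& line : lines_)
            out << line << '\n';
        // If this destructor never runs, the buffered lines are lost.
    }

private:
    std::vector<std::string> lines_;
};

int main() {
    LogCache* cache = new LogCache;
    cache->add("something important happened");

    delete cache;   // flushes the cache to session.log
    // Without the delete, the OS still reclaims the memory at exit,
    // but nothing is ever written to the file.
}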

Why does deleting a pointer twice cause a crash? [duplicate]

This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
Why Free crashes when called twice
I just want to know what exactly happens when we delete a pointer that has already been deleted, and what causes the crash?
It's hard to predict exactly what will happen -- it depends a little on the compiler, and a lot on the standard library. Officially, it's just undefined behavior, so nearly anything can happen.
The most common thing that'll happen is that the heap will get trashed. The allocator may not check that what you pass to delete is valid, so when it receives two blocks at the same address it may (for example) still treat them as two separate blocks, and when you allocate memory later you may get that same block of memory handed out twice. In other cases, it may (for example) just flip a bit that says whether that block of memory is in use, so the second time you delete it you actually end up marking it as in use again; that memory can then never be allocated again, and you've basically just created a leak.
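A minimal illustration of the problem (what actually happens is undefined; with a checking allocator or a tool such as AddressSanitizer you will typically get an immediate abort rather than silent corruption):
#include <iostream>

int main() {
    int* p = new int(42);
    delete p;   // fine: returns the block to the allocator
    delete p;   // undefined behavior: the allocator already owns this block
                // and may hand it out again, corrupt its bookkeeping,
                // or abort the program
    std::cout << "you may or may not ever see this line\n";
}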
If you buy a car, can you sell it twice? No, because the second time you are no longer its owner. When you delete the pointer the first time, you relinquish your ownership and the memory goes back to the free pool, ready to be handed out again. If you delete it a second time, that memory might already be in use by some other part of the program, and when that code tries to access it, it will crash. Hope this is helpful.
It just does.
It's illegal to free memory that you don't own (and if you previously freed a block of memory then you no longer own it).
Because it's illegal to do, the internal algorithms don't waste time double-checking memory ownership every time you try to free something, assuming that you're wise enough to just not do it.
Of course, the side effect of this is that you can mangle your memory and make things go haywire.
Analysis of really specific examples is hugely out of scope of Stack Overflow or, indeed, sanity. This answer to a possible duplicate question does a pretty good job, though.
Why can't free() just check its "records" each time? Because C++ is designed on the principle of not doing stuff that it doesn't need to. If you're a sane programmer, you don't call free() twice on the same block of memory, so why would you want your program to waste time and resources on pointless checks? You wouldn't. The principles of the language (and, indeed, common sense) dictate that these checks are not performed automatically.
Sidenote — I'm being a bit generous with my use of the term "illegal". Within the scope of C++, it's merely "Undefined Behaviour". Within the scope of practicality, that might as well be the same darned thing.
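One consequence worth knowing: delete (and delete[]) on a null pointer is defined to do nothing, so a common defensive idiom is to null the pointer right after freeing it; a second delete then becomes a harmless no-op instead of undefined behavior. A minimal sketch:
int main() {
    int* p = new int(7);

    delete p;
    p = nullptr;   // any later "delete p;" is now a no-op by definition

    delete p;      // safe: deleting a null pointer does nothing
}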

Destructor for Singleton

Question: Should I write a destructor for a singleton which has program scope (comes alive when the program starts and dies when the program ends)?
Detail:
I am in a dilemma on
"shall I write a destructor for a singleton class or not?"
1) This class has program scope
2) The class uses a lot of memory on the heap, so releasing it will take time
When the user is exiting the program, the response should be fast, so why spend time freeing the memory occupied by this singleton when the memory will be released as the program ends anyway?
If it takes a long time to release the memory, then don't do it. This may be a big and time-consuming operation, especially if releasing the memory leads to numerous cache misses. The OS will do the job (provided, of course, that you are running on a system that actually does that).
However, if your destructor does finalization of some resources (for example, unlocking a file or a piece of hardware), and you use "resource acquisition is initialization", you must ensure that the proper destructors are called (for example, those of static objects are called after your main() function returns). This holds if some of the objects allocated inside your singleton lock resources, too!
In most cases, therefore, it's better to actually write a destructor for such an object and make it release memory optionally.
SSS, who asked the question, decided not to write a destructor at all. However, I'd like to argue a bit more that this is not the best solution.
Not freeing memory owned by a static object (let's call it "static") is a very subtle optimization that contradicts common sense and the way people usually write programs. Code that allocates memory but has no destructor looks weird. Peers will think the class is badly written and will tend to look for bugs in it (while the bugs are really in some other class).
Instead, conform to common coding standards, which dictate that memory management in C++ should be correct. Do write a destructor, and only once measurement shows a significant gain from not deallocating, wrap the code so that it is not called.
The intent not to release the memory must be explicit.
MySingleton::~MySingleton()
{
#ifndef RELEASE
    // Debug/testing builds: free explicitly so leak detectors stay quiet.
    delete ptr1;
    delete ptr2;
#endif
    // In RELEASE builds the deletes are compiled out and the memory
    // is reclaimed by the OS when the program terminates.
}
or even
MySingleton::~MySingleton()
{
// We don't do anything here.
// The memory will be released by OS when program terminates!
}
but keeping the destructor is better.
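If the goal is simply to have the destructor run once at program exit without any manual bookkeeping, one common option (a sketch, not necessarily a drop-in replacement for the asker's class) is a Meyers-style singleton: a function-local static whose destructor the runtime calls automatically during normal termination.
class MySingleton {
public:
    static MySingleton& instance() {
        // Constructed on first use; destroyed automatically during
        // normal program termination, after main() returns.
        static MySingleton s;
        return s;
    }

private:
    MySingleton()  { /* allocate the big buffers here */ }
    ~MySingleton() { /* release them here; runs at exit */ }

    MySingleton(const MySingleton&) = delete;
    MySingleton& operator=(const MySingleton&) = delete;
};
Note that the destructor still runs at exit, so the cost question does not disappear; this only moves the call out of your own shutdown code.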
It sounds like you're creating a God Object, so it might be beneficial to reconsider the design of your class.
Whether or not you add a destructor depends on whether or not it's worth the performance hit to release, and eventually reallocate, the memory.
The C++ language does not guarantee that memory will be reclaimed when the program terminates, but any half-decent OS will do it without a problem.
So yes, if
you are running on a half-decent OS, and
there is nothing the destructor needs to do other than release memory,
then yeah, you can omit the destructor, leak the memory, and rely on the OS to clean up after you.
The obvious downside to this is that various debugging tools might scream and yell about memory leaks.
Of course, a slightly more fundamental question is why you are creating a singleton which allocates that much memory in the first place.
That sounds like terrible design to me.
Implement the destructor, as it is the correct approach and will be the most maintainable version. Then measure the real time it takes to deallocate the memory (create an executable that just creates the singleton and exits).
Don't write incorrect code until you have hard facts on what the cost of proper deletion is. If it takes measurable time, weigh it against the time the program will be running. In most cases the processing the application does will take far longer than the time it takes to free the memory.
If you still think it takes too much time, consider what a regular user would consider too much time, how many times the user opens and closes the application, and whether it really amounts to anything.
If in the end you decide it is too much, what you risk is that at some later time the application will be modified, and the singleton could acquire external resources and not free them; this is especially bad with file locks...
Those who sacrifice correctness for performance deserve neither
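A rough sketch of such a measurement (the sizes and names here are arbitrary): allocate a large number of blocks the way the singleton would, time just the deallocation with std::chrono, and compare the result against how long the program normally runs.
#include <chrono>
#include <iostream>
#include <vector>

int main() {
    // Stand-in for the singleton's data: many small heap blocks.
    std::vector<int*> blocks;
    blocks.reserve(10'000'000);
    for (int i = 0; i < 10'000'000; ++i)
        blocks.push_back(new int(i));

    auto start = std::chrono::steady_clock::now();
    for (int* p : blocks)
        delete p;
    auto stop = std::chrono::steady_clock::now();

    std::cout << "deallocation took "
              << std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()
              << " ms\n";
}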
If you are running under Windows, all memory and resources occupied by a process will be reclaimed when the application closes, so IMHO it does not matter.
As pointed out, the operating system should release the memory at exit anyway, but do you really want to trust it to manage this? At least with explicit destruction, a buggy or badly designed memory manager has a better chance of getting it right.
It gets two chances to get it right then.