Why does deleting a pointer twice cause a crash? [duplicate] - c++

Possible Duplicate:
Why Free crashes when called twice
I just want to know what exactly happens when we delete a pointer that has already been deleted, and what causes the crash?

It's hard to predict exactly what will happen -- it depends a little on the compiler, and a lot on the standard library. Officially, it's just undefined behavior, so nearly anything can happen.
The most common thing that'll happen is that the heap will get trashed. The heap manager may not check that what you pass to delete is valid, so when it sees two blocks at the same address it may (for example) still treat them as two separate free blocks, and a later allocation may hand you that same block of memory twice. In other cases, it may (for example) just flip a bit that says whether the block is in use, so the second delete marks it as in use again; that memory can never be allocated again, and you've basically just created a leak.
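To make the discussion concrete, here is a minimal sketch of the pattern in question; what the second delete actually does is entirely up to the implementation:

#include <string>

int main()
{
    std::string* p = new std::string("hello");
    delete p;   // fine: the block goes back to the heap's free pool
    delete p;   // undefined behavior: the heap may abort immediately, corrupt
                // its bookkeeping silently, or appear to work until a later
                // allocation goes wrong
    return 0;
}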

If you buy a car, can you sell it twice? No, right? Because the second time you are no longer the owner. When you delete the pointer the first time, you relinquish your ownership and the memory goes back to the free pool, ready to be given out again. If you delete it a second time, that memory might already have been handed out to someone else, and when they try to use it, the program can crash. Hope this is helpful.

It just does.
It's illegal to free memory that you don't own (and if you previously freed a block of memory then you no longer own it).
Because it's illegal to do, the internal algorithms don't waste time double-checking memory ownership every time you try to free something, assuming that you're wise enough to just not do it.
Of course, the side effect of this is that you can mangle your memory and make things go haywire.
Analysis of really specific examples is hugely out of scope of Stack Overflow or, indeed, sanity. This answer to a possible duplicate question does a pretty good job, though.
Why can't free() just check its "records" each time? Because C++ is designed on the principle of not doing stuff that it doesn't need to. If you're a sane programmer, you don't call free() twice on the same block of memory, so why would you want your program to waste time and resources on pointless checks? You wouldn't. The principles of the language (and, indeed, common sense) dictate that these checks are not performed automatically.
Sidenote — I'm being a bit generous with my use of the term "illegal". Within the scope of C++, it's merely "Undefined Behaviour". Within the scope of practicality, that might as well be the same darned thing.

Related

Memory overwrites in C++ code showing up in consistent locations

I have a very un-scientific observation about memory overwrites and was curious if anyone else has noticed something similar, knows why, and/or can tell me why I wasn't really seeing what I thought I was seeing.
What I noticed was that for some C++ programs, when I have a memory overwrite bug in that program, it would usually (if not always) show up in a specific section of code which was often unrelated to the section of code with the bug. This is not a blanket observation. Not all C++ programs behave this way. But when I have one, it is pretty consistent. (No comment on why my code has enough memory overwrites that I have the opportunity to notice consistent-anything :) )
I'm not asking why a memory overwrite in function1 can show up in function2; that is understood. My observation is that over the life of a given program, we have discovered memory overwrites in function1, function2, function3, function4, and function5. But in each case, we discovered the problem because the code would crash in function6. Always in function6 and only in function6. None of those functions are related to function6, and none of them touch anything that function6 uses.
Over my lifetime, I've encountered two C programs and one C++ program that behaved this way. These were years apart in unrelated systems and hardware. I just found it weird and wondered if anyone else has seen this. Plus, I suspect that I may be seeing the same pattern in a C++/JNI/Java program that I'm working on now, but it is young enough that I've not had enough hits to be sure of a pattern.
Perhaps the real question here is about what escalates "silent" memory corruption into an actual/formal crash (as opposed to "just" more subtle problems such as unexpected data values, that the user might or might not notice or recognize). I don't think that question can be answered generally, as it depends a lot on the specifics of the compiler, the code, the memory layout of the in-memory data structures, etc.
It can be said that in most modern (non-embedded) systems there is an MMU that handles translating virtual (i.e. per-process) memory addresses into physical memory addresses, and that many user-space crashes are the result of the MMU generating an unrecoverable page fault when the user program tries to dereference a virtual address that has no defined physical equivalent. So perhaps in this case function6() was trying to dereference a pointer whose value had been corrupted in such a way that the MMU couldn't translate it to a physical address. Note that the compiler often places its own pointers on the stack (to remember where the program's control flow should return to when a function returns), so a bad pointer dereference can happen even in code that doesn't explicitly dereference any pointers.
Another common cause for a crash would be a deliberately induced crash invoked by code that notices that the data it is working with is "in a state that should never happen" and calls abort() or similar. This can happen in user code that has assert() calls in it, or in system-provided code such as the code that manages the process's heap. So it could be that function6() tried to allocate or free heap memory, and in so doing gave the heap manager the chance to detect an "impossible state" in one of its data structures and crash out. Keep in mind that the process's heap is really just one big data structure that is shared by all parts of the program that use the heap, so it's not terribly surprising that heap corruption caused by one part of the program might result in a crash later on by another (mostly unrelated) part of the program that also uses the heap.
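As a deliberately buggy sketch of that last point (the function names just echo the question; the exact symptom depends on the allocator):

#include <cstring>

void function1()
{
    char* buf = new char[16];
    // Bug: writes 32 bytes into a 16-byte block, trampling the allocator's
    // bookkeeping stored next to it.
    std::memset(buf, 0xAB, 32);
    delete[] buf;
}

void function6()
{
    // Unrelated code, but the next heap operation is often where the corrupted
    // bookkeeping is finally noticed and the heap manager aborts.
    int* data = new int[100];
    delete[] data;
}

int main()
{
    function1();
    function6();   // the crash, if any, tends to show up here
}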

Is it really necessary to call delete on this pointer [closed]

I have a pointer to a very large static array of objects that don't have destructors and don't inherit from any class. This array is allocated at the beginning of the program and never reallocated or relocated (by design). It needs to be destroyed only at the very end of the program. Do I really need to call delete for the array, or is it OK to let the OS (Windows) clean up? Calling delete delays program exit by 5-10 seconds; without it, the OS does the cleanup for us and the program exits immediately.
In general, it is a bad idea to let the OS clean up.
I assume that nothing bad will happen, but in the future you may use your code in another program, and then you will have to change things.
Bottom line: free everything that you allocated.
You should free it, for the following reasons:
it's easy, and takes minimal effort
it's good practice; taking shortcuts too often will bite you somewhere, sometime
it sets an example for other people reading your code
it protects you against issues: trusting the OS to clean up means you trust your application to exit properly, and if that doesn't happen, the memory release will be stalled
it protects you from yourself: in the future you might have memory reserved in a hardware device, and that might not be freed by the OS. So to be clear: not all memory is always freed by the OS.
... these are just some reasons to clean up your mess ;-) ... besides... you might decide to put the stuff in an actual class. Then the cleanup code is already there.
Also note: not releasing memory is one of the most common runtime issues around. For that reason alone it's wise to be strict about it.
No. You don't necessarily have to free the allocation at shutdown if you can assume that your program runs within a modern operating system. If it significantly reduces the time it takes to shut down the program, it may be an option worth considering.
Not freeing the allocation does, however, have the downside that memory leak detection tools will report this intentional leak, making them harder to use. As a solution, I would suggest making the leak toggleable, so that you can run a non-leaking version with detection tools and use the leaking version in production.
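A minimal sketch of such a toggle, with a made-up macro name (FAST_SHUTDOWN) standing in for whatever build flag you choose:

struct Object { int data[64]; };

int main()
{
    Object* table = new Object[50000000];
    // ... use table ...

#ifndef FAST_SHUTDOWN
    // Leak-checker builds: free everything so valgrind & co. report a clean exit.
    delete[] table;
#endif
    // Builds compiled with -DFAST_SHUTDOWN skip the delete and let the OS
    // reclaim the pages when the process exits.
    return 0;
}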
Yes, but not for the reason you think.
Although in general it goes somewhat against "good practice", it is perfectly "OK" to just exit and let the OS clean up, since there is nothing to be done. Objections such as "it's a bad idea, evil things may happen, resources oh the woes, whatever" are not well-founded insofar as what really happens is that your process ceases to exist, and so do the memory pages that the OS had allocated for it, and that's just it. So, as long as you are 100% positive that nothing needs to happen in your destructor, there is no reason to ever deallocate, except to make tools like valgrind (and auditors, if applicable) happy.
However, the fact that operator delete[] takes 5-10 seconds indicates that this is not the case at all. Something does happen there.
On my desktop, deleting millions of objects takes pretty much "zero" time as long as the destructors are not very, very un-trivial. I'm using such a funny phrase as "very very un-trivial" because "trivial destructor" (and "non-trivial destructor") is a term that means something very specific, and indeed even a "not very very un-trivial" non-trivial destructor call is still very fast.
Wrote a quick 15-liner to test: a structure definition with some data, a default constructor that zero-initializes the data, and a trivial destructor. A main function that allocates 100 million objects, iterates over the array and updates each element with the value of argc (to prevent the compiler from optimizing the whole thing out), and finally deletes the array.
Runtime overall: 0.2 seconds, give or take 50 milliseconds. The same program but with a non-trivial destructor that conditionally updates a global counter takes 0.3 seconds total. Mind you, that's allocation, construction, and iteration included.
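For reference, a reconstruction of roughly what that test might look like (the struct layout and element count here are assumptions, not the original code):

#include <cstddef>

struct Item
{
    int a, b, c, d;
    Item() : a(0), b(0), c(0), d(0) {}   // zero-initializing default constructor
    // destructor is trivial
};

int main(int argc, char**)
{
    const std::size_t count = 100000000;   // 100 million objects
    Item* items = new Item[count];

    // Touch every element with argc so the compiler can't optimize it all away.
    for (std::size_t i = 0; i < count; ++i)
        items[i].a = argc;

    delete[] items;   // with trivial destructors this is close to instantaneous
    return 0;
}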
So, obviously you are doing something which is not at all trivial. Lots of nested virtual destructor calls on sub-objects? Deallocations from within the object's destructor? Zeroing out or otherwise touching gigabytes of memory? Closing file handles?
Impossible to tell what, but there must be something that is happening, or it wouldn't take so long.
So... no it's not OK to just quit. Because there's something that's happening, and it won't happen if you skip the delete.

Does windows automatically free up memory when program closes (not returning from main)? [duplicate]

This question already has answers here:
Is it acceptable not to deallocate memory
Does memory allocated with new always get freed when the program closes? (Even if it closes unexpectedly because of a bug/error, or through custom close functions?)
Or is it only freed if the program returns from main?
Yes, operating systems usually keep track of the memory allocated by each process and release it when those processes terminate - no matter how.
This, however, is not a valid reason to have memory leaks in your program in general: a program should always actively release the resources (including memory) it acquires, unless there is a really good - and documented - reason for not doing so.
Good reasons could be the dependency of a program's correctness on the order of destruction of global/singleton objects, or the expensiveness of actively freeing allocated memory before termination.
However, while admitting that there could be reasons why a programmer would intentionally avoid releasing memory, please be careful not to develop a too shallow mindset as to what counts as a "good reason" for not cleaning after yourself.
I would encourage you to get used to writing code that does release the memory it acquires, and to document explicitly, in a very clear form, every situation where you are not going to follow this practice. Again, while there might be corner cases that require this, releasing or not releasing acquired memory always has to be an active, intentional decision of the programmer.
NOTE: Quoting Steve Jessop from the comments, another good reason why you would not want to actively release memory is when your program needs to be terminated because it somehow reached an unexpected state - perhaps one that violates an invariant, or a pre-condition of a certain function. Usually, violating a precondition means Undefined Behavior.
Since - by definition - there is no sane way to recover from UB, you may want to immediately terminate your program rather than performing further actions that could have any outcome - including highly undesirable ones.
Not all operating systems do this (on modern OSes it is not a problem), and you had better not rely on it. Have a look here:
What REALLY happens when you don't free after malloc?

Why do I need to delete[]?

Let's say I have a function like this:
int main()
{
    char* str = new char[10];
    for (int i = 0; i < 5; i++)
    {
        //Do stuff with str
    }
    delete[] str;
    return 0;
}
Why would I need to delete str if I am going to end the program anyways?
I wouldn't care if that memory goes to a land full of unicorns if I am just going to exit, right?
Is it just good practice?
Does it have deeper consequences?
If in fact your question really is "I have this trivial program, is it OK that I don't free a few bytes before it exits?" the answer is yes, that's fine. On any modern operating system that's going to be just fine. And the program is trivial; it's not like you're going to be putting it into a pacemaker or running the braking systems of a Toyota Camry with this thing. If the only customer is you then the only person you can possibly impact by being sloppy is you.
The problem then comes in when you start to generalize to non-trivial cases from the answer to this question asked about a trivial case.
So let's instead ask two questions about some non-trivial cases.
I have a long-running service that allocates and deallocates memory in complex ways, perhaps involving multiple allocators hitting multiple heaps. Shutting down my service in the normal mode is a complicated and time-consuming process that involves ensuring that external state -- files, databases, etc -- are consistently shut down. Should I ensure that every byte of memory that I allocated is deallocated before I shut down?
Yes, and I'll tell you why. One of the worst things that can happen to a long-running service is if it accidentally leaks memory. Even tiny leaks can add up to huge leaks over time. A standard technique for finding and fixing memory leaks is to instrument the allocation heaps so that at shutdown time they log all the resources that were ever allocated without being freed. Unless you like chasing down a lot of false positives and spending a lot of time in the debugger, always free your memory even if doing so is not strictly speaking necessary.
The user is already expecting that shutting the service down might take billions of nanoseconds, so who cares if you cause a little extra pressure on the virtual allocator making sure that everything is cleaned up? This is just the price you pay for big, complicated software. And it's not like you're shutting down the service all the time, so again, who cares if it's a few milliseconds slower than it could be?
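A minimal sketch of that kind of instrumentation; real leak detectors also record the call stack of every allocation, but even a simple live-allocation counter reported at shutdown illustrates the idea:

#include <atomic>
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <new>

static std::atomic<long> g_live{0};   // number of allocations not yet freed

void* operator new(std::size_t n)
{
    ++g_live;
    if (void* p = std::malloc(n)) return p;
    throw std::bad_alloc{};
}

void operator delete(void* p) noexcept
{
    if (p) { --g_live; std::free(p); }
}

// Printed when static objects are destroyed at shutdown; a nonzero count
// means something was allocated and never freed.
struct LeakReport
{
    ~LeakReport() { std::fprintf(stderr, "live allocations at exit: %ld\n", g_live.load()); }
} g_report;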
I have that same long-running service. If I detect that one of my internal data structures is corrupt I wish to "fail fast". The program is in an undefined state, it is likely running with elevated privileges, and I am going to assume that if I detect corrupted state, it is because my service is actively being attacked by hostile parties. The safest thing to do is to shut down the service immediately. I would rather allow the attackers to deny service to the clients than to risk the service staying up and compromising my users' data further. In this emergency shutdown scenario should I make sure that every byte of memory I allocated is freed?
Of course not. The operating system is going to take care of that for you. If your heap is corrupt, the attackers may be hoping that you free memory as part of their exploit. Every millisecond counts. And why would you bother polishing the doorknobs and mopping the kitchen before you drop a tactical nuke on the building?
So the answer to the question "should I free memory before my program exits?" is "it depends on what your program does".
Yes, it is good practice. You should NEVER assume that your OS will take care of your memory deallocation; if you get into this habit, it will screw you later on.
To answer your question, however, upon exiting from the main, the OS frees all memory held by that process, so that includes any threads that you may have spawned or variables allocated. The OS will take care of freeing up that memory for others to use.
Important note: delete's freeing of memory is almost just a side-effect. The important thing it does is to destruct the object. With RAII designs, this could mean anything from closing files, freeing OS handles, terminating threads, or deleting temporary files.
Some of these actions would be handled by the OS automatically when your process exits, but not all.
In your example, there's no reason NOT to call delete. But there's no reason to call new either, so you can sidestep the issue this way:
char str[10];
Or, you can sidestep the delete (and the exception safety issues involved) by using smart pointers...
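For instance, a minimal sketch using std::unique_ptr (a std::vector<char> or std::string would serve equally well here):

#include <memory>

int main()
{
    // The array is released automatically when str goes out of scope,
    // even if an exception is thrown inside the loop.
    std::unique_ptr<char[]> str(new char[10]);

    for (int i = 0; i < 5; i++)
    {
        // Do stuff with str.get()
    }

    return 0;   // no explicit delete[] needed
}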
So, generally you should always be making sure your object's lifetime is properly managed.
But it's not always easy: Workarounds for the static initialization order fiasco often mean that you have no choice but to rely on the OS cleaning up a handful of singleton-type objects for you.
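A typical example of that workaround is the construct-on-first-use idiom, where the object is intentionally never deleted (the Registry type here is made up for illustration):

struct Registry
{
    void add(int /*value*/) { /* ... */ }
};

Registry& registry()
{
    // Leaked on purpose: because it is never destroyed, its destruction order
    // can't clash with other static objects; the OS reclaims it at exit.
    static Registry* instance = new Registry;
    return *instance;
}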
Contrary answer: No, it is a waste of time. A program with a vast amount of allocated data would have to touch nearly every page in order to return all of the allocations to the free list. This wastes CPU time, creates memory pressure for uninteresting data, and possibly even causes the process to swap pages back in from disk. Simply exiting releases all of the memory back to the OS without any further action.
(not that I disagree with the reasons in "Yes", I just think there are arguments both ways)
Your operating system should take care of the memory and clean it up when you exit your program, but it is in general good practice to free any memory you have reserved. Personally, I think it is best to get into the mindset of doing so, because while you are writing simple programs, you are most likely doing so to learn.
Either way, the only way to guaranteed that the memory is freed up is by doing so yourself.
new and delete are reserved keyword brothers. They should cooperate with each other through a code block or through the parent object's lifecycle. Whenever the younger brother commits a fault (new), the older brother will want to clean (delete) it up. Then the mother (your program) will be happy and proud of them.
I cannot agree more with Eric Lippert's excellent advice:
So the answer to the question "should I free memory before my program exits?" is "it depends on what your program does".
Other answers here have provided arguments for and against both, but the real crux of the matter is what your program does. Consider a more non-trivial example where the dynamically allocated instance is of a custom class whose destructor performs actions that produce side effects. In such a situation the question of whether memory leaks or not is the trivial part; the more important problem is that failing to call delete on such a class instance results in undefined behavior for any program that depends on those destructor side effects.
[basic.life] 3.8 Object lifetime
Para 4:
A program may end the lifetime of any object by reusing the storage which the object occupies or by explicitly calling the destructor for an object of a class type with a non-trivial destructor. For an object of a class type with a non-trivial destructor, the program is not required to call the destructor explicitly before the storage which the object occupies is reused or released; however, if there is no explicit call to the destructor or if a delete-expression (5.3.5) is not used to release the storage, the destructor shall not be implicitly called and any program that depends on the side effects produced by the destructor has undefined behavior.
So the answer to your question is, as Eric says, "it depends on what your program does".
It's a fair question, and there are a few things to consider when answering:
some objects have more complex destructors which don't just release memory when they're deleted. They may have other side effects, which you don't want to skip.
It is not guaranteed by the C++ standard that your memory will be released when the process terminates. (Of course on a modern OS it will be freed, but if you were on some weird OS which didn't do that, you'd have to free your memory properly.)
on the other hand, running destructors at program exit can actually take up quite a lot of time, and if all they do is release memory (which would be released anyway), then yes, it makes a lot of sense to just short-circuit that and exit immediately instead.
Most operating systems will reclaim memory upon process exit. Exceptions may include certain RTOS's, old mobile devices etc.
In an absolute sense your app won't leak memory; however, it's good practice to clean up memory you allocate even if you know it won't cause a real leak. The issue is that leaks are much, much harder to fix than avoiding them in the first place. Let's say you decide to move the functionality in your main() into another function. You may end up with a real leak.
It's also bad aesthetics, many developers will look at the unfreed 'str' and feel slight nausea :(
Why would I need to delete str if I am going to end the program anyways?
Because you don't want to be lazy ...
I wouldn't care if that memory goes to a land full of unicorns if I am just going to exit, right?
Nope, I don't care about the land of unicorns either. The Land of Arwen is a different matter; then we could cut their horns off and put them to good use (I've heard it's a good aphrodisiac).
Is it just good practice?
It is just good practice.
Does it have deeper consequences?
Someone else has to clean up after you. Maybe you like that, I moved out from under my parents' roof many years ago.
Place a while(1) loop around your code without the delete. The complexity of the code does not matter; memory leaks are a function of how long the process runs.
From a debugging perspective, not releasing system resources (file handles, etc.) can cause more significant and harder-to-find bugs ("why can't I write to this file?"). Memory leaks, while important, are typically much easier to diagnose. Bad style will become more of a problem if you start working with threads.
int main()
{
    while (1)
    {
        char* str = new char[10];
        for (int i = 0; i < 5; i++)
        {
            //Do stuff with str
        }
        // no delete[] here, so every iteration leaks another 10 bytes
    }
    return 0;
}
Another reason that I haven't seen mentioned yet is to keep the output of static and dynamic analyzer tools (e.g. valgrind or Coverity) cleaner and quieter. Clean output with zero memory leaks or zero reported issues means that when a new one pops up it is easier to detect and fix.
You never know how your simple example will be used or evolved. It's better to start as clean and crisp as possible.
Not to mention that if you are going to apply for a job as a C++ programmer, there is a very good chance that you won't get past the interview because of the missing delete. First, programmers usually don't like any leaks (and the person interviewing you will surely be one of them), and second, most companies (all I've worked in, at least) have a "no-leak" policy. Generally, the software you write is supposed to run for quite a while, creating and destroying objects on the go. In such an environment leaks can lead to disasters...
You've got a lot of answers based on professional experience. Here is a naive answer, but one that I consider to be the fact.
Summary
3. Does it have deeper consequences?
A: I'll answer this in some detail.
2. Is it just good practice?
A: It is considered good practice. Release resources/memory you've acquired once you're sure it is no longer used.
1. Why would I need to delete str if I am going to end the program anyways? I wouldn't care if that memory goes to a land full of unicorns if I am just going to exit, right?
A: You may or may not need to; in fact, it's you who decides why. Some explanations follow.
I think it depends. Here are some assumed questions; the term program may mean either an application or a function.
Q: Does it depend on what the program does?
A: If destroying the universe is acceptable, then no. However, the program might not work correctly as expected, and might even fail to complete what it is supposed to do. You might want to think seriously about why you would build a program like this.
Q: Does it depend on how the program is complicated?
A: No. See Explanation.
Q: Does it depend on what the stability of the program is expected?
A: Closely.
And I consider that it depends on:
What is the universe of the program?
What is the expectation of how the program finishes its work?
How much does the program care about others, and about the universe it lives in?
About the term universe, see the Explanation below.
In summary, it depends on what you care about.
Explanation
Important: If we define the term program as a function, then its universe is the application. Many details are omitted here; as an idea for understanding, it's long enough, though.
We may have seen this kind of diagram illustrating the relationship between application software and system software.
But to be aware of the scope each one covers, I'd suggest a reversed layout. Since we are talking about software only, the hardware layer is omitted in the following diagram.
With this diagram, we realize that the OS covers the biggest scope, which is the current universe; sometimes we call it the environment. You may imagine that the whole architecture consists of a lot of disks like the diagram, forming either a cylinder or a torus (a ball is fine, but difficult to imagine). Here I should mention that the outermost layer, the OS, is in fact a unibody; the runtime may be either single or multiple, depending on the implementation.
It's important that the runtime is responsible to both the OS and the applications, but the latter is more critical. The runtime is the universe of the applications; if it is destroyed, all applications running under it are gone.
Unlike humans on the Earth: we live here, but we don't consist of the Earth, and we could still live in some other suitable environment if the Earth were being destroyed and we weren't there.
However, we can no longer exist when the universe is destroyed, because we don't just live in the universe, we also consist of it.
As mentioned above, the runtime is also responsible to the OS. The left circle in the following diagram is what that may look like.
This is mostly like a C program in the OS. When the relationship between an application and the OS matches this, it is the same situation as the runtime in the OS above. In this diagram, the OS is the universe of the applications. The reason the applications here should be responsible to the OS is that the OS might not virtualize their code, or might allow itself to be crashed. If the OS always prevents them from doing so, then it is self-responsible no matter what the applications do. But think about drivers: that is one of the scenarios where the OS must be allowed to crash, since this kind of application is treated as part of the OS.
Finally, let's look at the right circle in the diagram above. In this case, the application itself is the universe. Sometimes we call this kind of application an operating system. If an OS never allows custom code to be loaded and run, then it does everything itself. Even if it does allow it, after the OS itself terminates, the memory goes nowhere but to the hardware. Any deallocation that may be necessary has to happen before it terminates.
So, how much does your program care about the others? How much does it care about its universe? And what is the expectation of how the program finishes its work? It depends on what you care about.
TECHNICALLY, a programmer shouldn't rely on the OS to do anything.
The OS isn't required to reclaim lost memory in this fashion.
If you do write the code that deletes all your dynamically allocated memory, then you are future proofing the code and letting others use it in a larger project.
Source: Allocation and GC Myths(PostScript alert!)
Allocation Myth 4: Non-garbage-collected programs should always deallocate all memory they allocate.
The Truth: Omitted deallocations in frequently executed code cause growing leaks. They are rarely acceptable. But programs that retain most allocated memory until program exit often perform better without any intervening deallocation. Malloc is much easier to implement if there is no free.
In most cases, deallocating memory just before program exit is pointless. The OS will reclaim it anyway. Free will touch and page in the dead objects; the OS won't.
Consequence: Be careful with "leak detectors" that count allocations. Some "leaks" are good!
I think it's a very poor practice to use malloc/new without calling free/delete.
If the memory's going to get reclaimed anyway, what harm can there be from explicitly deallocating when you need to?
Maybe if the OS "reclaims" memory faster than free does, you'll see increased performance; but this technique won't help you with any program that must remain running for a long period of time.
Having said that, I'd recommend you use free/delete.
If you get into this habit, who's to say that you won't one day accidentally apply this approach somewhere it matters?
One should always deallocate resources after one is done with them, be it file handles/memory/mutexes. By having that habit, one will not make that sort of mistake when building servers. Some servers are expected to run 24x7. In those cases, any leak of any sort means that your server will eventually run out of that resource and hang/crash in some way. For a short utility program, yeah, a leak isn't that bad. For any server, any leak is death. Do yourself a favor. Clean up after yourself. It's a good habit.
Think about your class 'A' having to deconstruct. If you don't call 'delete' on 'a', that destructor won't get called. Usually, that won't really matter if the process ends anyway. But what if the destructor has to release e.g. objects in a database? Flush a cache to a logfile? Write a memory cache back to disk? You see, it's not just 'good practice' to delete objects; in some situations it is required.
Instead of talking about this specific example, I will talk about general cases. Generally, it is important to explicitly call delete to deallocate memory because (in the case of C++) you may have some code in the destructor that you want to execute, like maybe writing some data to a log file or sending a shutdown signal to some other process, etc. If you let the OS free your memory for you, the code in your destructor will not be executed.
On the other hand, most operating systems will deallocate the memory when your program ends. But it is good practice to deallocate it yourself, and as the destructor example above shows, the OS won't call your destructor, which can create undesirable behavior in certain cases!
I personally consider it bad practice to rely on the OS to free your memory (even though it will do so), because if later on you have to integrate your code with a larger program, you will spend hours tracking down and fixing memory leaks!
So clean your room before leaving!
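A small hedged sketch of the kind of destructor side effect being described (the LogCache class is invented for illustration): the buffered entries only reach disk if the destructor actually runs.

#include <cstdio>
#include <string>

class LogCache
{
public:
    explicit LogCache(std::string path) : path_(std::move(path)) {}
    void add(const std::string& line) { buffer_ += line; }   // stays in memory

    ~LogCache()
    {
        // The side effect the program depends on: write the cache to disk.
        if (std::FILE* f = std::fopen(path_.c_str(), "w"))
        {
            std::fputs(buffer_.c_str(), f);
            std::fclose(f);
        }
    }

private:
    std::string path_;
    std::string buffer_;
};

int main()
{
    LogCache* cache = new LogCache("run.log");
    cache->add("computation finished\n");
    delete cache;   // skip this and ~LogCache() never runs: the OS reclaims the
                    // memory, but run.log is never written
    return 0;
}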

Is it acceptable not to deallocate memory

I'm working on a project that is supposed to be used from the command line with the following syntax:
program-name input-file
The program is supposed to process the input, compute some stuff and spit out results on stdout.
My language of choice is C++ for several reasons I'm not willing to debate. The computation phase will be highly symbolic (think compiler) and will use pretty complex dynamically allocated data structures. In particular, it's not amenable to RAII style programming.
I'm wondering if it is acceptable to forget about freeing memory, given that I expect the entire computation to consume less than the available memory and that the OS is free to reclaim all the memory in one step after the program finishes (assume the program terminates in seconds). What are your feelings about this?
As a backup plan, if my project ever needs to run as a server or interactively, I figured that I can always retrofit a garbage collector into the source code. Does anyone have experience using garbage collectors for C++? Do they work well?
It shouldn't cause any problems in the specific situation described in the question.
However, it's not exactly normal. Static analysis tools will complain about it. Most importantly, it builds bad habits.
Sometimes not deallocating memory is the right thing to do.
I used to write compilers. After building the parse tree and traversing it to write the intermediate code, we would simply just exit. Deallocating the tree would have
added a bit of slowness to the compiler, which we wanted of course to be as fast as possible.
taken up code space
taken time to code and test the deallocators
violated the "no code executes better than 'no code'" dictum.
HTH! FWIW, this was "back in the day" when memory was non-virtual and minimal, the boxes were much slower, and the first two were non-trivial considerations.
My feeling would be something like "WTF!!!"
Look at it this way:
You chose a programming language that does not include a garbage collector; we are not allowed to ask why.
You are basically stating that you are too lazy to care about freeing the memory.
Well, WTF again. Laziness isn't a good reason for anything, least of all playing around with memory without freeing it.
Just free the memory. Not doing so is bad practice; the scenario may change, and there can be a million reasons you'll need that memory freed, while the only reason for not doing it is laziness. Don't pick up bad habits; get used to doing things right, and you'll tend to do them right in the future!
Not deallocating memory should not be a problem, but it is bad practice.
Joel Coehoorn is right:
It shouldn't cause any problems. However, it's not exactly normal. Static analysis tools will complain about it. Most importantly, it builds bad habits.
I'd also like to add that thinking about deallocation as you write the code is probably a lot easier than trying to retrofit it afterwards. So I would probably make it deallocate memory; you don't know how your program might be used in future.
If you want a really simple way to free memory, look at the "pools" concept that Apache uses.
Well, I think that it's not acceptable. You've already alluded to potential future problems yourself. Don't think they're necessarily easy to solve.
Things like “… given that I expect the entire computation to consume less …” are famous last phrases. Similarly, refitting code with some feature is one of these things they all talk of and never do.
Not deallocating memory might sound good in the short run but can potentially create a huge load of problems in the long run. Personally, I just don't think that's worth it.
There are two strategies. Either you build in the GC design from the very beginning. It's more work but it will pay off. For a lot of small objects it might pay to use a pool allocator and just keep track of the memory pool. That way, you can keep track of the memory consumption and simply avoid a lot of problems that similar code, but without allocation pool, would create.
Or you use smart pointers throughout the program from the beginning. I actually prefer this method even though it clutters the code. One solution is to rely heavily on templates, which takes out a lot of redundancy when referring to types.
Take a look at projects such as WebKit. Their computation phase resembles yours since they build parse trees for HTML. They use smart pointers throughout their program.
Finally: “It’s a question of style … Sloppy work tends to be habit-forming.”
– Silk in Castle of Wizardry by David Eddings.
will use pretty complex dynamically allocated data structures. In particular, it's not amenable to RAII style programming.
I'm almost sure that's an excuse for lazy programming. Why can't you use RAII? Is it because you don't want to keep track of your allocations, there's no pointer to them that you keep? If so, how do you expect to use the allocated memory - there's always a pointer to it that contains some data.
Is it because you don't know when it should be released? Leave the memory in RAII objects, each one referenced by something, and they'll all free each other in a trickle-down fashion when the containing object gets freed. This is particularly important if you want to run it as a server one day: each iteration of the server effectively runs a 'master' object that holds all the others, so you can just delete it and all the memory disappears. It also saves you from having to retrofit a GC.
Is it because all your memory is allocated and kept in-use all the time, and only freed at the end? If so see above.
If you really, really cannot think of a design where you don't leak memory, at least have the decency to use a private heap. Destroy that heap before you quit and you'll have a better design already, if a little 'hacky'.
There are instances where memory leaks are ok - static variables, globally initialised data, things like that. These aren't generally large though.
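On Windows, the private-heap suggestion above might look roughly like this sketch using the Win32 heap API:

#include <windows.h>

int main()
{
    // Everything allocated from this heap disappears with one HeapDestroy call,
    // with no per-object bookkeeping.
    HANDLE heap = HeapCreate(0, 0, 0);

    void* node = HeapAlloc(heap, 0, 128);
    // ... build the complex data structures from this heap ...
    (void)node;

    HeapDestroy(heap);   // releases every allocation made from 'heap' at once
    return 0;
}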
Reference-counting smart pointers like shared_ptr in Boost and TR1 could also help you manage your memory in a simple manner.
The drawback is that you have to wrap every pointer that uses these objects.
I've done this before, only to find that, much later, I needed the program to be able to process several inputs without separate commands, or that the guts of the program were so useful that they needed to be turned into a library routine that could be called many times from within another program that was not expected to terminate. It was much harder to go back later and re-engineer the program than it would have been to make it leak-less from the start.
So, while it's technically safe as you've described the requirements, I advise against the practice since it's likely that your requirements may someday change.
If the run time of your program is very short, it should not be a problem. However, being too lazy to free what you allocate and losing track of what you allocate are two entirely different things. If you have simply lost track, it's time to ask yourself whether you actually know what your code is doing to a computer.
If you are just in a hurry or lazy and the life of your program is small in relation to what it actually allocates (i.e. allocating 10 MB per second is not small if running for 30 seconds) .. then you should be OK.
The only 'noble' argument regarding freeing allocated memory sets in when a program exits: should one free everything to keep valgrind from complaining about leaks, or just let the OS do it? That entirely depends on the OS and on whether your code might become a library rather than a short-running executable.
Leaks during run time are generally bad, unless you know your program will only run for a short time and won't push other programs, far more important than yours as far as the OS is concerned, into heavy paging.
What are your feeling about this?
Some OSes might not reclaim the memory, but I guess you're not intending to run on those OSes.
As a backup plan, if my project ever needs to run as a server or interactively, I figured that I can always retrofit a garbage collector into the source code.
Instead, I figure you can spawn a child process to do the dirty work, grab the output from the child process, let the child process die as soon as possible after that and then expect the O/S to do the garbage collection.
I have not personally used this, but since you are starting from scratch you may wish to consider the Boehm-Demers-Weiser conservative garbage collector
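Based on the collector's documented C API, a minimal usage sketch might look like this (link with -lgc):

#include <gc.h>   // Boehm-Demers-Weiser conservative collector

int main()
{
    GC_INIT();
    for (int i = 0; i < 1000000; ++i)
    {
        // Memory from GC_MALLOC is reclaimed automatically once unreachable;
        // no explicit free is required.
        char* block = static_cast<char*>(GC_MALLOC(64));
        block[0] = 'x';
    }
    return 0;
}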
The answer really depends on how large your program will be and what performance characteristics it needs to exhibit. If you never deallocate memory, your process's memory footprint will be much larger than it would otherwise be. Depending on the system, this could cause a lot of paging and slow down performance for you or for other applications on the system.
Beyond that, what everyone above says is correct. It probably won't cause harm in the short term, but it's a bad practice that you should avoid; you'll never be able to reuse the code. Trying to retrofit a GC afterwards will be a nightmare. Just think about going to each place you allocate memory and trying to retrofit it without breaking anything.
One more reason to avoid doing this: reputation. If you fail to deallocate, everyone who maintains the code will curse your name and your rep in the company will take a hit. "Can you believe how dumb he was? Look at this code."
If it is non-trivial for you to determine where to deallocate the memory, I would be concerned that other aspects of the data structure manipulation may not be fully understood either.
Apart from the fact that the OS (kernel and/or C/C++ library) can choose not to free the memory when the execution ends, your application should always provide proper freeing of allocated memory as a good practice. Why? Suppose you decide to extend that application or reuse the code; you'll quickly get in trouble if the code you had previously written hogs up the memory unnecessarily, after finishing its job. It's a recipe for memory leaks.
In general, I agree it's a bad practice.
For a one-shot program it can be OK, but it kinda looks like you don't know what you are doing.
There is one solution to your problem, though: use a custom allocator which preallocates larger blocks from malloc, and then, after the computation phase, instead of freeing all the little blocks from your custom allocator, just release the larger preallocated blocks of memory. Then you don't need to keep track of all the objects you need to deallocate and when. One guy who also wrote a compiler explained this approach to me many years ago, so if it worked for him, it will probably work for you as well.
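A minimal sketch of that approach (names invented; it ignores alignment and assumes no single request exceeds the block size):

#include <cstddef>
#include <cstdlib>
#include <vector>

class Arena
{
public:
    void* allocate(std::size_t n)
    {
        // Assumes n <= kBlockSize; real code would handle oversized requests
        // and alignment.
        if (blocks_.empty() || used_ + n > kBlockSize)
        {
            blocks_.push_back(static_cast<char*>(std::malloc(kBlockSize)));
            used_ = 0;
        }
        void* p = blocks_.back() + used_;
        used_ += n;
        return p;
    }

    ~Arena()
    {
        // One pass over the big blocks instead of freeing every little object.
        for (char* b : blocks_) std::free(b);
    }

private:
    static constexpr std::size_t kBlockSize = 1 << 20;   // 1 MiB per block
    std::vector<char*> blocks_;
    std::size_t used_ = 0;
};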
Try to use automatic variables in methods so that they will be freed automatically from the stack.
The only useful reason not to free heap memory is to save the tiny amount of computational power used by free(). You might lose any advantage if page faults become an issue due to large virtual memory needs with small physical memory resources. Some factors to consider are:
Whether you are allocating a few huge chunks of memory or many small chunks.
Whether the memory needs to be locked into physical memory.
Whether you are absolutely positive the code and memory needed will fit into 2 GB, for a Win32 system, including memory holes and padding.
That's generally a bad idea. You might encounter cases where the program tries to consume more memory than is available. Plus, you risk being unable to start several copies of the program.
You can still do this if you don't care about the issues mentioned.
When you exit from a program, the memory allocated is automatically returned to the system. So you don't have to deallocate the memory you allocated.
But deallocation becomes necessary when you move to bigger programs such as an OS or embedded systems, where the program is meant to run forever and hence even a small memory leak can be malicious.
Hence it is always recommended to deallocate the memory you have allocated.