We are all taught that you MUST free every pointer that is allocated. I'm a bit curious, though, about the real cost of not freeing memory. In some obvious cases, like when malloc() is called inside a loop or part of a thread execution, it's very important to free so there are no memory leaks. But consider the following two examples:
First, if I have code that's something like this:
#include <stdlib.h>

int main()
{
    char *a = malloc(1024);
    /* Do some arbitrary stuff with 'a' (no alloc functions) */
    return 0;
}
What's the real result here? My thinking is that the process dies and then the heap space is gone anyway so there's no harm in missing the call to free (however, I do recognize the importance of having it anyway for closure, maintainability, and good practice). Am I right in this thinking?
Second, let's say I have a program that acts a bit like a shell. Users can declare variables like aaa = 123 and those are stored in some dynamic data structure for later use. Clearly, it seems obvious that you'd use some solution that calls some *alloc function (hashmap, linked list, something like that). For this kind of program, it doesn't make sense to ever free after calling malloc because these variables must be present at all times during the program's execution and there's no good way (that I can see) to implement this with statically allocated space. Is it bad design to have a bunch of memory that's allocated but only freed as part of the process ending? If so, what's the alternative?
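For concreteness, here's a minimal sketch of the kind of structure I have in mind (all names made up), say a simple linked list that lives until the process exits:

#include <stdlib.h>
#include <string.h>

/* hypothetical variable store for the shell-like program */
struct var {
    char *name;          /* e.g. "aaa" */
    char *value;         /* e.g. "123" */
    struct var *next;
};

static struct var *vars = NULL;   /* head of the list; never freed before exit */

void set_var(const char *name, const char *value)
{
    struct var *v = (struct var *)malloc(sizeof *v);
    v->name  = strdup(name);      /* error checks omitted for brevity */
    v->value = strdup(value);
    v->next  = vars;
    vars     = v;
}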
Just about every modern operating system will recover all the allocated memory space after a program exits. The only exception I can think of might be something like Palm OS where the program's static storage and runtime memory are pretty much the same thing, so not freeing might cause the program to take up more storage. (I'm only speculating here.)
So generally, there's no harm in it, except the runtime cost of having more storage than you need. Certainly in the example you give, you want to keep the memory for a variable that might be used until it's cleared.
However, it's considered good style to free memory as soon as you don't need it any more, and to free anything you still have around on program exit. It's more of an exercise in knowing what memory you're using, and thinking about whether you still need it. If you don't keep track, you might have memory leaks.
On the other hand, the similar admonition to close your files on exit has a much more concrete result - if you don't, the data you wrote to them might not get flushed, or if they're a temp file, they might not get deleted when you're done. Also, database handles should have their transactions committed and then closed when you're done with them. Similarly, if you're using an object-oriented language like C++ or Objective-C, not freeing an object when you're done with it means the destructor will never get called, and any resources the class is responsible for might not get cleaned up.
Yes you are right, your example doesn't do any harm (at least not on most modern operating systems). All the memory allocated by your process will be recovered by the operating system once the process exits.
Source: Allocation and GC Myths (PostScript alert!)
Allocation Myth 4: Non-garbage-collected programs should always deallocate all memory they allocate.

The Truth: Omitted deallocations in frequently executed code cause growing leaks. They are rarely acceptable. But programs that retain most allocated memory until program exit often perform better without any intervening deallocation. Malloc is much easier to implement if there is no free.

In most cases, deallocating memory just before program exit is pointless. The OS will reclaim it anyway. Free will touch and page in the dead objects; the OS won't.

Consequence: Be careful with "leak detectors" that count allocations. Some "leaks" are good!
That said, you should really try to avoid all memory leaks!
Second question: your design is OK. If you need to store something until your application exits, then it's fine to do this with dynamic memory allocation. If you don't know the required size upfront, you can't use statically allocated memory.
=== What about future-proofing and code reuse? ===

If you don't write the code to free the objects, then you are limiting the code to only being safe to use when you can depend on the memory being freed by the process being closed, i.e. small one-time-use or "throw-away"[1] projects, where you know when the process will end.

If you do write the code that free()s all your dynamically allocated memory, then you are future-proofing the code and letting others use it in a larger project.

[1] Regarding "throw-away" projects: code used in "throw-away" projects has a way of not being thrown away. Next thing you know, ten years have passed and your "throw-away" code is still being used.
I heard a story about some guy who wrote some code just for fun to make his hardware work better. He said "just a hobby, won't be big and professional". Years later lots of people are using his "hobby" code.
You are correct: no harm is done, and it's faster to just exit.
There are various reasons for this:
All desktop and server environments simply release the entire memory space on exit(). They are unaware of program-internal data structures such as heaps.
Almost all free() implementations do not ever return memory to the operating system anyway.
More importantly, it's a waste of time when done right before exit(). At exit, memory pages and swap space are simply released. By contrast, a series of free() calls will burn CPU time and can result in disk paging operations, cache misses, and cache evictions.
Regarding the possibility that future code reuse justifies the certainty of pointless operations: that's a consideration, but it's arguably not the Agile way. YAGNI!
I completely disagree with everyone who says OP is correct or there is no harm.
Everyone is talking about modern and/or legacy OSes.

But what if I'm in an environment where I simply have no OS?

Where there isn't anything?

Imagine you are using thread-style interrupts and allocate memory.
The C standard, ISO/IEC 9899, states the lifetime of allocated memory as follows:
7.20.3 Memory management functions

1 The order and contiguity of storage allocated by successive calls to the calloc, malloc, and realloc functions is unspecified. The pointer returned if the allocation succeeds is suitably aligned so that it may be assigned to a pointer to any type of object and then used to access such an object or an array of such objects in the space allocated (until the space is explicitly deallocated). The lifetime of an allocated object extends from the allocation until the deallocation. [...]
So it is not guaranteed that the environment does the freeing job for you. Otherwise the last sentence would have to end with: "Or until the program terminates."

So in other words: not freeing memory is not just bad practice. It produces non-portable, non-conforming C code, which can at best be regarded as "correct, if the following [...] is supported by the environment".
But in cases where you have no OS at all, no one is doing the job for you. (I know that generally you don't allocate and reallocate memory on embedded systems, but there are cases where you may want to.)

So speaking of plain C in general (which is how the OP is tagged), this simply produces erroneous and non-portable code.
I typically free every allocated block once I'm sure that I'm done with it. Today, my program's entry point might be main(int argc, char *argv[]), but tomorrow it might be foo_entry_point(char **args, struct foo *f), invoked through a function pointer.
So, if that happens, I now have a leak.
Regarding your second question, if my program took input like a=5, I would allocate space for a, or re-allocate the same space on a subsequent a="foo". This would remain allocated until:
The user typed 'unset a'
My cleanup function was entered, either servicing a signal or the user typed 'quit'
I cannot think of any modern OS that does not reclaim memory after a process exits. Then again, free() is cheap, so why not clean up? As others have said, tools like valgrind are great for spotting leaks that you really do need to worry about. Even though the blocks in your example would be labeled 'still reachable', they are just extra noise in the output when you're trying to ensure you have no leaks.

Another myth is "If it's in main(), I don't have to free it". This is incorrect. Consider the following:
char *t;
int i;

for (i = 0; i < 255; i++) {
    t = strdup(foo->name);      /* allocates a fresh copy each iteration */
    let_strtok_eat_away_at(t);  /* 't' is overwritten next pass, never freed */
}
If that came prior to forking / daemonizing (and in theory running forever), your program has just leaked the memory behind t 255 times.
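For contrast, a leak-free version of the same loop frees each copy as soon as it's done with it:

for (i = 0; i < 255; i++) {
    t = strdup(foo->name);
    let_strtok_eat_away_at(t);
    free(t);                    /* release the copy before it is overwritten */
}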
A good, well written program should always clean up after itself. Free all memory, flush all files, close all descriptors, unlink all temporary files, etc. This cleanup function should be reached upon normal termination, or upon receiving various kinds of fatal signals, unless you want to leave some files laying around so you can detect a crash and resume.
Really, be kind to the poor soul who has to maintain your stuff when you move on to other things .. hand it to them 'valgrind clean' :)
It is completely fine to leave memory unfreed when you exit; malloc() allocates the memory from the memory area called "the heap", and the complete heap of a process is freed when the process exits.
That being said, one reason why people still insist that it is good to free everything before exiting is that memory debuggers (e.g. valgrind on Linux) detect the unfreed blocks as memory leaks, and if you have also "real" memory leaks, it becomes more difficult to spot them if you also get "fake" results at the end.
This code will usually work alright, but consider the problem of code reuse.
You may have written some code snippet which doesn't free allocated memory, and it happens to be run in such a way that memory is automatically reclaimed afterwards. Seems all right.
Then someone else copies your snippet into his project in such a way that it is executed one thousand times per second. That person now has a huge memory leak in his program. Not very good in general, usually fatal for a server application.
Code reuse is typical in enterprises. Usually the company owns all the code its employees produce and every department may reuse whatever the company owns. So by writing such "innocently-looking" code you cause potential headache to other people. This may get you fired.
What's the real result here?
Your program leaked the memory. Depending on your OS, it may have been recovered.
Most modern desktop operating systems do recover leaked memory at process termination, making it sadly common to ignore the problem (as can be seen by many other answers here.)
But you are relying on a safety feature that is not part of the language, one you should not rely upon. Your code might run on a system where this behaviour does result in a "hard" memory leak, next time.

Your code might end up running in kernel mode, or on vintage / embedded operating systems which do not employ memory protection as a tradeoff. (MMUs take up die space, memory protection costs additional CPU cycles, and it is not too much to ask of a programmer to clean up after himself.)
You can use and re-use memory (and other resources) any way you like, but make sure you deallocated all resources before exiting.
If you're using the memory you've allocated, then you're not doing anything wrong. It becomes a problem when you write functions (other than main) that allocate memory without freeing it, and without making it available to the rest of your program. Then your program continues running with that memory allocated to it, but no way of using it. Your program and other running programs are deprived of that memory.
Edit: It's not 100% accurate to say that other running programs are deprived of that memory. The operating system can always let them use it at the expense of swapping your program out to virtual memory (</handwaving>). The point is, though, that if your program frees memory that it isn't using then a virtual memory swap is less likely to be necessary.
There's actually a section in the OSTEP online textbook for an undergraduate course in operating systems which discusses exactly your question.
The relevant section is "Forgetting To Free Memory" in the Memory API chapter on page 6, which gives the following explanation:
In some cases, it may seem like not calling free() is reasonable. For example, your program is short-lived, and will soon exit; in this case, when the process dies, the OS will clean up all of its allocated pages and thus no memory leak will take place per se. While this certainly "works" (see the aside on page 7), it is probably a bad habit to develop, so be wary of choosing such a strategy.
This excerpt is in the context of introducing the concept of virtual memory. Basically at this point in the book, the authors explain that one of the goals of an operating system is to "virtualize memory," that is, to let every program believe that it has access to a very large memory address space.
Behind the scenes, the operating system will translate "virtual addresses" the user sees to actual addresses pointing to physical memory.
However, sharing resources such as physical memory requires the operating system to keep track of what processes are using it. So if a process terminates, then it is within the capabilities and the design goals of the operating system to reclaim the process's memory so that it can redistribute and share the memory with other processes.
EDIT: The aside mentioned in the excerpt is copied below.
ASIDE: WHY NO MEMORY IS LEAKED ONCE YOUR PROCESS EXITS

When you write a short-lived program, you might allocate some space using malloc(). The program runs and is about to complete: is there need to call free() a bunch of times just before exiting? While it seems wrong not to, no memory will be "lost" in any real sense. The reason is simple: there are really two levels of memory management in the system. The first level of memory management is performed by the OS, which hands out memory to processes when they run, and takes it back when processes exit (or otherwise die). The second level of management is within each process, for example within the heap when you call malloc() and free(). Even if you fail to call free() (and thus leak memory in the heap), the operating system will reclaim all the memory of the process (including those pages for code, stack, and, as relevant here, heap) when the program is finished running. No matter what the state of your heap in your address space, the OS takes back all of those pages when the process dies, thus ensuring that no memory is lost despite the fact that you didn't free it.

Thus, for short-lived programs, leaking memory often does not cause any operational problems (though it may be considered poor form). When you write a long-running server (such as a web server or database management system, which never exit), leaked memory is a much bigger issue, and will eventually lead to a crash when the application runs out of memory. And of course, leaking memory is an even larger issue inside one particular program: the operating system itself. Showing us once again: those who write the kernel code have the toughest job of all...
from page 7 of the Memory API chapter of Operating Systems: Three Easy Pieces, Remzi H. Arpaci-Dusseau and Andrea C. Arpaci-Dusseau, Arpaci-Dusseau Books, March 2015 (Version 0.90)
There's no real danger in not freeing your variables before exit. But if you point a pointer at a new block of memory without freeing the block it previously pointed to, that first block becomes inaccessible yet still takes up space. This is what's called a memory leak, and if you do this with regularity your process will consume more and more memory, taking system resources away from other processes.
If the process is short-lived you can often get away with doing this as all allocated memory is reclaimed by the operating system when the process completes, but I would advise getting in the habit of freeing all memory you have no further use for.
You are correct, memory is automatically freed when the process exits. Some people strive not to do extensive cleanup when the process is terminated, since it will all be relinquished to the operating system. However, while your program is running you should free unused memory. If you don't, you may eventually run out or cause excessive paging if your working set gets too big.
You are absolutely correct in that respect. In small trivial programs where a variable must exist until the death of the program, there is no real benefit to deallocating the memory.
In fact, I had once been involved in a project where each execution of the program was very complex but relatively short-lived, and the decision was to just keep memory allocated and not destabilize the project by making mistakes deallocating it.
That being said, in most programs this is not really an option, or it can lead you to run out of memory.
It depends on the scope of the project that you're working on. In the context of your question, and I mean just your question, then it doesn't matter.
For a further explanation (optional), some scenarios I have noticed from this whole discussion is as follow:
(1) - If you're working in an embedded environment where you cannot rely on an OS to reclaim the memory for you, then you should free it, since unnoticed memory leaks can really crash the program.

(2) - If you're working on a personal project that you won't disclose to anyone else, then you can skip it (assuming it runs on a mainstream OS) or include it for "best practices" sake.

(3) - If you're working on a project and plan to have it open source, then you need to do more research into your audience and figure out whether freeing the memory would be the better choice.

(4) - If you have a large library and your audience runs only mainstream OSes, then you don't need to free it, as their OS will do it for them. In the meantime, by not freeing, your library/program may make overall performance snappier, since the program does not have to tear down every data structure on shutdown (imagine a very slow, excruciating wait to shut down your computer before leaving the house...)

I can go on and on specifying which course to take, but it ultimately depends on what you want to achieve with your program. Freeing memory is considered good practice in some cases and not so much in others, so it ultimately depends on the specific situation you're in and on asking the right questions at the right time. Good luck!
If you're developing an application from scratch, you can make some educated choices about when to call free. Your example program is fine: it allocates memory, maybe you have it work for a few seconds, and then closes, freeing all the resources it claimed.
If you're writing anything else, though -- a server/long-running application, or a library to be used by someone else -- you should expect to call free on everything you malloc.
Ignoring the pragmatic side for a second, it's much safer to follow the stricter approach, and force yourself to free everything you malloc. If you're not in the habit of watching for memory leaks whenever you code, you could easily spring a few leaks. So in other words, yes -- you can get away without it; please be careful, though.
If a program forgets to free a few megabytes before it exits, the operating system will free them. But if your program runs for weeks at a time and a loop inside it forgets to free a few bytes in each iteration, you will have a mighty memory leak that eats up all the available memory in your computer unless you reboot on a regular basis. In short, even small memory leaks can be bad if the program is used for a seriously big task, even if it wasn't originally designed for one.
It depends on the OS environment the program is running in, as others have already noted, and for long-running processes, freeing memory and avoiding even very slow leaks is always important. But if the operating system deals with it, as Unix has done for example since probably forever, then you don't need to free memory, nor close files (the kernel closes all open file descriptors when a process exits.)
If your program allocates a lot of memory, it may even be beneficial to exit without "hesitation". I find that when I quit Firefox, it spends several minutes paging in gigabytes of memory in many processes. I guess this is due to having to call destructors on C++ objects. This is actually terrible. Some might argue that this is necessary to save state consistently, but in my opinion, long-running interactive programs like browsers, editors and design programs, just to mention a few, should ensure that any state information, preferences, open windows/pages, documents etc. is frequently written to permanent storage, to avoid loss of work in case of a crash. Then this state-saving can be performed again quickly when the user elects to quit, and when completed, the processes should just exit immediately.
All memory allocated for this process will be marked unused by the OS and then reused, because memory allocation is done by user-space functions.

Imagine the OS is a god, and memory is the material for creating a world for a process: the god uses some of the material to create a world (that is, the OS reserves some memory and creates a process in it). No matter what the creatures in that world do, material that doesn't belong to their world won't be affected. After the world expires, the OS, the god, can recycle the material that was allocated for it.

Modern OSes may differ in the details of releasing user-space memory, but that has to be a basic duty of an OS.
I think that your two examples are actually only one: the free() should occur only at the end of the process, which as you point out is useless since the process is terminating.
In your second example, though, the only difference is that you allow an undefined number of malloc() calls, which could lead to running out of memory. The only way to handle that situation is to check the return code of malloc() and act accordingly.
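For instance, a minimal sketch of that check (a fragment: size stands for whatever the program needs, and the fragment assumes <stdio.h> and <stdlib.h>; how you recover is up to the application):

char *p = (char *)malloc(size);
if (p == NULL) {
    /* allocation failed: report it and bail out (or free caches and retry)
       instead of dereferencing a null pointer later */
    fprintf(stderr, "out of memory\n");
    exit(EXIT_FAILURE);
}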
I have a large embedded project running Linux, with various processes and threads. I can't log all the malloc and new calls, as that would make the box (an embedded set-top box) sluggish. Sluggishness might also cause a crash because of a mutex timeout or other things. Thus, I want to make a tool that can help debug memory-related issues, like memory overflow.

For example, say you malloc 4 bytes but write 8. This may corrupt the next allocated chunk of data: its header can be tampered with, so free() will fail or crash. How can I make a tool to detect such issues, and also track down memory leaks? Is there a way to do so? I can't use valgrind, as it slows down my STB. So I want to develop my own tool that can check for memory header corruption or memory leaks. Based on my choice, it would do either memory corruption checking or memory leak detection. Also, it should be lightweight.
Firstly there is probably no way to call this "simple".
Secondly if you are using C++ I highly suggest not using malloc/free but rather new/delete. The options for overriding those operators are much more flexible.
C++ really does provide a number of tools to improve memory safety:
smart pointers (the performance cost really is worth the safety improvement)
Encapsulating things in classes. For example, if you use std::array::at(i) it will throw an exception if your access is out of bounds.

Lastly, proper usage of asserts in your code can go a long way toward catching errors.
My point is merely that you should not depend on your debugging tools to negate the necessity of using good C++ programming methods.
OK, so next you need to override new and delete. A Google search will provide many ways to do this.
For your problem it probably makes more sense to overload delete/new globally.
Buffer overflow detection
This is the first part of your problem.
What you need to do is allocate additional memory in your overloaded new so that there are buffer regions before and after the memory you hand out, and then return only the centre part.
How big a buffer is your choice.
pseudo code:
inline void* operator new(size_t s)
{
    char* mem = (char*)malloc(s + 2*BUFFER);   // guard region on each side
    memset(mem, 0x5A, s + 2*BUFFER);           // fill everything with a known pattern
    return mem + BUFFER;                       // hand out only the centre part
}
// the matching operator delete must free((char*)p - BUFFER)
At some stage in the future you need to check that the BUFFER regions kept the value 0x5A. You should probably do this in the call to free(), but you can also have your own function do it, called periodically. To speed up this process, use a function like memcmp perhaps.
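As a sketch of that check (assuming the layout and BUFFER constant from the pseudo code above; guardsIntact is a made-up name, and the allocator must remember each block's size s):

bool guardsIntact(void* userPtr, size_t s)
{
    static unsigned char expected[BUFFER];
    memset(expected, 0x5A, BUFFER);                          // reference pattern

    char* base = (char*)userPtr - BUFFER;
    return memcmp(base, expected, BUFFER) == 0 &&            // front guard
           memcmp(base + BUFFER + s, expected, BUFFER) == 0; // rear guard
}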
Memory leak detection
Detecting memory leaks is not trivial.
Firstly, I suggest using stack-based objects whenever possible, to altogether avoid allocating memory on the heap when it's not needed.

The main question regarding memory leaks is knowing whether a certain memory block should still be allocated or not.
99% of your memory leak problems can probably be solved just by using smart pointers.
However one of the most difficult memory leaks to catch is that of a growing data structure. (say for example a linked list that grows slowly over time)
Firstly, in your overloaded new/malloc functions, keep a list of all memory currently allocated, and also a counter of the total amount of memory allocated.
Method 1: threshold detection:
Essentially, every time your program's memory usage exceeds a threshold amount, you report this and increase the threshold. If your program continues to exceed thresholds as it keeps running, something is wrong.
Method 2: Comparative analysis:
In pseudo code:

Value1 = currentAmountOfMemoryUsed;
runSomeCode();
if (currentAmountOfMemoryUsed != Value1) reportProblem();
Whether this is possible depends a lot on what happens in runSomeCode(), as some code can legitimately "save up" some memory for when it runs again later.
Method 3: Leak detection on program exit:
The premise is that if your code is 100% correctly written, every bit of memory allocated should have been freed by the time your program exits.

This method, once again, is not always possible, because perhaps your program needs to run indefinitely, or it might segfault because of your errors long before this can be detected.
Compiler support
On a lower level, most compilers have some support for hooking into the whole memory management system, but the way to handle this is 100% compiler/platform specific, e.g. Visual Studio C++.

This is why I highly suggest not using malloc/free directly, as it is problematic to debug this way, and it also breaks the constructor/destructor design patterns of C++.
overriding malloc/free
There is however a more hands-on approach to overriding malloc/free.
That is by defining your own malloc/free functions.
Typically, under debugging, this will then use macros to include __FILE__ and __LINE__ in the call:

#ifndef NDEBUG
#define myMalloc(s) myMallocImplementation((s), __FILE__, __LINE__)
#else
#define myMalloc(s) malloc(s)
#endif

(Note: NDEBUG is defined in release builds, so the tracking implementation is active only in debug builds.)
What this allows is that your malloc implementation can then save the source location where the memory was allocated. This approach will however not catch malloc/free usage within libraries you are using.
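A minimal sketch of what that implementation might look like (recordAllocation is a made-up bookkeeping helper that would add the block to the list of live allocations):

void recordAllocation(void* p, size_t s, const char* file, int line);  /* hypothetical */

void* myMallocImplementation(size_t s, const char* file, int line)
{
    void* p = malloc(s);
    recordAllocation(p, s, file, line);   /* remember who allocated what, and where */
    return p;
}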
This is a bit harder to do with new/delete calls as it would normally require some amount of digging into the call-stack at run-time to find out who called your new() function and that again is fairly compiler specific.
Also see: MSDN blog article
Memory freezing
Given everything, I'd also like to mention something that is very common in safety-critical code (as used in motor vehicles and/or airplanes etc.).

Outside of initialization, a safety-critical program is usually not allowed to use malloc/free/new/delete. So all memory allocations must happen during initialization; once the program is up and running, malloc/free is frozen in some way. Any call to malloc/free after that will cause an assert.
This can be quite a heavy limitation to work with in a C++ environment but it does make for very robust code.
Note this does nothing for buffer overflow access or invalid pointer access problems.
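A minimal sketch of the freezing idea (the flag name is made up; a real system would typically hook its own allocator rather than global operator new):

#include <cassert>
#include <cstdlib>
#include <new>

static bool g_allocationsFrozen = false;   // set to true once init is complete

void* operator new(std::size_t s)
{
    assert(!g_allocationsFrozen && "allocation after initialization is forbidden");
    void* p = std::malloc(s);
    if (!p) throw std::bad_alloc();
    return p;
}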
I have read lots of programmers saying and writing that when programming in C/C++ there are lots of issues related to memory. I am planning to learn to program in C/C++. I have beginner knowledge of C/C++ and I want to see some short samples of why C/C++ can have issues with memory management. Please provide some samples.
There are many ways you can corrupt or leak memory in C or C++. These errors are some of the most difficult to diagnose, because they are often not easily reproducible.
For example, it is easy to fail to free memory you have allocated, or to free it twice. The following will do a "double free": it tries to free a twice and fails to free b:
char *a = malloc(128*sizeof(char));
char *b = malloc(128*sizeof(char));
b = a;       /* b's original allocation is now leaked */
free(a);
free(b);     /* double free: b == a, which was already freed */
An example buffer overrun, which corrupts arbitrary memory, follows. It is a buffer overrun because you do not know how long str is. If it is longer than 256 bytes, then it will write those bytes somewhere in memory, possibly overwriting your code, possibly not.
void somefunc(char *str) {
    char buff[256];
    strcpy(buff, str);   /* no bounds check: overflows if str is 256+ bytes */
}
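For comparison, a bounded version of the same function (note this truncates long input rather than overflowing):

#include <stdio.h>

void somefunc_safe(const char *str) {
    char buff[256];
    snprintf(buff, sizeof(buff), "%s", str);  /* writes at most 255 chars + a NUL */
}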
Basically, in these languages, you have to manually request every bit of memory that is not a local variable known at compile time, and you have to manually release it when you don't need it anymore. There are libraries (so-called smart pointers) that can automate this process to some degree, but they are not applicable everywhere. Furthermore, there are absolutely no limits on how you can (try to) access memory via pointer arithmetic.
Manual memory management can lead to a number of bugs:
If you forget to release some memory, you have a memory leak
If you use more memory than you've requested for a given pointer, you have a buffer overrun.
If you release memory and keep using a "dangling pointer" to it, you have undefined behaviour (usually the program crashes)
If you miscalculate your pointer arithmetic, you have a crash or corrupted data
And many of these problems are very hard to diagnose and debug.
I am planning to learn to program in C/C++
What exactly do you mean by that? Do you want to learn to program in C, or do you want to learn to program in C++? I would not recommend learning both languages at once.
From a user's perspective, memory management in C++ is a lot easier than in C, because most of it is encapsulated by classes, for example std::vector<T>. From a conceptual perspective, C's memory management is arguably a lot simpler. Basically, there's just malloc and free :)
I can honestly say that I have no "issues" with memory allocation when programming in C++. The last time I had a memory leak was over 10 years ago, and was due to blind stupidity on my part. If you write code using RAII, standard library containers and a small modicum of common sense, the problem really does not exist.
One of the common memory management problems in C and C++ is related to the lack of bounds checking on arrays. Unlike Java (for example), C and C++ do not check to make sure that the array index falls within the actual array bounds. Because of this, it is easy to accidentally overwrite memory. For example (C++):
char *a = new char[10];
a[12] = 'x';
There will be no compile or runtime error associated with the above code, except that your code will overwrite memory that it shouldn't.
The reason that this is commonly cited as something that is different in C/C++ is that many modern languages perform memory management and garbage collection. In C/C++ this is not the case (for better or for worse). You need to manually allocate and deallocate memory, and failing to do so correctly results in a memory leak that would not be possible in a language that performs memory management for you.
When new is used to gain a block of memory the size reserved by the operating system may be bigger than your request, but never smaller. Because of this and the fact that delete doesn't immediately return the memory to the operating system, when you inspect the whole memory that your program is using you may be made to believe that your application has serious memory leaks. So inspecting the number of bytes the whole program is using should not be used as a way of detecting memory errors. Only if the memory manager indicates a large and continuous growth of memory used should you suspect memory leaks.
Obviously you have to release memory that is no longer in use or won't be needed in the future. But first you have to determine, at the beginning of the program, whether the memory requirement is static or varies during execution; if you simply reserve enough memory for the worst case, your program may consume extra memory. So you should release memory that is not in use, and allocate it at the time it is needed.

For example:
struct student
{
    char name[20];
    int roll;
    float marks;
} s[100];
Here I assume 100 students in the class. There may be more than 100, or fewer. If there are more than 100, your program will lose information; if fewer, the program will run but waste memory, and the waste can be big.

So we usually create a record dynamically at the time of execution, like this:
struct student *s;

s = (struct student *)malloc(sizeof(struct student));
scanf("%19s %d %f", s->name, &s->roll, &s->marks);   /* bounded read; note the & on the numeric fields */

When the record is no longer in use, remove it from memory:

free(s);
That is good programming manner: if you don't remove things from memory, eventually the allocations can fill up your memory and the program can hang.
One thing not mentioned so far is performance and why you want to manually manage memory.
Managing memory is hard to do accurately, especially as a program becomes more complex (especially when you use threads, and when the lifetime of pieces of memory becomes complex, i.e. when it gets hard to tell exactly when you no longer need a piece of information), even with powerful modern programming tools like valgrind.
So why would you want to manually manage memory? A few reasons:

--To understand how it works

--To implement garbage collection/automatic memory management, you need to manage memory manually underneath

--For some lower-level things, such as a kernel, you need the flexibility that manual control of memory can give

--Most importantly, if you do manual memory management right, you can gain a large speedup and lower memory overhead (better performance). An associated problem with garbage collection (although it's getting better as better garbage collectors, such as the HotSpot JVM's, are written) is that you can't control memory management, so it is hard to do real-time work (guaranteeing deadlines for objectives like car brakes and pacemakers; try a special real-time GC), and programs that interact with users might freeze up for a little bit or lag (this would suck for a game).

A lot of "Modern C++" (it has been said that C++ can be thought of as multiple languages depending on how you use it), as opposed to C-with-classes, compromises by frequently using simple optional GC/automatic-memory-management features alongside some manual memory management. (Note that optional GC can be worse at memory management than mandatory GC, because a mandatory system can be simpler.) Depending on how you do this, it can have some of the advantages and disadvantages of both GC and manual memory management. Optional GC is also available in some C libraries, but it's less common in C than in C++.
Good afternoon all,
What I'm trying to accomplish: I'd like to implement an extension to a C++ unit test fixture to detect if the test allocates memory and doesn't free it. My idea was to record allocation levels or free memory levels before and after the test. If they don't match then you're leaking memory.
What I've tried so far: I've written a routine to read /proc/self/stat to get the vm size and resident set size. Resident set size seems like what I need but it's obviously not right. It changes between successive calls to the function with no memory allocation. I believe it's returning the cached memory used not what's allocated. It also changes in 4k increments so it's too coarse to be of any real use.
I can get the stack size by allocating a local and saving its address. Are there any problems with doing this?
Is there a way to get real free or allocated memory on linux?
Thanks
Your best bet may actually be to use a tool specifically designed for the job of finding memory leaks. I have personal experience with Electric Fence, which is easy to use and seems to do the job nicely (not sure how well it will handle C++). Also recommended by others is Dmalloc.
For sure though, everyone seems to like Valgrind, which can do just about anything and even has front-ends (though anything that has a front-end built for it means that it probably isn't the simplest thing in the world). If the KDE folks can recommend it, it must be able to handle just about anything. (I'm not saying anything bad about KDE, just that it is a very large C++ codebase, so if Valgrind can handle KDE software, it must have something going for it. Though I don't have personal experience with it as Electric Fence was always enough for me)
I'd have to agree with those suggesting Valgrind and similar, but if the run-time overhead is too great, one option may be to use mallinfo() call to retrieve statistics on currently allocated memory, and check whether uordblks is nonzero.
Note that this will have to be run before global destructors are called - so if you have any allocations that are cleaned up there, this will register a false positive. It also won't tell you where the allocation is made - but it's a good first pass to figure out which test cases need work.
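As a sketch of that idea (glibc-specific: mallinfo() and its uordblks field come from glibc's <malloc.h>; checkForLeak and runTest are made-up names for the fixture hook):

#include <malloc.h>
#include <stdio.h>

/* bytes currently allocated via malloc, per the glibc statistics */
static long allocatedBytes(void)
{
    struct mallinfo mi = mallinfo();
    return mi.uordblks;
}

/* wrap a test case and flag any net change in allocated bytes */
void checkForLeak(void (*runTest)(void))
{
    long before = allocatedBytes();
    runTest();
    long after = allocatedBytes();
    if (after != before)
        printf("possible leak: %ld bytes\n", after - before);
}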
Don't look at the OS to get allocation info. The C library manages memory internally, and only asks the OS for more RAM in chunks (4KB in your case). In most cases, it's never released back to the OS, so you can't really check anything there.

You'll have to patch malloc() and free() to get the info you need.

Or, use Valgrind.
Not a direct answer, but you could redefine the ::new and ::delete operators and, internally (via a singleton or global objects), keep track of the allocated and deallocated memory.

Edit: If this is a personal DIY project, then cool. But if it's for something critical, you can always jump onto one of the many leak-detection libraries/programs available; a quick Google search should suffice.
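A minimal sketch of that redefinition (counting live allocations rather than bytes, to keep it simple; the counter name is made up):

#include <atomic>
#include <cstdlib>
#include <new>

static std::atomic<long> g_liveAllocs{0};   // net outstanding allocations

void* operator new(std::size_t s)
{
    void* p = std::malloc(s);
    if (!p) throw std::bad_alloc();
    ++g_liveAllocs;
    return p;
}

void operator delete(void* p) noexcept
{
    if (p) {
        --g_liveAllocs;
        std::free(p);
    }
}

// in the test fixture: record g_liveAllocs before the test and compare after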
google-perftools can be used in your test code.
Unless you're programming parts of an OS or an embedded system are there any reasons to do so? I can imagine that for some particular classes that are created and destroyed frequently overloading memory management functions or introducing a pool of objects might lower the overhead, but doing these things globally?
Addition
I've just found a bug in an overloaded delete function - memory wasn't always freed. And that was in a not-so-memory-critical application. Also, disabling these overloads decreases performance by only ~0.5%.
We overload the global new and delete operators where I work for many reasons:
pooling all small allocations -- decreases overhead, decreases fragmentation, can increase performance for small-alloc-heavy apps
framing allocations with a known lifetime -- ignore all the frees until the very end of this period, then free all of them together (admittedly we do this more with local operator overloads than global)
alignment adjustment -- to cacheline boundaries, etc
alloc fill -- helping to expose usage of uninitialized variables
free fill -- helping to expose usage of previously deleted memory
delayed free -- increasing the effectiveness of free fill, occasionally increasing performance
sentinels or fenceposts -- helping to expose buffer overruns, underruns, and the occasional wild pointer
redirecting allocations -- to account for NUMA, special memory areas, or even to keep separate systems separate in memory (for e.g. embedded scripting languages or DSLs)
garbage collection or cleanup -- again useful for those embedded scripting languages
heap verification -- you can walk through the heap data structure every N allocs/frees to make sure everything looks ok
accounting, including leak tracking and usage snapshots/statistics (stacks, allocation ages, etc)
The idea of new/delete accounting is really flexible and powerful: you can, for example, record the entire callstack for the active thread whenever an alloc occurs, and aggregate statistics about that. You could ship the stack info over the network if you don't have space to keep it locally for whatever reason. The types of info you can gather here are only limited by your imagination (and performance, of course).
We use global overloads because it's convenient to hang lots of common debugging functionality there, as well as make sweeping improvements across the entire app, based on the statistics we gather from those same overloads.
We still do use custom allocators for individual types too; in many cases the speedup or capabilities you can get by providing custom allocators for e.g. a single point-of-use of an STL data structure far exceeds the general speedup you can get from the global overloads.
Take a look at some of the allocators and debugging systems that are out there for C/C++ and you'll rapidly come up with these and other ideas:
valgrind
electricfence
dmalloc
dlmalloc
Application Verifier
Insure++
BoundsChecker
...and many others... (the gamedev industry is a great place to look)
(One old but seminal book is Writing Solid Code, which discusses many of the reasons you might want to provide custom allocators in C, most of which are still very relevant.)
Obviously if you can use any of these fine tools you will want to do so rather than rolling your own.
There are situations in which it is faster, easier, less of a business/legal hassle, or just more instructive (or nothing's available for your platform yet): dig in and write a global overload.
The most common reason to overload new and delete are simply to check for memory leaks, and memory usage stats. Note that "memory leak" is usually generalized to memory errors. You can check for things such as double deletes and buffer overruns.
The uses after that are usually memory-allocation schemes, such as garbage collection, and pooling.
All other cases are just specific things, mentioned in other answers (logging to disk, kernel use).
In addition to the other important uses mentioned here, like memory tagging, it's also the only way to force all allocations in your app to go through fixed-block allocation, which has enormous implications for performance and fragmentation.
For example, you may have a series of memory pools with fixed block sizes. Overriding global new lets you direct all 61-byte allocations to, say, the pool with 64-byte blocks, all 768-1024 byte allocs to the 1024b-block pool, all those above that to the 2048-byte-block pool, and anything larger than 8kb to the general ragged heap.
Because fixed block allocators are much faster and less prone to fragmentation than allocating willy-nilly from the heap, this lets you force even crappy third-party code to allocate from your pools and not poop all over the address space.
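A minimal sketch of that routing (the FixedPool type and pool names are made up; a real pool would recycle freed blocks from a free list rather than just wrapping malloc):

#include <cstdlib>
#include <new>

// toy fixed-block pool, stand-in for a real pooled allocator
struct FixedPool {
    std::size_t blockSize;
    void* alloc() { return std::malloc(blockSize); }   // placeholder behaviour
};

static FixedPool pool64{64}, pool1024{1024}, pool2048{2048}, pool8k{8192};

void* operator new(std::size_t s)
{
    void* p =
        (s <= 64)       ? pool64.alloc()   :
        (s <= 1024)     ? pool1024.alloc() :
        (s <= 2048)     ? pool2048.alloc() :
        (s <= 8 * 1024) ? pool8k.alloc()   :
                          std::malloc(s);   // larger requests go to the general heap
    if (!p) throw std::bad_alloc();
    return p;
}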
This is done often in systems which are time- and space-critical, such as games. 280Z28, Meeh, and Dan Olson have described why.
UnrealEngine3 overloads global new and delete as part of its core memory management system. There are multiple allocators that provide different features (profiling, performance, etc.) and they need all allocations to go through it.
Edit: For my own code, I would only ever do it as a last resort. And by that I mean I would almost positively never use it. But my personal projects are obviously much smaller/very different requirements.
Some realtime systems overload them to avoid them being used after init.
Overloading new & delete makes it possible to add a tag to your memory allocations. I tag allocations per system or control or by middleware. I can view, at runtime, how much each uses. Maybe I want to see the usage of a parser separated from the UI or how much a piece of middleware is really using!
You can also use it to put guard bands around the allocated memory. If/when your app crashes you can take a look at the address. If you see the contents as "0xABCDABCD" (or whatever you choose as guard) you are accessing memory you don't own.
Perhaps after calling delete you can fill this space with a similarly recognizable pattern.
I believe Visual Studio does something similar in debug builds. Doesn't it fill uninitialized memory with 0xCDCDCDCD?

Finally, if you have fragmentation issues, you could use it to redirect to a block allocator. I am not sure how often this is really a problem, though.
You need to overload them when the call to new and delete doesn't work in your environment.
For example, in kernel programming, the default new and delete don't work as they rely on user mode library to allocate memory.
From a practical standpoint it may just be better to override malloc on a system library level, since operator new will probably be calling it anyway.
On linux, you can put your own version of malloc in place of the system one, as in this example here:
http://developers.sun.com/solaris/articles/lib_interposers.html
In that article, they are trying to collect performance statistics. But you may also detect memory leaks if you also override free.
Since you are doing this in a shared library with LD_PRELOAD, you don't even need to recompile your application.
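A minimal sketch of such an interposer (Linux/glibc; dlsym with RTLD_NEXT is the standard lookup trick, but note that a production version must guard against recursion, since dlsym and fprintf can themselves allocate):

// build: g++ -shared -fPIC interpose.cpp -o libinterpose.so -ldl
// run:   LD_PRELOAD=./libinterpose.so ./your_app
#include <dlfcn.h>
#include <cstdio>
#include <cstdlib>

extern "C" void* malloc(size_t size)
{
    // look up the real malloc the first time through
    static void* (*real_malloc)(size_t) =
        reinterpret_cast<void* (*)(size_t)>(dlsym(RTLD_NEXT, "malloc"));
    void* p = real_malloc(size);
    std::fprintf(stderr, "malloc(%zu) = %p\n", size, p);   // crude logging
    return p;
}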
I've seen it done in a system that for 'security'* reasons was required to write over all memory it used on de-allocation. The approach was to allocate an extra few bytes at the start of each block of memory which would contain the size of the overall block which would then be overwritten with zeros on delete.
This had a number of problems as you can probably imagine but it did work (mostly) and saved the team from reviewing every single memory allocation in a reasonably large, existing application.
Certainly not saying that it is a good use but it is probably one of the more imaginative ones out there...
* sadly it wasn't so much about actual security as the appearance of security...
Photoshop plugins written in C++ should override operator new so that they obtain memory via Photoshop.
I've done it with memory mapped files so that data written to the memory is automatically also saved to disk.
It's also used to return memory at a specific physical address if you have memory mapped IO devices, or sometimes if you need to allocate a certain block of contiguous memory.
But 99% of the time it's done as a debugging feature to log how often, where, when memory is being allocated and released.
It's actually pretty common for games to allocate one huge chunk of memory from the system and then provide custom allocators via overloaded new and delete. One big reason is that consoles have a fixed memory size, making both leaks and fragmentation large problems.
Usually (at least on a closed platform) the default heap operations come with a lack of control and a lack of introspection. For many applications this doesn't matter, but for games to run stably in fixed-memory situations the added control and introspection are both extremely important.
It can be a nice trick for your application to be able to respond to low-memory conditions with something other than a random crash. To do this, your new can be a simple proxy to the default new that catches failures, frees up some stuff, and tries again.
The simplest technique is to reserve a blank block of memory at start-up time for that very purpose. You may also have some cache you can tap into - the idea is the same.
When the first allocation failure kicks in, you still have time to warn your user about the low memory conditions ("I'll be able to survive a little longer, but you may want to save your work and close some other applications"), save your state to disk, switch to survival mode, or whatever else makes sense in your context.
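A minimal sketch of the reserve-block technique using the standard std::set_new_handler mechanism (saveStateAndWarnUser is a made-up hook; operator new retries the handler until the allocation succeeds or the handler gives up):

#include <cstdlib>
#include <new>

void saveStateAndWarnUser();   // hypothetical: warn, save work, enter survival mode

static void* g_reserve = std::malloc(512 * 1024);   // blank block kept for emergencies

static void lowMemoryHandler()
{
    if (g_reserve) {
        std::free(g_reserve);            // release the reserve so the retried 'new' can succeed
        g_reserve = nullptr;
        saveStateAndWarnUser();
    } else {
        std::set_new_handler(nullptr);   // out of options: let 'new' throw std::bad_alloc
    }
}

int main()
{
    std::set_new_handler(lowMemoryHandler);
    // ... run the application ...
}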
The most common use case is probably leak checking.
Another use case is when you have specific requirements for memory allocation in your environment which are not satisfied by the standard library you are using, like, for instance, you need to guarantee that memory allocation is lock free in a multi threaded environment.
As many have already stated this is usually done in performance critical applications, or to be able to control memory alignment or track your memory. Games frequently use custom memory managers, especially when targeting specific platforms/consoles.
Here is a pretty good blog post about one way of doing this and some reasoning.
An overloaded new operator also enables programmers to squeeze some extra performance out of their programs. For example, in a class, to speed up the allocation of new nodes, a list of deleted nodes can be maintained so that their memory is reused when new nodes are allocated. In this case, the overloaded delete operator adds nodes to the list of deleted nodes, and the overloaded new operator allocates memory from this list rather than from the heap, speeding up allocation. Heap memory is used only when the list of deleted nodes is empty.
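A minimal sketch of that scheme, the classic per-class free list (Node is a made-up example type; note the recycled blocks are kept by the class rather than returned to the heap):

#include <cstdlib>
#include <new>

class Node {
public:
    static void* operator new(std::size_t s)
    {
        if (freeList) {                  // reuse a previously deleted node
            Node* n = freeList;
            freeList = n->next;
            return n;
        }
        void* p = std::malloc(s);        // free list empty: fall back to the heap
        if (!p) throw std::bad_alloc();
        return p;
    }

    static void operator delete(void* p) noexcept
    {
        Node* n = static_cast<Node*>(p); // push the block onto the free list
        n->next = freeList;              // (its storage is recycled, not freed)
        freeList = n;
    }

private:
    Node* next = nullptr;                // intrusive link, reused for the free list
    static Node* freeList;
};

Node* Node::freeList = nullptr;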