Is it possible to make memory leak without using malloc? - c++

This question is as in title:
Is it possible to produce a memory leak without using any explicit allocation mechanisms like malloc, new, etc.?
What if I make a linked list inside a function with a lot of elements in it, and then exit from the function without cleaning up the list? The list will be created without any malloc calls, i.e.
struct list_head {
    struct list_head *next, *prev;
};
Can it be guaranteed that all resources will be freed after exiting from this function? So I can freely execute it a million times and nothing will be leaked?
Subject: If you're not using any malloc or new calls, you won't get a heap memory leak. Ever. Is that right?

A leak is always connected to a resource. A resource is by definition something that you acquire manually, and that you must release manually. Memory is a prime example, but there are other resources, too (file handles, mutex locks, network connections, etc.).
A leak occurs when you acquire a resource, but subsequently lose the handle to the resource so that nobody can release it. A lesser version of a leak is a "still-reachable" kind of situation where you don't release the resource, but you still have the handle and could release it. That's mostly down to laziness, but a leak by contrast is always a programming error.
Since your code never acquires any resources, it also cannot have any leaks.

The variables you create without malloc or new are located in stack space in memory, so when the function returns, that space is reclaimed automatically.
On the other hand, the memory you allocate with malloc or new is located in heap space. The system doesn't care whether you release that space or not. In this situation, if you don't call free or delete, a memory leak will happen.

Subject: If you're not using any malloc or new calls, you won't get a heap memory leak. Ever. Is that right?
That assumption is not entirely correct. The problem is that the operating system itself (or other third party components you have to rely on) can have memory leaks as well. In that case you might not actively call malloc, but call other (operating system) functions which could leak.
So your assumption depends on how strictly you look at it. You can argue that the OS/third-party implementation is outside your domain, in which case the assumption would be correct. But if you have a well-defined system and your requirements are such that you have to guarantee a certain uptime, something like this may have to be considered as well.
So the answer to this question ...
Is it possible to make memory leak without using malloc?
... is:
Yes, it is possible.

malloc() allocates memory from the heap, while local variables (string1, string2 and those list_head's) get their space on the stack when the function runs; string literals themselves live in static storage, laid out at compile time.
Actually, any memory allocated to a program (heap or stack) will be reclaimed by the kernel when the process exits (on *nix systems, at least).
I would define a memory leak as allocating memory on the heap and not freeing it before your program exits. This definition actually answers your question.
There are standard functions (like strdup) that will allocate memory on heap, beware of them.

Another example of a resource that you can allocate and forget to free:
If you're using OpenGL, and you call glGenBuffers() a million times without the corresponding glDeleteBuffers calls, it's extremely likely that you will run out of VRAM and your graphics driver will start leaking to system memory.
I just had this happen. Fortunately, Visual Studio's memory profiler made it pretty easy to find. It showed up as a large number of allocations made by the external process nvoglv32.dll, which is my NVIDIA OpenGL driver.

Related

Is ending a program without releasing all dynamically allocated resources risky?

I know that stack-allocated resources are released in the reverse order of their allocation at the end of a function, as part of RAII. I've been working on a project, and I allocate a lot of memory with "new" from a library I'm using, and am testing stuff out. I haven't added a shutdown function as a counterpart to the initialise function that does all the dynamic allocation. When you shut down the program, I'm pretty sure there is no memory leak, as the allocated memory should be reclaimed by the operating system. At least on any modern OS, as explained in this question similar to mine: dynamically allocated memory after program termination.
I'm wondering two things:
1: Is there a particular order that resources are released in, in this case? Does it have anything to do with your written code (i.e., which order you allocated it in), or is it completely up to the OS to do its thing?
2: The reason I haven't made a shutdown function to reverse the initialisation is that I say to myself I'm just testing stuff now, I'll do it later. Is there any risk of doing damage to anything by doing what I'm doing? The worst I can imagine is what was mentioned in the answer to the question I linked: that the OS fails to reclaim the memory and you end up with a memory leak even after the program exits.
I've followed the Bullet physics library tutorial and initialise a bunch of code like this:
pSolver = new btSequentialImpulseConstraintSolver;
pOverlappingPairCache = new btDbvtBroadphase();
pCollisionConfig = new btDefaultCollisionConfiguration();
pDispatcher = new btCollisionDispatcher(pCollisionConfig);
pDynamicsWorld = new btDiscreteDynamicsWorld(pDispatcher, pOverlappingPairCache, pSolver, pCollisionConfig);
And never call delete on any of it at the moment because, as I said, I'm just testing.
It depends on the resources. Open files will be closed. Memory will be freed. Destructors will not be called. Temporary files created will not be deleted.
There is no risk of having a memory leak after the program exits.
Because programs can crash, there are many mechanisms in place preventing a process from leaking after it has stopped, and leaking usually isn't that bad.
As a matter of fact, if you have a lot of allocations that you don't delete until the end of the program, it can be faster to just have the kernel clean up after you.
However destructors are not run. This mostly causes temporary files to not be deleted.
Also it makes debugging actual memory leaks more difficult.
I suggest using std::unique_ptr<T> and not leaking in the first place.
It depends on how the memory is actually allocated and on your host system.
If you are only working with classes that don't override operator new() AND you are using a modern operating system that guarantees memory resources are released when the process exits, then all dynamically allocated memory should be released when your program terminates. There is no guarantee on the order of memory release (e.g. objects will not be released in the same order, or in reverse order, of their construction). The only real risk in this case is associated with bugs in the host operating system that cause resources of programs/processes to be improperly managed (which is a low risk - but not zero risk - for user programs in modern windows or unix OSs).
If you are using any classes that override operator new() (i.e. that change how raw memory is allocated in the process of dynamically constructing an object) then the risk depends on how memory is actually being allocated - and what the requirements are for deallocation. For example, if the operator new() uses global or system-wide resources (e.g. mutexes, semaphores, memory that is shared between processes) then there is a risk that your program will not properly release those resources, and then indirectly cause problems for other programs which use the same resources. In practice, depending on the design of such a class, the needed cleanup might be in a destructor, an operator delete() or some combination of the two - but, however it is done, your program will need to explicitly release such objects (e.g. a delete expression that corresponds to the new expression) to ensure the global resources are properly released.
One risk is that destructors of your dynamically allocated objects will not be invoked. If your program relies on the destructor doing anything other than release dynamically allocated memory (presumably allocated by the class constructor and managed by other member functions) then the additional clean-up actions will not be performed.
If your program will ever be built and run on a host system that doesn't have a modern OS then there is no guarantee that dynamically allocated memory will be reclaimed.
If code in your program will ever be reused in a larger long-running program (e.g. your main() function is renamed, and then called from another program in a loop) then your code may cause that larger program to have a memory leak.
It's fine, since the operating system (unless it's some exotic or ancient OS) will not leak the memory after the process has ended. Same goes for sockets and file handles; they will be closed at process exit. It's not in good style to not clean up after yourself, but if you don't, there's no harm done to the overall OS environment.
However, in your example, it looks to me like the only memory that you would actually need to release yourself is that of pDynamicsWorld, since the others should be cleaned up by the btDiscreteDynamicsWorld instance. You're passing them as constructor arguments, and I suspect they get destroyed automatically when pDynamicsWorld gets destroyed. You should read the documentation to make sure.
That said, it's just not good style (because it's unsafe) to use delete directly anymore. So instead of using delete to destroy pDynamicsWorld, you can use a unique_ptr instead, which you can safely create using the std::make_unique function template:
#include <memory>
// ...
// Allocate everything else with 'new' here, as usual.
// ...
// Except for this one, which doesn't seem to be passed to another
// constructor.
auto pDynamicsWorld = std::make_unique<btDiscreteDynamicsWorld>(
    pDispatcher, pOverlappingPairCache, pSolver, pCollisionConfig);
Now, pDispatcher, pOverlappingPairCache, pSolver and pCollisionConfig should be destroyed by pDynamicsWorld automatically, and pDynamicsWorld itself will be destroyed automatically when it goes out of scope because it's a unique_ptr.
But, again: Read the documentation of Bullet Physics to check whether the objects you pass as arguments to the constructors of the Bullet Physics classes actually do get cleaned up automatically or not.

Why should we delete the memory allocated by new?

It is said that the memory allocated by new should be freed by delete, but a modern desktop OS will reclaim the memory even though you don't delete it. So why should we delete the memory allocated by new?
Also, assert is known for not calling destructors, and it seems it's widely used in the STL (at least VS2015's does that). If it's advised to delete the memory allocated by new (classes like string, map and vector use the destructor to delete the allocated memory), why do developers still use lots of assert then?
Why should we delete the memory allocated by new?
Because otherwise
the memory is leaked. Not leaking memory is absolutely crucial for long running software such as servers and daemons because the leaks will accumulate and consume all available memory.
the destructors of the objects will not be called. The logic of the program may depend on the destructors being called. Not calling some destructors may cause non-memory resources being leaked as well.
Also, assert is known for not calling destructors
A failed assert terminates the entire process, so it doesn't really matter much whether the logic of the program remains consistent, nor whether memory or other resources are leaked since the process isn't going to reuse those resources anyway.
and it seems like it's widely used in STL (at least VS2015 does that)
To be accurate, I don't think the standard library is specified to use the assert macro. The only situation where it could use it is if you have undefined behaviour. And if you have UB, then leaked memory is the least of your worries.
If you know that the destructor of the object is trivial, and you know that the object is used throughout the program (so it's essentially a singleton), then it's quite safe to leak the object on purpose. This does have the drawback that it will be detected by any memory-leak sanitizer that you would probably want to use to detect accidental memory leaks.
It is said that the memory allocated by new should be freed by delete, but a modern desktop OS will reclaim the memory even though you don't delete it. So why should we delete the memory allocated by new?
Careful! The OS reclaims the memory only after your program has finished. This is not like garbage collection in Java or C#, which frees memory while the program is running.
If you don't delete (or more precisely, if you don't make sure that delete is called by resource-managing classes like std::unique_ptr, std::string or std::vector), then memory usage will continue to grow until you run out of memory.
Not to mention that destructors will not run, which matters if you have objects of types whose destructors perform more than just releasing memory.
Also, assert is known for not calling destructors,
More precisely, assert causes the program to terminate in a way that destructors are not called, unless the corresponding translation unit was compiled with the NDEBUG preprocessor macro defined, in which case the assert does nothing.
and it seems like it's widely used in STL (at least VS2015 does that).
Yes, the standard-library implementation of Visual C++ 2015 does that a lot. You should also use it liberally in your own code to detect bugs.
The C++ standard itself does not specify how and if asserts should appear in the implementation of a standard-library function. It does specify situations where the behaviour of the program is undefined; those often correspond to an assert in the library implementation (which makes sense, because if the behaviour is undefined anyway, then the implementation is free to do anything, so why not use that liberty in a useful way to give you bug detection?).
If it's advised to delete the memory allocated by new (classes like string, map and vector use the destructor to delete the allocated memory), why do developers still use lots of assert then?
Because if an assertion fails, then you want your program to terminate immediately because you have detected a bug. If you have detected a bug, then your program is by definition in an unknown state. Allowing your program to continue in an unknown state by running destructors is possibly dangerous and may compromise system integrity and data consistency.
After all, as I said above, destructors may not only call delete a few times. Destructors close files, flush buffers, write into logs, close network connections, clear the screen, join on a thread or commit or rollback database transactions. Destructors can do a lot of things which can modify and possibly corrupt system resources.
It is a common pattern that applications, in the course of their execution, dynamically create objects that will not be used throughout the program's execution. If an application creates a lot of such objects of temporary lifetime, it somehow has to manage memory in order not to run out of it. Note that memory is still limited, since operating systems usually do not assign all available memory to an application. Operating systems, especially those driving limited devices like mobile phones, may even kill applications once they put too much pressure on memory.
Hence, you should free the memory of those objects that are not used any more. And C++ offers storage durations to make this handling easier: automatic storage duration, which is the default, destroys objects once they go out of scope (i.e. when their enclosing block, e.g. the function in which they are defined, finishes); static objects remain until the end of normal program execution (if reached); and dynamically allocated objects remain until you call delete.
Note that in no way will any object survive the end of program execution, as the operating system will free the complete application memory. On normal program termination, destructors of static objects will be called (but not those of dynamically created objects that have not been deleted before). On abnormal program termination, such as that triggered by assert or by the operating system, no destructors are called at all; you can rather think of such a program as terminating because you turned off the power. (exit is a special case: it runs the destructors of static objects, but not of automatic or leaked dynamic ones.)
If you don't delete, you introduce a memory leak. Each time new is invoked without a matching delete, the process wastes some portion of its address space, until it ultimately runs out of memory.
After your program finishes you do not need to care about memory leaks, so in principle this would be fine:
int main() {
    int* x = new int(1);
}
However, that's not how one usually uses memory. Often you need to allocate memory for something that you use only for a short time, and then you want to free that memory when you don't need it anymore. Consider this example:
int main() {
    while ( someCondition ) {
        Foo* x = new Foo();
        doSomething(*x);
    } // <- already here we do not need x anymore
}
That code will accumulate more and more memory for x, even though only a single instance of x is in use at any time. That's why one should free memory at the latest at the end of the scope where it is needed (once you leave the scope, you have no way to free it!). Because forgetting a delete isn't nice, one should make use of RAII whenever possible:
int main() {
    while ( someCondition ) {
        Foo x;
        doSomething(x);
    } // <- memory for x is freed automatically
}

Should the memory allocated by wcsdup be freed explicitly?

Functions like wcsdup implicitly call malloc to allocate memory for the destination buffer. I was wondering, as the memory allocation is not very explicit, does it seem logical to explicitly free the storage?
This is more like a design dilemma and the reasons for and against are as follows
Should be freed because
Not freeing it would cause a memory leak.
It is well documented that wcsdup/_wcsdup calls malloc to allocate memory even when its called from a C++ Program.
Should not be freed because
Memory accumulated by wcsdup would eventually be freed when the program exits. We always live with some memory leaks throughout the program's lifetime (unless we are heavily calling wcsdup for large buffer sizes).
It can be confusing as free was not preceded by an explicit malloc.
As it's not part of the C standard but is POSIX-compliant, Microsoft's implementation may not use malloc to allocate the destination buffer.
What should be the approach?
From MSDN:
it is good practice always to release this memory by calling the free routine on the pointer returned
From the page you linked:
The returned pointer can be passed to free()
It seems fairly explicit: if you care about memory leaks, then you should free the memory by using free.
To be honest, I'm concerned about the cavalier attitude hinted at with this:
We always live with some memory leaks throughout the program's lifetime
There are very rarely good reasons to leak memory. Even if the code you write today is a one-off, and it's not a long-lived process, can you be sure that someone's not going to copy-and-paste it into some other program?
Yes, you should always free heap-allocated memory when you're done using it and know that it is safe to do so. The documentation you link to even states:
For functions that allocate memory as if by malloc(), the application should release such memory when it is no longer required by a call to free(). For wcsdup(), this is the return value.
If you are concerned about the free being potentially confusing, leave a comment explaining it. To be honest, though, that seems superfluous; it's pretty obvious when a pointer is explicitly freed that it's "owned" by the code freeing it, and anyone who does become confused can easily look up the wcsdup documentation.
Also, you should really never have memory leaks in your program. In practice some programs do have memory leaks, but that doesn't mean it's okay for them to exist. Also note that just because you have a block of memory allocated for the entire lifespan of the program, it is not leaked memory if you are still using it for that entire duration.
From your own link:
For functions that allocate memory as if by malloc(), the application should release such memory when it is no longer required by a call to free().
From MSDN:
The _strdup function calls malloc to allocate storage space for a copy of strSource and then copies strSource to the allocated space.
and strdup has been deprecated as of MSVC 2005; calling it calls _strdup, so it is using malloc.

Preventing memory fragmentation with the new - delete trick

I remember reading in a book on programming for computer games (sorry, I can't remember the title) that an easy way to improve performance is to do something like this near the start:
int main()
{
    {
        char* dummy = new char[10000000]; // 10 Mbytes ish
        delete[] dummy;
    }
    ...
}
The idea is that the expensive part of dynamic memory allocation is the request to get memory from the OS, which normally isn't returned until the end of the program. Has anybody used this and seen performance improvements?
Whether this works or not depends on the OS in question. A lot of modern OSes use mmap under the hood for large memory allocation and bypass the process's heap altogether. This means the allocation will be made directly from the OS and then returned directly to the OS when freed.
A much better strategy is to generally do your memory allocation at the start and reuse space allocated as much as possible before returning the memory to the heap. This is the logic behind STL allocators.
That doesn't make sense. You allocate a huge block, then free it; the heap takes ownership of the memory occupied by the block and can legally fragment it as it is used later.
That doesn't necessarily improve performance, as the current Standard says nothing about how memory is dynamically allocated, deallocated and then reallocated. However, an implementation can make use of the same memory region in the rest of the program whenever it needs to allocate memory. It's more like memory pooling.
Anything could be possible. It entirely depends on implementation. Also it may even choose to remove the code altogether, as it does nothing.
Depending on your environment, there may be loader flags that let you specify an initial heap size. This memory will be allocated when your program is loaded into memory, so it becomes part of the start-up time. This should give the same results, with the benefit that the compiler won't optimize it away.

Freeing dynamically allocated memory

In C++, when you make a new variable on the heap like this:
int* a = new int;
you can tell C++ to reclaim the memory by using delete like this:
delete a;
However, when your program closes, does it automatically free the memory that was allocated with new?
Yes, it is automatically reclaimed, but if you intend to write a huge program that makes use of the heap extensively and not call delete anywhere, you are bound to run out of heap memory quickly, which will crash your program.
Therefore, it is a must to carefully manage your memory and free dynamically allocated data with a matching delete for every new (or delete [] if using new []), as soon as you no longer require the said variable.
When the process is terminated the memory is reclaimed back by the OS. Of course this argument shouldn't in any case be used to not perform proper memory management by the program.
Don't let people tell you yes. C++ has no concept of an OS, so to say "yes the OS will clean it up" is no longer talking about C++ but about C++ running on some environment, which may not be yours.
That is, if you dynamically allocate something but never free it you've leaked. It can only end its lifetime once you call delete/delete[] on it. On some OS's (and almost all desktop OS's), memory will be reclaimed (so other programs may use it.) But memory is not the same as resource! The OS can free all the memory it wants, if you have some socket connection to close, some file to finish writing to, etc, the OS might not do it. It's important not to let resources leak. I've heard of some embedded platforms that won't even reclaim the memory you've not freed, resulting in a leak until the platform is reset.
Instead of dynamically allocating things raw (meaning you're the one that has to explicitly delete it), wrap them into automatically allocated (stack allocated) containers; not doing so is considered bad practice, and makes your code extremely messy.
So don't use new T[N]; use std::vector<T> v(N);. The latter won't let a resource leak occur. Don't use new T;; use smart_ptr p(new T);. The smart pointer will track the object and delete it when it's no longer used. This is called Scope-Bound Resource Management (SBRM), also known by the dumber name Resource Acquisition Is Initialization, or RAII.
Note there is no single "smart_ptr". You have to pick which one is best. The current standard includes std::auto_ptr, but it's quite unwieldy. (It cannot be used in standard containers.) Your best bet is to use the smart pointers part of Boost, or TR1 if your compiler supports it. Then you get shared_ptr, arguably the most useful smart pointer, but there are many others.
If every pointer to dynamically allocated memory is in an object that will destruct (i.e., not another object that is dynamically allocated), and that object knows to free the memory, that pointer is guaranteed to be freed. This question shouldn't even be a problem, since you should never be in a position to leak.
No, when the program exits ("closes"), dynamically allocated memory is left as is.
EDIT:
Reading the other answers, I should be more precise. The destructors of dynamically allocated objects will not run but the memory will be reclaimed anyway by any decent OS.
PS: The first line should read
int* a = new int;
No, it's your responsibility to free it. Also, a must be a pointer, so it should be:
int *a = new int;
delete a;
This excellent answer by Brian R. Bondy details why it's good practice to free the memory allocated by a.
It is important to explicitly call delete because you may have some code in the destructor that you want to execute. Like maybe writing some data to a log file. If you let the OS free your memory for you, your code in your destructor will not be executed.
Most operating systems will deallocate the memory when your program ends. But it is good practice to deallocate it yourself, and like I said above, the OS won't call your destructor.
As for calling delete in general, yes you always want to call delete, or else you will have a memory leak in your program, which will lead to new allocations failing.
When your process terminates, the OS does regain control of all resources the process was using, including memory. However, that, of course, will not cause C++'s destructors to necessarily run, so it's not a panacea for not explicitly freeing said resources (though it won't be a problem for int or other types with no-op dtors, of course ;-).