Subtle Memory Leak, and is this common practice? - c++

I think I might be creating a memory leak here:
void buy(); // forward declaration so commandoptions() can call it

void commandoptions(){
    cout << "You have the following options:\n"
            " 1). Buy something.\n"
            " 2). Check your balance.\n"
            " 3). See what you have bought.\n"
            " 4). Leave the store.\n\n"
            "Enter a number to make your choice: ";
    int input;
    cin >> input;
    if (input == 1) buy();
    //Continue the list of options.....
    else
        commandoptions(); //MEMORY LEAK IF YOU DELETE THE ELSE STATEMENTS!
}
inline void buy(){
    //buy something
    commandoptions();
}
Let's say commandoptions() has just executed for the first time since the program was run. The user selects '1', meaning the buy() subroutine is executed by the commandoptions() subroutine.
After buy() executes, it calls commandoptions() again.
Does the first commandoptions() ever return? Or did I just make a memory leak?
If I make a subroutine that does nothing but call itself, it will cause a stack overflow because the other 'cycles' of that subroutine never exit. Am I doing, or close to doing, that here?
Note that I used the inline keyword on buy... does that make any difference?
I'd happily ask my professor; he just doesn't seem to be available. :/
EDIT: I can't believe it didn't occur to me to use a loop, but thanks, I learned something new about my terminology!
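For reference, a minimal sketch of what the loop-based version might look like (assuming buy() no longer calls commandoptions() itself, and the same using-declarations as above):
void commandoptions() {
    int input = 0;
    while (input != 4) { // option 4 leaves the store
        cout << "Enter a number to make your choice: ";
        cin >> input;
        if (input == 1) buy();
        // ...handle the other options...
    }
}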

A memory leak is where you have allocated some memory using new, like so:
char* memory = new char[100]; //allocate 100 bytes
and then, after using this memory, you forget to delete it:
delete[] memory; //return used memory back to system.
If you forget to delete it, that memory remains marked as in-use for as long as your program runs and cannot be reused for anything else. Since memory is a limited resource, doing this millions of times, for example, without the program terminating would eventually leave you with no memory left to use.
This is why we clean up after ourselves.
In C++ you'd use an idiom like RAII to prevent memory leaks.
class RAII
{
public:
    RAII()  { memory = new char[100]; }
    ~RAII() { delete[] memory; }
    //other functions doing stuff
private:
    char* memory;
};
Now you can use this RAII class like so:
{ // some scope
    RAII r; // allocate some memory
    // do stuff with r
} // end of scope destroys r and calls the destructor, deleting the memory
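For what it's worth, in modern C++ (C++11 and later) you would rarely hand-roll such a class; the standard library already provides RAII types. A minimal sketch:
#include <memory>
#include <vector>

void example() {
    std::vector<char> buffer(100);              // 100 bytes, freed automatically
    std::unique_ptr<char[]> raw(new char[100]); // likewise, for a raw array
} // both released here, even if an exception is thrown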
Your code doesn't show any memory allocations, therefore has no visible leak.
Your code does seem to have endless recursion, without a base case that will terminate the recursion.

The inline keyword won't cause a memory leak.
If this is all the code you have, there shouldn't be a memory leak. It does look like you have infinite recursion, though. If the user types '1', then commandoptions() gets called again inside buy(). Suppose they type '1' in that one too. Repeat ad infinitum, and you eventually crash because the stack got too deep.
Even if the user doesn't type '1', you still call commandoptions() again inside commandoptions() at the else, which will have the exact same result -- a crash because of infinite recursion.
I don't see a memory leak with the exact code given however.

This is basically a recursion without a base case. So, the recursion will never end (until you run out of stack space that is).
For what you're trying to do, you're better off using a loop, rather than recursion.
And to answer your specific questions:
No, commandoptions never returns.
If you use a very broad definition of a memory leak, then this is one, since you're creating stack frames without ever removing them. Most people wouldn't label it as such, though (including me).
Yes, you are indeed going to cause a stack overflow eventually.
The inline keyword won't make a difference here.

This is not a memory leak: you are making infinite calls to the commandoptions function no matter what the value of input is, which will eventually overflow the stack. You need an exit point in your commandoptions function.

There is no memory leak here. What does happen (at least it looks that way in that butchered code snippet of yours) is that you get into an infinite loop. You might run out of stack space if tail-call optimization doesn't kick in or isn't supported by your compiler (it's a bit hard to see whether or not your calls are actually in tail position, though).

Related

How to free allocated memory in a recursive function in C++

I have a recursive method in a class that computes some stages (what a stage is doesn't matter here). If it notices that the probability of success for a stage is too low, the stage is delayed by storing it in a queue, and the program looks for delayed stages later. It grabs a stage, copies the data for working, and deletes the stage.
The program runs fine, but I have a memory problem. Since this program is randomized, it can happen that it delays up to 10 million stages, which results in something like 8 to 12 GB of memory usage (or more, but it crashes before that happens). It seems the program never frees the memory of a deleted stage before reaching the end of the call stack of the recursive function.
#include <queue>
using std::queue;

class StageWorker
{
private:
    queue<Stage*> delayedStages;

    void perform(Stage** start)
    {
        // [long value_from_stage = (**start).value; ]
        delete *start;
        // Do something here
        // Delay a stage if necessary, like this
        this->delayedStages.push(new Stage(value));
        // Do something more here
        // Look for delayed stages, like this
        Stage* front = this->delayedStages.front();
        this->delayedStages.pop();
        this->perform(&front);
    }
};
I used a pointer to a pointer because I thought the memory was not being freed while there was still a pointer (front) pointing to the stage; with a pointer to the pointer I could delete through it. But that does not seem to work. If I watch the memory usage in Performance Monitor (on Windows), it looks like this:
[Plot of memory usage over time; the end of the recursive calls is marked. This is just an example plot, with real data from a very small scenario.]
Any ideas how to free the memory of no-longer-used stages before reaching the end of the call stack?
Edit
I followed some advice and removed the pointers. So now it looks like this:
#include <queue>
using std::queue;

class StageWorker
{
private:
    queue<Stage> delayedStages;

    void perform(Stage& start)
    {
        // [long value_from_stage = start.value; ]
        // Do something here
        // Delay a stage if necessary, like this
        this->delayedStages.push(Stage(value));
        // Do something more here
        // Look for delayed stages, like this
        this->perform(this->delayedStages.front());
        this->delayedStages.pop();
    }
};
But this changed nothing; memory usage is the same as before.
As mentioned, it may just be a monitoring problem. Is there a better way to check the exact memory usage?
Just to close this question as answered:
Mats Petersson (thanks a lot) mentioned in the comments that it could be a problem with how the memory usage is monitored.
In fact, this was exactly the problem. The code I provided in the edit (after replacing the pointers with references) frees the memory correctly, but the memory is not given back to the OS (in this case Windows). So from the monitor it looks like the memory is never freed.
I then ran some experiments and saw that the freed memory is reused by the application, so it does not have to ask the OS for more memory.
So what the monitor shows is the maximum memory the recursive function ever required; the whole amount is only given back to the OS after the first call of that function ends.
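If Task Manager and the Performance Monitor graphs are too coarse, one option is to query the process's own memory counters via the Windows psapi API. A sketch (printMemoryUsage is just an illustrative name; WorkingSetSize is the resident size and PrivateUsage approximates the committed private bytes):
#include <windows.h>
#include <psapi.h> // link against psapi.lib
#include <cstdio>

void printMemoryUsage() {
    PROCESS_MEMORY_COUNTERS_EX pmc;
    if (GetProcessMemoryInfo(GetCurrentProcess(),
                             (PROCESS_MEMORY_COUNTERS*)&pmc, sizeof(pmc))) {
        std::printf("working set: %lu bytes, private: %lu bytes\n",
                    (unsigned long)pmc.WorkingSetSize,
                    (unsigned long)pmc.PrivateUsage);
    }
}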

Do memory leaks have any effect after the program ends?

My question is about the absolute scope of memory allocated on the heap. Say you have a simple program like
class Simple
{
private:
    int *nums;

public:
    Simple()
    {
        nums = new int[100];
    }

    ~Simple()
    {
        delete [] nums;
    }
};

int main()
{
    Simple foo;
    Simple *bar = new Simple;
}
Obviously foo falls out of scope at the end of main and its destructor is called, whereas the object bar points to never has its destructor called unless delete is called on it. So the Simple object that bar points to, as well as its nums array, will be lost in the heap. While this is obviously bad practice, does it actually matter, since the program ends immediately afterwards? Am I correct in my understanding that the OS will free all heap memory it allocated to this program once it ends? Are the effects of my bad decisions limited to the time it runs?
Any modern OS will reclaim all the memory allocated by any process after it terminates.
Each process has its own virtual address space in all common operating systems nowadays, so it is easy for the OS to claim all the memory back.
Needless to say it's a bad practice to rely on the OS for that.
It essentially means such code can't be used in a program that runs for a long while.
Also, in real world applications destructors may do far more than just deallocate memory.
A network client may send a termination message, a database-related object may commit transactions, and a file-wrapping object may write some closure data to its file.
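As a hypothetical sketch of that last point (the FileWrapper class and the closure string are made up for illustration):
#include <cstdio>

// Hypothetical file wrapper: the destructor writes closure data and
// closes the file. If the destructor is skipped because the object
// leaked, the closure data is never written, no matter what the OS
// cleans up at exit.
class FileWrapper {
public:
    explicit FileWrapper(const char* path) : file(std::fopen(path, "w")) {}
    ~FileWrapper() {
        if (file) {
            std::fputs("-- end of log --\n", file); // closure data
            std::fclose(file);
        }
    }
private:
    std::FILE* file;
};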
In other words: don't let your memory leak.

Why is this code not causing a memory leak?

I wanted to simulate a memory leak in my application. I wrote the following code and tried to watch it in perfmon.
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>

int main()
{
    int *i;
    while (1)
    {
        i = (int *) malloc(1000);
        if (i == NULL)
        {
            printf("Memory Not Allocated\n");
        }
        else
        {
            //just to avoid lazy allocation
            *i = 100;
        }
        Sleep(1000);
    }
}
When I watch the used memory in Task Manager, it fluctuates between 52K and 136K, but never goes beyond that: sometimes it shows 52K and sometimes 136K. I do not understand how this code can go up to 136K, come back down to 52K, and never go beyond that.
I tried to use perfmon, but I could not work out exactly what to look at in its snapshot of counters.
Please suggest how to simulate a memory leak and how to detect it.
While an OS may defer actual allocation of dynamically allocated memory until it is used, the compiler's optimizer may eliminate allocations that are only written to and never read from. Because your writes have no well-defined observable behaviour (you never read from the memory), the compiler may well optimize them away. I would suggest examining the generated assembly code to see what the compiler is actually generating. Really, this ought to be one of the first steps in answering questions like "why doesn't this code behave the way I think it should?".
Strictly, a memory leak is a bit context-dependent: something in your program keeps allocating memory over time and not freeing it, when it should have been freed.
Your code produces a "leak" on each subsequent pass through the while loop, because your program loses all knowledge of the previously allocated pointer at that point. In this case that is only visible by inspection, however; from the code posted, it looks more like what you are actually attempting, albeit very slowly, is to create a memory-stress situation.
To 'find' a leak without inspection you need to run a tool like valgrind (Unix/Linux/OSX) or in Visual Studio enable allocation tracing with the DEBUG_NEW macro and view the output using a debugger.
If you really want to stress memory in a hurry, allocate 1024 x 1024 x 1024 bytes at a time...
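A hedged sketch of that suggestion (the size and the page-touching memset are illustrative; touching the memory forces the OS to actually commit it):
#include <cstdio>
#include <cstdlib>
#include <cstring>

int main() {
    const std::size_t GIB = 1024u * 1024u * 1024u; // 1 GiB per allocation
    while (true) {
        char* block = static_cast<char*>(std::malloc(GIB)); // deliberately never freed
        if (block == NULL) {
            std::printf("Out of memory\n");
            break;
        }
        std::memset(block, 1, GIB); // touch every page so it is actually committed
    }
}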

Unexpected Behaviour from tcmalloc

I have been using tcmalloc for a few months in a large project, and so far I must say that I am pretty happy about it, most of all for its heap-profiling features, which allowed us to track down memory leaks and remove them.
In the past couple of weeks, though, we experienced random crashes in our application, and we could not find their source. In one very particular situation, when the application crashed, we found ourselves with a completely corrupted stack for one of the application threads. Several times I instead found threads stuck in tcmalloc::PageHeap::AllocLarge(), but since I didn't have tcmalloc's debug symbols linked in, I could not understand what the issue was.
After nearly one week of investigation, today I tried the simplest of things: I removed tcmalloc from the link line to avoid using it, just to see what happened. Well... I finally found out what the problem was, and the offending code looks very much like this:
void AllocatingFunction()
{
    Object object_on_stack;
    ProcessObject(&object_on_stack);
}

void ProcessObject(Object* object)
{
    ...
    // Do whatever
    ...
    delete object;
}
Using libc the application still crashed, but I finally saw that I was calling delete on an object that was allocated on the stack.
What I still can't figure out is why tcmalloc kept the application running regardless of this very risky (if not utterly wrong) deallocation, plus the double deallocation when object_on_stack goes out of scope at the end of AllocatingFunction. The fact is that the offending code could be called repeatedly without any hint of the underlying abomination.
I know that improper memory deallocation is one of those "undefined behaviour" areas, but my surprise is at such different behaviour between "standard" libc and tcmalloc.
Does anyone have some sort of explanation or insight into why tcmalloc keeps the application running?
Thanks in advance :)
Have a Nice Day
very risky (if not utterly wrong) object deallocation
Well, I disagree here: it is utterly wrong, and since you invoke UB, anything can happen.
It very much depends on what the tcmalloc code actually does on deallocation, and how it uses the (possibly garbage) data around the stack at that location.
I have seen tcmalloc crash on such occasions too, as well as glibc going into an infinite loop. What you see is just coincidence.
Firstly, there was no double free in your case. When object_on_stack goes out of scope there is no free call; the stack pointer is simply decreased (or rather increased, as the stack grows downward...).
Secondly, during delete, tcmalloc should be able to recognize that an address from the stack does not belong to the program heap. Here is part of the free(ptr) implementation:
const PageID p = reinterpret_cast<uintptr_t>(ptr) >> kPageShift;
Span* span = NULL;
size_t cl = Static::pageheap()->GetSizeClassIfCached(p);
if (cl == 0) {
    span = Static::pageheap()->GetDescriptor(p);
    if (!span) {
        // span can be NULL because the pointer passed in is invalid
        // (not something returned by malloc or friends), or because the
        // pointer was allocated with some other allocator besides
        // tcmalloc. The latter can happen if tcmalloc is linked in via
        // a dynamic library, but is not listed last on the link line.
        // In that case, libraries after it on the link line will
        // allocate with libc malloc, but free with tcmalloc's free.
        (*invalid_free_fn)(ptr); // Decide how to handle the bad free request
        return;
    }
}
The call to invalid_free_fn crashes.

Thread with and without a memory leak

I'm stuck with a C++ assignment where I should make a simple thread, and another thread that has the same logic but also has a memory leak.
This should just be an easy thread example; it doesn't even need to do anything useful in itself. So I guess my question is: what is the simplest thread that can be made in C++? And have I understood correctly that to make it leak memory, I should allocate a variable that is never deleted?
Also, should this "leak" be placed in a loop or made to repeat in some other fashion? Because to me, just leaving one variable undeleted doesn't seem like a major leak.
This would be enough for a leak:
new char;
You can place it in a loop if you want more, but be careful -
while( true ) {
    new char;
}
brings most systems to a halt quite quickly - they start swapping and become barely usable. IMO you should stick to leaking a couple of objects unless you have other specific requirements.
You could always allocate a large object (such as a big buffer) and never free it; that way a single allocation would be a substantial memory leak.
As well, if you had a thread that was designed as some kind of frequently called worker thread and had a small memory leak there, over the runtime of your program you could easily run into memory problems through "death of a thousand cuts" style leaks.
There is the Boost Thread library, which is probably your easiest option for threads in C++. And yes, a memory leak is just an allocation that is never deleted. If you don't want a one-variable memory leak, just allocate an array of whatever size you deem necessary: new char[x], where x is how many bytes of memory leakage you want.
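To tie this together, here is a minimal sketch of both threads using C++11's std::thread (Boost.Thread's interface is very similar; the 1024-byte buffer size is arbitrary):
#include <thread>

// Thread body without a leak: the buffer is freed before returning.
void cleanWorker() {
    char* buffer = new char[1024];
    // ... do some work with buffer ...
    delete[] buffer;
}

// Same logic with a deliberate leak: delete[] is omitted, so every
// call leaks 1024 bytes.
void leakyWorker() {
    char* buffer = new char[1024];
    // ... do some work with buffer ...
    (void)buffer; // intentionally never freed
}

int main() {
    std::thread clean(cleanWorker);
    std::thread leaky(leakyWorker);
    clean.join();
    leaky.join();
}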