Freeing heap pointers stored in std::queue - c++

Consider this code:
class Foo;
std::queue<Foo*> q;
// allocate and add objects to the queue
for (int i = 0; i < 100000; i++)
{
    Foo* f = new Foo();
    q.push(f);
}
// remove objects from queue and free them
while (!q.empty())
{
    Foo* f2 = q.front();
    q.pop();
    delete f2;
}
By single-stepping I can see the Foo destructor getting called as each object is deleted, so I would expect the process memory usage to drop as each delete happens - but it doesn't. In my application the queue is used in producer/consumer threads and the memory usage just keeps growing.
The only way I have found to recover the memory is to swap the queue for an empty one whenever I have popped all items from it:
std::queue<Foo*>().swap(q);
If I use a vector rather than a queue, deleting the stored objects immediately drops process memory usage. Can anyone explain why the queue isn't behaving like that?
Edit to clarify from the comments: I understand that the queue manages the memory of the pointer variables themselves (i.e. 4 or 8 bytes per pointer), and that I can't control when that memory gets released. What I'm concerned about is that the heap memory being pointed to, which I am managing through new and delete, is also not being released on time.
Edit 2: this seems to only happen when the process is being debugged, so it is not actually a problem in practice. Still weird, though.

Many implementations of delete/free may not make all memory available again to the operating system right away; the memory might just remain available to that process for a while. So if you are measuring your RSS or something similar, you can't necessarily expect it to go down the instant that something is deleted. I think the behavior may be different here in debug mode, which would explain what you are seeing.
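As an aside, one way to sidestep the manual delete entirely is to let the queue own the objects through smart pointers. This is only a sketch, not code from the question, and it assumes C++14 for std::make_unique:

#include <memory>
#include <queue>

struct Foo { /* ... */ };

int main()
{
    // The queue owns the objects; each Foo is destroyed automatically when
    // its unique_ptr is popped and goes out of scope.
    std::queue<std::unique_ptr<Foo>> q;

    for (int i = 0; i < 100000; i++)
        q.push(std::make_unique<Foo>());

    while (!q.empty())
    {
        std::unique_ptr<Foo> f = std::move(q.front());
        q.pop();
        // f's destructor frees the Foo here; whether the allocator hands the
        // pages back to the OS immediately is still up to the runtime.
    }
}

This does not change what the process monitor reports, but it removes the possibility of forgetting a delete.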

Related

How to free allocated memory in a recursive function in C++

I have a recursive method in a class that computes some stages (it doesn't matter what a stage is). If it notices that the probability of success for a stage is too low, the stage is delayed by storing it in a queue, and the program looks for delayed stages later. It grabs a stage, copies the data it needs, and deletes the stage.
The program runs fine, but I have a memory problem. Since the program is randomized, it can happen that it delays up to 10 million stages, which results in something like 8 to 12 GB of memory usage (or more, but it crashes before that happens). It seems the program never frees the memory for a deleted stage before reaching the end of the call stack of the recursive function.
class StageWorker
{
private:
    std::queue<Stage*> delayedStages;

    void perform(Stage** start)
    {
        // [long value_from_stage = (**start).value; ]
        delete *start;
        // Do something here
        // Delay a stage if necessary like this
        this->delayedStages.push(new Stage(value));
        // Do something more here
        // Look for delayed stages like this
        Stage* front = this->delayedStages.front();
        this->delayedStages.pop();
        this->perform(&front);
    }
};
I used a pointer to a pointer because I thought the memory was not being freed while there was still a pointer (front) pointing to the stage, so I pass a pointer to that pointer and delete through it. But it does not seem to work. If I watch the memory usage in Performance Monitor (on Windows), it looks like this:
[plot of process memory usage over time; the end of the recursive calls is marked]
This is just an example plot, with real data but from a very small scenario.
Any ideas how to free the memory of no longer used stages before reaching the end of the call stack?
Edit
I followed some advice and removed the pointers. So now it looks like this:
class StageWorker
{
private:
    std::queue<Stage> delayedStages;

    void perform(Stage& start)
    {
        // [long value_from_stage = start.value; ]
        // Do something here
        // Delay a stage if necessary like this
        this->delayedStages.push(Stage(value));
        // Do something more here
        // Look for delayed stages like this
        this->perform(this->delayedStages.front());
        this->delayedStages.pop();
    }
};
But this changed nothing; memory usage is the same as before.
As mentioned, it may be a problem of monitoring. Is there a better way to check the exact memory usage?
Just to close this question as answered:
Mats Petersson (thanks a lot) mentioned in the comments that it could be a problem of how the memory usage is monitored.
In fact, that was exactly the problem. The code I provided in the edit (after replacing the pointers with references) frees the memory correctly, but it is not given back to the OS (in this case Windows), so from the monitor it looks like the memory is not freed.
I then ran some experiments and saw that the freed memory is reused by the application, so it does not have to ask the OS for more memory.
So what the monitor shows is the peak memory the recursive function required; the whole amount is only given back to the OS after the first call of the function ends.
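To answer the "is there a better way to check the exact memory usage" question from the edit, one option is to count live allocations inside the program itself rather than relying on an OS-level monitor. The sketch below is not part of the original answer; it overrides the global operator new and operator delete with a simple counter:

#include <atomic>
#include <cstdio>
#include <cstdlib>
#include <new>

// Number of allocations that are currently live. This reflects what the
// program has actually freed, even when the allocator keeps the pages
// cached instead of returning them to the OS.
static std::atomic<long> g_liveAllocations{0};

void* operator new(std::size_t size)
{
    if (void* p = std::malloc(size))
    {
        ++g_liveAllocations;
        return p;
    }
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept
{
    if (p)
    {
        --g_liveAllocations;
        std::free(p);
    }
}

void reportLiveAllocations()
{
    std::printf("live allocations: %ld\n", g_liveAllocations.load());
}

Calling reportLiveAllocations() before and after the delayed stages are drained would show whether the Stage objects are really being released. A matching operator new[]/delete[] pair would be needed if the code uses array new.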

Do memory leaks have any effect after the program ends?

My question is about the absolute scope of memory allocated on the heap. Say you have a simple program like
class Simple
{
private:
    int *nums;
public:
    Simple()
    {
        nums = new int[100];
    }
    ~Simple()
    {
        delete[] nums;
    }
};

int main()
{
    Simple foo;
    Simple *bar = new Simple;
}
Obviously foo falls out of scope at the end of main and its destructor is called, whereas bar will not call its destructor unless delete is called on it. So the Simple object that bar points to, as well as the nums array, will be lost in the heap. While this is obviously bad practice, does it actually matter since the program ends immediately after? Am I correct in my understanding that the OS will free all heap memory that it allocated to this program once it ends? Are the effects of my bad decisions limited to the time it runs?
Any modern OS will reclaim all the memory allocated by any process after it terminates.
Each process has its own virtual address space in all common operating systems nowadays, so it's easy for the OS to reclaim all of its memory.
Needless to say it's a bad practice to rely on the OS for that.
It essentially means such code can't be used in a program that runs for a long while.
Also, in real world applications destructors may do far more than just deallocate memory.
A network client may send a termination message, a database-related object may commit transactions, and a file-wrapping object may write some closing data to its file.
In other words: don't let your memory leak.
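For completeness, here is a sketch, not part of the original answer, of how the question's main could be written so that bar's destructor is guaranteed to run; it assumes C++14 for std::make_unique:

#include <memory>

// Simple is the class from the question above.
class Simple
{
    int *nums;
public:
    Simple() { nums = new int[100]; }
    ~Simple() { delete[] nums; }
};

int main()
{
    Simple foo;                            // destroyed at end of scope, as before
    auto bar = std::make_unique<Simple>(); // destructor now runs automatically too,
                                           // so delete[] nums is never skipped
}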

Subtle Memory Leak, and is this common practice?

I think I might be creating a memory leak here:
void buy(); // forward declaration so commandoptions() can call it

void commandoptions()
{
    cout << "You have the following options: \n 1). Buy Something.\n 2).Check you balance. \n3). See what you have bought.\n4.) Leave the store.\n\n Enter a number to make your choice:";
    int input;
    cin >> input;
    if (input == 1)
        buy();
    //Continue the list of options.....
    else
        commandoptions(); //MEMORY LEAK IF YOU DELETE THE ELSE STATEMENTS!
}

inline void buy()
{
    //buy something
    commandoptions();
}
Let's say commandoptions() has just executed for the first time since the program started. The user selects '1', meaning the buy() subroutine is executed by the commandoptions() subroutine.
After buy() executes, it calls commandoptions() again.
Does the first commandoptions() ever return? Or did I just make a memory leak?
If I make a subroutine that does nothing but call itself, it will cause a stack overflow because the other 'cycles' of that subroutine never exit. Am I doing, or close to doing, that here?
Note that I used the inline keyword on buy... does that make any difference?
I'd happily ask my professor, he just doesn't seem available. :/
EDIT: I can't believe it didn't occur to me to use a loop, but thanks, I learned something new about my terminology!
A memory leak is where you have allocated some memory using new like so:
char* memory = new char[100]; //allocate 100 bytes
and then, after using this memory, you forget to delete it:
delete[] memory; //return used memory back to system.
If you forget to delete, then you are leaving this memory marked as in-use while your program is running, and it cannot be reused for something else. Since memory is a limited resource, doing this millions of times, for example, without the program terminating would leave you with no memory left to use.
This is why we clean up after ourselves.
In C++ you'd use an idiom like RAII to prevent memory leaks.
class RAII
{
public:
    RAII() { memory = new char[100]; }
    ~RAII() { delete[] memory; }
    //other functions doing stuff
private:
    char* memory;
};
Now you can use this RAII class, as so
{ // some scope
RAII r; // allocate some memory
//do stuff with r
} // end of scope destroys r and calls destructor, deleting memory
Your code doesn't show any memory allocations, therefore has no visible leak.
Your code does seem to have endless recursion, without a base case that will terminate the recursion.
The inline keyword won't cause a memory leak.
If this is all the code you have, there shouldn't be a memory leak. It does look like you have infinite recursion though. If the user types '1' then commandoptions() gets called again inside of buy(). Suppose they type '1' in that one. Repeat ad infinitum, and you eventually crash because the stack got too deep.
Even if the user doesn't type '1', you still call commandoptions() again inside of commandoptions() at the else, which will have the exact same result -- a crash because of infinite recursion.
I don't see a memory leak with the exact code given however.
This is basically a recursion without a base case. So, the recursion will never end (until you run out of stack space that is).
For what you're trying to do, you're better off using a loop rather than recursion (see the sketch after this answer).
And to answer your specific questions:
No, commandoptions never returns.
If you use a very broad definition of a memory leak, then this is a memory leak, since you're creating stack frames without ever removing them again. Most people wouldn't label it as such though (including me).
Yes, you are indeed gonna cause a stack overflow eventually.
The inline keyword won't make a difference in this.
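As a sketch of the loop-based approach suggested above (this is not the original poster's code, and the menu text is abbreviated):

#include <iostream>

void buy()
{
    // buy something
}

void commandoptions()
{
    int input = 0;
    // Loop until the user chooses to leave; no recursion, so the stack never grows.
    while (input != 4)
    {
        std::cout << "1) Buy something\n2) Check your balance\n"
                     "3) See what you have bought\n4) Leave the store\n"
                     "Enter a number to make your choice: ";
        if (!(std::cin >> input))
            break; // bail out on bad input instead of looping forever
        if (input == 1)
            buy();
        // ...handle the other options...
    }
}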
This is not about a memory leak; you are making infinite calls to the commandoptions function no matter what the value of input is, which will result in a stack overflow. You need some exit point in your commandoptions function.
There is no memory leak here. What does happen (at least it looks that way in that butchered code snippet of yours) is that you get into an infinite loop. You might run out of stack space if tail call optimization doesn't kick in or isn't supported by your compiler (it's a bit hard to see whether or not your calls actually are in tail position though).

_bstr_t memory leak

I have some C++ code, but it is not releasing memory properly. Tell me where I am wrong; here is my code:
1 void MyClass::MyFunction(void)
2 {
3     for (int i = 0; i < count; i++)
4     {
5         _bstr_t xml = GetXML(i);
6         // some work
7         SysFreeString(xml);
8     }
9 }
GetXML (line 5) returns a BSTR. At this point the program's memory usage increases, but after SysFreeString (line 7) the memory is not released. What am I doing wrong here?
First:
// This makes a copy.
// This is where the leak is. You are leaking the original string.
_bstr_t xml = GetXML();
// You want to use this, to attach the BSTR to the _bstr_t
_bstr_t xml = _bstr_t(GetXML(), false);
Second, don't do this:
SysFreeString(xml);
The _bstr_t class will do that for you.
Third, freeing a BSTR will not necessarily release the memory to the OS immediately; the BSTR cache keeps recently used strings around in order to make SysAllocString faster. You shouldn't expect to see memory usage go straight down after SysFreeString.
You can control this behaviour for debugging purposes:
http://support.microsoft.com/default.aspx?scid=kb;en-us;Q139071
Lastly, when viewing memory usage in Task Manager you need to look at the column "Commit Size", not "Working Set". Go to Menu->View->Select Columns to show the column. And also note that this really only helps over a period of time: the memory may not be released to the OS immediately, but if you have no leaks, it shouldn't go up forever over the course of hours.
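Putting the first two points together, a corrected version of the loop from the question might look like this (a sketch; GetXML and count come from the question's code):

void MyClass::MyFunction(void)
{
    for (int i = 0; i < count; i++)
    {
        // Attach the returned BSTR to the _bstr_t (false = take ownership,
        // no copy), so the original string is not leaked.
        _bstr_t xml(GetXML(i), false);

        // some work

        // No SysFreeString here: the _bstr_t destructor frees the string.
    }
}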
I suppose you should use:
xml.Attach(GetXML(i));
operator= looks like it actually assigns a new value, which means copying it. The value returned by GetXML is left unfreed.
Also, there should then be no need for SysFreeString(xml);
Task Manager only shows the amount of memory allocated to the process. When the C++ runtime releases memory (C's free), it does not necessarily return the memory to the operating system, so Task Manager will not necessarily show memory going down until the process ends.
What Task Manager can show is whether you keep allocating memory and not releasing it: if the memory size of the process keeps increasing, you probably have a memory leak.
When programming you need to use memory profilers to see if you are releasing memory. On Windows I used Rational's Purify to give me this information, but it costs a lot. The MS C runtime can also be used to track memory; MSDN provides an overview here, read and follow the links.
As to your code, and as per the other comments and answers, one of the points of using the _bstr_t class is to have it do the memory and other resource management for you, so you should not call SysFreeString.
The destructor of _bstr_t will call SysFreeString(xml), so you don't need to call SysFreeString(xml) yourself. Freeing the same memory twice can result in a crash.

How to prevent memory leaks while cancelling an operation in a worker thread?

Currently I am working on a desktop application that performs mathematical analyses. I am using Qt for the GUI and the project is written in C++.
When the user starts an analysis, I open a worker thread and start a progress bar. Everything is fine up to that point; the problem starts when the user cancels the operation. The operation is complex: I use several functions and objects, and I allocate and deallocate memory at several points. I want to know what I should do to recover when the operation is cancelled, because there can be memory leaks. Which pattern or method should I use to be robust and safe when cancelling the operation?
My idea is to throw an exception, but the operation is really complex, so should I put a try-catch in all of my functions, or is there a more generic way or pattern?
Edit: The problem is that my objects are transferred between scopes, so shared_ptr or auto_ptr alone doesn't solve my problem.
The flag idea could work, but I think it requires a lot of code, and there should be an easier way.
A pretty common way to close down a worker thread is to mark it with a flag and let the worker thread inspect this flag at regular intervals. If the flag is set, it should discontinue its work, clean up, and exit.
Is that a possibility in your situation?
The worker thread should check for a message to stop. The message can come through a flag or an event; when the stop message is received, the thread should exit.
Use Boost smart pointers for all allocated memory; on exit you would then have no memory leaks. Ever.
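A minimal sketch of the flag approach described in the two answers above, using std::atomic<bool> (the names here are illustrative, not from the question):

#include <atomic>
#include <thread>

std::atomic<bool> cancelRequested{false};

void workerFunction()
{
    for (int step = 0; step < 1000; ++step)
    {
        if (cancelRequested.load())
            return;            // leave early; locals and smart pointers clean up here

        // ...do one slice of the analysis...
    }
}

int main()
{
    std::thread worker(workerFunction);

    // ...later, when the user presses Cancel:
    cancelRequested.store(true);

    worker.join();             // wait for the worker to notice the flag and exit
}

In a Qt worker the check would typically sit at the top of each iteration of the analysis loop, next to the progress-bar update.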
Be sure your allocated memory is owned
Be sure every allocated object is owned by a smart pointer: C++03's auto_ptr, C++11's unique_ptr, Boost's scoped_ptr, or even shared_ptr (which can be shared, copied and moved).
This way, RAII will protect you from any memory leak.
Use Boost.Thread 1.37
Read Interrupt Politely, an article from Herb Sutter explaining miscellaneous ways to interrupt a thread.
Today, with Boost.Thread 1.37, you can ask a thread to terminate by having an exception thrown inside it. In Boost, it's the boost::thread_interrupted exception, which is thrown from any interruption point.
Thus, you do not need to handle some kind of message loop or verify some global/shared data. The main thread asks the worker thread to stop through an exception, and as soon as the worker thread reaches an interruption point, the exception is thrown. The RAII mechanism described earlier will make sure all your allocated data is freed correctly.
Let's say you have some pseudo code that will be called in a thread. It could be something like a function that will perhaps allocate memory, and another that will do a lot of computation inside a loop:
Object * allocateSomeObject()
{
Object * o = NULL ;
if(/*something*/)
{
// Etc.
o = new Object() ;
// Etc.
}
return o ; // o can be NULL
}
void doSomethingLengthy()
{
for(int i = 0; i < 1000; ++i)
{
// Etc.
for(int j = 0; j < 1000; ++j)
{
// Etc.
// transfer of ownership
Object * o = allocateSomeObject() ;
// Etc.
delete o ;
}
// Etc.
}
}
The code above is not safe and will leak, no matter the interruption mode, if steps are not taken to be sure that at all times the memory is owned by a C++ object (usually a smart pointer).
It could be modified this way to have the code be both interruptible, and memory safe:
boost::shared_ptr<Object> allocateSomeObject()
{
boost::shared_ptr<Object> o ;
if(/*something*/)
{
// Etc.
boost::this_thread::interruption_point() ;
// Etc.
o.reset(new Object()) ;
// Etc.
boost::this_thread::interruption_point() ;
// Etc.
}
return o ; // o can be "NULL"
}
void doSomethingLengthy()
{
for(int i = 0; i < 1000; ++i)
{
// Etc.
for(int j = 0; j < 1000; ++j)
{
// Etc.
// transfer of ownership
boost::shared_ptr<Object> o = allocateSomeObject() ;
// Etc.
boost::this_thread::interruption_point() ;
// Etc.
}
// Etc.
boost::this_thread::interruption_point() ;
// Etc.
}
}
void mainThread(boost::thread & worker_thread)
{
// etc.
if(/* some condition */)
{
worker_thread.interrupt() ;
}
}
Do not use Boost?
If you do not use Boost, then you can simulate this. Have a thread-local boolean-like variable that is set to true if the thread should be interrupted. Add a function that checks this variable and throws a specific exception if it is true. Have the "root" of your thread catch this exception so the thread ends correctly.
Disclaimer
I don't have access to Boost 1.37 right now, so I'm unable to test the previous code, but the idea is there. I will test this as soon as possible, and eventually post a more complete/correct/compilable code.
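Returning to the non-Boost simulation described above, here is a sketch of the idea; the names thread_interrupted, interruption_requested and check_interruption are illustrative, it assumes C++11 for thread_local, and a real program would need some way (for example a pointer registered per worker) for the controlling thread to reach the flag:

#include <exception>

// Thrown to unwind the worker thread's stack; RAII objects clean up on the way.
struct thread_interrupted : std::exception {};

// One flag per worker thread. The controlling thread needs access to it
// (set up when the worker starts) in order to request the interruption.
thread_local bool interruption_requested = false;

// Call this at the "interruption points" of the computation.
void check_interruption()
{
    if (interruption_requested)
        throw thread_interrupted();
}

void worker_root()
{
    try
    {
        // ...long computation, calling check_interruption() at safe points...
    }
    catch (const thread_interrupted&)
    {
        // The thread ends cleanly; destructors have already released all owned memory.
    }
}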
You should try to hold dynamically allocated resources in automatic (locals that live on the stack) sentry objects which free those resources in their destructors when they go out of scope. This way you can know they aren't going to leak, even if a function exits because of an exception. You also might want to investigate the boost libraries shared_ptr for sharing memory between routines.
First, throwing exceptions in multi-threaded applications is iffy because there isn't a standard way to handle them (do they propagate to other threads? the scheduler? main()? somewhere else?). At least until you get a C++0x library which has standardized threading built in.
For the time being it makes more sense to use RAII (which will guarantee that all resources, including memory, are cleaned up when the scope exits, whether it exits due to success or failure) and have some sort of status code passed back to whichever thread makes the most sense (the scheduler, for instance).
Also, directly canceling threads has been discouraged for more than a decade. It's much better to tell a thread to stop itself and have the thread handle the clean up, as Simon Jensen suggests.
There is no general solution to this question.
Some possible strategies:
Sometimes the use of shared_ptrs and friends helps
If you don't want the cancellation functionality to clutter up your algorithm, consider throwing. Catch in the top-level function and clean up from there.
Whatever you put on the stack and not on the heap will not cause leaks.
For large structures on the heap with lots of pointers between classes, it is usually a question of rigorously providing a way to deallocate your entire memory structure.
Did you consider placement new in pools of memory that you throw away on cancellation?
But in any case, have a strategy or you will feel the pain.
The answer is that it depends on the complexity of your operation.
There's a few approaches here.
1) As was mentioned, put a 'cancel' flag in the operation, and have that operation poll the cancel flag at regular (close) intervals, probably at least as often as you update your progress bar. When the user hits cancel, trigger the cancel routine.
Now, as for memory handling in this scenario, I've done it a couple of ways. My preference is to use smart pointers or STL objects that will clean themselves up when you go out of scope. Basically, declare your objects inside of an object that has a destructor that will handle memory cleanup for you; as you create these objects, memory is created for you, and as the object goes out of scope, the memory is removed automatically. You can also add something like a 'dispose' method to handle the memory. It could look like this:
class MySmartPointer {
public:
    MySmartPointer() { MyObject = new Object(); }
    ~MySmartPointer() { if (MyObject != NULL) { delete MyObject; MyObject = NULL; } }
    void Dispose() { if (MyObject != NULL) { delete MyObject; MyObject = NULL; } }
    Object* Access() { return MyObject; }
private:
    Object* MyObject;
};
If you want to get really clever, you can template that class to be generic for any of your objects, or even to have arrays and the suchlike. Of course, you might have to check to see if an object has been disposed prior to accessing it, but thems the breaks when you use pointers directly. You might also be able to inline the Access method, so that it doesn't cost you a function call during execution.
2) A goto method. Declare your memory at the front, delete at the end, and when you hit the cancel method, call goto to go to the end of the method. I think certain programmers might lynch you for this, as goto is considered extremely bad style. Since I learned on BASIC and 'GOTO 10' as a way of looping, it doesn't freak me out as much, but you might have to answer to a pedant during a code review, so you'd better have a really good explanation for why you went with this one and not option 1.
3) Put it all into a process, rather than a thread. If you can, serialize all of the information to manipulate to disk, and then run your complicated analysis in another program. If that program dies, so be it, it doesn't destroy your main application, and if your analysis is that complicated and on a 32 bit machine, you may need all of that memory space to run anyway. Rather than using shared memory for passing progress information, you just read/write progress to disk, and canceling is instant. It's a bit trickier to implement, but not impossible, and potentially far more stable.
Since you are using Qt, you can take advantage of QObject's parenting memory system.
You say that you allocate and deallocate memory during the run of your worker thread. If each allocation is an instance of QObject, why don't you just parent it to the current QThread Object?
MyObject * obj = new MyObject( QThread::currentThread() );
You can delete them as you go and that's fine, but if you miss some, they will be cleaned up when the QThread is deallocated.
Note that when you tell your worker QThread to cancel, you must then wait for it to finish before you delete the QThread instance.
workerThread->cancel(); // request that it stop
workerThread->wait(); // wait for it to complete
delete workerThread; // deletes all child objects as well
If you use Simon Jensen's answer to quit your thread and QObject parenting as your memory strategy, I think you'll be in a good situation.