Memory leak in Xerces use - C++

I have a leak in my application, and I've reduced my code to the following, which leaks about 12 KB per iteration. I cannot tell whether this is a problem with my code or with the Xerces library itself. Looking at the Private Bytes counter in Perfmon I see only growth and no shrinkage, so it is clearly leaking.
Can someone please advise what could be wrong with the following code that causes it to leak at such an incredible rate?
(single threaded test app)
for (int x = 0; x < 1000000; x++) {
    DataSerializer* ds = new DataSerializer();
    ds->test(request);
    ds->releasedocument();
    ds->destroy_xml_lib();
    delete ds;
}
void DataSerializer::test(std::string& request)
{
    impl = initialize_impl();
}
DOMImplementation* DataSerializer::initialize_impl()
{
    try
    {
        boost::mutex::scoped_lock init_lock(impl_mtx);
        XMLPlatformUtils::Initialize();
        return DOMImplementationRegistry::getDOMImplementation(XConv("Core"));
    }
    catch (const XMLException& toCatch)
    {
        char* pMsg = XMLString::transcode(toCatch.getMessage());
        std::string msg(pMsg);
        XMLString::release(&pMsg);
    }
    return NULL;
}
void DataSerializer::destroy_xml_lib()
{
    boost::mutex::scoped_lock terminate_lock(impl_mtx); // is being used in an MT app
    XMLPlatformUtils::Terminate();
}

void DataSerializer::releasedocument()
{
    if (document)
    {
        document->release();
        document = NULL;
    }
}
I don't understand how this could possibly leak. What have I missed?

Where does impl get deleted?
I know nothing more about the API than googling the docs, but they suggest to me that you should not be calling Terminate() - in a real program, other code elsewhere, possibly in other threads, may still be using the Xerces library.
The DOMImplementation is returned as a pointer and has a destructor - clear indications that you have to manage its lifetime. It seems a really likely story that this is your memory leak.
Furthermore, DOMImplementationRegistry::getDOMImplementation() can return NULL, so you have to guard against that.
If you can run this on Linux, use Valgrind (Purify is a commercial equivalent for Windows).
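If the repeated Initialize()/Terminate() cycles are the culprit, one option is to tie them to a single RAII guard that lives for the whole run of the program. A minimal sketch (the XercesGuard name is made up for illustration; only the XMLPlatformUtils calls come from the code above):

#include <xercesc/util/PlatformUtils.hpp>

// RAII guard: Initialize() exactly once at construction,
// Terminate() exactly once at destruction.
struct XercesGuard {
    XercesGuard()  { xercesc::XMLPlatformUtils::Initialize(); }
    ~XercesGuard() { xercesc::XMLPlatformUtils::Terminate(); }
    XercesGuard(const XercesGuard&) = delete;
    XercesGuard& operator=(const XercesGuard&) = delete;
};

int main() {
    XercesGuard guard; // the library stays initialized for the program's lifetime
    // ... run the DataSerializer loop here, without per-iteration Initialize/Terminate ...
}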

Not sure where you allocate document.
In the releasedocument() function you don't delete it. All you do is set the pointer to zero after releasing its content.
PS: I don't know Xerces either.

Related

What happens to a local pointer if a thread is terminated?

What happens to data created in the local scope of a thread if the thread is terminated? Is it a memory leak?
void MyThread()
{
    auto* ptr = new int[10];
    while (true)
    {
        // stuff
    }
    // thread is interrupted before this delete
    delete[] ptr;
}
Okay, my perspective.
If the program exits, the threads exit wherever they are. They don't clean up. But in this case you don't care. You might care if it's an open file and you want it flushed.
However, I prefer a way to tell my threads to exit cleanly. This isn't perfect, but instead of while (true) you can do while (iShouldRun) and set the field to false when it's time for the thread to exit.
You can also set a flag, say iAmExiting, at the end, then myThread.join() once the flag is set. That gives your exit code a chance to clean up nicely. A sketch of the pattern follows.
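A minimal sketch of that pattern, using a std::atomic<bool> for the flag so the write is visible across threads (the names mirror the ones above):

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> iShouldRun{true};

void MyThread()
{
    auto* ptr = new int[10];
    while (iShouldRun.load())
    {
        // stuff
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    delete[] ptr; // reached now, because the loop can actually end
}

int main()
{
    std::thread myThread(MyThread);
    // ... main's work ...
    iShouldRun = false; // ask the thread to exit
    myThread.join();    // wait for it to clean up
}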
Coding this from the beginning helps when you write your unit tests.
The other thing -- as someone mentioned in comments -- use RAII. Pretty much if you're using raw pointers, you're doing something you shouldn't do in modern C++.
That's not an absolute. You can write your own RAII classes. For instance:
class MyIntArray {
public:
    MyIntArray(int sizeIn) { ... }
    ~MyIntArray() { delete[] array; }
private:
    int* array = nullptr;
    int size = 0;
};
You'll need a few more methods to actually get to the data, like an operator[]. Now, this isn't any different from using std::vector, so it's only an example of how to implement RAII for your custom data.
But your functions should NEVER call new like this. It's old-school. If your method pukes somehow, you have a memory leak. If it pukes on exit(), no one cares. But if it pukes for another reason, it's a problem. RAII is a much, much better solution than the other patterns.
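For the MyThread example above, the simplest RAII fix is to let std::vector own the array; a minimal sketch:

#include <vector>

void MyThread()
{
    std::vector<int> data(10); // owns the allocation
    // stuff
} // data's destructor frees the memory, even if 'stuff' throws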

Why is the memory created via unique_ptr not being deleted properly when reset() is called?

I have an application that receives packets at a fast rate, and every time it receives packets, some objects are created to handle them. For the object creation I am using std::unique_ptr.
For some reason they don't seem to be getting cleaned up properly, as I can see the memory usage of the application constantly rising.
I took a snapshot to see where the allocations are coming from, and it was as expected.
Here is the code that is creating these PacketIn and PacketHeader objects:
while (!server->BufferEmpty()) {
    std::shared_ptr<Stream> inStream = std::make_shared<Stream>();
    std::vector<unsigned char> buffer = inStream->GetBuffer();
    std::size_t n = server->receive(boost::asio::buffer(buffer),
                                    boost::posix_time::milliseconds(-1), ec);
    if (ec)
    {
        std::cout << "Receive error: " << ec.message() << "\n";
    }
    else
    {
        std::unique_ptr<IPacketIn> incomingPacket = std::make_unique<IPacketIn>();
        incomingPacket->ReadHeader(inStream);
        std::cout << "Received a buffer! ";
        //std::cout.write(buffer, n);
        std::cout << "\n";
        incomingPacket.reset();
    }
    ++packetsRead;
    inStream.reset();
}
PacketIn
class IPacketIn {
public:
    IPacketIn() {
        m_packetHeader = std::make_unique<PacketHeader>();
    }
    ~IPacketIn() {
        m_packetHeader.reset();
    }
    void ReadHeader(std::shared_ptr<Stream> stream) {
        m_packetHeader->ReadHeader(stream);
    }
private:
    std::unique_ptr<IPacketHeader> m_packetHeader;
};
PacketHeader
class PacketHeader : public IPacketHeader {
public:
    PacketHeader() {
    }
    ~PacketHeader() {
    }
    void ReadHeader(std::shared_ptr<Stream> stream) override {
        //m_uuid = stream->ReadUUID(10);
        //m_timestamp = stream->ReadInt64();
        //m_packetId = stream->ReadShort();
    }
private:
    std::string m_uuid;
    //long m_timestamp;
    //unsigned short m_packetId;
};
I've stepped through the code, and it seems calling reset() clears the unique_ptr, but is it actually deleting the memory it has created, or am I missing something?
Edit
So it seems it is not related to the unique_ptr, as I have tried swapping to plain new and delete and had the same issue.
What I have noticed is that the issue occurs when the PacketHeader class has these member variables:
std::string m_uuid;
long m_timestamp;
unsigned short m_packetId;
When these variables are removed, the issue no longer occurs.
I have narrowed it down to the std::string m_uuid;. When this is present in the PacketHeader class it causes the memory to rise, but when it is removed it is fine. Why is this?
Are these somehow not being removed when the object is destroyed?
It turns out that ownership of PacketHeader instances is held through a pointer to the base class IPacketHeader, which lacks a virtual destructor, so the std::unique_ptr<IPacketHeader> was unable to perform cleanup properly.
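A minimal sketch of the fix, assuming IPacketHeader is otherwise a plain interface class: declare a virtual destructor in the base, so that deleting through the base pointer runs PacketHeader's destructor (and therefore m_uuid's):

class IPacketHeader {
public:
    virtual ~IPacketHeader() = default; // without this, deleting through the base
                                        // pointer is undefined behavior and m_uuid leaks
    virtual void ReadHeader(std::shared_ptr<Stream> stream) = 0;
};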
Yes, it is deleting the memory.
Note that neither of the calls to reset is needed - the destructor of the smart pointer is about to be called in both cases, and that will delete the memory.
Note that monitoring process memory is a very unreliable way to tell whether you have a memory leak. Up to some limit, system libraries quite often try not to reuse recently released memory, in order to reduce the impact of use-after-free bugs.
Try using Valgrind to see if you have an actual memory leak.
Edit: VTT has clarified that the OP wasn't just monitoring process memory, but using VS memory profiler (which is very similar to valgrind).

ExtAudioFileOpenURL leak

I am opening an audio file to read it and I get an abandoned malloc block from this caller each time.
In a loop I allocate data like this (which Instruments marks as 99.7% of the memory usage):
data = (short*)malloc(kSegmentSize * sizeof(short));
and free it at the end of each iteration like this:
free(data);
I'm not really sure what is happening here and would appreciate any help.
EDIT: kSegmentSize varies in the thousands, from a minimum of 6000 to a max of 50000 (speculative).
Instruments trace:
Not having the exact code:
Pretty sure you're having this problem b/c something between the malloc and free is throwing (and you're probably catching it already, so you don't exit the loop). Depending on whether this is happening in C (or Objective-C) or C++ code, you have slightly different methods of resolution.
In C++, wrap the malloc/free in the RAII pattern so that when the stack is unwound the free is called.
class MyData {
public:
    MyData(size_t numShorts) : dataPtr(0) { dataPtr = (short*)malloc(numShorts * sizeof(short)); }
    ~MyData() { free(dataPtr); }
    operator short*() { return dataPtr; }
private:
    short* dataPtr;
};
MyData data(numShorts);
// do your stuff; you can still use data as you did before thanks to 'operator short*'
// the dtor is called automatically when data goes out of scope
In Objective-C you need to use an @finally block:
void* myPtr = 0;
@try { myPtr = malloc(...); }
@catch (NSException* e) {}
@finally { free(myPtr); }
I suggest that you start by simplifying: for example, comment out (preferably using #if 0) all of the code except the malloc/free. Run the code and ensure there are no abandoned heap blocks. Then gradually re-introduce the remaining code and re-run until you hit the problem, then debug.
Sorry to answer my own question, but after commenting out code back up the stack trace, the actual issue was that the file was never disposed.
Calling ExtAudioFileDispose(audioFile); solved this hidden bug. Instruments was not entirely clear and marked the mallocs as the leak. To be fair, the mallocs were for data that came from the file opened with ExtAudioFileOpenURL; not disposing the file reference left the leak.
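A small RAII wrapper makes that kind of bug hard to reintroduce. A minimal C++ sketch (the ScopedExtAudioFile name is hypothetical; ExtAudioFileOpenURL and ExtAudioFileDispose are the calls discussed above):

#include <AudioToolbox/ExtendedAudioFile.h>

// Owns an ExtAudioFileRef and disposes it on scope exit.
class ScopedExtAudioFile {
public:
    explicit ScopedExtAudioFile(CFURLRef url) {
        if (ExtAudioFileOpenURL(url, &m_file) != noErr)
            m_file = nullptr;
    }
    ~ScopedExtAudioFile() {
        if (m_file)
            ExtAudioFileDispose(m_file); // the call the original code was missing
    }
    ExtAudioFileRef get() const { return m_file; }
private:
    ExtAudioFileRef m_file = nullptr;
};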

Release mode error, but not in debug mode

My code runs fine in debug mode but fails in release mode.
Here's a snippet of my code where it fails:
LOADER->AllocBundle(&m_InitialContent);
while (!m_InitialContent.isReady())
{
    this->LoadingScreen();
}
AllocBundle() will load the content contained in m_InitialContent and set its ready status to true when it is done. This is implemented using multithreading.
this->LoadingScreen() should render a loading screen; however, that is not implemented yet, so the function has an empty body.
Apparently this might be the cause of the error: if I give the function LoadingScreen() one line of code, std::cout << "Loading" << std::endl;, then it runs fine.
If I don't, then the code gets stuck at while (!m_InitialContent.isReady()). It never even jumps to the code between the braces (this->LoadingScreen();), and apparently it never re-evaluates the expression in the while statement either, because it stays stuck there forever.
Does anyone have any ideas what might be causing this? And if so, what might the problem be?
I'm completely puzzled.
EDIT: Additional code on request
member of ContentLoader: details::ContentBundleAllocator m_CBA;
void ContentLoader::AllocBundle(ContentBundle* pBundle)
{
    ASSERT(!(m_CBA.isRunning()), "ContentBundleAllocator is still busy");
    m_CBA.Alloc(pBundle, m_SystemInfo.dwNumberOfProcessors);
}
void details::ContentBundleAllocator::Alloc(ContentBundle* pCB, UINT numThreads)
{
    m_bIsRunning = true;
    m_pCB = pCB;
    pCB->m_bIsReady = false;
    m_NumRunningThrds = numThreads;
    std::pair<UINT, HANDLE> p;
    for (UINT i = 0; i < numThreads; ++i)
    {
        p.second = (HANDLE)_beginthreadex(NULL,
                                          NULL,
                                          &details::ContentBundleAllocator::AllocBundle,
                                          this,
                                          NULL, &p.first);
        SetThreadPriority(p.second, THREAD_PRIORITY_HIGHEST);
        m_Threads.Insert(p);
    }
}
unsigned int __stdcall details::ContentBundleAllocator::AllocBundle(void* param)
{
    // PREPARE
    ContentBundleAllocator* pCBA = (ContentBundleAllocator*)param;

    // LOAD STUFF [collapsed for visibility]

    // EXIT
    pCBA->m_NumRunningThrds -= 1;
    if (pCBA->m_NumRunningThrds == 0)
    {
        pCBA->m_bIsRunning = false;
        pCBA->m_pCB->m_bIsReady = true;
        pCBA->Clear();
#ifdef DEBUG
        std::tcout << std::endl;
#endif
        std::tcout << _T("exiting allocation...") << std::endl;
    }
    std::tcout << _T("exiting thread...") << std::endl;
    return 0;
}
bool isReady() const {return m_bIsReady;}
When you compile your code in Debug mode, the compiler does a lot of stuff behind the scenes that prevents many mistakes made by the programmer from crashing the application. When you run in Release, all bets are off. If your code is not correct, you're much more likely to crash in Release than in Debug.
A few things to check:
Make sure all variables are properly initialized
Make sure you do not have any deadlocks or race conditions
Make sure you aren't passing around pointers to local objects that have been deallocated
Make sure your strings are properly NULL-terminated
Don't catch exceptions that you're not expecting and then continue running as if nothing had happened.
You are accessing the variable m_bIsReady from different threads without memory barriers. This is wrong, as its value may be cached by the optimizer or the processor. You have to protect this variable from simultaneous access with a CriticalSection, a mutex, or whatever synchronization primitive is available in your library.
Note that there might be further mistakes, but this one is definitely a mistake, too. As a rule of thumb: each variable which is accessed from different threads has to be protected with a mutex/critical section/whatever.
From a quick look, m_NumRunningThrds doesn't seem to be protected against simultaneous access either, so if (pCBA->m_NumRunningThrds == 0) might never be satisfied. A sketch of one fix follows.
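A minimal sketch of that fix, using std::atomic rather than a critical section (the member names are lifted from the code above into globals for brevity; assumes C++11):

#include <atomic>
#include <thread>
#include <vector>

std::atomic<bool> g_bIsReady{false};
std::atomic<unsigned> g_NumRunningThrds{0};

void AllocBundleWorker()
{
    // ... load content ...
    // fetch_sub returns the previous value, so exactly one thread
    // observes the transition to zero, even if several finish at once.
    if (g_NumRunningThrds.fetch_sub(1) == 1)
        g_bIsReady = true; // guaranteed to become visible to the spinning loop
}

int main()
{
    const unsigned numThreads = 4;
    g_NumRunningThrds = numThreads;
    std::vector<std::thread> threads;
    for (unsigned i = 0; i < numThreads; ++i)
        threads.emplace_back(AllocBundleWorker);

    while (!g_bIsReady.load())
    {
        // LoadingScreen(): the condition is now re-read on every iteration
    }
    for (auto& t : threads)
        t.join();
}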

How to find a way to detect the memory leak in this code?

Could anyone tell me a good way or tool to detect the memory leak in Visual Studio for this code? I have tried the CRT debug functions, but when I abort the debug process (Shift+F5), the memory leak report does not appear in the debug window.
void fun1()
{
    int* pInt = new int;
    return;
}

void Execute(void)
{
    while (true)
    {
        cout << "I will sleep for 1 second..." << endl;
        ::Sleep(1000);
        fun1();
    }
    return;
}

int main()
{
    Execute();
    return 0;
}
Does anyone know how to find the memory leak in the above code?
BTW, if I choose to use shared_ptr, the memory leak will not happen again, right?
The problem here is fairly simple: when you abort a process, leaking memory is more or less taken for granted -- even if your code wouldn't normally leak, aborting it with the debugger is (short of extremely good luck) going to leak memory anyway. As such, most tools that would normally report memory leaks won't when you abort the program with the debugger.
As such, to see a leak report, you just about need to write code that will, at some point, exit on its own instead of requiring you to kill it with the debugger. If you change your code to something like this:
void fun1()
{
    int* pInt = new int;
    return;
}

void Execute(void)
{
    for (int i = 0; i < 100000; i++)
    {
        //cout << "I will sleep for 1 second..." << endl;
        //::Sleep(2000);
        fun1();
    }
    return;
}

int main()
{
    Execute();
    return 0;
}
By the way, when you pass 2000 as the parameter to Sleep, you should expect it to sleep at least 2 seconds, not just one. For the moment, I've commented out the cout and Sleep, so it should just quickly leak the memory and produce a leak report. With a lot of output and Sleeping, it would do the same, just a lot more slowly and noisily.
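On the shared_ptr aside: yes, a smart pointer that owns the allocation removes this leak, and std::unique_ptr is the cheaper fit here since nothing is shared. A minimal sketch:

#include <memory>

void fun1()
{
    // the unique_ptr owns the int and deletes it on scope exit
    std::unique_ptr<int> pInt = std::make_unique<int>();
} // no leak: the destructor runs here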
Does adding this to the top of your main function not work?
#if defined(DEBUG) || defined(_DEBUG)
    _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);
#endif
You have to run in debug mode.
The problem is that while a process is running, it is hard for an automated tool to tell what memory has been leaked. In languages that keep track of references to objects and memory blocks you can do this at any time; you just need to find the blocks that have no references. In C/C++ there is no such thing (unless you implement it yourself, that is), so you can't really tell whether a memory block has been leaked or not.
One thing you can do in cases like this is to trigger the function that dumps memory leaks at a point in time in the life of your process that you know should not have any leaks. For example, let's say that you know that your application should not have any leaks at the end of each iteration in the Execute() while loop. Then you could do something like this:
#include <crtdbg.h>

void fun1()
{
    int* pInt = new int;
    return;
}

void Execute(void)
{
    int i = 0;
    while (true)
    {
        cout << "I will sleep for 1 second..." << endl;
        ::Sleep(2000);
        fun1();
#ifdef _DEBUG
        // dump any leaks every 100 iterations
        if (++i % 100 == 0)
            _CrtDumpMemoryLeaks();
#endif
    }
    return;
}

int main()
{
    Execute();
    return 0;
}
See this page for information about _CrtDumpMemoryLeaks() and other functions of the MSVC CRT library.
I hope this helps.
I have no idea about VS; on Linux I'd use Valgrind.