I've been trying to debug a memory leak in my program, and have narrowed it down to the WinHttp comms. I've been able to reproduce the problem in the following test code:
#include <windows.h>
#include <winhttp.h>

int main() {
    while (1) {
        HINTERNET send_session = WinHttpOpen(L"asdf", WINHTTP_ACCESS_TYPE_DEFAULT_PROXY, WINHTTP_NO_PROXY_NAME, WINHTTP_NO_PROXY_BYPASS, 0);
        WinHttpCloseHandle(send_session);
    }
}
After running this for a few seconds, the program is already using over 20 MB of memory. Why is this happening? The documentation states that you need to call WinHttpCloseHandle once the handle is no longer needed, and I'm doing that.
I'm compiling using mingw32 on Arch Linux, and running the code on Windows 7.
If you modify the code a bit, you will see what is happening.
int _tmain(int argc, _TCHAR* argv[])
{
    for (INT n = 0; n < 1000000; n++)
    {
        if (!(n % 10000))
            _tprintf(_T("%d\n"), n / 10000);
        HINTERNET send_session = WinHttpOpen(L"asdf", WINHTTP_ACCESS_TYPE_DEFAULT_PROXY, WINHTTP_NO_PROXY_NAME, WINHTTP_NO_PROXY_BYPASS, 0);
        WinHttpCloseHandle(send_session);
    }
    _tprintf(_T("Done\n"));
    Sleep(INFINITE);
    return 0;
}
As it creates and closes one million sessions, you will see the memory counters climb. The API creates background resources, including threads, and does not release them immediately, so they pile up.
However, as soon as you stop creating new sessions and give the process a few seconds of idle time, you will see all that memory released.
The bottom line is that in real code you should not create a separate session for every small task. One session can host multiple connections and requests.
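To illustrate, here is a minimal sketch of reusing one session for several requests (the user agent string, host name, and paths are placeholders of mine, not from the question):
HINTERNET session = WinHttpOpen(L"MyAgent/1.0", WINHTTP_ACCESS_TYPE_DEFAULT_PROXY,
                                WINHTTP_NO_PROXY_NAME, WINHTTP_NO_PROXY_BYPASS, 0);
HINTERNET connection = WinHttpConnect(session, L"example.com", INTERNET_DEFAULT_HTTP_PORT, 0);
const wchar_t * paths[] = { L"/first", L"/second", L"/third" };
for (int i = 0; i < 3; i++)
{
    // Each request reuses the same session and connection handles.
    HINTERNET request = WinHttpOpenRequest(connection, L"GET", paths[i], NULL,
                                           WINHTTP_NO_REFERER, WINHTTP_DEFAULT_ACCEPT_TYPES, 0);
    if (request)
    {
        if (WinHttpSendRequest(request, WINHTTP_NO_ADDITIONAL_HEADERS, 0,
                               WINHTTP_NO_REQUEST_DATA, 0, 0, 0))
        {
            WinHttpReceiveResponse(request, NULL);
            // ... read the body with WinHttpQueryDataAvailable / WinHttpReadData ...
        }
        WinHttpCloseHandle(request); // only the request handle is short-lived
    }
}
WinHttpCloseHandle(connection);
WinHttpCloseHandle(session); // close the session once, when all requests are done
This way the background resources that belong to the session stay alive for its whole lifetime instead of being torn down and recreated on every call.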
I'm writing a local network scanner on Windows that uses the IP Helper functions to find online hosts, equivalent to nmap -PR but without WinPcap. I know SendARP blocks and re-sends the ARP request up to three times if the remote host does not respond, so I use std::async to create one thread per host. The problem is that I want to send an ARP request every 20 ms so that there are not too many ARP packets in a very short time.
#include <iostream>
#include <future>
#include <vector>
#include <cstdio>
#include <cstring>
#include <ctime>
#include <winsock2.h>
#include <iphlpapi.h>
#pragma comment(lib, "iphlpapi.lib")
#pragma comment(lib, "ws2_32.lib")

using namespace std;

int main(int argc, char **argv)
{
    ULONG MacAddr[2];      /* for 6-byte hardware addresses */
    ULONG PhysAddrLen = 6; /* default to length of six bytes */
    memset(&MacAddr, 0xff, sizeof(MacAddr));
    PhysAddrLen = 6;
    IPAddr SrcIp = 0;
    IPAddr DestIp = 0;
    char buf[64] = {0};
    size_t start = time(NULL);
    std::vector<std::future<DWORD> > vResults;
    for (auto i = 1; i < 255; i++)
    {
        sprintf(buf, "192.168.1.%d", i);
        DestIp = inet_addr(buf);
        vResults.push_back(std::async(std::launch::async, std::ref(SendARP), DestIp, SrcIp, MacAddr, &PhysAddrLen));
        Sleep(20);
    }
    for (auto it = vResults.begin(); it != vResults.end(); ++it)
    {
        if (it->get() == NO_ERROR)
        {
            std::cout << "host up\n";
        }
    }
    std::cout << "time elapsed " << (time(NULL) - start) << std::endl;
    return 0;
}
At first I could do this by calling Sleep(20) after launching each thread, but once SendARP in those threads starts re-sending ARP requests because a remote host has not replied, the timing is out of my control, and I see many requests within a very short time (<10 ms) in Wireshark. So my questions are:
Is there any way to make SendARP asynchronous?
If not, can I control the send timing of SendARP in those threads?
There doesn't seem to be any way to force SendARP to act in a non-blocking manner; when a host is unreachable, it appears to re-query several times before giving up.
As for a solution, it's not what you want to hear: the MSDN docs state that there is a newer API, ResolveIpNetEntry2, which deprecates SendARP and can do the same thing, but it appears to behave in the same manner.
The struct it receives contains a field called ReachabilityTime.LastUnreachable, which is: "The time, in milliseconds, that a node assumes a neighbor is unreachable after not having received a reachability confirmation."
However, it does not appear to have any real effect.
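For reference, a single blocking lookup with ResolveIpNetEntry2 looks roughly like this (a minimal sketch; the target address and the use of GetBestInterface to pick the interface index are my assumptions, not from the question):
#include <winsock2.h>
#include <ws2tcpip.h>
#include <iphlpapi.h>
#include <netioapi.h>
#include <stdio.h>
#pragma comment(lib, "iphlpapi.lib")
#pragma comment(lib, "ws2_32.lib")

int main()
{
    IPAddr dest = inet_addr("192.168.1.1"); // hypothetical target

    // Pick the interface that routes to the target.
    DWORD ifIndex = 0;
    if (GetBestInterface(dest, &ifIndex) != NO_ERROR)
        return 1;

    MIB_IPNET_ROW2 row = {};
    row.Address.Ipv4.sin_family = AF_INET;
    row.Address.Ipv4.sin_addr.s_addr = dest;
    row.InterfaceIndex = ifIndex;

    // Blocks much like SendARP while the neighbor is probed.
    if (ResolveIpNetEntry2(&row, NULL) == NO_ERROR && row.PhysicalAddressLength == 6)
    {
        printf("%02x-%02x-%02x-%02x-%02x-%02x\n",
               row.PhysicalAddress[0], row.PhysicalAddress[1], row.PhysicalAddress[2],
               row.PhysicalAddress[3], row.PhysicalAddress[4], row.PhysicalAddress[5]);
    }
    return 0;
}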
The best way to do it is to use WinPcap or some other driver; there doesn't seem to be a way of solving your problem purely in user mode.
I'm trying to set up a sandbox akin to Chromium's. In particular, I'm trying to replicate their trick of creating a sleeping process with a low-privilege token, then setting a high-privilege token temporarily before running it. The idea is to let the process do all its initialization in high-privilege mode, then revert to the low-privilege token right before running any unsafe code.
So far, I'm struggling just to get a basic test up and running. Here's my code:
#include "stdafx.h"
#include <atlbase.h>
#include <iostream>
#include <cassert>
#include <vector>
#include <string>
#include <AccCtrl.h>
#include <aclapi.h>
#define VERIFY(x) { bool r = x; assert(r); }
uint8_t* GetTokenInfo(const HANDLE& token, TOKEN_INFORMATION_CLASS info_class, DWORD* error)
{
    // Get the required buffer size.
    DWORD size = 0;
    ::GetTokenInformation(token, info_class, NULL, 0, &size);
    if (!size)
    {
        *error = ::GetLastError();
        return nullptr;
    }
    uint8_t* buffer = new uint8_t[size];
    if (!::GetTokenInformation(token, info_class, buffer, size, &size))
    {
        *error = ::GetLastError();
        return nullptr;
    }
    *error = ERROR_SUCCESS;
    return buffer;
}
int main()
{
    // Open the current token
    CHandle processToken;
    VERIFY(::OpenProcessToken(::GetCurrentProcess(), TOKEN_ALL_ACCESS, &processToken.m_h));

    // Create an impersonation token without restrictions
    HANDLE impersonationToken;
    VERIFY(DuplicateToken(processToken, SecurityImpersonation, &impersonationToken));

    // Build the list of the deny-only group SIDs
    DWORD error;
    uint8_t* buffer = GetTokenInfo(processToken, TokenGroups, &error);
    if (!buffer) return error;

    TOKEN_GROUPS* token_groups = reinterpret_cast<TOKEN_GROUPS*>(buffer);
    std::vector<SID*> sids_for_deny_only;
    for (unsigned int i = 0; i < token_groups->GroupCount; ++i)
    {
        if ((token_groups->Groups[i].Attributes & SE_GROUP_INTEGRITY) == 0 &&
            (token_groups->Groups[i].Attributes & SE_GROUP_LOGON_ID) == 0)
        {
            sids_for_deny_only.push_back(reinterpret_cast<SID*>(token_groups->Groups[i].Sid));
        }
    }

    {
        DWORD size = sizeof(TOKEN_USER) + SECURITY_MAX_SID_SIZE;
        uint8_t* buffer = new uint8_t[size];
        TOKEN_USER* token_user = reinterpret_cast<TOKEN_USER*>(buffer);
        BOOL result = ::GetTokenInformation(processToken, TokenUser, token_user, size, &size);
        if (!result) return ::GetLastError();
        sids_for_deny_only.push_back(reinterpret_cast<SID*>(token_user->User.Sid));
    }

    size_t deny_size = sids_for_deny_only.size();
    SID_AND_ATTRIBUTES* deny_only_array = NULL;
    if (deny_size)
    {
        deny_only_array = new SID_AND_ATTRIBUTES[deny_size];
        for (unsigned int i = 0; i < sids_for_deny_only.size(); ++i)
        {
            deny_only_array[i].Attributes = SE_GROUP_USE_FOR_DENY_ONLY;
            deny_only_array[i].Sid = const_cast<SID*>(sids_for_deny_only[i]);
        }
    }

    // Create restricted SIDs
    DWORD size_sid = SECURITY_MAX_SID_SIZE;
    BYTE sid_[SECURITY_MAX_SID_SIZE];
    VERIFY(::CreateWellKnownSid(WinNullSid, NULL, sid_, &size_sid));

    SID_AND_ATTRIBUTES sidsToRestrict[] =
    {
        reinterpret_cast<SID*>(const_cast<BYTE*>(sid_)),
        0
    };

    // Create the restricted token
    HANDLE restrictedToken;
    VERIFY(::CreateRestrictedToken(processToken,
        0,                        // flags
        deny_size,                // number of SIDs to disable
        deny_only_array,          // SIDs to mark as deny-only
        0,                        // no privileges to delete
        0,
        _countof(sidsToRestrict), // number of restricting SIDs
        sidsToRestrict,           // SIDs to restrict
        &restrictedToken));
    VERIFY(::IsTokenRestricted(restrictedToken));

    // Create a process using the restricted token (but keep it suspended)
    STARTUPINFO startupInfo = { 0 };
    PROCESS_INFORMATION processInfo;
    VERIFY(::CreateProcessAsUser(restrictedToken,
        L"C:\\Dev\\Projects\\SandboxTest\\Debug\\Naughty.exe",
        0,                                   // cmd line
        0,                                   // process attributes
        0,                                   // thread attributes
        FALSE,                               // don't inherit handles
        CREATE_SUSPENDED | DETACHED_PROCESS, // flags
        0,                                   // inherit environment
        0,                                   // inherit current directory
        &startupInfo,
        &processInfo));

    // Set impersonation token with more rights
    {
        HANDLE temp_thread = processInfo.hThread;
        if (!::SetThreadToken(&temp_thread, impersonationToken))
        {
            return 1;
        }
    }

    // Run the process
    if (!::ResumeThread(processInfo.hThread)) // Other process crashes immediately when this is run
    {
        return 1;
    }

    std::cout << "Done!" << std::endl;
    return 0;
}
Not quite sure about deny list and restrict list yet, but if I understand this correctly it should be irrelevant. I'm calling SetThreadToken with my unrestricted token before running the thread, so I figure it should not matter what settings I use for restrictedToken. However, this is not the case; the new process crashes with the error code 0xc00000a5. If I use processToken instead of restrictedToken in CreateProcessAsUser, the code runs just fine. It's like SetThreadToken isn't doing anything.
I'm not doing much in naughty.exe right now, just starting an infinite loop.
Anyone know what I'm doing wrong here?
Edit 1:
According to this page, 0xc00000a5 means "STATUS_BAD_IMPERSONATION_LEVEL". Not sure on this, but I think I'm missing SeImpersonatePrivilege, causing stuff to fail. Still investigating options...
Edit 2:
Okay, seems like I had to reduce the privilege of the impersonation token to be able to use it with the other process. Not sure why, but now I can run the program without admin rights.
Still getting an error though :/ Now it's "STATUS_DLL_NOT_FOUND". Best lead from examining Process Monitor logs is an ACCESS DENIED on "C:\Windows\SysWOW64\ucrtbased.dll". The weird part is that it seems to be working once in a while (i.e. the spawned process sometimes runs just fine). Back to digging...
The problem is caused by the startup code trying to load the C runtime DLL from a new thread (which doesn't have access to the high-privilege token). What worked for me is to statically link the CRT into the sandbox process (i.e. /MTd in Debug builds, /MT in Release builds).
the new process crashes with the error code 0xc00000a5 / STATUS_BAD_IMPERSONATION_LEVEL
I've encountered this when:
the restricted token specifies SidsToRestrict that the permissive token does not.
the restricted token specifies a lower integrity level than the permissive token.
"restricted": The more restrictive token passed to CreateProcessAsUser
"permissive": The less restrictive token passed to SetThreadToken, and used until that thread calls RevertToSelf
It appears you cannot unrestrict sids or raise the integrity level with SetThreadToken, even if the parent process is unrestricted in these regards - you can only undeny sids, or unremove privileges.
I had to reduce the privilege of the impersonation token to be able to use it with the other process
This was a red herring for me. I tried every combination of keeping/removing privilege LUIDs for the permissive and restrictive tokens with little effect.
0xC0000135 / STATUS_DLL_NOT_FOUND
Best lead from examining Process Monitor logs is an ACCESS DENIED on "C:\Windows\SysWOW64\ucrtbased.dll". The weird part is that it seems to be working once in a while (i.e. the spawned process sometimes runs just fine).
The DLL loading/initializing code appears to be multithreaded. Speculating a bit here, my theory is that the new threads don't inherit the permissive token specified by SetThreadToken, and that ucrtbased.dll only loads successfully if the initial/main thread happened to be the thread that loaded ucrtbased.dll, and will fail if it's instead loaded by any of the worker threads - hence why it works sometimes, but typically not.
Workaround options:
Statically link the CRT. This is my current preference.
Don't pass Everyone or Users to SidsToDisable. Defeats the point of denying sids for sandboxing purposes in the first place, so I'd recommend against this, especially since I see no way to disable them later.
Have the parent process listen for CREATE_THREAD_DEBUG_EVENT and SetThreadToken for them too? I haven't tested this, but I suspect it might work. Would only want to do this during startup, lest you break open the sandbox after the child process has called RevertToSelf() etc.
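For what it's worth, a rough, untested sketch of that last idea (the debug-event loop and the use of DEBUG_ONLY_THIS_PROCESS are my guesses, not something I've verified in this scenario; impersonationToken is the permissive token from the question's code):
// Untested sketch: attach as a debugger so we get CREATE_THREAD_DEBUG_EVENT
// notifications, then stamp the permissive token onto each new thread.
// Assumes the child was created with DEBUG_ONLY_THIS_PROCESS in addition to
// CREATE_SUSPENDED.
DEBUG_EVENT ev = {};
bool childAlive = true;
while (childAlive && ::WaitForDebugEvent(&ev, INFINITE))
{
    if (ev.dwDebugEventCode == CREATE_THREAD_DEBUG_EVENT)
    {
        // The thread handle is owned by the debug API; don't close it here.
        HANDLE thread = ev.u.CreateThread.hThread;
        ::SetThreadToken(&thread, impersonationToken);
    }
    else if (ev.dwDebugEventCode == EXIT_PROCESS_DEBUG_EVENT)
    {
        childAlive = false;
    }
    ::ContinueDebugEvent(ev.dwProcessId, ev.dwThreadId, DBG_CONTINUE);
}
// Stop doing this (e.g. DebugActiveProcessStop) once the child signals that
// startup is complete, so the sandbox isn't weakened after RevertToSelf.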
0xC0000142 / STATUS_DLL_INIT_FAILED
Okay, this one's just me: I encountered it when spawning a process at Untrusted integrity, where bcrypt.dll failed to initialize for Rust's stdlib. Spawn at Low instead, and have the child process lower itself to Untrusted post-init, IMO.
How the heck do you use SidsToRestrict at all then?
You can't go from nullptr restrictions on a permissive token to real restrictions on a restricted token without causing 0xc00000a5 / STATUS_BAD_IMPERSONATION_LEVEL.
However, you can go from one restriction list to another, and neither necessarily needs to contain all the exact same SIDs as the other.
With a restrictive SidsToRestrict of only S-1-0-0 "NULL SID", I can use a permissive SidsToRestrict containing only:
S-1-1-0 "Everyone" (otherwise child dies w/ STATUS_ACCESS_DENIED)
S-1-5-5-x-yyyyyyy "LogonSessionId_..." (otherwise dies w/ STATUS_DLL_INIT_FAILED?)
Perhaps S-1-0-0 is considered a subset of Everyone, or perhaps the restricted sids can be outright disjoint?
Using all group SIDs marked SE_GROUP_ENABLED | SE_GROUP_LOGON_ID for your permissive token might be more appropriate.
Note that the child can't lower its integrity level unless it can OpenProcessToken(.., ADJUST_DEFAULT, ..) based on the current access token.
The only overlap between the permissive token's restriction SIDs and the restricted token's default TokenDefaultDacl is the logon session, which doesn't grant write access by default:
ACCESS_ALLOWED_ACE { Mask: GENERIC_ALL, Sid: S-1-5-21-xxxx-yyyy-zzzz "%USERNAME%", .. }
ACCESS_ALLOWED_ACE { Mask: GENERIC_ALL, Sid: S-1-5-18 "SYSTEM", .. }
ACCESS_ALLOWED_ACE { Mask: GENERIC_READ | GENERIC_EXECUTE, Sid: S-1-5-5-x-yyyyyyyyy "LogonSessionId_x_yyyyyyyyy", .. }
So you may want to create a new default dacl for the restricted token with:
InitializeAcl(...);
AddAccessAllowedAce(acl, ACL_REVISION, TOKEN_ADJUST_DEFAULT | ..., logon_session_sid);
TOKEN_DEFAULT_DACL default_dacl = { acl };
SetTokenInformation(restricted, TokenDefaultDacl, &default_dacl, sizeof(default_dacl));
And ensure you adjust your child process's process token integrity level before calling RevertToSelf.
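For reference, the integrity-level adjustment itself is just a SetTokenInformation(TokenIntegrityLevel, ...) call; a minimal sketch, where the token handle and the choice of Low integrity are placeholders of mine:
// Sketch: lower a token to Low integrity. `token` could be the restricted
// token before CreateProcessAsUser, or the child's own process token opened
// with TOKEN_ADJUST_DEFAULT.
BYTE integritySid[SECURITY_MAX_SID_SIZE];
DWORD sidSize = SECURITY_MAX_SID_SIZE;
if (::CreateWellKnownSid(WinLowLabelSid, NULL, integritySid, &sidSize))
{
    TOKEN_MANDATORY_LABEL label = {};
    label.Label.Attributes = SE_GROUP_INTEGRITY;
    label.Label.Sid = integritySid;
    ::SetTokenInformation(token, TokenIntegrityLevel,
                          &label, sizeof(label) + ::GetLengthSid(integritySid));
}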
I have a really strange problem. I have looked for the cause on the web and tried everything; nothing helps.
First case:
(This works exactly as expected: Windows Task Manager shows a constant memory size that does not increase.)
unsigned long WINAPI thfun(void * arg)
{
    ::Sleep(50);
    ::ExitThread(0);
    return 0;
}

int main(int argc, const wchar_t ** argv)
{
    HANDLE th = 0;
    DWORD thid, err;
    while (true)
    {
        th = ::CreateThread(0, 0, thfun, 0, 0, &thid);
        if (!th)
        {
            err = ::GetLastError();
        }
        ::WaitForSingleObject(th, INFINITE);
    }
    return 0;
}
Second case:
unsigned long WINAPI thfun(void * arg)
{
    ::Sleep(50);
    ::ExitThread(0);
    return 0;
}

int main(int argc, const wchar_t ** argv)
{
    WORD ver;
    WSADATA wsadata;
    ver = MAKEWORD(2, 2);
    if (WSAStartup(ver, &wsadata)) return 1;
    ::Sleep(50);
    HANDLE th = 0;
    DWORD thid, err;
    while (true)
    {
        th = ::CreateThread(0, 0, thfun, 0, 0, &thid);
        if (!th)
        {
            err = ::GetLastError();
        }
        ::WaitForSingleObject(th, INFINITE);
    }
    return 0;
}
If I call any Winsock function at least once, the threads I create no longer release their memory.
Windows Task Manager shows the memory usage of my application growing continuously.
What should I do to get the same behavior as in the first case when I use Winsock?
I use Visual Studio 2013.
Thank you very much for any help
You do not close your thread handles. A common error.
Your core loop should look like this:
while (true)
{
    th = ::CreateThread(0, 0, thfun, 0, 0, &thid);
    if (!th)
    {
        err = ::GetLastError();
    }
    ::WaitForSingleObject(th, INFINITE);
    CloseHandle(th);
}
That problem exists in both of your examples; the memory growth in the second sample can be a side effect of it.
ExitThread(0) is never a good idea, and I do not understand why Microsoft recommends it for C. Since the Winsock API should not rely on any destructors, it should not be the problem here; nevertheless, do not use it.
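If you want to avoid ExitThread, simply let the thread function return; a minimal sketch of the same loop (same behavior, just without ExitThread and with the handle closed):
// Sketch: the thread exits by returning, and the handle is always closed.
unsigned long WINAPI thfun(void * arg)
{
    ::Sleep(50);
    return 0; // returning ends the thread cleanly, no ExitThread needed
}

int main()
{
    while (true)
    {
        DWORD thid = 0;
        HANDLE th = ::CreateThread(0, 0, thfun, 0, 0, &thid);
        if (!th)
            continue; // CreateThread failed; nothing to wait on or close
        ::WaitForSingleObject(th, INFINITE);
        ::CloseHandle(th);
    }
    return 0;
}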
UPDATE
I tested a release build of your code on a Windows 7 64-bit SP1 system with Antivira Personal installed (my gaming machine), and also on my Windows 8 VM (Parallels). Neither system showed the problem you described and show in your video. This is IMHO good news for you, because it points to a problem with your installation rather than a general problem.
The video shows a leak of only a few bytes per ended thread, growing strictly linearly with the number of threads. To me this looks like per-thread information, which nowadays is usually stored in TLS (Thread Local Storage). It also only appears when you initialize the Winsock system. If Winsock itself were the problem, we would surely find reports of it (but I didn't). I believe a hook DLL is causing the problem: such a DLL is notified via its DllMain of every thread that starts or ends in the process. Any virus scanner or keyboard add-on(!) can cause such a problem, as they usually use hook DLLs and manipulate I/O such as pipes and sockets.
Unfortunately I only know one way to find out:
Make a release build of your sample and make sure the problem still exists.
Do a clean install of Windows 7.
Install, step by step, the environment you use on your productive system. Make sure you restart the computer after every step.
Hopefully you find the culprit.
Deactivating or uninstalling hooks may help, but it is not guaranteed. Unfortunately, installing programs on a Windows system is highly invasive.
Sorry for not having an easy answer.
I am attempting to add handle leak detection to the unit test framework on my code. (Windows 7, x64 VS2010)
I basically call GetProcessHandleCount() before and after each unit test.
This works fine except when threads are created/destroyed as part of the test.
It seems that Windows occasionally creates 1-3 events on thread shutdown. Running the same test in a loop does not increase the event creation count (e.g. running the test 5000 times in a loop only results in 1-3 extra events being created).
I do not create events manually in my own code.
It seems that this is similar to this problem:
boost::thread causing small event handle leak?
but I am doing manual thread creation/shutdown.
I followed this code:
http://blogs.technet.com/b/yongrhee/archive/2011/12/19/how-to-troubleshoot-a-handle-leak.aspx
And got this callstack from WinDbg:
Outstanding handles opened since the previous snapshot:
--------------------------------------
Handle = 0x0000000000000108 - OPEN
Thread ID = 0x00000000000030dc, Process ID = 0x0000000000000c90
0x000000007715173a: ntdll!NtCreateEvent+0x000000000000000a
0x0000000077133f26: ntdll!RtlpCreateCriticalSectionSem+0x0000000000000026
0x0000000077133ee3: ntdll!RtlpWaitOnCriticalSection+0x000000000000014e
0x000000007714e40b: ntdll!RtlEnterCriticalSection+0x00000000000000d1
0x0000000077146ad2: ntdll!LdrShutdownThread+0x0000000000000072
0x0000000077146978: ntdll!RtlExitUserThread+0x0000000000000038
0x0000000076ef59f5: kernel32!BaseThreadInitThunk+0x0000000000000015
0x000000007712c541: ntdll!RtlUserThreadStart+0x000000000000001d
--------------------------------------
As you can see, this is an event created on the thread shutdown.
Is there a better way of doing this handle leak detection in unit tests? My only current options are:
Forget trying to do this handle leak detection
Spin up some dummy tasks to attempt to create these spurious events up front (see the sketch after this list).
Allow some small tolerance value in leaks and run each test 100's of times (so actual leaks will be a large number)
Get the handle count excluding events (difficult amount of code)
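For option 2, the warm-up I have in mind would look something like this (rough sketch; the warm-up count of 8 is arbitrary, and it uses the same _beginthreadex/<process.h> style as the example further below):
static unsigned __stdcall NoopThread(void *) { return 0; }

// Create and join a few throwaway threads before taking the baseline handle
// count, so any events the runtime lazily creates on thread shutdown already
// exist when the real test runs.
void WarmUpThreadHandles()
{
    for (int i = 0; i < 8; ++i)
    {
        HANDLE h = (HANDLE)_beginthreadex(NULL, 0, NoopThread, NULL, 0, NULL);
        if (h)
        {
            WaitForSingleObject(h, INFINITE);
            CloseHandle(h);
        }
    }
}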
I have also tried switching to using std::thread in VS2013, but it seems that it creates a lot of background threads and handles when used. (makes the count difference much worse)
Here is a self-contained example where, 99+% of the time (on my computer), an event is created behind the scenes (the handle count differs). Putting the startup/shutdown code in a loop indicates that it does not leak steadily, but the occasional event accumulates:
#include "stdio.h"
#include <Windows.h>
#include <process.h>
#define THREADCOUNT 3
static HANDLE s_semCommand, s_semRender;
static unsigned __stdcall ExecutiveThread(void *)
{
WaitForSingleObject(s_semCommand, INFINITE);
ReleaseSemaphore(s_semRender, THREADCOUNT - 1, NULL);
return 0;
}
static unsigned __stdcall WorkerThread(void *)
{
WaitForSingleObject(s_semRender, INFINITE);
return 0;
}
int main(int argc, char* argv[])
{
DWORD oldHandleCount = 0;
GetProcessHandleCount(GetCurrentProcess(), &oldHandleCount);
s_semCommand = CreateSemaphoreA(NULL, 0, 0xFFFF, NULL);
s_semRender = CreateSemaphoreA(NULL, 0, 0xFFFF, NULL);
// Spool threads up
HANDLE threads[THREADCOUNT];
for (int i = 0; i < THREADCOUNT; i++)
{
threads[i] = (HANDLE)_beginthreadex(NULL, 4096, (i==0) ? ExecutiveThread : WorkerThread, NULL, 0, NULL);
}
// Signal shutdown - Wait for threads and close semaphores
ReleaseSemaphore(s_semCommand, 1, NULL);
for (int i = 0; i < THREADCOUNT; i++)
{
WaitForSingleObject(threads[i], INFINITE);
CloseHandle(threads[i]);
}
CloseHandle(s_semCommand);
CloseHandle(s_semRender);
DWORD newHandleCount = 0;
GetProcessHandleCount(GetCurrentProcess(), &newHandleCount);
printf("Handle %d -> %d", oldHandleCount, newHandleCount);
return 0;
}
I've been scratching my head over this for the past month now and I still can't figure out what is going on.
The problem is that I have a very serious memory leak in a C++ application running on Windows Server 2008, compiled using Visual Studio 2005. This is a managed project. The application starts at around 5-6 MB (according to Task Manager) and starts to exhibit symptoms of failure around the ~200 MB mark. I know Task Manager is a crude tool, but given the scale of the leak it seems OK to use.
I've narrowed the problem to MySQL Database interaction. If the application does not interact with the database, no memory is leaked.
All database interactions use mysql++. I've followed the build instructions in the man pages on tangentsoft.net.
We've evaluated the code for thread safety (that is, we ensured that each thread only uses mysqlpp object from that thread and no other) and checked to make sure all destructors are called for any dynamically generated objects created using 'new'.
Looking on the internet I keep seeing various reports from users of the mysqlpp class that indicate there is a leak somewhere. In particular, there was a discussion about how the Win C API would leak when mysqlpp was used:
http://www.phpmarks.com/6-mysql-plus/ffd713579bbb1c3e.htm
This discussion seems to conclude with a fix; however, when I try the fixes in my application, it still leaks.
I implemented a version of the application cited in the thread above, but with some of the advice from the man pages added:
int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
{
    while (true)
    {
        //Initialise MySQL API
        mysql_library_init(0, NULL, NULL);
        Sleep(50);

        //Connect to Database.
        mysqlpp::Connection c;
        c.connect("myDatabase", "localhost", "username", "password");
        Sleep(50);

        //Disconnect from Database
        c.disconnect();
        Sleep(50);

        //Free memory allocated to the heap for this thread
        c.thread_end();
        Sleep(50);

        //Free any memory allocated by MySQL C API
        mysql_library_end();
        Sleep(50);
    }
    return 1;
}
I added the Sleep(50) just to throttle each stage of the loop, so that each function has time to "settle down". I know it probably isn't necessary but at least this way I can eliminate that as a cause.
Nevertheless, this program leaks quite rapidly (~1 MB per hour).
I've seen similar questions to mine asked in a few places, with no conclusions made :(
So I'm not alone with this issue. It occurs to me that mysqlpp has a reputation for usefulness and so must be quite robust. Given that, I still can't see what I've done wrong. Does anyone have experience with mysqlpp and Visual Studio 2005 that might shed light on the problem?
Cheers,
Adam.
EDIT
I created another example using a pointer, just in case c was being duplicated in the loop:
//LEAKY
int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
{
    mysqlpp::Connection * c;
    while (true)
    {
        mysql_library_init(0, NULL, NULL);
        c = new mysqlpp::Connection;
        Sleep(50);
        c->connect("myDatabase", "localhost", "username", "password");
        Sleep(50);
        c->disconnect();
        Sleep(50);
        c->thread_end();
        Sleep(50);
        mysql_library_end();
        Sleep(50);
        delete c;
        c = NULL;
    }
    return 1;
}
This also leaks. I then created a control example based on this code, which doesn't leak at all:
//NOT LEAKY
int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
{
    char * ch;
    while (true)
    {
        mysql_library_init(0, NULL, NULL);
        //Allocate 4000 bytes
        ch = new char[4000];
        Sleep(250);
        mysql_library_end();
        delete [] ch;
        ch = NULL;
    }
    return 1;
}
Note that I also left in the calls to the MySQL C API here to show that they are not the cause of the leak. I then created an example using a pointer but without the calls to connect/disconnect:
//NOT LEAKY
int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
{
    mysqlpp::Connection * c;
    while (true)
    {
        mysql_library_init(0, NULL, NULL);
        c = new mysqlpp::Connection;
        Sleep(250);
        mysql_library_end();
        delete c;
        c = NULL;
    }
    return 1;
}
This doesn't leak.
So the difference is just the use of the mysqlpp connect/disconnect methods. I'll dig into the mysqlpp class itself and try to see what's up.
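To narrow it down further, my next step is to bypass mysqlpp entirely and drive the MySQL C API directly, to see whether the leak is in the wrapper or in the client library itself. Something like this (untested sketch; same connection parameters as above):
//C API only - does this leak too?
int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
{
    while (true)
    {
        mysql_library_init(0, NULL, NULL);
        MYSQL * handle = mysql_init(NULL);
        Sleep(50);
        if (mysql_real_connect(handle, "localhost", "username", "password",
                               "myDatabase", 0, NULL, 0) == NULL)
        {
            cout << "Connection Failure";
            return 0;
        }
        Sleep(50);
        mysql_close(handle);
        Sleep(50);
        mysql_library_end();
        Sleep(50);
    }
    return 1;
}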
Cheers,
Adam.
EDIT
Here is an example of the leaky code where checks are made.
//LEAKY
int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
{
    mysqlpp::Connection * c;
    while (true)
    {
        mysql_library_init(0, NULL, NULL);
        c = new mysqlpp::Connection;
        Sleep(50);
        if (c->connect("myDatabase", "localhost", "username", "password") == false)
        {
            cout << "Connection Failure";
            return 0;
        }
        Sleep(50);
        c->disconnect();
        Sleep(50);
        c->thread_end();
        Sleep(50);
        mysql_library_end();
        Sleep(50);
        delete c;
        c = NULL;
    }
    return 1;
}
Cheers,
Adam.