Memory leak checking using Instruments on Mac - C++

I've just been pulling my hair out trying to make Instruments cough up my deliberately constructed memory leaks. My test example looks like this:
class Leaker
{
public:
    char *_array;
    Leaker()
    {
        _array = new char[1000];
    }
    ~Leaker()
    {
    }
};

void *leaker()
{
    void *p = malloc(1000);
    int *pa = new int[2000];
    {
        Leaker l;
        Leaker *pl = new Leaker();
    }
    return p;
}

int main(int argc, char **argv)
{
    for (int i = 0; i < 1000; ++i) {
        leaker();
    }
    sleep(2); // Needed to give Instruments a chance to poll memory
    return 0;
}
Basically Instruments never found the obvious leaks. I was going nuts as to why, but then discovered "sec Between Auto Detections" in the "Leaks Configuration" panel under the Leaks panel. I dialed it back as low as it would go, which was 1 second, and placed the sleep(2) in my code, and voila; leaks found!
As far as I'm concerned, a leak is a leak, regardless of whether it happens 30 minutes into an app or 30 milliseconds in. In my case, I stripped the test case back to the above code, but my real application is a command-line application with no UI or anything, and it runs very quickly; certainly in less than the default 10-second sample interval.
Ok, so I can live with a couple of seconds upon exit of my app in instrumentation mode, but what I REALLY want is to simply have Instruments snapshot memory on exit, then do whatever it needs over time while the app is running.
So... the question is: Is there a way to make Instruments snapshot memory on exit of an application, regardless of the sampling interval?
Cheers,
Shane

Instruments, in Leaks mode, can be really powerful for leak tracing, but I've found that it's more biased towards event-based GUI apps than command-line programs (particularly those which exit after a short time). There used to be a CHUD API where you could programmatically control aspects of the instrumentation, but the last time I tried it the frameworks were no longer provided as part of the SDK. Perhaps some of this is now replaced with DTrace.
Also, ensure you're up to date with Xcode, as there were some recent improvements in this area which might make it easier to do what you need. You could also keep the short delay before exit but make it conditional on the presence of an environment variable, and then set that environment variable in the Instruments launch properties for your app, so that running outside Instruments doesn't have the delay.
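A minimal sketch of that conditional delay (the variable name LEAK_CHECK_DELAY is purely illustrative, not anything Instruments defines):

#include <cstdlib>
#include <unistd.h>

int main(int argc, char **argv)
{
    // ... the real work ...

    // Set this variable in the Instruments launch properties so that
    // only instrumented runs pay for the delay before exit.
    if (std::getenv("LEAK_CHECK_DELAY") != NULL)
        sleep(2); // give the Leaks instrument one more sampling pass
    return 0;
}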

Most unit testing code executes the desired code paths and exits. Although this is perfectly normal for unit testing, it creates a problem for the leaks tool, which needs time to analyze the process memory space. To fix this problem, you should make sure your unit-testing code does not exit immediately upon completing its tests. You can do this by putting the process to sleep indefinitely instead of exiting normally.
https://developer.apple.com/library/ios/documentation/Performance/Conceptual/ManagingMemory/Articles/FindingLeaks.html
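A minimal sketch of the "sleep indefinitely" approach that note describes, for a test driver (runAllTests is a hypothetical stand-in for your test entry point):

#include <unistd.h>

static void runAllTests() { /* hypothetical test entry point */ }

int main()
{
    runAllTests();

    // Never exit normally; kill the process from Instruments once
    // the Leaks tool has finished analyzing the memory space.
    for (;;)
        sleep(60);
}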

I've just decided to leave the 2-second delay in my debug/leak-checking build.

Related

Get line of last executed return statement

Is it possible, in any debugger for C/C++, to see where the last return statement happened? Or even a "return stack", like the call stack but the other way around?
bool foo(int bar)
{
    if (bar == 1) return false;
    if (bar == 2) return false;
    return true;
}

void baz()
{
    bool result = foo(4);
}
So after the call to foo in baz, you could see which of the three return statements in foo was hit.
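(As an aside: not a debugger feature, but a crude source-level workaround is to record the line of the last executed return yourself. A sketch, where g_lastReturnLine and the RETURN macro are purely illustrative:)

static int g_lastReturnLine = 0;

// Wrap every return so the last executed one records its source line.
#define RETURN(x) do { g_lastReturnLine = __LINE__; return (x); } while (0)

bool foo(int bar)
{
    if (bar == 1) RETURN(false);
    if (bar == 2) RETURN(false);
    RETURN(true);
}
// After foo() returns, inspect g_lastReturnLine in the debugger.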
Here are some reverse debuggers for C/C++ which can step back in time in your debug session:
Only Linux:
rr from Mozilla
open source, really fast, small recording traces, has to record the whole application
Undo reverse debugger
commercial, not as fast as rr, small recording traces, recording of the application can start anywhere
GDB reverse debugging
standard gdb from GNU, can record small functions (limited): start debugging, set a breakpoint, then start recording until the next breakpoint; you can reverse-debug between those points
Not tested by me:
RogueWave TotalView Debugger
commercial, comes with a full IDE
Windows 10 (7 not supported):
WinGDB Reverse Debugging
not tested, so I cannot say much about it; it seems to be the only one working on Windows
One thing to add: multithreading can be handled by these debuggers, but only if the execution of the threads is serialized onto one core (pseudo-parallelization). You can't reverse-debug multicore multithreaded applications. The reason is simple: reverse execution has to synchronize the executed threads, which can't be done if two or more threads execute at the "same time".

Can a process be limited on how much physical memory it uses?

I'm working on an application that has a tendency to use excessive amounts of memory, so I'd like to reduce this.
I know this is possible for a Java program, by adding a maximum heap size parameter when starting the Java program (e.g. java.exe ... -Xmx4g), but here I'm dealing with an executable on a Windows 10 system, so this is not applicable.
The title of this post refers to this URL, which mentions a way to do this, but which also states:
Maximum Working Set. Indicates the maximum amount of working set assigned to the process. However, this number is ignored by Windows unless a hard limit has been configured for the process by a resource management application.
Meanwhile I can confirm that the following lines of code indeed don't have any impact on the memory usage of my program:
HANDLE h_jobObject = CreateJobObject(NULL, L"Jobobject");
if (!AssignProcessToJobObject(h_jobObject, OpenProcess(PROCESS_ALL_ACCESS, FALSE, GetCurrentProcessId())))
{
    throw "COULD NOT ASSIGN SELF TO JOB OBJECT!:";
}
JOBOBJECT_EXTENDED_LIMIT_INFORMATION tagJobBase = { 0 };
tagJobBase.BasicLimitInformation.MaximumWorkingSetSize = 1; // far too small, just to see what happens
BOOL bSuc = SetInformationJobObject(h_jobObject, JobObjectExtendedLimitInformation, (LPVOID)&tagJobBase, sizeof(tagJobBase));
=> bSuc is true, or is there anything else I should expect?
On top of this, the mentioned tools (resource management applications, like Hyper-V) don't seem to work on my Windows 10 system.
Besides this, there seems to be another post about this subject, "Is there any way to force the WorkingSet of a process to be 1GB in C++?", but there the results seem to be negative too.
To be clear: I'm working in C++, so the solutions proposed in this URL are not applicable.
So now I'm stuck with the simple question: is there a way, implementable in C++, to limit the memory usage of the current process running on Windows 10?
Does anybody have an idea?
Thanks in advance
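One observation on the snippet above: it never sets LimitFlags, and per the SetInformationJobObject documentation the working-set and process-memory fields are ignored unless the corresponding JOB_OBJECT_LIMIT_* flags are set. A minimal sketch with the flags in place (the limit values are arbitrary; treat this as something to verify, not a confirmed fix):

#include <windows.h>

int main()
{
    HANDLE hJob = CreateJobObject(NULL, NULL);
    AssignProcessToJobObject(hJob, GetCurrentProcess()); // assign ourselves

    JOBOBJECT_EXTENDED_LIMIT_INFORMATION info = { 0 };

    // Without these flags, SetInformationJobObject returns TRUE but the
    // corresponding limit fields are silently ignored.
    info.BasicLimitInformation.LimitFlags =
        JOB_OBJECT_LIMIT_WORKINGSET | JOB_OBJECT_LIMIT_PROCESS_MEMORY;
    info.BasicLimitInformation.MinimumWorkingSetSize =  1 * 1024 * 1024;
    info.BasicLimitInformation.MaximumWorkingSetSize = 64 * 1024 * 1024;
    info.ProcessMemoryLimit = 256 * 1024 * 1024; // per-process commit limit

    BOOL ok = SetInformationJobObject(hJob, JobObjectExtendedLimitInformation,
                                      &info, sizeof(info));

    // ... run the memory-hungry work here and observe the effect ...
    return ok ? 0 : 1;
}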

What are the possible causes of "BUG: scheduling while atomic"?

There is another process continuously creating files that need processing by this code.
This code constantly scans the file-system for new files that need processing by comparing the contents of the file-system against a sqlite database that contains the processing results - one record for each file. This process is running at nice -n 19 so as not to interfere with the creation of new files by the other process.
It all works perfectly for a large number (>1k) of files, but then blows up with BUG: scheduling while atomic.
According to this
"Scheduling while atomic" indicates that you've tried to sleep
somewhere that you shouldn't
But the only sleep in the code is like this:
void doFiles(void) {
    for (...) { // for each file in the file-system
        ... // check database - do processing if needed
    }
    sleep(1);
}

int main(int argc, char *argv[], char *envp[]) {
    while (true) doFiles();
    return -1;
}
The code will hit this sleep after it has checked every file in the file-system against the database. The process needs to be repeated since new files will be added from time to time. There is no multi-threading in this code. Are there other possible causes for "BUG: scheduling while atomic" besides a misplaced sleep?
Edit: additional error output:
note: mirlin[1083] exited with preempt_count 1
BUG: scheduling while atomic: mirlin/1083/0x40000002
Modules linked in: g_cdc_ms musb_hdrc nop_usb_xceiv irqk edmak dm365mmap cmemk
Backtrace:
[<c002a5a0>] (dump_backtrace+0x0/0x110) from [<c028e56c>] (dump_stack+0x18/0x1c)
r6:c1099460 r5:c04ea000 r4:00000000 r3:20000013
[<c028e554>] (dump_stack+0x0/0x1c) from [<c00337b8>] (__schedule_bug+0x58/0x64)
[<c0033760>] (__schedule_bug+0x0/0x64) from [<c028e864>] (schedule+0x84/0x378)
r4:c10992c0 r3:00000000
[<c028e7e0>] (schedule+0x0/0x378) from [<c0033a80>] (__cond_resched+0x28/0x38)
[<c0033a58>] (__cond_resched+0x0/0x38) from [<c028ec6c>] (_cond_resched+0x34/0x44)
r4:00013000 r3:00000001
[<c028ec38>] (_cond_resched+0x0/0x44) from [<c0082f64>] (unmap_vmas+0x570/0x620)
[<c00829f4>] (unmap_vmas+0x0/0x620) from [<c0085c10>] (exit_mmap+0xc0/0x1ec)
[<c0085b50>] (exit_mmap+0x0/0x1ec) from [<c0037610>] (mmput+0x40/0xfc)
r9:00000001 r8:80000005 r6:c04ea000 r5:00000000 r4:c0427300
[<c00375d0>] (mmput+0x0/0xfc) from [<c003b5e4>] (exit_mm+0x150/0x158)
r5:c10992c0 r4:c0427300
[<c003b494>] (exit_mm+0x0/0x158) from [<c003cd44>] (do_exit+0x198/0x67c)
r7:c03120d1 r6:c10992c0 r5:0000000b r4:c10992c0
...
As others have said, you can call sleep() anytime you want in user code.
It looks like a problem with a driver on your platform. The driver may not actually call sleep() or schedule(), but often it will call a kernel function which will, in turn, call one of these.
This also looks like it is using memory-mapped file I/O on an embedded TI ARM processor.
This error was caused by a bad build.
A clean build by itself did not help.
A fresh checkout and build was required to resolve this issue.

VC2008 C++ Memory Leaks

Please note that my English skill is limited, but I'll try my best to explain.
I'm making an MFC project in Visual Studio 2008 SP1.
This project includes a static library built with VS2008 SP1 native C++.
The problem occurs with these steps:
1) Build and start debugging the MFC project.
2) Click the X button on the main window, or press Alt+F4, to exit the program.
3) The main window closes at once.
4) But looking at the Processes tab of Task Manager, the process is still alive.
5) If I kill the process in Task Manager, it is killed at once.
6) But Visual Studio remains in debugging mode, and it takes a very long time for the IDE to return to normal.
7) The time is 5~10 minutes.
8) And the output log shows "Detected memory leaks!!"
9) The log is very large, almost 11 megabytes of text.
And I found the cause:
1) The static library always creates an instance of its main class at start-up, using the new operator (this happens during static initialization, before main).
2) The static library's constructor contains the following code.
VPHYSICS::VPHYSICS()
{
    m_tickflowed = 0;
    QueryPerformanceFrequency(&cpu_freq);
    SetTickTime(30);
    m_state[VPHYSTATE_SPEED_MAX] = SPEED_SCALAR_MAX;
    m_state[VPHYSTATE_LIMITED_ACCELARATION] = FALSE;
    m_state[VPHYSTATE_FRICTIONENABLE] = TRUE;
    m_state[VPHYSTATE_FRICTIONFACTOR] = 1.0f;
    m_state[VPHYSTATE_GRAVITY] = 9.8065f;
    m_state[VPHYSTATE_ENGINESPEED_DELAY_HIGH] = 0.0f;
    m_state[VPHYSTATE_ENGINESPEED_DELAY_LOW] = 0.0f;
    m_state[VPHYSTATE_FRICTION_RATIO] = 1.0f;
    m_state[VPHYSTATE_DIMENSION_GLOBAL] = 2;
    m_state[VPHYSTATE_COLLISION_UNFRICTIONABLE] = TRUE;
    m_state[VPHYSTATE_PAULI_EXCLUSION_ENABLE] = TRUE;
    m_state[VPHYSTATE_PAULI_EXCLUSION_RATIO] = 1.0f;
    m_state[VPHYSTATE_FRICTION_SMOOTHLY] = 1.0f;
    m_state[VPHYSTATE_COLLHANDLER_OUTER] = TRUE;
    m_dwSuspendedCount = 0;
    InitializeCriticalSection(&m_criRegister);
    InitializeCriticalSection(&cri_out);
    ZeroMemory(m_objs, sizeof(m_objs));
    m_bThreadDestroy = FALSE;
    m_hPhysicalHandle = 0;
    m_nPhysicalThread1ID = 0;
    m_nTimeMomentTotalCount = 0;
    m_hGarbageCollector = 0;
    m_nGarbageCollectorID = 0;
    m_PhyProcessIterID = NULL;
    for (DWORD i = 1; i < MAX_OBJECT_NUMBER; i++)
    {
        m_objAvaliable.push_back(i);
    }
}
// This code is from my static library, a game physics engine.
And the problem occurs when this instance is destroyed.
When the delete operator is called (at the end of the program), it takes a very long time.
When I remove the

for (DWORD i = 1; i < MAX_OBJECT_NUMBER; i++)
{
    m_objAvaliable.push_back(i);
}

or decrease MAX_OBJECT_NUMBER (originally it was #define MAX_OBJECT_NUMBER 100000, but I decreased it to 5 or 10), the 'long time' disappears!!
The type of m_objAvaliable is std::list<DWORD>.
This member variable didn't seem to be a cause of memory leaks to me (because I thought this container had no relation to heap allocation).
Other projects that include this library don't have this problem.
(But this is the first time it has been included in an MFC project, and I only see this problem in this case.)
Can anyone imagine a solution to this problem???
If you want more information, comment on this post and I'll reply ASAP.
More: it only happens in Debug mode. In Release mode, this problem doesn't occur.
I believe the problem you are experiencing is not, in fact, a problem at all. MFC uses its own debug version of new (in Release mode, it uses the regular, default new). MFC does this so it can try and be helpful in telling you that you have memory leaks.
Trouble is, I think that your deallocation of the objects in the static library is occurring after MFC has decided to dump any allocations it thinks haven't been properly deallocated. Given that you have so many objects, it's spending an awfully long time dumping this stuff to the console.
At the end of the day, MFC thinks there are memory leaks when there aren't.
Your solutions are:
Stop MFC from using DEBUG_NEW: remove any lines in your MFC project that say #define new DEBUG_NEW. The disadvantage to this method is, of course, that if you inadvertently create real memory leaks, it can't track them.
Have some kind of initialisation and de-initialisation functions in your static library. Call the de-initialisation function when your MFC application exits, prior to when MFC starts to trawl through allocations it still thinks exist. (A minimal sketch follows.)
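Something along these lines, where VPhysicsInit, VPhysicsShutdown, and CMyApp are illustrative names rather than anything from the asker's code:

// In the static library: replace the static-time 'new' with explicit calls.
static VPHYSICS *g_physics = NULL;

void VPhysicsInit()     { if (!g_physics) g_physics = new VPHYSICS(); }
void VPhysicsShutdown() { delete g_physics; g_physics = NULL; }

// In the MFC app (CMyApp derives from CWinApp): tear the library down
// while the app is still alive, before MFC's debug heap walks its list
// of outstanding allocations.
BOOL CMyApp::InitInstance()
{
    VPhysicsInit();
    // ...
    return TRUE;
}

int CMyApp::ExitInstance()
{
    VPhysicsShutdown(); // frees the ~100,000 std::list nodes here
    return CWinApp::ExitInstance();
}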

Make main() "uncrashable"

I want to program a daemon-manager that takes care that all daemons are running, like so (simplified pseudocode):
void watchMe(filename)
{
    while (true)
    {
        system(filename); // freezes as long as filename runs
        // oh, filename must have crashed. Never mind, it will be restarted
    }
}

int main()
{
    _beginThread(watchMe, "foo.exe");
    _beginThread(watchMe, "bar.exe");
}
This part is already working - but now I am facing the problem that when an observed application - say foo.exe - crashes, the corresponding system call freezes until I confirm this beautiful message box.
This makes the daemon useless.
What I think might be a solution is to make the main() of the observed programs (which I control) "uncrashable" so they are shutting down gracefully without showing this ugly message box.
Like so:
try
{
    char *p = NULL;
    *p = 123; // nice null pointer exception
}
catch (...)
{
    cout << "Caught Exception. Terminating gracefully" << endl;
    return 0;
}
But this doesn't work as it still produces this error message:
("Untreated exception ... Write access violation ...")
I've tried SetUnhandledExceptionFilter and all other stuff, but without effect.
Any help would be highly appreciated.
Greets
This seems more like an SEH exception than a C++ exception, and it needs to be handled differently; try the following code:
__try
{
    char *p = NULL;
    *p = 123; // nice null pointer exception
}
__except (GetExceptionCode() == EXCEPTION_ACCESS_VIOLATION ?
          EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH)
{
    cout << "Caught Exception. Terminating gracefully" << endl;
    return 0;
}
But that's a remedy and not a cure; you might have better luck running the processes within a sandbox.
You can change the /EHsc flag to /EHa on your compiler command line (Properties / C/C++ / Code Generation / Enable C++ Exceptions).
See this for a similar question on SO.
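For illustration, a minimal sketch of what /EHa changes (under MSVC's asynchronous exception model, hardware faults such as access violations become catchable by catch (...); compile with cl /EHa, as with the default /EHsc this catch would never fire):

#include <iostream>

int main()
{
    try
    {
        char *volatile p = NULL; // volatile so the faulting write isn't optimized away
        *p = 123;                // access violation
    }
    catch (...) // with /EHa this also catches the SEH access violation
    {
        std::cout << "Caught exception. Terminating gracefully" << std::endl;
        return 0;
    }
    return 1;
}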
You can run the watched process asynchronously and use kernel objects to communicate with it. For instance, you can (see the sketch after this list):
Create a named event.
Start the target process.
Wait on the created event.
In the target process, when the crash is encountered, open the named event and set it.
This way, your monitor will continue to run as soon as the crash is encountered in the watched process, even if the watched process has not ended yet.
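A minimal sketch of that scheme (the event name and foo.exe are illustrative; error handling mostly omitted):

#include <windows.h>
#include <cstdio>

int main()
{
    // Monitor side: create the named event before starting the target.
    HANDLE hCrashEvent = CreateEventW(NULL, TRUE, FALSE, L"Local\\WatchedAppCrashed");

    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    wchar_t cmd[] = L"foo.exe";
    if (!CreateProcessW(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
        return 1;

    // Wake up on whichever comes first: the crash signal or process exit.
    HANDLE waits[2] = { hCrashEvent, pi.hProcess };
    DWORD which = WaitForMultipleObjects(2, waits, FALSE, INFINITE);
    if (which == WAIT_OBJECT_0)
        std::printf("watched process reported a crash; restart it here\n");

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    CloseHandle(hCrashEvent);
    return 0;
}

// Watched side, e.g. inside its unhandled-exception handling:
//     HANDLE h = OpenEventW(EVENT_MODIFY_STATE, FALSE, L"Local\\WatchedAppCrashed");
//     if (h) SetEvent(h);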
BTW, you might be able to control the appearance of the first error message using drwtsn32 (or whatever is used in Win7), and I'm not sure, but the second error message might only appear in debug builds. Building in release mode might make it easier for you, though the most important thing, IMHO, is solving the cause of the crashes in the first place - which will be easier in debug builds.
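Relatedly, a sketch of one commonly used way to suppress those fault dialogs from inside the watched program itself (not something the steps above rely on; exact behavior varies across Windows versions and WER configuration):

#include <windows.h>

int main()
{
    // Ask Windows not to display the critical-error / GP-fault dialogs
    // for this process; a crash then terminates it without blocking on UI.
    SetErrorMode(SEM_FAILCRITICALERRORS | SEM_NOGPFAULTERRORBOX);

    // ... the watched program's real work here ...
    return 0;
}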
I did this a long time ago (in the 90s, on NT4). I don't expect the principles to have changed.
The basic approach is, once you have started the process, to inject a DLL that duplicates the functionality of UnhandledExceptionFilter() from KERNEL32.DLL. Rummaging around my old code, I see that I patched GetProcAddress, LoadLibraryA, LoadLibraryW, LoadLibraryExA, LoadLibraryExW and UnhandledExceptionFilter.
The hooking of the LoadLibrary* functions dealt with making sure the patching was present for all modules. The revised GetProcAddress had to provide the addresses of the patched versions of the functions rather than the KERNEL32.DLL versions.
And, of course, the UnhandledExceptionFilter() replacement does what you want. For example, start a just-in-time debugger to take a process dump (core dumps are implemented in user mode on NT and its successors) and then kill the process.
My implementation had the patched functions implemented with __declspec(naked), and dealt with all the registers by hand, because the compiler can destroy the contents of some registers that callers from assembly might not expect to be destroyed.
Of course there was a bunch more detail, but that is the essential outline.