Get line of last executed return statement - C++

Is it possible, in any debugger for C/C++ to see where the last return statement happened? Or even a "return stack", like the call stack but the other way around?
bool foo(int bar)
{
    if (bar == 1) return false;
    if (bar == 2) return false;
    return true;
}

void baz()
{
    bool result = foo(4);
}
So after the call to foo in baz, you could see which of the three return statements in foo was hit.

Here are some reverse debuggers for C/C++ that can step back in time in your debug session:
Linux only:
rr from Mozilla
open source, really fast, small recording traces, has to record the whole application
Undo reverse debugger
commercial, not as fast as rr, small recording traces, recording of the application can start at any point
GDB reverse debugging
standard GNU gdb, can record small sections of code (limited): start debugging, set a breakpoint, start recording, run until the next breakpoint, and you can then reverse debug between those two points (see the example session at the end of this answer)
Not tested by me:
RogueWave TotalView Debugger
commercial, comes with full IDE
Windows 10 only (Windows 7 not supported):
WinGDB Reverse Debugging
not tested, cannot say much about it; it seems to be the only one working on Windows
One thing to add: multithreading can be handled by these debuggers, but only if the execution of the threads is serialized onto one core (pseudo-parallelization). You can't reverse debug multicore multithreaded applications. The reason is simple: reverse execution has to synchronize the executed threads, which can't be done if two or more threads are executed at the "same time".
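As an illustration of the GDB record/replay workflow described above, a session for the foo/baz example from the question might look roughly like this (record, reverse-step and record stop are real gdb commands; the annotations and output are only a sketch):
(gdb) break baz          # stop just before the call to foo
(gdb) run
(gdb) record             # start recording execution from here
(gdb) next               # step over: bool result = foo(4);
(gdb) reverse-step       # step backwards, ending up back inside foo
(gdb) frame              # now shows the return statement that was actually executed
(gdb) record stop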

Related

Why would a PyDev breakpoint on a return statement only work part of the time?

I have some code that calculates the price of a stock option using Monte Carlo and returns a discounted price. The final few lines of the relevant method look like this:
if(payoffType == pt.LongCall or payoffType == pt.LongPut):
    discountedPrice=discountedValue
elif(payoffType == pt.ShortCall or payoffType == pt.ShortPut):
    discountedPrice=(-1.0)*discountedValue
else:
    raise Exception
#endif
print "dv:", discountedValue, " px:", discountedPrice
return discountedPrice
At a higher level of the program, I create four pricers, which are passed to instances of a portfolio class that calls the price() method on the pricer it has received.
When I set the breakpoint on the if statement or the print statement, the breakpoints work as expected. When I set the breakpoint on the return statement, the breakpoint is hit correctly on the first pass through the pricing code, but is then skipped on subsequent passes.
On occasion, if I have set a breakpoint somewhere in the flow of execution between the first pass through the pricing code and the second pass, the breakpoint will be picked up.
I obviously have a workaround, but I'm curious if anyone else has observed this behavior in the PyDev debugger, and if so, does anyone know the root cause?
The issues I know of are:
If you have a StackOverflowError anywhere in the code, Python will disable the tracing that the debugger uses.
I know there are some issues with asynchronous code which could make the debugger break.
A workaround is using a programmatic breakpoint (i.e. pydevd.settrace -- the remote debugger session at http://www.pydev.org/manual_adv_remote_debugger.html has more details on it). It resets the tracing even if Python broke it on a stack overflow error, and it will always be hit. (The issue with asynchronous code is that the debugger tries to run with untraced threads, but under some conditions it's not able to restore the tracing when dealing with asynchronous code.)

How to detect file leak and the corresponding code in Solaris?

How can I detect a file leak and the corresponding stack on Solaris? I see this information is well reported by valgrind on Linux. Are there any similar tools on Solaris as well?
On Linux you can use strace to log all file open and close calls. Then you can analyse the log for resource leaks: the number of open calls should match the number of close calls. If it doesn't, you have a leak. On Solaris there is a similar tool - DTrace.
On Solaris you can look at the currently open file descriptors of a process by simply using the pfiles command. If you want to track files being opened/closed, truss (the Solaris equivalent of strace) comes to mind, with a filter for file-related syscalls (truss -t open,close, but there are other syscalls that create file descriptors).
If you find that the pfiles output grows, first identify whether what you're leaking are ordinary files or things like sockets/pipes. If it's leaking ordinary files, then a DTrace script can be used; the following is a base for your own experiments - I currently don't have a Solaris system at hand to try it out and refine it.
#!/usr/bin/dtrace -s
/* remember the userland stack at the time of each open() */
syscall::open:entry { self->t = ustack(); }
/* on success, associate that stack with the returned file descriptor */
syscall::open:return /arg0 >= 0/ { trackedfds[arg0] = self->t; }
syscall::open:return { self->t = 0; }
/* remember which descriptor is being closed */
syscall::close:entry { self->t = arg0; }
/* a successful close discards the entry for that descriptor */
syscall::close:return /arg0 >= 0/ { trackedfds[self->t] = 0; }
syscall::close:return { self->t = 0; }
/* whatever is still tracked when the script ends is a leak candidate */
END { printa(trackedfds); }
This builds an associative array indexed by filedescriptor number whose contents are the userside stacktrace at the time of the open() system call. On successful close, the entry for the given filedescriptor number is discarded, and when the program exits (or the script is stopped) the remaining contents of said associative array are printed - if anything's left, that'd be a candidate for leaks.
Note that the END {} probe might not be the correct place for this; proc::exit or something of the like may be required. It depends on when exactly this triggers, before or after the cleanup done at program teardown (exiting / killing a program closes all its filedescriptors, which would erase the trackedfds[] array). That's why I've said above this is a starting point, I can't check without a Solaris system.

Make main() "uncrashable"

I want to write a daemon manager that makes sure all daemons keep running, like so (simplified pseudocode):
void watchMe(filename)
{
    while (true)
    {
        system(filename); //freezes as long as filename runs
        //oh, filename must be crashed. Nevermind, will be restarted
    }
}

int main()
{
    _beginThread(watchMe, "foo.exe");
    _beginThread(watchMe, "bar.exe");
}
This part is already working - but now I am facing the problem that when an observed application - say foo.exe - crashes, the corresponding system() call freezes until I confirm the crash dialog. This makes the daemon useless.
What I think might be a solution is to make the main() of the observed programs (which I control) "uncrashable", so that they shut down gracefully without showing this ugly message box.
Like so:
try
{
    char *p = NULL;
    *p = 123; //nice null pointer exception
}
catch (...)
{
    cout << "Caught Exception. Terminating gracefully" << endl;
    return 0;
}
But this doesn't work, as it still produces an error message ("Unhandled exception ... write access violation ...").
I've tried SetUnhandledExceptionFilter and all sorts of other things, but without effect.
Any help would be highly appreciated.
Greets
This looks like an SEH exception rather than a C++ exception, and it needs to be handled differently. Try the following code:
__try
{
    char *p = NULL;
    *p = 123; //nice null pointer exception
}
__except(GetExceptionCode() == EXCEPTION_ACCESS_VIOLATION ?
         EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH)
{
    cout << "Caught Exception. Terminating gracefully" << endl;
    return 0;
}
But that's a remedy, not a cure; you might have better luck running the processes within a sandbox.
You can change the /EHsc flag to /EHa in your compiler command line (Properties -> C/C++ -> Code Generation -> Enable C++ Exceptions).
See this for a similar question on SO.
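With /EHa enabled, the plain C++ try/catch from the question does intercept the access violation (a minimal sketch, assuming MSVC and its asynchronous exception model):
// compile with: cl /EHa crashdemo.cpp
#include <iostream>

int main()
{
    try
    {
        char *p = NULL;
        *p = 123; // access violation
    }
    catch (...) // with /EHa this also catches SEH exceptions
    {
        std::cout << "Caught Exception. Terminating gracefully" << std::endl;
        return 0;
    }
}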
You can run the watched process asynchronously and use kernel objects to communicate with it. For instance, you can:
Create a named event.
Start the target process.
Wait on the created event.
In the target process, when the crash is encountered, open the named event, and set it.
This way, your monitor will continue to run as soon as the crash is encountered in the watched process, even if the watched process has not ended yet.
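A minimal sketch of that approach, assuming the watched process can intercept its own crash (for example via an unhandled-exception filter or the __except handler shown above); the event name "foo_crashed_event" is just an example:
// --- monitor side (sketch) ---
#include <windows.h>
#include <cstdio>

int main()
{
    // manual-reset event, initially non-signaled; the name is an example
    HANDLE crashEvent = CreateEventA(NULL, TRUE, FALSE, "foo_crashed_event");

    // ... start foo.exe here, e.g. with CreateProcess ...

    // returns as soon as the watched process signals that it crashed
    WaitForSingleObject(crashEvent, INFINITE);
    printf("foo.exe reported a crash, restarting it\n");

    CloseHandle(crashEvent);
    return 0;
}

// --- watched process (sketch): signal the event from the crash handler ---
LONG WINAPI CrashFilter(EXCEPTION_POINTERS *)
{
    HANDLE crashEvent = OpenEventA(EVENT_MODIFY_STATE, FALSE, "foo_crashed_event");
    if (crashEvent)
        SetEvent(crashEvent);
    return EXCEPTION_EXECUTE_HANDLER; // terminate without the message box
}
// installed early in main(): SetUnhandledExceptionFilter(CrashFilter);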
BTW, you might be able to control the appearance of the first error message using drwtsn32 (or whatever is used in Win7), and I'm not sure, but the second error message might only appear in debug builds. Building in release mode might make it easier for you, though the most important thing, IMHO, is solving the cause of the crashes in the first place - which will be easier in debug builds.
I did this a long time ago (in the 90s, on NT4). I don't expect the principles to have changed.
The basic approach is, once you have started the process, to inject a DLL that duplicates the functionality of UnhandledExceptionFilter() from KERNEL32.DLL. Rummaging around my old code, I see that I patched GetProcAddress, LoadLibraryA, LoadLibraryW, LoadLibraryExA, LoadLibraryExW and UnhandledExceptionFilter.
The hooking of the LoadLibrary* functions made sure the patching was present for all modules. The revised GetProcAddress had to provide the addresses of the patched versions of the functions rather than the KERNEL32.DLL versions.
And, of course, the UnhandledExceptionFilter() replacement does what you want. For example, start a just-in-time debugger to take a process dump (core dumps are implemented in user mode on NT and its successors) and then kill the process.
My implementation had the patched functions implemented with __declspec(naked), and dealt with all the registers by hand, because the compiler can destroy the contents of some registers that callers from assembly might not expect to be destroyed.
Of course there was a bunch more detail, but that is the essential outline.

Memory leak checking using Instruments on Mac

I've just been pulling my hair out trying to make Instruments cough up my deliberately constructed memory leaks. My test example looks like this:
#include <cstdlib>   // malloc
#include <unistd.h>  // sleep

class Leaker
{
public:
    char *_array;
    Leaker()
    {
        _array=new char[1000];
    }
    ~Leaker()
    {
    }
};

void *leaker()
{
    void *p=malloc(1000);
    int *pa=new int[2000];
    {
        Leaker l;
        Leaker *pl=new Leaker();
    }
    return p;
}

int main (int argc, char **argv)
{
    for (int i=0; i<1000; ++i) {
        leaker();
    }
    sleep(2); // Needed to give Instruments a chance to poll memory
    return 0;
}
Basically Instruments never found the obvious leaks. I was going nuts as to why, but then discovered "sec Between Auto Detections" in the "Leaks Configuration" panel under the Leaks panel. I dialed it back as low as it would go, which was 1 second, placed the sleep(2) call in my code, and voila: leaks found!
As far as I'm concerned, a leak is a leak, regardless of whether it happens 30 minutes into an app or 30 milliseconds. In my case, I stripped the test case back to the above code, but my real application is a command-line application with no UI or anything and it runs very quickly; certainly less than the default 10 second sample interval.
Ok, so I can live with a couple of seconds upon exit of my app in instrumentation mode, but what I REALLY want, is to simply have Instruments snapshot memory on exit, then do whatever it needs over time while the app is running.
So... the question is: Is there a way to make Instruments snapshot memory on exit of an application, regardless of the sampling interval?
Cheers,
Shane
Instruments, in Leaks mode, can be really powerful for leak tracing, but I've found that it's more biased towards event-based GUI apps than command-line programs (particularly those which exit after a short time). There used to be a CHUD API where you could programmatically control aspects of the instrumentation, but the last time I tried it the frameworks were no longer provided as part of the SDK. Perhaps some of this is now replaced with DTrace.
Also, ensure you're up to date with Xcode, as there were some recent improvements in this area which might make it easier to do what you need. You could also keep the short delay before exit but make it conditional on the presence of an environment variable, and then set that environment variable in the Instruments launch properties for your app, so that running outside Instruments doesn't have the delay.
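For instance (a minimal sketch; the variable name INSTRUMENTS_DELAY is just an example, which you would set in the Instruments launch environment):
#include <cstdlib>
#include <unistd.h>

int main()
{
    // ... do the actual command-line work here ...

    // Only delay exit when the (example) environment variable is set,
    // i.e. when launched under Instruments, so normal runs stay fast.
    if (std::getenv("INSTRUMENTS_DELAY") != NULL)
        sleep(2); // give the Leaks instrument a chance to sample memory

    return 0;
}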
Most unit testing code executes the desired code paths and exits. Although this is perfectly normal for unit testing, it creates a problem for the leaks tool, which needs time to analyze the process memory space. To fix this problem, you should make sure your unit-testing code does not exit immediately upon completing its tests. You can do this by putting the process to sleep indefinitely instead of exiting normally.
https://developer.apple.com/library/ios/documentation/Performance/Conceptual/ManagingMemory/Articles/FindingLeaks.html
I've just decided to leave the 2 second delay during my debug+leaking build.

process termination C++

I have the following problem: an application (a server that never ends) written in C++, running as a service, which contains, besides the main thread, three further threads (mainly doing IO).
In the main loop I CATCH all possible exceptions.
The process terminated and nothing was printed either by the main loop or by the threads themselves. I saw in the event log that the process stopped with code 1000.
Does Windows create core files like Unix does?
If I get a memory address from the event log, is there any way of knowing in which part of the application it occurred?
Maybe this is a clue: at the same time that it happened I started another application (not the same type).
Try setting WinDbg as the postmortem debugger:
Install WinDbg.
From the command line, execute "windbg -I".
Start your application; when it gets an unhandled exception, WinDbg will be activated.
From WinDbg, use "kb" or "!uniqstack" to see the stack trace.
Look here for more commands.
And look here for how to do the analysis.
Also try using SEH:
#include <windows.h>
#include <stdio.h>

DWORD FilterFunction()
{
    printf("you will see this message first.\n");
    return EXCEPTION_EXECUTE_HANDLER;
}

int main(int argc, char **argv)
{
    __try
    {
        int this_will_be_zero = (argc == 9999);
        int blowup = 1 / this_will_be_zero;
    }
    __except (FilterFunction())
    {
        printf("you will see this message\n");
    }
    return 0;
}
Bear in mind that catch(...) does not intercept everything that can go wrong in your code, unless you are using Structured Exception Handling. For example:
#include <stdio.h>

int main(int argc, char **argv)
{
    try {
        int this_will_be_zero = (argc == 9999);
        int blowup = 1 / this_will_be_zero;
    } catch (...) {
        printf("you won't see this message\n");
    }
}
You need to use the /EHa compiler switch when building your application in order to catch Windows structured exceptions (such as access violations) with the C++ try/catch constructs.
In Visual Studio this is in project properties: Configuration Properties -> C/C++ -> Code Generation -> Enable C++ Exceptions. You want "Yes With SEH Exceptions (/EHa)". I remember reading that setting this has some significant drawbacks, although I cannot recall what they were exactly.
Link: MSDN on C++ exception handling model
Edit: As whunmr suggests, directly using structured exceptions is probably a better idea than /EHa.
Does Windows create core files like Unix does?
It does not, automatically. However, you can enable such files either by implementing it in your code or by using an external application such as WinDbg or Dr. Watson.
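As a sketch of the "implement it in your code" option, an unhandled-exception filter can write a minidump using DbgHelp's MiniDumpWriteDump (the file name and dump type here are just examples; link against dbghelp.lib):
#include <windows.h>
#include <dbghelp.h>
#pragma comment(lib, "dbghelp.lib") // MSVC: link against DbgHelp

LONG WINAPI DumpingFilter(EXCEPTION_POINTERS *info)
{
    // Create the dump file in the current directory (name is an example).
    HANDLE file = CreateFileA("crash.dmp", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file != INVALID_HANDLE_VALUE)
    {
        MINIDUMP_EXCEPTION_INFORMATION mei;
        mei.ThreadId = GetCurrentThreadId();
        mei.ExceptionPointers = info;
        mei.ClientPointers = FALSE;

        // Write a small dump; MiniDumpWithFullMemory would capture more.
        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                          MiniDumpNormal, &mei, NULL, NULL);
        CloseHandle(file);
    }
    return EXCEPTION_EXECUTE_HANDLER; // terminate the process afterwards
}

// Installed once at startup, e.g. at the top of main():
//   SetUnhandledExceptionFilter(DumpingFilter);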
If I get a memory address from the event log, is there any way of knowing in which part of the application it occurred?
In general there is no way, if you don't keep the debug information files (PDBs).
Maybe this is a clue: at the same time that it happened I started another application (not the same type).
This is not helpful information, unless the two applications interact with each other.
Windows will launch whatever program you have configured as the postmortem debugger, depending on the settings in:
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AeDebug]
"Auto"="0"
"Debugger"="My_Debugger" -p %ld -e %ld"
"UserDebuggerHotKey"=dword:00000000
You can change My_Debugger to the full path of whatever program/IDE you are using for debugging.
This is often set to Dr. Watson, which will create a log entry for the crash - which is not what you want.