I was remote debugging a stack overflow from a recursive function. The Visual Studio IDE only showed the first 1,000 frames (all the same function), but I needed to go up further to see what the cause was.
Does anybody know how to get VS to 'move up' in a stack listing?
Thanks.
I do not believe there is a way to do this via the UI (or even a registry hack). My guess at the reason is that showing all of the frames in a stack overflow situation can have a very negative performance impact.
Most stack overflows are the result of bad recursion. If this is the case, you can likely set a conditional breakpoint on the target function. Set it to break only when the hit count reaches a certain level; I'd start with a count of around 1,000. You may have to experiment a bit to get the right count, but it shouldn't take more than a few tries.
I would suggest replacing your debugging method and using logging to handle such problems. You might find it more productive; you just need to choose carefully what and when to print.
Anyway, analyzing a few thousand lines of text will be much faster than walking up a few thousand stack frames, IMHO.
And you can use David's suggestion to control the amount of data to print (i.e. pass relevant information from one recursion cycle to the next).
You might also try WinDbg. It's not as friendly, but it sometimes works where the VC debugger doesn't.
I run into this now and then, what I do is add the following line to the function that is being called recursively:
static int nest; if (++nest == 100) *(char*)0 = 0;
The number 100 is arbitrary, often just 10 will work. This limits the recursion, ending it with a seg fault. The debugger should then show you the frames that started the recursion.
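If you'd rather not rely on the null write, the same hack works with a deliberate breakpoint; a minimal variant, assuming MSVC (the question is about Visual Studio) and its __debugbreak() intrinsic:

static int nest;
if (++nest == 100)   // same arbitrary limit as above
    __debugbreak();  // break straight into the debugger instead of faulting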
You could add a temporary recursion count parameter to the function, and assert when it goes over a maximum value. Give it a default value and you won't need to edit any other source.
void f(int rcount /* = 0 */)
{
    assert(rcount < 1000);   // from <cassert>
    f(rcount + 1);
}
You are trying to solve this the wrong way.
There should be enough stack frames visible to show you the recurring call pattern; that already gives you enough information to figure out how an infinite cycle of calls can happen.
Another hack idea might be to drastically decrease your stack size or artificially increase the size of each frame...
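For example, a minimal sketch of the frame-padding variant (the function name and padding size are made up; the point is just to make each frame big enough that the overflow happens while the frames that started the recursion are still in view):

void recursive_fn(/* ... */)           // hypothetical recursive function
{
    // Pad this frame so the stack overflows after far fewer calls.
    volatile char padding[64 * 1024];
    padding[0] = 0;                    // touch it so it isn't optimized away
    // ... original body, including the recursive call ...
}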
Curiosity: I was writing code in Learn OCaml, and when I compiled it, the compiler said: "Out of stack error". I guess this is due to the amount of code I wrote. So I'm wondering: how can I check how full the stack is? I wrote about 450-500 lines of code, with a couple of new lines.
It could also be that the amount of code executed on the stack is taking up all the stack space. So I just want to know what causes the problem and how to resolve it. Here is a picture showing the error:
The error went away when I copied only part of the code from the editor to the toplevel.
The stack utilization doesn't depend on the size of your program or the number of lines in it; it depends on the runtime behavior of your program, mostly on recursion. For example, this small function will consume a stack of any size:
let rec f () = f () + f ()
In the picture that you've provided, I can see at least one case of unbounded recursion (i.e., a recursion that definitely will not terminate). When you call insert you're applying it to the whole tree m instead of a subtree, so at each new invocation nothing actually changes, and you get an infinite loop.
I am trying to create a sampling profiler that works on Linux. I am unsure how to send an interrupt, or how to get the program counter (PC), so I can find out where the program is when I interrupt it.
I have tried using signal(SIGUSR1, Foo*) and calling backtrace, but I get the stack for the thread I am in when I call raise(SIGUSR1), rather than for the thread the program is running on.
I am not really sure if this is even the right way of going about it...
Any advice?
If you must write a profiler, let me suggest you use a good one (Zoom) as your model, not a bad one (gprof).
These are its characteristics.
There are two phases. First is the data-gathering phase:
When it takes a sample, it reads the whole call stack, not just the program counter.
It can take samples even when the process is blocked due to I/O, sleep, or anything else.
You can turn sampling on/off, so as to only take samples during times you care about. For example, while waiting for the user to type something, it is pointless to be sampling.
Second is the data-presentation phase.
What you have is a collection of stack samples, where a stack sample is a vector of memory addresses, which are almost all return addresses.
Each return address indicates a line of code in a function, unless it's in some system routine you don't have symbolic information for.
The key piece of useful information is residency fraction (usually expressed as a percent).
If there are a total of m stack samples, and line of code L is present anywhere on n of the samples, then its residency fraction is n/m.
This is true even if L appears more than once on a sample; that still counts as just one sample it appears on.
The importance of residency fraction is that it directly indicates what fraction of time line L is responsible for.
If you have taken m=1000 samples, and L appears on n=300 of them, then L's residency fraction is 300/1000 or 30%.
This means that if L could be removed, total time would decrease by 30%.
It is typically known as inclusive percent.
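As a rough sketch of the bookkeeping, assuming stack samples are stored as vectors of return addresses (the types and names here are invented):

#include <cstddef>
#include <set>
#include <vector>

typedef std::vector<void*> StackSample;   // one sample = the return addresses on the stack

// Residency fraction of address L: the fraction of samples on which it appears
// at least once (appearing twice on one sample still counts as one sample).
double residency_fraction(const std::vector<StackSample>& samples, void* L)
{
    if (samples.empty()) return 0.0;
    std::size_t n = 0;
    for (std::size_t i = 0; i < samples.size(); ++i) {
        std::set<void*> unique(samples[i].begin(), samples[i].end());
        if (unique.count(L)) ++n;
    }
    return double(n) / samples.size();    // n/m
}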
You can determine residency fraction not just for lines of code, but for anything else you can describe. For example, line of code L is inside some function F.
So you can determine the residency fraction for functions, as opposed to lines of code.
That would give you inclusive percent by function.
You could look at function pairs, like on what fraction of samples do you see function F calling function G.
That would give you the information that makes up call graphs.
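The same bookkeeping extends to pairs; for instance, a sketch (again with invented types, and with addresses assumed to be already mapped to function identifiers) of the fraction of samples on which F appears directly above G:

#include <cstddef>
#include <vector>

typedef std::vector<int> FuncStack;   // one sample as function ids, innermost frame first

double caller_callee_fraction(const std::vector<FuncStack>& samples, int F, int G)
{
    if (samples.empty()) return 0.0;
    std::size_t n = 0;
    for (std::size_t i = 0; i < samples.size(); ++i) {
        const FuncStack& s = samples[i];
        for (std::size_t j = 0; j + 1 < s.size(); ++j) {
            if (s[j] == G && s[j + 1] == F) { ++n; break; }   // F called G on this sample
        }
    }
    return double(n) / samples.size();
}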
There are all kinds of information you can get out of the stack samples.
One that is often seen is a "butterfly view", where you have a "focus" on one line L or function F, and on one side you show all the lines or functions immediately above it in the stack samples, and on the other side all the lines or functions immediately below it.
On each of these, you can show the residency fraction.
You can click around in this to try to find lines of code with high residency fraction that you can find a way to eliminate or reduce.
That's how you speed up the code.
Whatever you do for output, I think it is very important to allow the user to actually examine a small number of the stack samples themselves, randomly selected.
They convey far more insight than can be gotten from any method that condenses the information.
As important as it is to know what the profiler should do, it is also important to know what not to do, even if lots of other profilers do them:
self time. A useless number. Look at some reasonable-size programs and you will see why.
invocation counts. Of no help in finding code with high residency fraction, and you can't get it with samples alone anyway.
high-frequency sampling. It's amazing how many people, certainly profiler builders, think it is important to get lots of samples. Suppose line L is on 30% of 1000 samples. Then its true inclusive percent is 30 +/- 1.4 percent (the uncertainty is just the binomial standard error, sqrt(p(1-p)/m), with p = 0.3 and m = 1000). On the other hand, if it is on 30% of 10 samples, its inclusive percent is 30 +/- 14 percent. It's still pretty big - big enough to fix. What happens in most profilers is that people think they need "numerical precision", so they take lots of samples, accumulate what they call "statistics", and then throw away the samples. That's like digging up diamonds, weighing them, and throwing them away. The real value is in the samples themselves, because they tell you what the problem is.
You can send a signal to a specific thread using pthread_kill() with the target thread's pthread_t (or tgkill() with the kernel tid from gettid()).
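A minimal sketch of that approach (the function names are made up; note that backtrace() is not formally async-signal-safe, so real profilers do more careful unwinding):

#include <execinfo.h>   // backtrace, backtrace_symbols_fd (glibc)
#include <pthread.h>
#include <signal.h>

// Runs in the context of whichever thread receives the signal,
// so it walks that thread's stack.
static void sampling_handler(int)
{
    void* frames[64];
    int depth = backtrace(frames, 64);
    backtrace_symbols_fd(frames, depth, 2);   // dump to stderr, for illustration only
}

void install_handler()
{
    struct sigaction sa = {};
    sa.sa_handler = sampling_handler;
    sigaction(SIGPROF, &sa, 0);
}

void sample_thread(pthread_t target)
{
    pthread_kill(target, SIGPROF);   // the handler executes on `target`'s stack
}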
The right way to create a simple profiler is to use setitimer, which can send a periodic signal (SIGALRM or SIGPROF), for example every 10 ms; or POSIX timers (timer_create, timer_settime, or timerfd), with no need for a separate thread to send the profiling signals. Check the sources of google-perftools (gperftools); they use setitimer or POSIX timers and collect the profile with backtraces.
gprof also uses setitimer to implement CPU-time profiling (9.1 Implementation of Profiling: "Linux 2.0 ... arrangements are made for the kernel to periodically deliver a signal to the process (typically via setitimer())").
For example, here is the result of a code search for setitimer in gperftools's sources: https://code.google.com/p/gperftools/codesearch#search/&q=setitimer&sq=package:gperftools&type=cs
void ProfileHandler::StartTimer() {
  if (!allowed_) {
    return;
  }
  struct itimerval timer;
  timer.it_interval.tv_sec = 0;
  timer.it_interval.tv_usec = 1000000 / frequency_;
  timer.it_value = timer.it_interval;
  setitimer(timer_type_, &timer, 0);
}
You should know that setitimer has problems with fork and clone, and it doesn't work with multithreaded applications. There is an attempt at a helper wrapper: http://sam.zoy.org/writings/programming/gprof.html (wrong one), but I don't remember whether it works correctly (setitimer usually sends a process-wide signal, not a thread-wide one). UPD: it seems that since Linux kernel 2.6.12, setitimer's signal is directed to the process as a whole (any thread may get it).
To direct the signal from timer_create to a specific thread, you need gettid() (#include <sys/syscall.h>, syscall(__NR_gettid)) and the SIGEV_THREAD_ID flag. I haven't checked how to create a periodic POSIX timer with timer_create (probably with timer_settime and a non-zero it_interval).
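For what it's worth, a sketch of a periodic POSIX timer (process-wide delivery; the thread-directed SIGEV_THREAD_ID variant is a Linux extension and is omitted here, the helper name is invented, and older glibc needs -lrt):

#include <signal.h>
#include <time.h>

// Creates a timer that delivers `signo` (e.g. SIGPROF) every `period_us` microseconds.
timer_t make_profiling_timer(int signo, long period_us)
{
    struct sigevent sev = {};
    sev.sigev_notify = SIGEV_SIGNAL;   // delivered to the process as a whole
    sev.sigev_signo = signo;

    timer_t timer;
    timer_create(CLOCK_MONOTONIC, &sev, &timer);

    struct itimerspec spec = {};
    spec.it_interval.tv_sec = period_us / 1000000;
    spec.it_interval.tv_nsec = (period_us % 1000000) * 1000;   // non-zero interval => periodic
    spec.it_value = spec.it_interval;                          // first expiry after one period
    timer_settime(timer, 0, &spec, 0);
    return timer;
}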
PS: there is some overview of profiling in wikibooks: http://en.wikibooks.org/wiki/Introduction_to_Software_Engineering/Tools/Profiling
In order to find the limit of recursive calls in C++, I tried this function:
#include <cstdio>

void recurse(int count)   // Each call gets its own count
{
    printf("%d\n", count);
    // It is not necessary to increment count since each call's
    // variables are separate (so each count will be initialized one greater)
    recurse(count + 1);
}
This program halts when count is equal to 4716, so the limit is just 4716?!
I'm a little bit confused: why does the program stop execution when count reaches 4716?
PS: Executed under Visual Studio 2010.
Thanks.
The limit of recursive calls depends on the size of the stack. The C++ language doesn't limit this (from memory, there is a minimum number of nested function calls that a standards-conforming compiler needs to support, and it's a pretty small value).
And yes, recursing "infinitely" will stop at some point or another. I'm not entirely sure what else you expect.
It is worth noting that designing software to do "boundless" recursion (or recursion that runs in to the hundreds or thousands) is a very bad idea. There is no (standard) way to find out the limit of the stack, and you can't recover from a stack overflow crash.
You will also find that if you add an array or some other data structure [and use it, so it doesn't get optimized out], the recursion limit goes lower, because each stack-frame uses more space on the stack.
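As a quick illustration of how much each call costs, here is a hypothetical sketch that estimates the frame size by comparing the address of a local across two recursion depths (it assumes a downward-growing stack, as on x86, and the pointer arithmetic is only meaningful as a rough measurement):

#include <cstdio>

void probe(int count, const char* prev)
{
    char marker;                       // lives in this call's frame
    if (prev)
        printf("frame size ~ %ld bytes\n", (long)(prev - &marker));
    if (count < 3)
        probe(count + 1, &marker);
}

int main()
{
    probe(0, 0);
    return 0;
}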
Edit: I actually would expect a higher limit; I suspect you are compiling your code in debug mode. If you compile it in release mode, I expect you'll get several thousand more, possibly even endless recursion, because the compiler converts your tail recursion into a loop.
The stack size is dependent on your environment.
In *NIX, for instance, you can modify the stack size in the environment (e.g. with ulimit -s), then run your program, and the result will be different.
In Windows, you can change it this way (source):
editbin /STACK:reserve[,commit] program.exe
You've probably run out of stack space.
Every time you call the recursive function, it needs to push a return address (plus the parameter and any locals) onto the stack so it knows where to return to after the function call.
It crashes at 4716 because that just happens to be when it runs out of stack space; assuming the Visual Studio default 1 MB stack, that works out to roughly 1,048,576 / 4716 ≈ 220 bytes per frame, which is plausible for a debug build.
I am currently debugging a large project which has stack corruption: the application fails.
I would like to know how to find (debug) such stack corruption code with Visual Studio 2010?
Here's an example of some code which causes stack problems, how would I find less obvious cases of this type of corruption?
void foo()
{
    int i = 10;
    int *p = &i;
    p[-2] = 100;   // writes outside i, clobbering the enclosing stack frame
}
Update
Please note that this is just an example. I need to find such bad code in the current project.
There's one technique that can be very effective with these kinds of bugs, but it'll only work on a subset of them that has a few characteristics:
the corrupting value must be stable (i.e., as in your example, when the corruption occurs it's always 100), or at least something that can be readily identified in a simple expression
the corruption has to occur at a particular address on the stack
the corrupting value is unusual enough that you won't be hit with a slew of false positives
Note that the second condition may seem unlikely at first glance because the stack can be used in so many different ways depending on runtime behavior. However, stack usage is generally pretty deterministic. The catch is that a particular stack location can be used for so many different things that the real problem is item #3.
Anyway, if your bug has these characteristics, you should identify the stack address (or one of them) that gets corrupted, then set a memory breakpoint for a write to that address with a condition that causes it to break only if the value written is the corrupting value. In Visual Studio, you can do this by creating a "New Data Breakpoint..." in the Breakpoints window, then right-clicking the breakpoint to set the condition.
If you end up getting too many false positives, it might help to narrow the scope of the breakpoint by leaving it disabled until some point in the execution path that's closer to the bug (if you can identify such a time), or set the hit count high enough to remove most of the false positives.
An additional complication is that the stack address may change from run to run; in that case, you'll have to take care to set the breakpoint on each run (the lower bits of the address should be the same).
I believe your question quotes an example of stack corruption, and that what you are asking is not why it crashes.
In case it is, though: it crashes because it invokes undefined behaviour, since the index -2 points to an unknown memory location.
To answer the question about profiling your application:
You can use Rational Purify Plus for Visual Studio to check for memory overwrites and access errors.
This is UB: p[-2] = 100;
You can access p with operator[] (as p[i]), but in this case the index is invalid. So p[-2] points to an invalid memory location and causes undefined behaviour.
To find it you should debug your app and find where it crashes, and hopefully, it'll be at a place where something is actually wrong.
The deceptively simple foundation of dynamic code generation within a C/C++ framework has already been covered in another question. Are there any gentle introductions to the topic, with code examples?
My eyes are starting to bleed staring at highly intricate open source JIT compilers when my needs are much more modest.
Are there good texts on the subject that don't assume a doctorate in computer science? I'm looking for well worn patterns, things to watch out for, performance considerations, etc. Electronic or tree-based resources can be equally valuable. You can assume a working knowledge of (not just x86) assembly language.
Well a pattern I've used in emulators goes something like this:
#include <map>

typedef void (*code_ptr)();

// Placeholders referenced below (not shown in the original answer):
extern unsigned long entry_point;             // address of the first basic block
code_ptr generate_code_block();               // JIT-compiles the block at instruction_pointer
unsigned long update_instruction_pointer();   // works out where execution goes next

unsigned long instruction_pointer = entry_point;
std::map<unsigned long, code_ptr> code_map;

void execute_block() {
    code_ptr f;
    std::map<unsigned long, code_ptr>::iterator it = code_map.find(instruction_pointer);
    if (it != code_map.end()) {
        f = it->second;
    } else {
        f = generate_code_block();
        code_map[instruction_pointer] = f;
    }
    f();
    instruction_pointer = update_instruction_pointer();
}

void execute() {
    while (true) {
        execute_block();
    }
}
This is a simplification, but the idea is there. Basically, every time the engine is asked to execute a "basic block" (usually everything up to the next flow-control op, or a whole function where possible), it will look it up to see if it has already been created. If so, execute it; else create it, add it, and then execute.
rinse repeat :)
As for the code generation, that gets a little complicated, but the idea is to emit a proper "function" which does the work of your basic block in the context of your VM.
EDIT: note that I haven't demonstrated any optimizations either, but you asked for a "gentle introduction"
EDIT 2: I forgot to mention one of the most immediately productive speed-ups you can implement with this pattern. Basically, if you never remove a block from your tree (you can work around it if you do, but it is way simpler if you never do), then you can "chain" blocks together to avoid lookups. Here's the concept. Whenever you return from f() and are about to do the "update_instruction_pointer", if the block you just executed ended in either a call, an unconditional jump, or didn't end in flow control at all, then you can "fix up" its ret instruction with a direct jmp to the next block it'll execute (because it'll always be the same one), provided you have already emitted it. This makes it so you are executing more and more often in the VM and less and less in the "execute_block" function.
I'm not aware of any sources specifically related to JITs, but I imagine that it's pretty much like a normal compiler, only simpler if you aren't worried about performance.
The easiest way is to start with a VM interpreter. Then, for each VM instruction, generate the assembly code that the interpreter would have executed.
To go beyond that, I imagine that you would parse the VM byte codes and convert them into some sort of suitable intermediate form (three address code? SSA?) and then optimize and generate code as in any other compiler.
For a stack-based VM, it may help to keep track of the "current" stack depth as you translate the byte codes into intermediate form, and treat each stack location as a variable. For example, if you think that the current stack depth is 4 and you see a "push" instruction, you might generate an assignment to "stack_variable_5" and increment a compile-time stack counter, or something like that. An "add" when the stack depth is 5 might generate the code "stack_variable_4 = stack_variable_4 + stack_variable_5" and decrement the compile-time stack counter.
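A sketch of that scheme for a toy bytecode (everything here is invented for illustration; the "generated code" is just printed as text):

#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

// Each bytecode becomes a three-address-style statement; `depth` is the
// compile-time stack counter described above.
void translate(const std::vector<std::string>& bytecodes)
{
    int depth = 0;
    for (std::size_t i = 0; i < bytecodes.size(); ++i) {
        const std::string& op = bytecodes[i];
        if (op == "add") {
            printf("stack_variable_%d = stack_variable_%d + stack_variable_%d\n",
                   depth - 1, depth - 1, depth);
            --depth;                   // "add" consumes one stack slot
        } else {                       // treat anything else as "push <operand>"
            ++depth;
            printf("stack_variable_%d = %s\n", depth, op.c_str());
        }
    }
}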
It is also possible to translate stack based code into syntax trees. Maintain a compile-time stack. Every "push" instruction causes a representation of the thing being pushed to be stored on the stack. Operators create syntax tree nodes that include their operands. For example, "X Y +" might cause the stack to contain "var(X)", then "var(X) var(Y)" and then the plus pops both var references off and pushes "plus(var(X), var(Y))".
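And a sketch of the syntax-tree variant, where the "tree" is just a nested string for brevity (again, the names are invented):

#include <cstddef>
#include <iostream>
#include <stack>
#include <string>
#include <vector>

// Operands push a var(...) node; "+" pops two operands and pushes plus(lhs, rhs).
std::string translate_to_tree(const std::vector<std::string>& bytecodes)
{
    std::stack<std::string> s;   // the compile-time stack
    for (std::size_t i = 0; i < bytecodes.size(); ++i) {
        const std::string& op = bytecodes[i];
        if (op == "+") {
            std::string rhs = s.top(); s.pop();
            std::string lhs = s.top(); s.pop();
            s.push("plus(" + lhs + ", " + rhs + ")");
        } else {
            s.push("var(" + op + ")");
        }
    }
    return s.top();
}

int main()
{
    std::vector<std::string> code;
    code.push_back("X");
    code.push_back("Y");
    code.push_back("+");
    std::cout << translate_to_tree(code) << "\n";   // prints plus(var(X), var(Y))
    return 0;
}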
Get yourself a copy of Joel Pobar's book on Rotor (when it's out), and delve through the source to the SSCLI. Beware, insanity lies within :)