In order to find the limit of recursive calls in C++, I tried this function:
#include <cstdio>

void recurse(int count)   // each call gets its own copy of count
{
    printf("%d\n", count);
    // No need to increment count here: each call's variables are separate,
    // so each count is initialized to one greater than the caller's.
    recurse(count + 1);
}
This program halts when count equals 4716, so the limit is just 4716!
I'm a little bit confused: why does the program stop execution when count reaches 4716?
PS: Executed under Visual Studio 2010.
Thanks.
The limit on recursive calls depends on the size of the stack. The C++ language itself does not limit this (from memory, there is a lower bound on the number of nested function calls a standards-conforming compiler needs to support, and it's a pretty small value).
And yes, recursing "infinitely" will stop at some point or another. I'm not entirely sure what else you expect.
It is worth noting that designing software to do "boundless" recursion (or recursion that runs into the hundreds or thousands of levels) is a very bad idea. There is no (standard) way to find out the limit of the stack, and you can't recover from a stack overflow crash.
You will also find that if you add an array or some other data structure [and use it, so it doesn't get optimized out], the recursion limit goes lower, because each stack-frame uses more space on the stack.
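As an illustration (a sketch I'm adding, not from the original question; recurseWithBuffer is a made-up name), here is the same recursion with a 4 KiB local buffer added. Built without optimization, each frame now needs roughly 4 KiB more stack, so the crash arrives after far fewer calls:

#include <cstdio>

void recurseWithBuffer(int count)
{
    char buffer[4096];                       // ~4 KiB of extra stack per frame
    buffer[0] = static_cast<char>(count);    // use it so it isn't optimized away
    printf("%d\n", count);
    recurseWithBuffer(count + 1);
}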
Edit: I would actually expect a higher limit; I suspect you are compiling your code in debug mode. If you compile it in release mode, I expect you will get several thousand more levels, possibly even endless recursion, because the compiler converts your tail recursion into a loop.
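As a rough sketch of what that optimization effectively does (assuming the compiler recognizes the tail call; recurseAsLoop is a name I made up for the example), the recursion becomes a plain loop that consumes no additional stack per iteration:

#include <cstdio>

void recurseAsLoop(int count)
{
    for (;;)                   // the tail call is replaced by a jump back here
    {
        printf("%d\n", count);
        count = count + 1;     // no new stack frame is created per iteration
    }
}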
The stack size is dependent on your environment.
In *NIX for instance, you can modify the stack size in the environment, then run your program and the result will be different.
In Windows, you can change it this way (source):
$ editbin /STACK:reserve[,commit] program.exe
You've probably run out of stack space.
Every time you call the recursive function, it needs to push a return address on the stack so it knows where to return to after the function call.
It crashes at 4716 because it just happens to run out of stack space after about 4716 iterations.
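If you are curious roughly how much stack each call consumes, a crude probe like the one below can give a ballpark figure. This is only a sketch I'm adding, not portable C++: it compares the addresses of locals in adjacent frames (formally undefined behaviour) and assumes a downward-growing stack; probe is a made-up name.

#include <cstdio>

void probe(int count, const char* prev)
{
    char marker;                                  // lives in this call's frame
    if (prev != nullptr)
        printf("call %d: approx. %ld bytes per frame\n",
               count, static_cast<long>(prev - &marker));
    if (count < 5)                                // a handful of samples is enough
        probe(count + 1, &marker);
}

int main()
{
    probe(0, nullptr);
    return 0;
}

With the default 1 MB stack that MSVC-linked executables reserve, a per-frame figure in the low hundreds of bytes lines up roughly with the ~4716 calls observed.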
Related
Repeated runs of the following C++ program give a different maximum number of recursion calls (varying by approximately 100 function calls) before a segmentation fault.
#include <iostream>

void recursion(int i)
{
    std::cout << "iteration: " << ++i << std::endl;
    recursion(i);
}

int main()
{
    recursion(0);
}
I compiled the file main.cpp with
g++ -O0 main.cpp -o main
Here and here the same issue as above is discussed for Java. In both cases, the answers are based on Java-related concepts: JIT compilation, garbage collection, the HotSpot optimizer, etc.
Why does the maximum number of recursions vary for C++?
Your recursion never logically terminates. It only terminates when your program crashes due to lack of stack space.
A certain amount of stack space is used for every recursive call, but in C++, it's not defined exactly how much stack space is available and how much is used per recursive call.
The stack space used per call may vary by optimization settings, linker options, alignment requirements, how your program is launched, and a ton of other things.
Bottom line: you have coded a bug, and you are running afoul of undefined behavior in your compiler and platform. If you want to figure out exactly how much stack space your program has on its current thread, your platform will have APIs you can call to get that value.
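For example, on Linux with glibc you can query the current thread's stack through pthread_getattr_np (a GNU extension). This is just a sketch I'm adding under those assumptions; compile with -pthread:

#ifndef _GNU_SOURCE
#define _GNU_SOURCE            // pthread_getattr_np is a GNU extension
#endif
#include <pthread.h>
#include <cstdio>

int main()
{
    pthread_attr_t attr;
    if (pthread_getattr_np(pthread_self(), &attr) != 0)
    {
        perror("pthread_getattr_np");
        return 1;
    }
    void* stackAddr = nullptr;
    size_t stackSize = 0;
    pthread_attr_getstack(&attr, &stackAddr, &stackSize);   // size/base of this thread's stack
    printf("this thread's stack: %zu bytes starting at %p\n", stackSize, stackAddr);
    pthread_attr_destroy(&attr);
    return 0;
}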
What happens when you blow the stack is not a guaranteed crash. Depending on the system, you could just be trashing memory in a relatively random bit of your memory space.
What is in that memory might depend on what memory allocations occurred, how much contiguous memory the OS handed to you when you asked for some, ASLR, or whatever.
Undefined behaviour in C++ is not predictable.
Beyond the C++ aspect: following the comments of Eljay and n.'pronouns'.m, I turned off ASLR. This post describes how to do that. In short, ASLR can be disabled via
echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
and enabled via
echo 2 | sudo tee /proc/sys/kernel/randomize_va_space
After disabling ASLR, the number of recursions before the segmentation fault is constant across repeated executions of the described program.
I am running the following routine with --track-allocation=user. The routine is called in a for loop. Still, I am surprised at the allocation reported for the first line: I would expect numqps to be allocated on the stack and thus not contribute to the final allocation count.
- function buildpoints{T}(cell::Cell{T}, uv)
-
2488320 numqps::Int = size(uv,2)
12607488 mps = Array(Point{T}, numqps)
0 for i in 1 : numqps
0 mps[i] = buildpoint(cell, Vector2(uv[1,i], uv[2,i]))
- end
-
0 return mps
-
- end
EDIT: A bit further along in the memory profiling output I find:
1262976 numcells(m::Mesh) = size(m.faces,2)
It seems the size function on Arrays is not implemented very efficiently?
Apparently I was calling size on a variable declared as
type MyType{T}
A::Array{T}
end
So the type of A was only partially declared, i.e. only the eltype was supplied, not the number of dimensions. I noticed similar allocation overheads when accessing elements (A[i,j]). Allocation disappeared when I declared instead
type MyType{T}
A::Array{T,2}
end
In interpreting the results, there are a few important details. Under
the user setting, the first line of any function directly called from
the REPL will exhibit allocation due to events that happen in the REPL
code itself. More significantly, JIT-compilation also adds to
allocation counts, because much of Julia’s compiler is written in
Julia (and compilation usually requires memory allocation). The
recommended procedure is to force compilation by executing all the
commands you want to analyze, then call Profile.clear_malloc_data()
(page 594) to reset all allocation counters. Finally, execute the
desired commands and quit Julia to trigger the generation of the .mem
files.
I was doing a problem where I used a recursive function to create a segment tree. For larger values it started giving a segmentation fault. At first I thought it might be because an array index was going out of bounds, but later I suspected it was the program stack growing too big.
I wrote this code to count the maximum number of recursive calls allowed before the system gives a seg-fault.
#include <iostream>
using namespace std;

void recur(long long int);

int main()
{
    recur(0);
    return 0;
}

void recur(long long int v)
{
    v++;
    cout << v << endl;
    recur(v);
}
After running the above code I got values of v of 261926, 261893 and 261816 before getting a segmentation fault, and all runs gave values close to these.
Now I know that this depends on the machine, on the stack size and on the stack usage of each call, but can someone explain the basics of how to stay safe from seg-faults and what soft limit one can keep in mind?
The number of recursion levels you can do depends on the call-stack size combined with the size of the local variables and arguments that are placed on that stack. Aside from "how the code is written", just like many other memory-related things, this is very much dependent on the system you're running on, what compiler you are using, the optimisation level [1], and so on. On some embedded systems I've worked on, the stack would be a few hundred bytes; my first home computer had 256 bytes of stack, where modern desktops have megabytes of stack (and you can adjust it, but eventually you will run out).
Doing recursion at unlimited depth is not a good idea, and you should look at changing your code so that "it doesn't do that". You need to understand the algorithm, understand to what depth it will recurse, and decide whether that is acceptable in your system. There is unfortunately nothing anyone can do at the time the stack runs out (at best your program crashes; at worst it doesn't, but instead causes something ELSE to go wrong, such as the stack or heap of some other part of the application getting messed up!)
On a desktop machine, I'd think it's acceptable to have a recursion depth of a few hundred to some thousands, but not much more than this - and that is if you have small stack usage in each call - if each call is using up kilobytes of stack, you should limit the call depth even further, or reduce the need for stack space.
If you need to have more recursion depth than that, you need to re-arrange the code - for example using a software stack to store the state, and a loop in the code itself.
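As a sketch of that rearrangement (using a hypothetical Node type and made-up function names, since the segment-tree code from the question isn't shown), the recursive traversal below is rewritten with an explicit std::stack, so the depth is limited by heap memory rather than by the call stack:

#include <iostream>
#include <stack>
#include <vector>

struct Node                          // hypothetical node type, for illustration only
{
    int value;
    std::vector<Node*> children;
};

int sumRecursive(const Node* n)      // depth limited by the call stack
{
    int total = n->value;
    for (const Node* c : n->children)
        total += sumRecursive(c);
    return total;
}

int sumIterative(const Node* root)   // the "software stack" lives on the heap
{
    int total = 0;
    std::stack<const Node*> pending;
    pending.push(root);
    while (!pending.empty())
    {
        const Node* n = pending.top();
        pending.pop();
        total += n->value;
        for (const Node* c : n->children)
            pending.push(c);
    }
    return total;
}

int main()
{
    Node leaf1{1, {}}, leaf2{2, {}};
    Node root{3, {&leaf1, &leaf2}};
    std::cout << sumRecursive(&root) << " " << sumIterative(&root) << std::endl;
    return 0;
}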
[1] Using g++ -O2 on your posted code, I got to 50 million and counting, and I expect if I leave it long enough, it will restart at zero because it keeps going forever - this since g++ detects that this recursion can be converted into a loop, and does that. Same program compiled with -O0 or -O1 does indeed stop at a little over 200000. With clang++ -O1 it just keeps going. The clang-compiled code is still running as I finished writing the rest of the code, at 185 million "recursions".
There is (AFAIK) no well-established limit. (I am answering from a Linux desktop point of view.)
On desktops and laptops, the default stack size is a few megabytes in 2015. On Linux you could use setrlimit(2) to change it (to a reasonable figure; don't expect to be able to set it to a gigabyte these days) - and you could use getrlimit(2) or parse /proc/self/limits (see proc(5)) to query it. On embedded microcontrollers - or inside the Linux kernel - the entire stack may be much more limited (to a few kilobytes in total).
When you create a thread using pthread_create(3) you could use an explicit pthread_attr_t and use pthread_attr_setstack(3) to set the stack space.
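A minimal sketch of that, using pthread_attr_setstacksize (the simpler sibling of pthread_attr_setstack that only sets the size and lets the implementation pick the address); worker is a made-up thread body, compile with -pthread:

#include <pthread.h>
#include <cstdio>

void* worker(void*)                 // the deeply recursive work would go here
{
    printf("running with an enlarged stack\n");
    return nullptr;
}

int main()
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 64 * 1024 * 1024);   // request a 64 MiB stack

    pthread_t tid;
    if (pthread_create(&tid, &attr, worker, nullptr) != 0)
    {
        perror("pthread_create");
        return 1;
    }
    pthread_join(tid, nullptr);
    pthread_attr_destroy(&attr);
    return 0;
}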
BTW, with recent GCC, you might compile all your software (including the standard C library) with split stacks (so pass -fsplit-stack to gcc or g++)
Lastly, your example is a tail call, and GCC could optimize that (into a jump with arguments). I checked that if you compile with g++ -O2 (using GCC 4.9.2 on Linux/x86-64/Debian) the recursion is transformed into a genuine loop and no stack allocation grows indefinitely (your program ran for nearly 40 million calls to recur in a minute, then I interrupted it). In better languages like Scheme or OCaml there is a guarantee that tail calls are compiled iteratively (the tail-recursive call then becomes the usual - or even the only - looping construct).
CyberSpok is excessive in his comment (hinting to avoid recursion). Recursion is very useful, but you should limit it to a reasonable depth (e.g. a few thousand levels), and you should take care that call frames on the call stack are small (less than a kilobyte each), so practically allocate and deallocate most of the data in the C heap. The GCC -fstack-usage option is really useful for reporting the stack usage of every compiled function. See this and that answer.
Notice that continuation passing style is a canonical way to transform recursions into iterations (then you trade stack frames with dynamically allocated closures).
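A rough sketch of the related trampoline idea in C++ (I'm adding this for illustration; the continuations here are heap-allocated std::function closures, and Thunk, factorialStep and trampoline are made-up names):

#include <functional>
#include <iostream>

struct Thunk                         // either a final value or the next step to run
{
    bool done;
    long long value;
    std::function<Thunk()> next;
};

Thunk factorialStep(long long n, long long acc)
{
    if (n <= 1)
        return Thunk{true, acc, nullptr};
    // Instead of recursing directly, return a closure describing the next step.
    return Thunk{false, 0, [n, acc] { return factorialStep(n - 1, acc * n); }};
}

long long trampoline(Thunk t)
{
    while (!t.done)                  // this loop replaces the growing call stack
        t = t.next();
    return t.value;
}

int main()
{
    std::cout << trampoline(factorialStep(20, 1)) << std::endl;
    return 0;
}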
Some clever algorithms replace a recursion with fancy modifying iterations, e.g. the Deutsch-Schorr-Waite graph marking algorithm.
For Linux-based applications, we can use the getrlimit and setrlimit APIs to query various kernel resource limits, like the size of core files, CPU time, stack size, nice values, the maximum number of processes, etc. RLIMIT_STACK is the resource name for the stack, as defined in the Linux kernel. Below is a simple program to retrieve the stack size limit:
#include <iostream>
#include <sys/time.h>
#include <sys/resource.h>
#include <errno.h>
using namespace std;
int main()
{
    struct rlimit sl;
    int returnVal = getrlimit(RLIMIT_STACK, &sl);
    if (returnVal == -1)
    {
        cout << "Error. errno: " << errno << endl;
    }
    else if (returnVal == 0)
    {
        cout << "stackLimit soft - max : " << sl.rlim_cur << " - " << sl.rlim_max << endl;
    }
}
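The same API can also raise the soft limit at run time. Here is a sketch I'm adding (rlim_cur can only be raised up to rlim_max without privileges, and whether an already-running main thread benefits from the new limit depends on the platform):

#include <sys/resource.h>
#include <cstdio>

int main()
{
    struct rlimit sl;
    if (getrlimit(RLIMIT_STACK, &sl) != 0)
    {
        perror("getrlimit");
        return 1;
    }
    rlim_t wanted = 64 * 1024 * 1024;            // ask for a 64 MiB soft limit
    if (sl.rlim_max != RLIM_INFINITY && wanted > sl.rlim_max)
        wanted = sl.rlim_max;                    // cannot exceed the hard limit
    sl.rlim_cur = wanted;
    if (setrlimit(RLIMIT_STACK, &sl) != 0)
    {
        perror("setrlimit");
        return 1;
    }
    printf("soft stack limit is now %llu bytes\n",
           static_cast<unsigned long long>(sl.rlim_cur));
    return 0;
}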
The following program will call fun 2 ^ (MAXD + 1) times. The maximum recursion depth should never go above MAXD, though (if my thinking is correct). Thus it may take some time to compile, but it should not eat my RAM.
#include <iostream>

const int MAXD = 20;

constexpr int fun(int x, int depth = 0) {
    return depth == MAXD ? x : fun(fun(x + 1, depth + 1) + 1, depth + 1);
}

int main() {
    constexpr int i = fun(1);
    std::cout << i << std::endl;
}
The problem is that eating my RAM is exactly what it does. When I turn MAXD up to 30, my laptop starts to swap after GCC 4.7.2 quickly allocates 3 GB or so. I have not yet tried it with clang 3.1, as I don't have access to it right now.
My only guess is that this has something to do with GCC trying to be too clever and memoize the function calls, like it does with templates. If this is so, does it not seem strange that there is no limit on how much memoization it does, like the size of an MRU cache table or something? I have not found a switch to disable it.
Why would I do this?
I am toying with the idea of making an advanced compile time library, like genetic programming or something. Since the compilers do not have compile time tail call optimization, I am worried that anything that loops will need recursion and (even if I turn up the maximum recursion depth parameter, which seems slightly ugly to require) will quickly allocate all my RAM and fill it with pointless stack frames. Thus I came up with the above solution for getting arbitrarily many function calls without a deep stack. Such a function could be used for folding/looping or trampolining.
EDIT:
Now I have tried it in clang 3.1, and it does not leak memory at all, no matter how long I make it work (i.e. how high I make MAXD). CPU usage is almost 100% and memory usage is almost 0%, just as expected. Perhaps this is just a bug in GCC, then.
This may not be the definitive document regarding constexpr, but it's the primary doc linked to from the gcc constexpr wiki.
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2235.pdf
... and it says...
We (still) prohibit recursion in all its form in constant expressions.
That is not strictly necessary because an implementation limit on
recursion depth in constant expression evaluation would save us from
the possibility of the compiler recursing forever. However, until we
see a convincing use case for recursion, we don’t propose to allow it.
So, I expect you're bumping up against a language boundary and the way that gcc has chosen to implement constexpr (perhaps attempting to generate the entire function inline, then evaluating/executing it).
Your answer is in your comment "by running the function runtime and observing that while I can make it run for a long time", which is caused by your innermost recursive call, fun(x + 1, depth + 1).
When you changed it to a runtime function rather than a compile-time function by removing constexpr and observed that it ran for a long time, that was an indicator that it recurses very deeply.
When the function is evaluated by the compiler, it has to recurse just as deeply, but it doesn't use the stack for that recursion, since it isn't actually generating and executing machine code.
I was remote debugging a stack overflow from a recursive function. The Visual Studio IDE only showed the first 1,000 frames (all the same function), but I needed to go up further to see what the cause was.
Does anybody know how to get VS to 'move up' in a stack listing?
Thanks.
I do not believe there is a way to do this via the UI (or even a registry hack). My guess at the reason is showing all of the frames in a stack overflow situation can have a very negative performance impact.
Most stack overflows are the result of bad recursion. If this is the case, you can likely set a conditional breakpoint on the target function. Set it to break only when the hit count reaches a certain level. I'd start with a count of around 1,000. You may have to experiment a bit to get the right count, but it shouldn't take more than a few tries.
I would suggest replacing your debugging method with logging to handle such problems. You might find it more productive; you just need to choose carefully what and when to print.
Anyway, analyzing a few thousand lines of text will be much faster than walking up a few thousand stack frames, IMHO.
And you can use David's suggestion to control the amount of data to print (i.e. pass relevant information from one recursion cycle to the next).
You might also try WinDbg. It's not as friendly, but it sometimes works where the VC debugger doesn't.
I run into this now and then, what I do is add the following line to the function that is being called recursively:
static int nest; if (++nest == 100) *(char*)0 = 0;
The number 100 is arbitrary, often just 10 will work. This limits the recursion, ending it with a seg fault. The debugger should then show you the frames that started the recursion.
You could add a temporary recursion count parameter to the function, and assert when it goes over a maximum value. Give it a default value and you won't need to edit any other source:
#include <cassert>

void f(int rcount = 0)
{
    assert(rcount < 1000);   // break into the debugger before the stack blows
    // ... the function's normal work ...
    f(rcount + 1);
}
You are trying to solve this the wrong way.
There should be enough stack frames to show you the recurring call pattern. You should already be provided with enough inferential data to figure out how an infinite cycle of calls can happen.
Another hack idea might be to drastically decrease your stack size or artificially increase the size of each frame...