I would like to call a function recursively up to 10000 times, but that is clearly too deep for my stack. That is why I tried to use loop/recur, but I ran into an issue. My function ends like this:
...
(max (map #(recur %) subcollection))
It seems recur won't work for depth-first search.
So what is the "state of the art" way of doing a long DFS (deeper than the stack can handle)?
Thank you.
recur is how you write tail recursion, since the JVM does not support automatic tail-call optimisation.
Tail recursion means that the recursive call is the last thing the function does. If that is the case, you don't have to return to the caller, and therefore you don't have to keep a stack frame for it.
What you wrote still has work to do (the map and the max) after the recursive call returns, so its stack frame has to be preserved.
The “state of the art” way to get this working is to reformulate your recursive function as a tail-recursive one. Usually the best way to achieve this is to use an accumulator that you pass along with your recursive calls.
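For a depth-first search that usually means carrying an explicit stack of nodes still to visit in the loop bindings, alongside the accumulator. Here is a minimal sketch, assuming the data is nested vectors with numeric leaves (which may not match your actual collection), that computes the maximum leaf without growing the call stack:
(defn dfs-max [root]
  (loop [stack [root]
         best  Long/MIN_VALUE]
    (if (empty? stack)
      best
      (let [node      (peek stack)
            remaining (pop stack)]
        (if (coll? node)
          (recur (into remaining node) best)       ; push the children onto the explicit stack
          (recur remaining (max best node)))))))   ; leaf: fold it into the accumulator

(dfs-max [1 [2 [3 10]] [4 5]])  ; => 10
Because both recur calls are in tail position, the stack depth stays constant no matter how deep the nesting is.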
For this function TreeNode* buildTree(vector& preorder, vector& inorder): why is there a * in front of the function name? Why do they use & in front of the variables?
Try looking at the * not as being in front of the function name, but rather as coming after TreeNode. In other words, that * changes the function's return type from a structure to a pointer to a structure.
& in front of the variables means that they are, essentially, still pointers, but they can't be NULL (there are ways to get around that, but that would just confuse the issue further). So vector& preorder means that preorder is a pointer (the term used is a reference) to a vector, but you don't need to check whether that pointer is NULL. You can simply go ahead and use it.
Without much loss you could change those & into *, change the . in your function to ->, and things would work the same. I bet the assembly code would be exactly the same as well.
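As an illustration only (the bodies are placeholders, not a real tree-building algorithm, and the element type int is an assumption based on the usual form of this problem), here is the same signature with a reference parameter and with a pointer parameter; note how the * makes the return type TreeNode* rather than TreeNode:
#include <vector>

struct TreeNode {
    int val;
    TreeNode* left;
    TreeNode* right;
};

// Reference version: the caller passes the vectors themselves, members are accessed with '.'
TreeNode* buildTree(std::vector<int>& preorder, std::vector<int>& inorder) {
    if (preorder.empty()) return nullptr;                // no null check needed for a reference
    return new TreeNode{preorder[0], nullptr, nullptr};  // placeholder body
}

// Pointer version: the caller passes &vector, members are accessed with '->'
TreeNode* buildTreePtr(std::vector<int>* preorder, std::vector<int>* inorder) {
    if (preorder == nullptr || preorder->empty()) return nullptr;  // a pointer can be null
    return new TreeNode{(*preorder)[0], nullptr, nullptr};         // placeholder body
}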
I am trying to create a recursive function in C++ that takes in a deque of integers as a parameter, loops through each element one by one, and returns the deque. I have found a few previous posts on StackOverflow that do something similar, but I am unable to understand what is happening in their answers. I am relatively new to C++. While it may be far easier and more efficient to do this by using an iterative algorithm, I am required to use recursion (it's an assignment question). Help is greatly appreciated.
It should be something like this:
#include <deque>

std::deque<int> x;               // result built up across the recursive calls

void Calc(std::deque<int> d) {
    if (d.empty()) return;       // base case: nothing left to copy
    x.push_back(d.front());      // take the first element of what's left
    d.pop_front();
    Calc(d);                     // recurse on the rest of the deque
}
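If the assignment requires the function itself to return the deque rather than fill a global, a variation along these lines might work (the default-argument accumulator is just one possible style, and copyRecursive is a made-up name):
#include <deque>

// Copies d into result one element per call and returns the accumulated result.
std::deque<int> copyRecursive(std::deque<int> d, std::deque<int> result = {}) {
    if (d.empty()) return result;     // base case: everything has been copied
    result.push_back(d.front());      // take the next element
    d.pop_front();
    return copyRecursive(d, result);  // recurse on the remainder
}
You would call it as copyRecursive(myDeque).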
Can the use of static variables in recursive algorithms reduce the overhead by a large margin, or is it negligible?
For example, in a backtracking algorithm, is keeping the solution vector in a static variable going to make the algorithm better than passing it as a parameter?
Or is it just a rule of thumb to never use static variables?
Use static variables if you like, but it is not going to save you a lot. Every time you make a recursive call, a number of things have to go on the stack: the return address, the parameters, and all newly allocated local variables of the function. If you pass the variable as a parameter (by reference) rather than making it static, you are only allocating one more value per recursive call. This means that even in the worst case it will less than double the memory used.
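To make the comparison concrete, here is a hedged sketch (the names and the toy problem are made up) of the two styles in a backtracking setting; the parameter version only adds one reference per frame:
#include <vector>

static std::vector<int> solution_static;          // shared by every call (the "static" style)

void backtrack_static(int depth, int max_depth) {
    if (depth == max_depth) return;               // base case
    solution_static.push_back(depth);             // extend the partial solution
    backtrack_static(depth + 1, max_depth);
    solution_static.pop_back();                   // undo the choice
}

// Parameter style: each call carries one extra reference to the solution vector.
void backtrack_param(int depth, int max_depth, std::vector<int>& solution) {
    if (depth == max_depth) return;
    solution.push_back(depth);
    backtrack_param(depth + 1, max_depth, solution);
    solution.pop_back();
}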
I'm trying to simulate functional programming in C++ and I'm stuck on a "wait" function: assume I want to wait for 100 seconds without using any kind of loop, just recursion. How can I avoid a stack overflow?
Make the calls tail-recursive and hope that the compiler reuses the stack frames. C++ compilers are not required to perform this optimization, though, so all you can do is rely on your particular implementation.
But why do this if you can simply call std::this_thread::sleep_for()?
I think the real question should be:
How do functional programming languages like Scheme and Haskell use recursion to achieve looping without causing a stack overflow?
And the answer is: they rely on tail recursion, which lets the compiler turn the recursive call into a goto. So you'll have to either find a C++ compiler that implements this optimization, or simulate it in your code.
Here's an example to give you an idea of how tail-recursion works:
countdown x = if x == 0
              then 0
              else countdown (x - 1)

countdown 1000000
Notice that, in the recursive step, it just calls the function with different arguments, and then returns its value. So the compiler "cheats" by converting it into code that works like this:
int countdown(int x) {
start:
    if (x == 0) return 0;
    x = x - 1;
    goto start;
}
By the way, you have to write your code to take advantage of tail recursion; it doesn't just work automatically. More information here: What is tail recursion?
Another option: use a branching recursion. Think of a grossly non-optimal recursive Fibonacci program: its running time grows exponentially relative to the required stack depth.
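For illustration (a sketch, not a recommendation), the classic naive Fibonacci does exponentially many calls while the recursion never goes deeper than about n frames:
// Naive Fibonacci: roughly 2^n calls in total, but at most ~n frames on the stack at once.
long long fib(int n) {
    if (n < 2) return n;              // base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2);   // two recursive calls -> exponential work
}
// fib(45) takes on the order of seconds on a typical machine,
// yet the stack depth stays around 45.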
In my project I have a lot of MATLAB functions. For each of them I call an Initialize function when the application starts. I tried to call these functions using parallel_invoke. I tried it several times, and it always takes more time than the code without it. Can somebody explain this?
Is there something specific about MATLAB or the Initialize functions?
The MATLAB Runtime (MCR) only has a single interpreter thread, so calling MATLAB functions in parallel does not gain you anything: when the first function A is called, the MCR acquires a lock and only releases it when that function exits. Calling another function B during that period means trying to acquire the same lock, which simply blocks until A finishes. The extra time you see is probably the overhead of the locking and of parallel_invoke itself.
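A hedged, MATLAB-free sketch of why this happens (initialize_stub and the timings are made up; the mutex stands in for the MCR's interpreter lock): two calls invoked in parallel still run one after the other, so the parallel version only adds thread and locking overhead.
#include <chrono>
#include <future>
#include <mutex>
#include <thread>

std::mutex interpreter_lock;  // stands in for the MCR's single interpreter lock

void initialize_stub(std::chrono::milliseconds work) {
    std::lock_guard<std::mutex> guard(interpreter_lock);  // only one call runs at a time
    std::this_thread::sleep_for(work);                    // simulate interpreter work
}

int main() {
    auto a = std::async(std::launch::async, initialize_stub, std::chrono::milliseconds(200));
    auto b = std::async(std::launch::async, initialize_stub, std::chrono::milliseconds(200));
    a.wait();
    b.wait();  // total wall time is still roughly 200 ms + 200 ms, plus the parallel overhead
}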
I'm not sure what you mean by for each function I call Initialize function: unless you are using multiple MATLAB DLLs (which will, as far as I know, be less performant than having a single DLL), you only need to call its Initialize/Terminate once.