Infinite recursion is usually not desired, and when it happens it typically ends in a stack overflow or a segfault.
But for theory's sake, and out of plain curiosity, I've been wondering whether it would be possible to create actual infinite recursion intentionally.
I'm working in C++ and C, where the stack usually grows with each function call, and each function pops the part of the stack it used when it returns.
Here's the thought: would it be possible to force a function to clear out its own stack space and then call another function, so that the new function effectively replaces the first one, without the first function needing to return and then fire again via a loop?
I'm not only thinking of plain loops as a possible use for this, if there is any use at all; loops already do what they do well. But what if you were to use it for sending signals through a node network, signals that carry on indefinitely in their own threads until they reach a certain condition? It might be a tool for some problems.
Remember, I'm not really asking if it's practical, only if it's possible to do. For science!
Would it be possible to force a function to clear out its own stack
space and then call another function so that the new function
effectively replaces the first function
This is exactly what tail-call optimization does, so yes, it is possible and used in practice. In languages like C++ this is a nice feature, because some algorithms are simpler to express using recursion but more efficiently implemented as a loop. The transformation can in some cases be done automatically by the compiler.
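For a concrete picture, here is a minimal sketch (a classic gcd, not code from the question) of a tail-recursive function that an optimizing compiler such as gcc or clang at -O2 can turn into a loop, since the recursive call is the last thing the function does:

// The recursive call is in tail position, so the compiler may reuse the
// current stack frame instead of growing the stack.
unsigned gcd(unsigned a, unsigned b) {
    if (b == 0)
        return a;
    return gcd(b, a % b);   // tail call: nothing remains to be done after it
}

Whether the transformation actually happens depends on the compiler and the optimization level.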
There is a technique called trampolining that is used to implement continuation-passing style programming without the use of tail-call optimization. If you consider any language without support for TCO, such as JavaScript, and research solutions for CPS in that language, then it is likely that the solution involves trampolining.
Essentially, with trampolining there is a function called a trampoline which iteratively calls thunk-returning functions.
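As a rough sketch of the idea in C++ (the names Thunk, StepFn and countdown are invented for illustration): each step returns the next function to run instead of calling it, and the trampoline loops until there is nothing left to call, so the stack never grows:

#include <cstdio>

struct Thunk;                           // a step returns the next step to run (or none)
using StepFn = Thunk (*)(int& state);
struct Thunk { StepFn next; };          // next == nullptr means "finished"

// Instead of recursing, each step hands back a thunk naming the next step.
Thunk countdown(int& n) {
    std::printf("%d\n", n);
    --n;
    return Thunk{ n > 0 ? countdown : nullptr };
}

// The trampoline iteratively calls thunk-returning functions,
// so the stack depth stays constant no matter how many "recursive" steps run.
void trampoline(StepFn start, int& state) {
    for (Thunk t{start}; t.next != nullptr; )
        t = t.next(state);
}

int main() {
    int n = 5;
    trampoline(countdown, n);
}

The key point is that countdown never calls itself; the only function that stays on the stack is trampoline.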
I know that you said "without the first function needing to return and then fire again via a loop" (and that is what trampolining is), but considering that this is C++, leaving scopes by returning is central to the core design of C++'s automatic resource management via destructors (see: RAII). You could alternatively use C's setjmp()/longjmp() functions to wipe out the stack, but then you would need to be very careful to make sure that all resources are released properly.
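For completeness, here is a minimal setjmp()/longjmp() sketch of the same idea (the identifiers are invented; note that longjmp() discards the abandoned frames without running any destructors, which is exactly the RAII caveat above):

#include <csetjmp>
#include <cstdio>

static std::jmp_buf restart;     // jump target, established in main
static int counter = 1;          // state kept outside the discarded frames

void step(int n) {
    std::printf("step %d\n", n);
    if (n < 5) {
        counter = n + 1;
        std::longjmp(restart, 1);   // abandon this frame, resume right after setjmp
    }
    // only the final step returns normally
}

int main() {
    setjmp(restart);                // execution lands here again after every longjmp
    step(counter);                  // each "iteration" starts from a fresh stack
}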
This does remind me of an optimisation that can be done in assembler code. Say you have this:
call FuncA
call FuncB
you can replace it with:
push ReturnAddress
push FuncB
jmp FuncA
ReturnAddress:
This causes the ret at the end of FuncA to jump to FuncB directly rather than back to the caller and then onto FuncB. Not really possible in higher level languages.
There's also this:
call FuncA
ret
which can be changed to:
jmp FuncA
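Modern C and C++ compilers apply the same transformation to tail calls. As an illustration (process and logAndForward are made-up names), a function whose last action is another call is commonly compiled to a jmp when optimizations are enabled:

void process(int x);          // defined elsewhere; the name is illustrative

void logAndForward(int x) {
    // ...do some work here...
    process(x);               // tail call: with optimizations on, compilers
                              // commonly emit "jmp process" instead of call + ret
}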
Quick question and I apologize if it sounds naive.
What is faster in C++? Code like this:
ProgramsManager::CurrentProgram->Uniforms->Set(n1);
ProgramsManager::CurrentProgram->Uniforms->Set(n2);
ProgramsManager::CurrentProgram->Uniforms->Set(n3);
ProgramsManager::CurrentProgram->Uniforms->Set(...);
Or this one?
Uniforms* u = ProgramsManager::CurrentProgram->Uniforms;
u->Set(n1);
u->Set(n2);
u->Set(n3);
u->Set(...);
I know the second piece of code is faster in interpreted languages, but I feel like it makes no difference in compiled languages. Am I right?
Thank you in advance
The second might be faster, but it won't be faster by a lot.
The reason it might be faster is that the compiler may not be able to prove to itself that ProgramsManager::CurrentProgram->Uniforms is not changed by the calls to ...->Set. If it can't prove that, it has to re-evaluate the expression ProgramsManager::CurrentProgram->Uniforms for each line.
However, modern CPUs are usually fairly quick at this kind of thing, and compilers are getting better.
There are three choices here, not two.
Call a single-parameter function once per value.
Call one function with many parameters.
Call a single function with a container, like a struct or vector.
Fundamental Overhead
When calling a function there is instruction overhead. Usually this involves placing values in registers or on the stack, or similar work.
At a lower level, the processor may also have to reload its instruction cache or refill its pipeline.
Optimizing The Function Call
For optimizing function calls, the best method is to avoid the call by pasting the code (a.k.a. inlining). This removes the overhead.
The next best is to reduce the number of function calls. For example, passing more parameters per call means fewer function calls and less overhead.
Many Parameters versus One Container
The optimal function call passes values in registers. Parameters beyond the available registers end up in stack memory, which means the function will need code to retrieve those values from the stack.
Passing many parameters using the stack incurs an overhead. Also, the function signature will need to change if more parameters are added or removed.
Placing variables into a container reduces the overhead. Only a pointer (or reference) to the container needs to be passed. This usually involves only a register since pointers usually fit into a register (many compilers pass structures by reference using pointers).
Another benefit to the container is that the container can change without having to change the function signature.
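To make the last two options concrete, here is an illustrative pair of declarations (the names and fields are invented):

struct DrawParams {            // container: can grow without changing the signature
    int x, y, width, height;
    float alpha;
};

// Many separate parameters: beyond the register budget they spill to the stack.
void drawMany(int x, int y, int width, int height, float alpha);

// One container: only a reference (in practice a pointer in a register) is passed.
void drawPacked(const DrawParams& params);

Which one is actually cheaper depends on the calling convention; on x86-64, for instance, the first several integer and floating-point arguments travel in registers anyway, so measure before committing to either style.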
I read about function pointers in C.
And everyone said that using them will make my program run slow.
Is it true?
I made a program to check it.
And I got the same results in both cases (measuring the time).
So, is it bad to use function pointer?
Thanks in advance.
To respond to some of the comments: I said 'run slow' referring to the time I compared in a loop like this:
int end = 1000;
int i = 0;
while (i < end) {
    fp = func;
    fp();
    ++i;
}
When I execute this, I get the same time as when I execute this:
while (i < end) {
    func();
    ++i;
}
So I think that function pointers make no difference in time, and they don't make a program run slow as many people say.
You see, in situations that actually matter from the performance point of view, like calling the function repeatedly many times in a cycle, the performance might not be different at all.
This might sound strange to people, who are used to thinking about C code as something executed by an abstract C machine whose "machine language" closely mirrors the C language itself. In such context, "by default" an indirect call to a function is indeed slower than a direct one, because it formally involves an extra memory access in order to determine the target of the call.
However, in real life the code is executed by a real machine and compiled by an optimizing compiler that has a pretty good knowledge of the underlying machine architecture, which helps it to generate the most optimal code for that specific machine. And on many platforms it might turn out that the most efficient way to perform a function call from a cycle actually results in identical code for both direct and indirect call, leading to the identical performance of the two.
Consider, for example, the x86 platform. If we "literally" translate a direct and indirect call into machine code, we might end up with something like this
// Direct call
do-it-many-times
call 0x12345678
// Indirect call
do-it-many-times
call dword ptr [0x67890ABC]
The former uses an immediate operand in the machine instruction and is indeed normally faster than the latter, which has to read the data from some independent memory location.
At this point let's remember that x86 architecture actually has one more way to supply an operand to the call instruction. It is supplying the target address in a register. And a very important thing about this format is that it is normally faster than both of the above. What does this mean for us? This means that a good optimizing compiler must and will take advantage of that fact. In order to implement the above cycle, the compiler will try to use a call through a register in both cases. If it succeeds, the final code might look as follows
// Direct call
mov eax, 0x12345678
do-it-many-times
call eax
// Indirect call
mov eax, dword ptr [0x67890ABC]
do-it-many-times
call eax
Note that now the part that matters (the actual call in the cycle body) is exactly and precisely the same in both cases. Needless to say, the performance is going to be virtually identical.
One might even say, however strange it might sound, that on this platform a direct call (a call with an immediate operand in call) is slower than an indirect call as long as the operand of the indirect call is supplied in a register (as opposed to being stored in memory).
Of course, the whole thing is not as easy in the general case. The compiler has to deal with limited availability of registers, aliasing issues, etc. But in such simplistic cases as the one in your example (and even in much more complicated ones) the above optimization will be carried out by a good compiler and will completely eliminate any difference in performance between a cyclic direct call and a cyclic indirect call. This optimization works especially well in C++ when calling a virtual function, since in a typical implementation the pointers involved are fully controlled by the compiler, giving it full knowledge of the aliasing picture and other relevant details.
Of course, there's always a question of whether your compiler is smart enough to optimize things like that...
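If you want to see what your own compiler does, a tiny pair of functions like this (the names are illustrative) is enough to compare the generated code, for example with -O2 -S or in a compiler explorer:

void work();                         // an external function, defined elsewhere
void (*work_ptr)() = work;           // a genuine (non-const) function pointer

void run_direct(int n)   { for (int i = 0; i < n; ++i) work(); }
void run_indirect(int n) { for (int i = 0; i < n; ++i) work_ptr(); }

Whether work_ptr is loaded into a register once before the loop or re-read on every iteration depends on what the compiler can prove about aliasing, as discussed above.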
I think when people say this they're referring to the fact that using function pointers may prevent compiler optimizations (inlining) and processor optimizations (branch prediction). However, if function pointers are an effective way to accomplish something that you're trying to do, chances are that any other method of doing it would have the same drawbacks.
And unless your function pointers are being used in tight loops in a performance critical application or on a very slow embedded system, chances are the difference is negligible anyway.
And everyone said that will make my program run slow. Is it true?
Most likely this claim is false. For one, if the alternative to using function pointers is something like
if (condition1) {
    func1();
} else if (condition2) {
    func2();
} else if (condition3) {
    func3();
} else {
    func4();
}
this is most likely relatively much slower than just using a single function pointer. While calling a function through a pointer does have some (typically negligible) overhead, it is normally not the direct-call versus call-through-pointer difference that is relevant to compare.
And secondly, never optimize for performance without measuring. Knowing where the bottlenecks are is very difficult (read: impossible) without measurements, and the answers can be quite non-intuitive (for instance, the Linux kernel developers have started removing the inline keyword from functions because it actually hurt performance).
A lot of people have put in some good answers, but I still think there's a point being missed. Function pointers do add an extra dereference, which makes them several cycles slower, and that number can increase with poor branch prediction (which, incidentally, has almost nothing to do with the function pointer itself). Additionally, functions called via a pointer cannot be inlined. But what people are missing is that most people use function pointers as an optimization.
The most common place you will find function pointers in C/C++ APIs is as callback functions. The reason so many APIs do this is that a system which invokes a function pointer whenever events occur is much more efficient than other methods like message passing. Personally, I've also used function pointers as part of a more complex input processing system, where each key on the keyboard has a function pointer mapped to it via a jump table. This allowed me to remove any branching or logic from the input system and merely handle the key press coming in.
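A stripped-down sketch of that kind of jump table (the handler names and the 256-entry layout are invented for illustration):

#include <cstdint>

void onKeyA();                  // handlers, defined elsewhere
void onKeyB();
void onKeyIgnored();            // default handler for keys we don't care about

using KeyHandler = void (*)();
static KeyHandler keyTable[256];

void initKeyTable() {
    for (auto& h : keyTable) h = onKeyIgnored;
    keyTable['a'] = onKeyA;
    keyTable['b'] = onKeyB;
}

void handleKey(std::uint8_t keyCode) {
    keyTable[keyCode]();        // no branching or logic, just one indirect call
}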
Calling a function via a function pointer is somewhat slower than a static function call, since the former includes an extra pointer dereference. But AFAIK this difference is negligible on most modern machines (except maybe on some special platforms with very limited resources).
Function pointers are used because they can make the program much simpler, cleaner and easier to maintain (when used properly, of course). This more than makes up for the possible very minor speed difference.
A lot of good points in earlier replies.
However, take a look at C's qsort comparison function. Because the comparison function cannot be inlined and needs to follow standard stack-based calling conventions, the total running time for the sort can be an order of magnitude (more exactly, 3-10x) slower for integer keys than otherwise identical code with a direct, inlinable call.
A typical inlined comparison would be a sequence of simple CMP and possibly CMOV/SETcc instructions. A function call also incurs the overhead of the CALL itself, setting up a stack frame, doing the comparison, tearing down the stack frame and returning the result. Note that the stack operations can cause pipeline stalls due to the CPU pipeline length and virtual registers: for example, if the value of, say, eax is needed before the instruction that last modified eax has finished executing (which typically takes about 12 clock cycles on the newest processors), then unless the CPU can execute other instructions out of order while it waits, a pipeline stall will occur.
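The contrast is easy to reproduce with a sketch like the following (the speedup you actually see depends on the compiler, the data and the platform):

#include <algorithm>
#include <cstdlib>

// qsort's comparator has a fixed signature and is always called through a pointer.
int compareInts(const void* a, const void* b) {
    int x = *static_cast<const int*>(a);
    int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);        // avoids the overflow that x - y could cause
}

void sortBoth(int* data, int* copy, std::size_t n) {
    // No inlining possible: every comparison is an indirect call.
    std::qsort(data, n, sizeof(int), compareInts);

    // The comparison (the default operator< here) is baked in at compile time,
    // so the compiler can inline it into the sorting loop.
    std::sort(copy, copy + n);
}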
Using a function pointer is slower than just calling a function, as it is another layer of indirection (the pointer needs to be dereferenced to get the memory address of the function). While it is slower, compared to everything else your program may do (read a file, write to the console) it is negligible.
If you need to use function pointers, use them, because anything that tries to do the same thing while avoiding them will be slower and less maintainable than using function pointers.
Possibly.
The answer depends on what the function pointer is being used for and hence what the alternatives are. Comparing function pointer calls to direct function calls is misleading if a function pointer is being used to implement a choice that's part of our program logic and which can't simply be removed. I'll go ahead and nonetheless show that comparison and come back to this thought afterwards.
Function pointer calls have the most opportunity to degrade performance compared to direct function calls when they inhibit inlining. Because inlining is a gateway optimization, we can craft wildly pathological cases where function pointers are made arbitrarily slower than the equivalent direct function call:
void foo(int* x) {
*x = 0;
}
void (*foo_ptr)(int*) = foo;
int call_foo(int *p, int size) {
int r = 0;
for (int i = 0; i != size; ++i)
r += p[i];
foo(&r);
return r;
}
int call_foo_ptr(int *p, int size) {
int r = 0;
for (int i = 0; i != size; ++i)
r += p[i];
foo_ptr(&r);
return r;
}
Code generated for call_foo():
call_foo(int*, int):
xor eax, eax
ret
Nice. foo() has not only been inlined, but doing so has allowed the compiler to eliminate the entire preceding loop! The generated code simply zeroes out the return register by XORing the register with itself and then returns. On the other hand, compilers will have to generate code for the loop in call_foo_ptr() (100+ lines with gcc 7.3) and most of that code effectively does nothing (so long as foo_ptr still points to foo()). (In more typical scenarios, you can expect that inlining a small function into a hot inner loop might reduce execution time by up to about an order of magnitude.)
So in a worst case scenario, a function pointer call is arbitrarily slower than a direct function call, but this is misleading. It turns out that if foo_ptr had been const, then call_foo() and call_foo_ptr() would have generated the same code. However, this would require us to give up the opportunity for indirection provided by foo_ptr. Is it "fair" for foo_ptr to be const? If we're interested in the indirection provided by foo_ptr, then no, but if that's the case, then a direct function call is not a valid option either.
If a function pointer is being used to provide useful indirection, then we can move the indirection around or in some cases swap out function pointers for conditionals or even macros, but we can't simply remove it. If we've decided that function pointers are a good approach but performance is a concern, then we typically want to pull indirection up the call stack so that we pay the cost of indirection in an outer loop. For example, in the common case where a function takes a callback and calls it in a loop, we might try moving the innermost loop into the callback (and changing the responsibility of each callback invocation accordingly).
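A sketch of that last suggestion, with invented names: instead of paying the indirect call once per element, hand the callback the whole range so the indirection is paid once per batch:

#include <cstddef>

// Fine-grained: one indirect call per element.
using ElementFn = void (*)(int& element);
void for_each_element(int* p, std::size_t n, ElementFn f) {
    for (std::size_t i = 0; i != n; ++i)
        f(p[i]);
}

// Coarse-grained: one indirect call per range; the callback owns the inner loop
// and can be optimized as a whole.
using RangeFn = void (*)(int* p, std::size_t n);
void for_each_range(int* p, std::size_t n, RangeFn f) {
    f(p, n);
}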
In implementing a menu on an embedded system in C(++) (AVR-GCC), I ended up with void function pointers that take arguments and usually make use of them.
// pointer to a void function taking a char* argument
void (*auxFunc)(char *);
In some cases (in fact quite a few), the function actually doesn't need the argument, so I would do something like:
if (something) doAuxFunc(NULL);
I know I could just overload to a different function type, but I'm actually trying not to do this as I am instantiating multiple objects and want to keep them light.
Is calling multiple functions with NULL pointers (when they are intended for an actual pointer) worse than implementing many more function prototypes?
Checking for NULLs is a very small overhead even on a microcontroller - comparison against 0 is supposed to be lightning fast. If you overload several functions, you'll crucify readability for (a very slight) improvement in performance. Just let GCC's optimizer do its stuff, it's pretty good at it :)
Look at the disassembly: it should be generating a null (zero) to pass as the argument, which burns either a register or a stack location. If it burns a register, that may cost you a push and a pop if the calling function is starving for registers. (Just making a function call at all may cost you pushes and pops if the function is starving for registers in order to implement the calling convention.)
So there is likely a cost, but it may not be enough of a cost to change the way you do things.
Checking for 0 is really cheap, and overloading is even cheaper, since which function to call is decided at compile time.
But if you think your interfaces get too complicated with overloading and your function is small, you should declare it inline and put it in a header. Checking for 0 can then easily be optimized away by any decent modern compiler.
I think the "tradeoff" is ridiculously low for each approach but this is the time to do benchmarks for yourself. If you do so, please post some results :)
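As a sketch of the header-inline approach mentioned above (showStatus, lcdPrint and lcdRefresh are invented stand-ins): with the definition visible at the call site, the compiler can fold away the null check wherever the argument is a compile-time NULL:

#include <cstddef>

void lcdPrint(const char* text);    // hypothetical display routines, defined elsewhere
void lcdRefresh();

// Placed in a header so the definition is visible at every call site.
inline void showStatus(char* text) {
    if (text != NULL)        // cheap test; for a call written as showStatus(NULL),
        lcdPrint(text);      // the optimizer can drop both the test and the call
    lcdRefresh();
}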
From http://en.wikipedia.org/wiki/Stack_pointer#Structure
I am wondering why the return address for a function is placed above the parameters for that function?
It would make more sense to have the Return Address pushed onto the stack before the Parameters for DrawLine, because the parameters are not required any more when the Return Address is popped to return to the calling function.
What are the reasons for preferring the implementation shown in the diagram above?
The return address is usually pushed by the call machine instruction, which is part of the processor's native instruction set, while the parameters and variables are pushed by several machine instructions that the compiler generates.
Thus, the return address is the last thing pushed by the caller, and it comes before anything (local variables) pushed by the callee.
The parameters are all pushed before the return address because the jump to the actual function and the insertion of the return address onto the stack are done by the same machine instruction.
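In rough x86 assembly (32-bit, a stack-based calling convention assumed) the sequence looks like this, which is why the return address necessarily ends up nearer the top of the stack than the parameters:

push eax             ; caller pushes the arguments it has prepared (second parameter)
push ebx             ; first parameter
call DrawLine        ; call pushes the return address and jumps, all in one instruction
...
DrawLine:
push ebp             ; the callee then saves the old frame pointer
mov  ebp, esp
sub  esp, 16         ; and reserves room for its own local variables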
Another reason is that the caller is the one allocating space on the stack for the parameters, so it (the caller) should also be the one that cleans it up.
The reason is simple: The function arguments are pushed onto the stack by the calling function (which is the only one which can do it because only it has the necessary information; after all the whole point of doing so is to pass that information to the called function). The return address is pushed to the stack by the function call mechanism. The function is called after the calling function has set up the parameters, because after the call it's the called function which is executed, not the calling one.
OK, so now you could argue that the calling function could put the parameters beyond the currently used stack, and the called function could then just adjust the stack pointer accordingly. But that would not work out well, because at any time there could be an interrupt or a signal, which would push the current state onto the stack in order to restore it later (I wouldn't be surprised if a task switch did so, too). But if you set up the parameters beyond the current stack, those asynchronous events would overwrite them, and since you cannot predict when they will happen, you cannot avoid that (beyond disabling interrupts, which may have other drawbacks or even be impossible in the case of a task switch). Basically, everything beyond the current stack has to be considered volatile.
Also note that this is independent of the question of who cleans up the parameters. In principle, the called function could call the destructors of the arguments even if physically they lie in the caller's stack frame. Also, many processors (including the x86) have instructions which automatically pop extra space above the return address on return. For example, Pascal compilers usually did that, because in Pascal you don't have any cleanup beyond returning memory, and at least for the processors of the time it was more efficient to clean up with that processor instruction (I have no idea whether that is still true for modern processors). However, C didn't use that mechanism because of variable-length argument lists: for those, the mechanism wasn't applicable, because you'd need to know at compile time how much extra space to release, and K&R C did not require variadic functions to be forward-declared (C89 does, but few if any compilers take advantage of that, due to compatibility with old code), so there was no way for the calling function to know whether it should clean up the arguments, unless it simply did so always.
Say that you are several levels deep into a recursive function. The function was originally called in main. Is there a way for you to break out of the recursion and go straight back to main without having to go through all the other functions above?
You could use exceptions for that: either throw some suitable existing exception or craft your own and use it. Although using exceptions for flow control is generally not recommended, it is the only reliable way here.
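A minimal sketch of that idea (the exception type and the stopping condition are made up):

#include <cstdio>

struct FoundResult { int value; };      // our own exception type, invented for this sketch

void search(int depth) {
    if (depth == 42)                    // hypothetical stopping condition
        throw FoundResult{depth};       // unwinds every frame between here and main,
                                        // running destructors along the way
    search(depth + 1);
}

int main() {
    try {
        search(0);
    } catch (const FoundResult& r) {
        std::printf("back in main at depth %d\n", r.value);
    }
}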
In C you can use setjmp/longjmp for this, but I don't think it's safe to use in C++ (it bypasses destructors?). You'll probably have to use exceptions.
The question is how you got there? What kind of algorithm buries you deep into a recursion without a way of getting out of it?
Any recursive function must have a way to end the recursion, it recurses only if a condition is either true or false. When that doesn't hold, the recursion ends and the function returns instead of recursing deeper.
Why don't you just end the recursion this way, returning through all the levels?
If you're desperate, an exception is the way to go, but that's (rightly, IMO) frowned upon.
You might be able to get the location off the stack and use assembler to jmp to it, but why would you want to?
Also, you have to consider that when you have moved on to pastures new, someone else is going to have to maintain it.
Make your function so it's tail call optimizable. Then there's no "functions above" to worry about.
No, you can't break from your recursion and return directly back to your main(). If your recursive function does not do other work after the recursive call you would effectively accomplish the same thing. I recommend restructuring your recursive function. A description of why you want to break from the recursion early would also be helpful.
I ran into the same problem in an n-queens backtracking algorithm.
The simplest way would be to add a global boolean variable and use it to prevent any further action in your parent functions.
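In code, that approach looks roughly like this (a sketch; the actual n-queens bookkeeping is omitted):

static bool solutionFound = false;      // set once; parents check it on the way back up

void solve(int row) {
    if (solutionFound) return;          // no further work once a solution exists
    if (row == 8) {                     // hypothetical goal: all 8 queens placed
        solutionFound = true;
        return;
    }
    for (int col = 0; col < 8 && !solutionFound; ++col) {
        // place queen at (row, col), check validity... (details omitted)
        solve(row + 1);
        // remove queen (details omitted)
    }
}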