Can anyone explain how the Man Or Boy Test returns a value of -67?
I tried in vain to write down the result, or trace it with a debugger. Any help would be appreciated.
A list of different implementations can be found here.
This is a nice page on the Man or Boy test. It shows the following interesting facts:
k = 10: the result is A = -67; A is called 722 times, and B is called one time fewer (721 times).
Writing out a complete call trace is of limited use here: the function is recursive, and the functions involved are not pure (as the Haskell translation shows, it needs a State monad wrapped around k to keep the impurity contained). Each call or recursion modifies the enclosing scope (the variable k is decremented), and these side effects are required to compute the correct answer.
I find the JavaScript translation a bit more readable than the original ALGOL 60 implementation:
function A(k, x1, x2, x3, x4, x5) {
    function B() {
        return A(--k, B, x1, x2, x3, x4);
    }
    return k <= 0 ? x4() + x5() : B();
}

function K(n) { return function() { return n; }; }

alert(A(10, K(1), K(-1), K(-1), K(1), K(0)));
The trick is bookkeeping: which references to functions cause which side effects (modifications of variables), and which in turn lead to a correct evaluation. As I explained earlier, this bookkeeping is tedious to do by hand.
Modern languages, such as the JavaScript in this example, have interpreters/compilers that handle this bookkeeping correctly. At the time ALGOL 60 compilers were being written, some implementations did not. The test was designed to separate the incorrect implementations from the correct ones.
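As an aside, modern C++ can express the same test directly with closures. Here is a rough sketch (my own translation, not from the page above), using std::function in place of ALGOL's call-by-name thunks; the shared mutable k makes the side effects explicit:

#include <cstdio>
#include <functional>

using Thunk = std::function<int()>;

// Each B decrements the k of the A invocation it was created in; passing B
// around is exactly the nested-scope bookkeeping the test probes.
int A(int k, Thunk x1, Thunk x2, Thunk x3, Thunk x4, Thunk x5)
{
    std::function<int()> B;
    B = [&]() { return A(--k, B, x1, x2, x3, x4); };
    return k <= 0 ? x4() + x5() : B();
}

Thunk K(int n) { return [n]() { return n; }; }

int main()
{
    std::printf("%d\n", A(10, K(1), K(-1), K(-1), K(1), K(0))); // prints -67
}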
There is an issue with the recursive definition of the fibonacci sequence when it comes to efficiency. It is defined as follows:
private int fib(int n) {
    if (n < 2) return n;
    else return fib(n - 1) + fib(n - 2);
}
Suppose we call fib(5). This makes one call to fib(4), two calls to fib(3), three calls to fib(2), five calls to fib(1), and three calls to fib(0); counting the initial call, that is fifteen calls in total.
In his book Programming Abstractions in Java, Eric Roberts mentions that we can resolve this efficiency issue by realizing that the Fibonacci sequence is just a special case of the additiveSequence(int n, int t0, int t1) method. Basically, the Fibonacci sequence is just an additive sequence that happens to begin with 0 and 1; infinitely many sequences match the recurrence relation that Fibonacci satisfies.
The author resolves the efficiency issue as follows:
private int fib(int n) {
    return additiveSequence(n, 0, 1);
}
So my question is: by making fib a wrapper for the more general additiveSequence method, are we really improving efficiency? Wouldn't the implementation of additiveSequence have exactly the same efficiency "problem" that fib had, given that it follows the same recurrence relation?
Here's an example implementation of an additive sequence calculation, where t(i) = t(i-1) + t(i-2):
int additiveSequence(int n, int t0, int t1) {
    if (n == 0) return t0;
    if (n == 1) return t1;
    return additiveSequence(n - 1, t1, t0 + t1);
}
This method returns the n-th value in the series. Work through some examples and you should be able to convince yourself that each t(i) will be calculated only once. Compare that with your naively implemented fib method and you can see why this approach is much faster.
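For instance, tracing additiveSequence(5, 0, 1) by hand gives a single linear chain of calls, one per value of n, rather than a branching tree:

additiveSequence(5, 0, 1)
= additiveSequence(4, 1, 1)
= additiveSequence(3, 1, 2)
= additiveSequence(2, 2, 3)
= additiveSequence(1, 3, 5)
= 5

That is five calls for n = 5, versus the fifteen made by the naive fib(5).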
The Fibonacci series is this kind of additive sequence, with the starting conditions t0 = 0 and t1 = 1. There's nothing particularly special about it, other than the fact that the obvious way to code it is a poor one. The author's point, presumably, is that the implementation makes a huge difference in processing time; the book just doesn't appear to explain that clearly.
Suppose I am solving a dynamic programming problem recursively (top down). For example, a recursive solution to the longest common subsequence problem:
LCS(S, n, T, m)
{
    if (n == 0 || m == 0) return 0;
    if (S[n] == T[m]) result = 1 + LCS(S, n-1, T, m-1);
    else result = max( LCS(S, n-1, T, m), LCS(S, n, T, m-1) );
    return result;
}
Often in such a DP problem at some point we have to take the max of some expressions, representing returns to different choices we can make. In the above case we have the max of two simple expressions, but in worse cases it can be the max of three or four quite complicated expressions involving long function calls. In such situations, I am often tempted to give these complicated expressions their own variable names, to make the code more readable. In the above case that would mean I would write
LCS(S, n, T, m)
{
    if (n == 0 || m == 0) return 0;
    if (S[n] == T[m]) result = 1 + LCS(S, n-1, T, m-1);
    else {
        a = LCS(S, n-1, T, m);
        b = LCS(S, n, T, m-1);
        result = max(a, b);
    }
    return result;
}
(In this simplified case a and b are not complicated, but in other cases they are, and there may be even more arguments to the max function, so this could really help it be more understandable.)
My question: is this a terrible idea? As I understand it, I'm adding a variable to each layer of the call stack, and I'm thinking that could be wasteful. But on the other hand, at each layer the value of LCS(S,n,T,m) has to be computed into a temporary anyway (I'm thinking in terms of C++, say), so as far as I know there might not be much difference in cost between the two ways.
If this is a terrible idea, is there a more efficient way to break up a complicated recursive function call to make it more readable?
C++ has the "as-if" rule, which states that a compiler can do whatever it wants as long as the observable effects are indistinguishable from what the standard defines to happen. In this case it is trivial to prove that both fragments have the same meaning, and a compiler will likely emit identical instructions for both.
Note: You aren't doing dynamic programming here, as you don't memoise parameter / result pairs.
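For illustration, here is a minimal sketch of what a memoised version could look like (my own code, using 0-based C++ strings rather than the 1-based pseudocode above):

#include <algorithm>
#include <string>
#include <vector>

// Memoised top-down LCS: cache[n][m] holds LCS(S, n, T, m) once computed,
// so each (n, m) pair is evaluated at most once.
int lcs(const std::string& S, int n, const std::string& T, int m,
        std::vector<std::vector<int>>& cache)
{
    if (n == 0 || m == 0) return 0;
    int& slot = cache[n][m];
    if (slot != -1) return slot;                 // already computed
    if (S[n-1] == T[m-1])
        slot = 1 + lcs(S, n-1, T, m-1, cache);
    else
        slot = std::max(lcs(S, n-1, T, m, cache),
                        lcs(S, n, T, m-1, cache));
    return slot;
}

int lcs(const std::string& S, const std::string& T)
{
    std::vector<std::vector<int>> cache(S.size() + 1,
                                        std::vector<int>(T.size() + 1, -1));
    return lcs(S, (int)S.size(), T, (int)T.size(), cache);
}

With the cache, each (n, m) pair is solved once, bringing the exponential recursion down to O(n*m).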
I have a function which internally uses some helper functions to keep its body organized and clean. They're quite simple, though not always short, and there are more than just two of them. They could easily be inlined inside the function's body, but I don't want to do that myself because, as I said, I want to keep that function's body organized.
All those functions need to be passed some arguments by reference and modify them, and I can write them in two ways (just a silly example):
With normal functions:
void helperf1(int &count, int &count2) {
    count += 1;
    count2 += 2;
}

int helperf2(int &count, int &count2) {
    return (count++) * (count2--);
}

//actual, important function
void myfunc(...) {
    int count = 0, count2 = 0;
    while (...) {
        helperf1(count, count2);
        printf("%d\n", helperf2(count, count2));
    }
}
Or with lambda functions that capture those arguments I explicitly pass in the example above:
void myfunc(...) {
    int count = 0, count2 = 0;
    auto helperf1 = [&count, &count2]() -> void {
        count += 1;
        count2 += 2;
    };
    auto helperf2 = [&count, &count2]() -> int {
        return (count++) * (count2--);
    };
    while (...) {
        helperf1();
        printf("%d\n", helperf2());
    }
}
However, I am not sure which method I should use. With the first one, there is (I think) the "overhead" of passing the arguments, while with the second the captured references are already baked into the lambda objects, so that "overhead" goes away. But they're still lambdas, which (I think, again) might not be as fast as normal functions.
So what should I do? Use the first method? Use the second one? Or sacrifice readability and just inline them in the main function's body?
Your first and foremost concern should be readability (and maintainability)!
Which of regular or lambda functions is more readable strongly depends on the given problem (and a bit on the taste of the reader/maintainer).
Don't be concerned about performance until you find that performance actually is an issue! If performance is an issue, start by benchmarking, not by guessing which implementation you think is faster (in many situations compilers are pretty good at optimizing).
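As a rough illustration (my own sketch, not part of this answer), a minimal timing harness with <chrono> is usually the first step:

#include <chrono>
#include <cstdio>

// Minimal benchmarking sketch: measure the code path in question before
// deciding that helpers-versus-lambdas matters at all.
int main()
{
    auto t0 = std::chrono::steady_clock::now();

    volatile long sum = 0;                        // stand-in for the real work
    for (long i = 0; i < 100000000L; ++i) sum = sum + i;

    auto t1 = std::chrono::steady_clock::now();
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0);
    std::printf("%lld us\n", (long long)us.count());
}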
Performance-wise, there is no real issue here. There is nothing to decide; choose whichever you like.
But lambda expressions won't do you any good for the purpose you want them for.
They won't make the code any cleaner.
As a matter of fact, I believe they will make the code a bit harder to read compared to a nice calculator object that has these helper functions as properly named member functions with clean semantics and a clean interface.
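For instance, a rough sketch of that calculator-object idea (the names Counter, bump, and product are made up for illustration):

#include <cstdio>

// The shared state becomes member data; the helpers become named members.
class Counter {
public:
    void bump()    { count += 1; count2 += 2; }
    int  product() { return (count++) * (count2--); }
private:
    int count = 0, count2 = 0;
};

void myfunc() {
    Counter c;
    for (int i = 0; i < 3; ++i) {     // stand-in for the original while (...)
        c.bump();
        std::printf("%d\n", c.product());
    }
}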
Using lambdas can be more readable, but they are actually there for more serious reasons. Lambda expressions are also known as "anonymous functions" and are very useful in certain programming paradigms, particularly functional programming, which builds on the lambda calculus (http://en.wikipedia.org/wiki/Lambda_calculus).
Here you can find the goals of using lambdas:
https://dzone.com/articles/why-we-need-lambda-expressions
If you won't need the two helper functions anywhere else in your code, then use your lambda method. But if you will call one of them again somewhere in your project, avoid writing them as lambdas each time: make a header file called "helpers.(h/hpp)" and a source file called "helpers.(c/cpp)" and put all the helper functions there. That way you gain readability in both the helper file and the calling file.
Alternatively, you can avoid this habit and challenge yourself by writing denser code that you have to read more than once each time you want to edit it; that sharpens your programming skills, and if you are working in a team it needn't be a problem: use comments, and your teammates will have more respect for your programming skills (provided your complex code shows the expected behaviour and gives the expected output).
And don't be concerned about performance unless you find yourself writing a performance-critical algorithm. If not, the difference will be a few milliseconds the user won't notice, and you would be losing time on an optimization the compiler can usually do by itself if you ask it to optimize your code.
There is a C++ function that calculates something (I am not sure whether C++ matters here at all, anyway). It is called in 50 or more places. Now it has turned out that this function works incorrectly, and in order to work correctly it needs three more arguments.
How can this code be refactored most efficiently, in terms of the number of necessary changes and compactness?
By the way, the newly added arguments are such that it is not reasonable to give them default values; they should always be passed to the function explicitly.
Many people asked for an example. Here it is:
//old syntax of function
int f(int a1, int a2)
{
    return a1 + a2;
}

//new syntax of function
int f(int a1, int a2, int a3, int a4, int a5)
{
    if (a3 == 10)
    {
        return a1 + a2;
    }
    else
    {
        return a1 + a2 + a4 + a5;
    }
}
Does this example help? I need a general approach for doing this, like a design pattern or a refactoring principle, not a fix for this specific example.
You can define default values for those three parameters, so you only need to change the call sites where other values are actually required. Or use Find across the whole project in your IDE and correct the call sites.
It really depends on where the three arguments come from. If we can't have default values and it is not possible to derive the extra parameters in a common way, then you may have no alternative but to attack the 50 calls in groups. In that case, keep the original function and make a direct copy with a slightly different name, then migrate gradually until all the calls use the new function with the extra parameters, at which point you can retire the old one.
On the other hand, if we can start with defaults, or at least make the extra arguments independent of the calling code, then the following might be a good plan. Bear in mind that as a large change, this would presumably have to be done in phases to control the potential impact if anything went wrong.
First, I would rename the function from xxxx to xxxx_<tag>, where <tag> is a handle for the change: possibly a bug number from a defect tracker or change management system. Then I'd create a new function called xxxx which simply calls xxxx_<tag>, and recompile everything. So far so good:
void xxxx_tag(int p1, int p2)
{
    // ....
}

void xxxx(int p1, int p2)
{
    xxxx_tag(p1, p2);
}
Next I'd change the signature of xxxx_<tag> to add the extra three parameters, update the call to it, and rebuild again:
void xxxx_tag(int p1, int p2, int p3, int p4, int p5)
{
    // ....
}

void xxxx(int p1, int p2)
{
    // XXX, YYY, and ZZZ are constants, or at least can be derived at this point.
    xxxx_tag(p1, p2, XXX, YYY, ZZZ);
}
A key point here is to add comments for future maintainers describing why this wrapper exists. Unfortunately, some changes never get further than this stage, and code gets left behind whose purpose isn't immediately obvious.
I would then plan to phase in the 50 changes in groups of, say, five or ten, so that each original call:
xxxx(p1, p2);
becomes:
xxxx_tag(p1, p2, p3, p4, p5);
Each section of calling code can be individually tested, so that you are happy that (a) it works as it always did (i.e. it is fully backwards compatible) and (b) it fixes the problem.
Finally, once all this is done, you can make a single change that removes the wrapper xxxx() and renames xxxx_<tag> back to xxxx. Again, you'd have to fully rebuild and test.
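As an aside, beyond the plan above: in C++14 and later, the [[deprecated]] attribute can help track down the remaining old-style call sites during such a migration. A rough sketch:

// Marking the temporary wrapper deprecated makes every remaining caller
// show up as a compiler warning on each build.
[[deprecated("call xxxx_tag with all five arguments instead")]]
void xxxx(int p1, int p2)
{
    xxxx_tag(p1, p2, XXX, YYY, ZZZ);
}

Each rebuild then warns about every caller still using the old signature, which gives you a checklist for the phased conversion.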
Conclusion
Whichever way you go I'd recommend:
Do it in stages - this minimises the risk of something going wrong.
Test, test and test again - again, this reduces your exposure to problems.
I've just finished my second year at uni on a games course, and it has always bugged me how maths and game programming are related. Up until now I've been using vectors, matrices, and quaternions in games, and I can understand how those fit in.
This is a general question about the relationship between maths and programming for real-time graphics. I'm curious how dynamic the maths is: is it the case that all the formulas and derivatives are predefined (or semi-defined)?
Is it even feasible to calculate derivatives/integrals in real time?
Here are some of the things where I don't see how they fit into programming/maths, as examples.
Maclaurin/Taylor series: I can see these are useful, but is it the case that you must pass in your function and its derivatives, or can you pass in a single function and have it work out the derivatives for you?
MacLaurin(sin(x)); or MacLaurin(sin(x), cos(x), -sin(x));
Derivatives/integrals: this is related to the first point. Is calculating the y' of a function done dynamically at run time, or is it something that is done statically, perhaps with variables inside a fixed function?
f = derive(x); or f = derivedX;
Bilinear patches: we learned these as a way to generate landscapes in small chunks that could be 'sewn' together. Is this something that happens in games? I've never heard of it being used (granted, my knowledge is very limited) with procedural methods or otherwise. What I've done so far involves arrays of vertex information being processed.
Sorry if this is off topic, but the community here seems spot on for this kind of thing.
Thanks.
Skizz's answer is true when taken literally, but only a small change is required to make it possible to compute the derivative of a C++ function. We modify skizz's function f to
template<class Float>
Float f(Float x)
{
    return x * x + Float(4.0f) * x + Float(6.0f); // f(x) = x^2 + 4x + 6
}
It is now possible to write a C++ function to compute the derivative of f with respect to x. Here is a complete self-contained program to compute the derivative of f. It is exact (to machine precision) as it's not using an inaccurate method like finite differences. I explain how it works in a paper I wrote. It generalises to higher derivatives. Note that much of the work is done statically by the compiler. If you turn up optimization, and your compiler inlines decently, it should be as fast as anything you could write by hand for simple functions. (Sometimes faster! In particular, it's quite good at amortising the cost of computing f and f' simultaneously because it makes common subexpression elimination easier for the compiler to spot than if you write separate functions for f and f'.)
#include <iostream>

using namespace std;

template<class Float>
Float f(Float x)
{
    return x * x + Float(4.0f) * x + Float(6.0f);
}

struct D
{
    D(float x0, float dx0 = 0) : x(x0), dx(dx0) { }
    float x, dx;
};

D operator+(const D &a, const D &b)
{
    // The rule for the sum of two functions.
    return D(a.x + b.x, a.dx + b.dx);
}

D operator*(const D &a, const D &b)
{
    // The usual Leibniz product rule.
    return D(a.x * b.x, a.x * b.dx + a.dx * b.x);
}

// Here's the function skizz said you couldn't write.
float d(D (*f)(D), float x)
{
    return f(D(x, 1.0f)).dx;
}

int main()
{
    cout << f(0) << endl;
    // We can't just take the address of f. We need to say which instance of the
    // template we need. In this case, f<D>.
    cout << d(&f<D>, 0.0f) << endl;
}
It prints the results 6 and 4, as you should expect. Try other functions f. A nice exercise is to work out the rules that allow subtraction, division, trig functions, etc.
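For example, one plausible extension along those lines (my addition, assuming the struct D from the program above) is the chain rule for sin, which can be dropped in alongside the other operators:

#include <cmath>

// Chain rule: (sin u)' = cos(u) * u', tracked alongside the value.
D sin(const D &a)
{
    return D(std::sin(a.x), std::cos(a.x) * a.dx);
}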
2) Derivatives and integrals are usually not computed on large data sets in real time; it's too expensive. Instead they are precomputed. For example (off the top of my head), to render single-scattering media, Bo Sun et al. use their "airlight model", which consists of a lot of algebraic shortcuts that yield a precomputed lookup table.
3) Streaming large data sets is a big topic, especially in terrain.
A lot of the maths you will encounter in games exists to solve very specific problems and is usually kept simple. Linear algebra is used far more than any calculus. In graphics (the part I like most), a lot of the algorithms come from research done in academia and are then modified for speed by game programmers, although even academic research tends to make speed a goal these days.
I recommend the two books Real-Time Collision Detection and Real-Time Rendering, which contain the guts of most of the maths and concepts used in game engine programming.
I think there's a fundamental problem with your understanding of the C++ language itself. Functions in C++ are not the same as mathematical functions. So, in C++, you could define a function (which I will now call a method, to avoid confusion) to implement a mathematical function:
float f (float x)
{
    return x * x + 4.0f * x + 6.0f; // f(x) = x^2 + 4x + 6
}
In C++, there is no way to do anything with the method f other than to get the value of f(x) for a given x. The mathematical function f(x) can be transformed quite easily, into f'(x) for example, which in the example above is f'(x) = 2x + 4. To do this in C++ you'd need to define a method df(x):
float df (float x)
{
    return 2.0f * x + 4.0f; // f'(x) = 2x + 4
}
you can't do this:
get_derivative (f(x));
and have the method get_derivative transform the method f(x) for you.
Also, you would have to ensure that when you wanted the derivative of f, you called the method df. If you called the method for the derivative of g by accident, your results would be wrong.
We can, however, approximate the derivative of f(x) for a given x:
float d (float (*f) (float x), float x) // pass a pointer to the method f and the value x
{
    const float epsilon = 1e-3f; // some suitably small value
    float dy = f(x + epsilon/2.0f) - f(x - epsilon/2.0f);
    return dy / epsilon; // central difference approximation of f'(x)
}
but this is very unstable and quite inaccurate.
Now, in C++ you can create a class to help here:
class Function
{
public:
    virtual float f (float x) = 0;   // f(x)
    virtual float df (float x) = 0;  // f'(x)
    virtual float ddf (float x) = 0; // f''(x)
    // if you wanted further transformations you'd need to add methods for them
};
and create our specific mathematical function:
class ExampleFunction : public Function
{
public:
    float f (float x) { return x * x + 4.0f * x + 6.0f; } // f(x) = x^2 + 4x + 6
    float df (float x) { return 2.0f * x + 4.0f; }        // f'(x) = 2x + 4
    float ddf (float x) { return 2.0f; }                  // f''(x) = 2
};
and pass an instance of this class to a series expansion routine:
float Series (Function &f, float x)
{
    return f.f (x) + f.df (x) + f.ddf (x); // series = f(x) + f'(x) + f''(x)
}
But we're still having to create the method for the function's derivative ourselves; at least now we're not going to accidentally call the wrong one.
Now, as others have stated, games tend to favour speed, so a lot of the maths is simplified: interpolation, pre-computed tables, etc.
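As a rough illustration of the pre-computed-table idea (my own sketch, not from this answer): build a sine table once, then answer queries with a lookup plus a linear interpolation:

#include <cmath>

const int   N      = 256;
const float TWO_PI = 6.2831853f;
float sinTable[N + 1];

// Fill the table once at start-up.
void initSinTable()
{
    for (int i = 0; i <= N; ++i)
        sinTable[i] = std::sin(TWO_PI * i / N);
}

// Cheap sine for x in [0, 2*pi): table lookup with linear interpolation.
float tableSin(float x)
{
    float t    = x * N / TWO_PI;
    int   i    = (int)t;
    float frac = t - i;
    return sinTable[i] + frac * (sinTable[i + 1] - sinTable[i]);
}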
Most of the maths in games is designed to be as cheap to calculate as possible, trading accuracy for speed. For example, much of the number crunching uses integers or single-precision floats rather than doubles.
Not sure about your specific examples, but if you can define a cheap (to calculate) formula for a derivative beforehand, then that is preferable to calculating things on the fly.
In games, performance is paramount. You won't find anything that's done dynamically when it could be done statically, unless it leads to a notable increase in visual fidelity.
You might be interested in compile-time symbolic differentiation. This can (in principle) be done with C++ templates. I have no idea whether games do this in practice: symbolic differentiation might be too hard to program correctly, and such extensive template use might be too expensive at compile time.
However, I thought you might find the discussion of this topic interesting. Googling "c++ template symbolic derivative" gives a few articles.
There are many great answers here if you are interested in symbolic calculation and computation of derivatives.
However, just as a sanity check: this kind of symbolic (analytical) calculus isn't practical to do in real time in the context of games.
In my experience (which is more 3D geometry in computer vision than games), most of the calculus and maths in 3D geometry comes in by way of computing things offline ahead of time and then writing code to implement that maths. It's very seldom that you'll need to symbolically compute things on the fly and obtain analytical formulae that way.
Can any game programmers verify?
1), 2)
Maclaurin/Taylor series (1) are constructed from derivatives (2) in any case.
Yes, you are unlikely to need to symbolically compute any of these at run time, but for sure user207442's answer is great if you do need it.
What you do find is that you need to perform mathematical calculations and that you need to do them in reasonable time, or sometimes very fast. To do this, even if you reuse others' solutions, you will need to understand basic analysis.
If you do have to solve the problem yourself, the upside is that you often only need an approximate answer. This means that, for example, a series-type expansion may well allow you to reduce a complex function to a simple linear or quadratic approximation, which will be very fast.
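For instance (my illustration, not from the answer itself), truncating the Maclaurin series of sin after two terms reduces it to a cheap cubic that is accurate for small angles:

#include <cmath>
#include <cstdio>

// sin(x) ~ x - x^3/6 for small |x|: a series expansion reduced to a cubic.
float fast_sin(float x)
{
    return x - (x * x * x) / 6.0f;
}

int main()
{
    std::printf("%f vs %f\n", fast_sin(0.3f), std::sin(0.3f));
    // prints roughly 0.295500 vs 0.295520
}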
For integrals, you can often compute the result numerically, but it will always be much slower than an analytic solution. The difference may well be the difference between being practical or not.
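For a sense of what the numerical route looks like, here is a minimal trapezoidal-rule sketch (my own illustration):

// Approximate the integral of f over [a, b] with n trapezoidal slices.
float integrate(float (*f)(float), float a, float b, int n)
{
    float h = (b - a) / n;
    float sum = 0.5f * (f(a) + f(b));
    for (int i = 1; i < n; ++i)
        sum += f(a + i * h);
    return sum * h;
}

Each call costs n evaluations of f, which is why an analytic antiderivative, when you can find one, wins.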
In short: Yes, you need to learn the maths, but in order to write the program rather than have the program do it for you.