I'm reading a book about rendering 3D graphics, and the author sometimes uses epsilon and sometimes doesn't.
Notice the if at the beginning that uses epsilon and the other ifs that don't.
What's the logic behind this? I can see he avoids any chance of division by zero, but when the function doesn't use epsilon there's still a chance it will return a value that makes the outer code divide by zero.
Book is Real-Time Rendering 3rd Edition, by the way.
The first statement, if(|f| > ϵ), is just checking to make sure f is significantly different from 0. It's important to do that in that specific spot in the code because the next two statements divide by f.
The other statements don't need to do that, so they don't need to use ϵ.
For example,
if(t1 > t2) swap(t1, t2);
is a self-contained statement that compares two numbers to each other and swaps them if the wrong one is greater. Since it's not comparing to see if a value is close to 0, there's no need to use ϵ.
If the value that is returned from this block of code can make the calling code divide by zero, that should be handled in the calling code.
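To make the pattern concrete, here is a minimal sketch (not the book's code; the names, the 1e-6 threshold, and the surrounding function are my own) of guarding a division with an epsilon test while leaving ordinary comparisons alone:

#include <algorithm>
#include <cmath>

const float EPSILON = 1e-6f;

// Minimal sketch: only divide by f when it is significantly different
// from zero; plain comparisons between finite values need no epsilon.
bool compute_interval(float f, float num1, float num2, float& t1, float& t2) {
    if (std::fabs(f) > EPSILON) {        // epsilon guard: the next two lines divide by f
        t1 = num1 / f;
        t2 = num2 / f;
        if (t1 > t2) std::swap(t1, t2);  // ordinary comparison, no epsilon needed
        return true;
    }
    return false;                        // f is (nearly) zero: skip the division
}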
I was told to solve this problem:
given real numbers a1, ..., an, calculate min(a1, -a1*a2, a1*a2*a3, ..., (-1)^(n+1) * a1*a2*...*an)
but I cannot understand the logic of the task. Could you tell me what I should do?
For example, what is (-1)^(n+1)? I've never seen it before.
What you should do is:
use the n real numbers of input to ...
... calculate the n numbers defined by the quoted formula (though you only need one value at a time to be more efficient)
while doing so keep track of the smallest number you encounter, that is the final result
concerning the (-1)^(n+1), it is reasonable to assume (as e.g. in the comment by Weather Vane and others) that it means powers of -1 (in a lazy and unexplained but non-C++ syntax)
note that you can easily calculate one value from the previous one by a simple multiplication (see the sketch after this list)
probably you should do all of that by writing a program, an assumption based on the fact that you are asking on StackOverflow and tagged a programming language
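A minimal sketch of that approach (the input values are made up for illustration):

#include <iostream>
#include <vector>

int main() {
    std::vector<double> a = {2.0, -3.0, 0.5, 4.0};  // example input a1..an

    double sign = 1.0;     // (-1)^(k+1): +1, -1, +1, ...
    double product = 1.0;  // running product a1*...*ak
    double best = 0.0;
    bool first = true;

    for (double ak : a) {
        product *= ak;                  // each term reuses the previous product
        double value = sign * product;  // k-th term of the sequence
        if (first || value < best) { best = value; first = false; }
        sign = -sign;
    }
    std::cout << "minimum = " << best << "\n";
}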
DIVIDE handles the division-by-zero error by returning an alternate result. However, if the numerator is blank then it returns blank.
There are 3 ways to solve this:
COALESCE(DIVIDE(n,d),0)
Use IF to check whether ISBLANK(n) is true; if so return 0, otherwise perform the division.
Use +0 - example: DIVIDE(n+0,d)
Which of the above is the most performant and clean approach?
The most performant and clean approach is to leave the blanks. You should be really careful when converting blanks, due to the issues described in the link above: since you force a value to always be returned, a measure used in a table or chart will show values for groupings where it shouldn't.
If you are using the measure in a card, you can get away with using any of the 3, or DIVIDE(n, d, 0) as the difference in performance between them should be negligible.
This documentation page of boost::math::tools::brent_find_minima says about its first argument:
The function to minimise: a function object (or C++ lambda) ... with no maxima occurring in that interval.
But what happens if this is not the case? (After all, this condition is rather difficult to pre-ensure, especially since the function is usually expensive to evaluate at many points.) Best would be to detect violations to this condition on the fly.
If this condition is violated, does boost throw an exception, or does it exhibit undefined behavior?
A workaround I am thinking of is to build the checking into the lambda ("function to minimize"), by capturing and maintaining a std::map<double,double> holding all the points that have been evaluated, and comparing each new evaluation with its nearest neighbor in each direction, to check whether there may be a local maximum. But I don't want to do all that if it isn't necessary.
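For what it's worth, a rough sketch of that workaround (everything here is my own illustration: expensive_f stands in for the real objective, and an exception is used to abort when a suspicious triple of points appears; I don't know how brent_find_minima behaves when the callback throws):

#include <iterator>
#include <map>
#include <stdexcept>

// Stand-in for the real (expensive) objective; purely illustrative.
double expensive_f(double x) { return (x - 1.0) * (x - 1.0); }

int main() {
    std::map<double, double> seen;  // x -> f(x), kept sorted by x

    auto checked_f = [&seen](double x) {
        double fx = expensive_f(x);
        auto it = seen.emplace(x, fx).first;

        // For a function with no interior maximum on the interval, the middle
        // value of any three consecutive evaluated points can never exceed
        // both of its neighbours; if it does, abort the minimisation.
        auto suspicious = [&seen](std::map<double, double>::iterator mid) {
            if (mid == seen.begin()) return false;
            auto right = std::next(mid);
            if (right == seen.end()) return false;
            auto left = std::prev(mid);
            return mid->second > left->second && mid->second > right->second;
        };

        if (suspicious(it) ||
            (it != seen.begin() && suspicious(std::prev(it))) ||
            (std::next(it) != seen.end() && suspicious(std::next(it))))
            throw std::runtime_error("possible interior maximum detected");

        return fx;
    };

    // checked_f could then be handed to brent_find_minima in place of a raw lambda.
    checked_f(0.0);
    checked_f(2.0);
    checked_f(1.0);
}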
There is no way for this to be done. If you read Corless's A Graduate Introduction to Numerical Methods, you'll find a very interesting point: all numerically defined functions are discontinuous halfway between representables, and have zero derivatives between representables. Basically, they can be thought of as a sum of Heaviside functions.
So none of them are differentiable in the mathematical sense. OK, maybe you think this is a bit unfair: the scale should be zoomed out. But how much? We know that |x-1| isn't differentiable at x=1, but how could a computer tell that? How does it know that there isn't some locally smooth mollifier that makes it differentiable between x=1-eps and x=1+eps? I don't think there's a good answer to this question.
One of the most difficult problems in this class arises in quadrature. Some of these methods work fast when the complex extension of the function has poles far from the real axis. Try to numerically determine that.
Function spaces are impossible to determine numerically. Users just have to get it right.
I'm writing a Monte Carlo algorithm, in which at one point I need to divide by a random variable. More precisely: the random variable is used as a step width for a difference quotient, so I actually first multiply something by the variable and then again divide it out of some locally linear function of this expression. Like
double f(double);  // the function whose difference quotient I need

// TR1 Mersenne Twister feeding a normal distribution with mean 0 (default sigma 1)
std::tr1::variate_generator<std::tr1::mt19937, std::tr1::normal_distribution<> >
    r( std::tr1::mt19937(time(NULL)),
       std::tr1::normal_distribution<>(0) );

double h = r();                       // random step width
double a = ( f(x+h) - f(x) ) / h;     // difference quotient; breaks when h == 0
This works fine most of the time, but fails when h=0. Mathematically this is not a concern: in any finite (or, indeed, countable) selection of normally-distributed random variables, all of them will be nonzero with probability 1. But in the digital implementation I will encounter an h==0 roughly every ≈2³² function calls (regardless of the Mersenne Twister having a period longer than the age of the universe, it still outputs ordinary longs!).
It's pretty simple to avoid this trouble, at the moment I'm doing
double h = r();
while (h==0) h=r();
but I don't consider this particularly elegant. Is there any better way?
The function I'm evaluating is actually not just a simple ℝ → ℝ like f, but an ℝᵐ×ℝⁿ → ℝ in which I calculate the gradient in the ℝᵐ variables while numerically integrating over the ℝⁿ variables. The whole function is superimposed with unpredictable (but "coherent") noise, sometimes with specific (but unknown) outstanding frequencies; that's what gets me into trouble when I try it with fixed values for h.
Your way seems elegant enough; maybe a little different:
do {
    h = r();
} while (h == 0.0);
The ratio of two normally-distributed random variables is the Cauchy distribution. The Cauchy distribution is one of those nasty distributions with an infinite variance. Very nasty indeed. A Cauchy distribution will make a mess of your Monte Carlo experiment.
In many cases where the ratio of two random variables is computed, the denominator is not normal. People often use a normal distribution to approximate this non-normally distributed random variable because
normal distributions are usually so easy to work with,
usually have such nice mathematical properties,
the normal assumption appears to be more or less correct, and
the real distribution is a bear.
Suppose you are dividing by distance. Distance is non-negative by definition, and as a random variable it is often strictly positive. So right off the bat distance can never be normally distributed. Nonetheless, people often assume a normal distribution for distance in cases where the mean is much, much larger than the standard deviation. When this normal assumption is made you need to protect against those non-physical values. One simple solution is a truncated normal.
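For illustration, a minimal sketch of the truncated-normal idea using plain rejection sampling (the function name and the use of the C++11 <random> types, rather than the question's TR1 generator, are my own):

#include <random>

// Rejection-sample a normal(mean, sigma) truncated to x >= lower.
// Cheap when the truncation point is far below the mean (few rejections).
double truncated_normal(std::mt19937& gen, double mean, double sigma, double lower) {
    std::normal_distribution<double> dist(mean, sigma);
    double x;
    do {
        x = dist(gen);
    } while (x < lower);
    return x;
}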
If you want to preserve the normal distribution you have to either exclude 0 or map 0 to a new, previously non-occurring value. Since the second is generally not possible with the finite ranges available in computing, the first is our only option.
A function (f(x+h)-f(x))/h has a limit as h->0, and therefore if you encounter h==0 you should use that limit. The limit would be f'(x), so if you know the derivative you can use it.
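A tiny sketch of that fallback, where df is a hypothetical analytic derivative rather than anything from the question:

double f(double);   // as declared in the question
double df(double);  // hypothetical analytic derivative of f

// Use the limit f'(x) of the difference quotient when the step is exactly zero.
double slope(double x, double h) {
    if (h == 0.0)
        return df(x);
    return (f(x + h) - f(x)) / h;
}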
If what you are actually doing is creating a number of discrete points that approximate a normal distribution, and this is good enough for your purposes, create them in a way that none of them actually has the value 0.
Depending on what you're trying to compute, perhaps something like this would work:
double h = r();
double a;
if (h != 0)
    a = ( f(x+h) - f(x) ) / h;
else
    a = 0;
If f is a linear function, this should (I think?) remain continuous at h = 0.
You might also want to instead consider trapping division-by-zero exceptions to avoid the cost of the branch. Note that this may or may not have a detrimental effect on performance - benchmark both ways!
On Linux, you will need to build the file that contains your potential division by zero with -fnon-call-exceptions, and install a SIGFPE handler:
#include <signal.h>

struct fp_exception { };

void sigfpe(int) {
    signal(SIGFPE, sigfpe);   // re-install the handler
    throw fp_exception();
}

void setup() {
    signal(SIGFPE, sigfpe);
}
// Later...
try {
    run_one_monte_carlo_trial();
} catch (fp_exception &) {
    // skip this trial
}
On Windows, use SEH:
__try
{
    run_one_monte_carlo_trial();
}
__except(GetExceptionCode() == EXCEPTION_INT_DIVIDE_BY_ZERO ?
         EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH)
{
    // skip this trial
}
This has the advantage of potentially having less effect on the fast path: there is no branch, although there may be some adjustment of exception-handler records. On Linux, there may be a small performance hit because the compiler generates more conservative code for -fnon-call-exceptions. This is less likely to be a problem if the code compiled under -fnon-call-exceptions does not allocate any automatic (stack) C++ objects. It's also worth noting that this makes the case in which division by zero does happen VERY expensive.
#include <vector>

std::vector<long int> as;

long int a(size_t n) {
    if (n == 1) return 1;
    if (n == 2) return -2;
    if (as.size() < n + 1)
        as.resize(n + 1);
    if (as[n] <= 0)
    {
        as[n] = -4 * a(n - 1) - 4 * a(n - 2);
    }
    return mod(as[n], 65535);
}
The above code sample uses memoization to calculate a recursive formula based on some input n. I know that this uses memoization, because I have written a purely recursive function that uses the same formula, but this one is much, much faster for much larger values of n. I've never used vectors before, but I've done some research and I understand the concept of them. I understand that memoization is supposed to store each calculated value, so that instead of performing the same calculations over again, it can simply retrieve ones that have already been calculated.
My question is: how is this memoization, and how does it work? I can't seem to see in the code at which point it checks to see if a value for n already exists. Also, I don't understand the purpose of the if(as[n]<=0). This formula can yield positive and negative values, so I'm not sure what this check is looking for.
Thank you, I think I'm close to understanding how this works; it's actually a bit simpler than I thought it was.
I do not think the values in the sequence can ever be 0, so this should work for me, as I think n has to start at 1.
However, if zero was a viable number in my sequence, what is another way I could solve it? For example, what if five could never appear? Would I just need to fill my vector with fives?
Edit: Wow, I got a lot of other responses while checking code and typing this one. Thanks for the help everyone, I think I understand it now.
if (as[n] <= 0) is the check. If valid values can be negative like you say, then you need a different sentinel to check against. Can valid values ever be zero? If not, then just make the test if (as[n] == 0). This makes your code easier to write, because by default vectors of ints are filled with zeroes.
The code appears to be incorrectly checking if (as[n] <= 0), and recalculates the negative values of the function (which appear to be roughly every other value). This still makes the work scale linearly with n instead of 2^n as with the purely recursive solution, so it runs a lot faster.
Still, a better check would be to test if (as[n] == 0), which appears to run 3x faster on my system. Even if the function can return 0, a 0 value just means it will take slightly longer to compute (although if 0 is a frequent return value, you might want to consider a separate vector that flags whether each value has been computed, instead of using a single vector to store both the function's value and whether it has been computed).
If the formula can yield both positive and negative values then this function has a serious bug. The check if(as[n]<=0) is supposed to check whether this value has already been computed and cached, but if the formula can be negative this function recalculates the cached value a lot...
What it really probably wanted was a vector<pair<bool, unsigned> >, where the bool says whether the value has been calculated yet.
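A minimal sketch of that suggestion, keeping the shape of the original snippet (mod() is assumed to be defined elsewhere, as in the question) and using long int for the cached value so negative results still fit:

#include <vector>
#include <utility>

long int mod(long int, long int);          // as in the original snippet (defined elsewhere)

// first = "already computed", second = cached value.
std::vector<std::pair<bool, long int> > as;

long int a(size_t n) {
    if (n == 1) return 1;
    if (n == 2) return -2;
    if (as.size() < n + 1)
        as.resize(n + 1);                   // new entries default to {false, 0}
    if (!as[n].first)                       // recompute only when not cached yet
        as[n] = std::make_pair(true, -4 * a(n - 1) - 4 * a(n - 2));
    return mod(as[n].second, 65535);
}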
The code, as posted, only memoizes about 40% of the time (precisely when the remembered value is positive). As Chris Jester-Young pointed out, a correct implementation would instead check if(as[n]==0). Alternatively, one can change the memoization code itself to read as[n]=mod(-4*a(n-1)-4*a(n-2),65535);
(Even the ==0 check would spend effort when the memoized value was 0. Luckily, in your case, this never happens!)
There's a bug in this code: it will keep recalculating as[n] whenever as[n] <= 0, so it only memoizes the values of a that turn out to be positive. It still works a lot faster than the code without memoization because there are enough positive values of as[] that the recursion terminates quickly. You could improve this by using a value greater than 65535 as a sentinel; note that new elements of the vector are initialized to zero when the vector expands.