I just need a random uint, ideally ranging from 0 to 6, but there is no enumeration type in OpenGL. I learned that I can get a random float ranging from 0 to 1 from the code below:
frac(sin(dot(uv, float2(12.9898, 78.233))) * 43758.5453123)
I tried computing 1/the value above and taking floor(), but it doesn't work. So how can I get a random int? Or is there a way to get the last digit of the float (which would presumably still be random)?
First, let's define what we mean by "random". In the context of this answer, a "random" variable is a variable whose values are unpredictable. That is, there is no function that determines/computes an outcome for the random variable when being evaluated (with any possible inputs). Or at least, no such function has been found (yet).
Obviously, when we are talking about computing here, there is no such thing as a true random variable as described above, because anything we do in computing (and by extension in a shader) is necessarily bound to the set of functions that are computable.
Your proposed function in the question:
f(uv) = frac(sin(dot(uv, float2(12.9898, 78.233))) * 43758.5453123)
is just a computable function. It takes as input a vector uv, which itself is a deterministic/computable value - such as derived from a built-in or custom varying variable giving you the "coordinates" of the current fragment.
After evaluation, the function's result is itself computable/deterministic: it is simply the value that the input vector uv maps to. Setting aside differences in IEEE 754 rules and precision (which may vary between GPUs, such as desktop versus mobile ones), the function itself is purely deterministic/computable and therefore does not give you a random value.
We humans may think that the output is random, because we lack intuition for the functions used to compute the result, such that when we "see" a number 0.623513632 followed by another number 0.9734126 for only slight variations in the input vector, we might conclude that "yeah, that looks pretty random", when in fact it obviously isn't. It is just what that function computes, given two input values.
So, when you already have a deterministic function like the above and want to obtain values in the closed range [0, 6] from it as a GLSL uint, you can simply scale the function's output by multiplying it by 7.0 and truncating the result:
g(uv) = uint(f(uv) * 7.0)
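Purely for illustration, here is the same idea written as standalone C++ (a CPU-side mirror of the shader hash with hypothetical names; in GLSL/HLSL you would keep the built-in fract()/frac() and dot()):

#include <cmath>
#include <cstdint>

// Hypothetical CPU-side mirror of the shader hash, for illustration only.
float hash01(float u, float v)
{
    float s = std::sin(u * 12.9898f + v * 78.233f) * 43758.5453123f;
    return s - std::floor(s);                      // fractional part, in [0, 1)
}

// Scale the [0, 1) value to an integer in 0..6 by multiplying by 7 and truncating.
uint32_t hash0to6(float u, float v)
{
    return static_cast<uint32_t>(hash01(u, v) * 7.0f);
}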
If you wanted to obtain truly random numbers drawn from a random variable (whose deterministic function simply hasn't been found yet), you can obtain such values from a physical entropy source such as atmospheric noise (which is what random.org uses) and feed them into your shader (for example via textures or buffer objects).
But, from a computational perspective, a shader is just a function taking in values (ints, floats, ...) and computing (by means of computable functions) a deterministic result.
All we can do is shuffle/scramble/diffuse the input bits in such a way that the result "looks" random to us. We then call these "pseudo-random" values.
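To illustrate what such bit scrambling looks like, here is one widely circulated integer hash (the Wang hash), written here in C++ as a sketch of the idea rather than a recommendation for any particular use case:

#include <cstdint>

// Wang hash: a short sequence of xors, shifts and multiplications that
// diffuses the input bits so the output "looks" random, yet is fully deterministic.
uint32_t wang_hash(uint32_t seed)
{
    seed = (seed ^ 61u) ^ (seed >> 16);
    seed *= 9u;
    seed = seed ^ (seed >> 4);
    seed *= 0x27d4eb2du;
    seed = seed ^ (seed >> 15);
    return seed;
}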
Taking this a step further, we could now ask about the distribution quality of the obtained pseudo-random values. There are two aspects to this:
how evenly distributed are the pseudo-random values over their domain/interval? I.e. do all possible values have the same probability of occurring? Or: do you even want uniformly distributed values, or should the values follow another distribution (like a Gaussian)?
how well are two values drawn from two sequential input values spaced apart? I.e. what is the frequency of the pseudo-random values?
There are different (deterministic) algorithms/functions depending on which distribution and which frequency spectrum your values should have. But first, you should define an answer to the two questions for your use-case.
And by the way, the commonly used function in your question to obtain pseudo-random numbers in a shader has a terrible distribution quality.
Last but not least, it should also be mentioned that true randomness (i.e. non-determinism), such as when you use an entropy source for the input values, is oftentimes an undesirable property in computation, because it:
makes it difficult to repeat the same computation / output when needed, which is useful in various algorithms in the context of path tracing
makes it difficult to reproduce/debug/inspect your function for a particular run when every following execution/run will yield a different output
I'm writing a C++ program that takes the FFT of a real input signal containing double values and returns a vector X containing std::complex<double> values. Once I have the vector of results I then attempt to calculate the magnitude and phase of the result.
I am running into an issue with calculating the phase angle when one of the outputs is "zero". Zero is in quotes because when a calculation that results in 0 returns a double, the returned value will be very near zero, but not quite exactly zero.
For example, at index 3 my output array has the calculated "zero" value:
X[3] = 3.0531133177191805e-16 - i*5.5511151231257827e-17
I am trying to use the standard library std::arg function that is supposed to return the phase angle of a complex number. std::arg(X[3])
While X[3] is essentially 0, it is not EXACTLY 0, and the way the phase is calculated this causes a problem, because the calculation uses the ratio of the imaginary part to the real part, which here is far from 0!
Doing the actual calculation results in a far from desirable result.
How can I make C++ realize that the result is really 0 so I can get the correct phase angle?
I'm looking for a more elegant solution than using an arbitrary hard-coded "epsilon" value to compare the double to, but so far searching online I haven't had any luck coming up with something better.
If you are computing the floating-point FFT of an input signal, then that signal will include noise and thus have a finite signal-to-noise ratio: sensor noise, thermal noise, quantization noise, timing jitter noise, etc.
Thus the threshold for discarding FFT results as below your noise floor most likely isn't a matter of computational mathematics, but part of your physical or electronic data acquisition analysis. You will have to plug that number in, and set the phase to 0.0 or NaN or whatever your default flagging value is for a non-useful (at or below the noise floor) FFT result.
It was brought to my attention that my original answer will not work when the input to the FFT has been scaled. I believe I have an actual valid solution now... The original answer is kept below so that the comments still make sense.
From the comments on this answer and others, I've gathered that calculating the exact rounding error in the language may technically be possible but it is definitely not practical. The best practical solution seems to be to allow the user to provide their own noise threshold (in dB) and ignore any data points whose power level falls below that threshold. It would be impossible to come up with a generic threshold for all situations, but the user can provide a reasonable threshold based on the signal-to-noise ratio of the signal being analyzed and pass that in.
A generic phase calculation function is shown below that calculates the phase angles for a vector of complex data points.
#include <algorithm>
#include <cmath>
#include <complex>
#include <vector>

std::vector<double> Phase(const std::vector<std::complex<double>>& X, double threshold, double amplitude)
{
    size_t N = X.size();
    std::vector<double> X_phase(N);
    std::transform(X.begin(), X.end(), X_phase.begin(), [threshold, amplitude](const std::complex<double>& value) {
        // Power of this bin in dB relative to the maximum possible amplitude.
        double level = 10.0 * std::log10(std::norm(value) / std::pow(amplitude, 2.0));
        // Keep the phase only if the bin is above the noise threshold.
        return level > threshold ? std::arg(value) : 0.0;
    });
    return X_phase;
}
This function takes 3 arguments:
The vector of complex signal data you want to calculate the phase of.
A sensible threshold -- Can be calculated from the signal-to-noise ratio of whatever measurement device was used to capture the signal. If your signal contains no noise other than the rounding errors of the language itself you can set this to some arbitrary really low value, like -120dB.
The maximum possible amplitude of your input signal. If your signal is calculated, this should simply be set to the amplitude of your signal. If your signal is measured, this should be set to the maximum amplitude the measuring device is capable of measuring (If your signal comes from reading an audio file, often its data will be normalized between -1.0 and 1.0. In this case you would just set the amplitude value to 1.0).
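For completeness, a hypothetical usage example; ComputeFFT and signal are placeholders for whatever FFT routine and input you already have. It matches the audio case above where samples are normalized to [-1, 1]:

// Samples normalized to [-1, 1], so amplitude = 1.0; -120 dB as a very low noise floor.
std::vector<std::complex<double>> X = ComputeFFT(signal);   // placeholder for your FFT routine
std::vector<double> X_phase = Phase(X, -120.0, 1.0);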
This new implementation still provides me with the correct results, but is much more robust. By leaving the threshold calculation to the user they can set the most sensible value themselves based on the characteristics of the measurement device used to measure their input signal.
Please let me know if you notice any errors or any ways I can further improve the design!
Original Answer
I found a solution that seems generic enough.
In the <limits> header, there is a constant value std::numeric_limits<double>::digits10.
According to the documentation:
The value of std::numeric_limits<T>::digits10 is the number of base-10 digits that can be represented by the type T without change, that is, any number with this many significant decimal digits can be converted to a value of type T and back to decimal form, without change due to rounding or overflow.
Using this I can filter out any output values that have a magnitude lower than this limit:
Calculate the phase of X[3]:
// Requires <complex>, <cmath>, and <limits>.
size_t N = X.size();
std::complex<double> tmp = std::abs(X[3]) / N > std::pow(10, -std::numeric_limits<double>::digits10)
    ? X[3]
    : std::complex<double>(0.0, 0.0);
double phase = std::arg(tmp);
This effectively filters out any values that are not precisely zero due to rounding errors within the C++ language itself. It will NOT however filter out garbage data caused by noise in the input signal.
After adding this to my phase calculation I get the expected results.
The map from complex numbers to magnitude and phase is discontinuous at 0.
This is a discontinuity caused by the choice of coordinates you are using.
The solution will depend on why you chose those coordinates in a situation where values near the discontinuity are possible.
It isn't "really" zero. If you factored in error bars properly, your answer would really be a small magnitude (hopefully) and an unconstrained angle.
I am using boost::math::pdf to calculate a probability from a normal distribution. I give it a variable which corresponds to the distance from the mean, and boost::math::pdf gives me a probability in return.
It works, but I really don't get how, because with a continuous distribution (and a normal distribution is a continuous distribution) you need to integrate between two values to get a probability.
If the distribution were discrete, then a point really would correspond to a probability, but from everything I've read I got the impression that I am dealing with a continuous distribution.
I would really appreciate it if anyone could shed light on the topic. How do you get the probability of just one value with boost::math::pdf?
PS: Since computers work in a discrete way, I thought maybe the normal distribution I am using is discrete after all, but that doesn't make sense to be honest.
PDF stands for Probability Density Function, which is just a specialized function whose area under the curve from -infinity to +infinity equals 1 (for continuous probability distributions).
You are giving it the X value, and it is returning the corresponding Y value. Your interpretation of that value is not correct - it is NOT the probability of the result equaling EXACTLY that X value (you are correct in that this probability is zero for a continuous distribution).
I recommend you read up about PDF (see the link above) so you understand the library function.
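To make the distinction concrete, here is a small Boost.Math sketch (a standard normal is assumed): pdf() returns a density value, while an actual probability comes from integrating the density, i.e. from differences of the CDF:

#include <boost/math/distributions/normal.hpp>
#include <iostream>

int main()
{
    boost::math::normal_distribution<> dist(0.0, 1.0);   // mean 0, standard deviation 1

    // Density at x = 0: a point on the curve, NOT a probability.
    double density = boost::math::pdf(dist, 0.0);        // ~0.3989

    // Probability that a sample falls in [-1, 1]: area under the curve.
    double p = boost::math::cdf(dist, 1.0) - boost::math::cdf(dist, -1.0);   // ~0.6827

    std::cout << "density at 0: " << density << ", P(-1 <= X <= 1): " << p << "\n";
}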
I am implementing a conventional (that means not fast), separated Fourier transform for images. I know that in floating point a sum over one period of sin or cos in equally spaced samples is not perfectly zero, and that this is more a problem with the conventional transform than with the fast.
The algorithm works with 2D double arrays and is correct. The inverse is handled inside the same routine (via a sign flag of type double and a conditional check when using the asymmetric formula), not outside via conjugation. The results are nearly 100% as expected, so it's a question about details:
When I perform a forward transform, save logarithmed magnitude and angle to images, reload them, and do an inverse transform, I experience different types of rounding errors with different types of implemented formulas:
F(u,v) = Sum(x=0->M-1) Sum(y=0->N-1) f(x,y) * e^(-i*2*pi*u*x/M) * e^(-i*2*pi*v*y/N)
f(x,y) = 1/(M*N) * (like above)
F(u,v) = 1/sqrt(M*N) * (like above)
f(x,y) = 1/sqrt(M*N) * (like above)
So the first one is the asymmetric transform pair, the second one the symmetric. With the asymmetric pair, the rounding errors show up more in the bright spots of the image (some pixels are rounded slightly outside the value range, e.g. to 256). With the symmetric pair, the errors show up more in the constant mid-range areas of the image (no exceeding of the value range!). In total, it seems that the symmetric pair produces a bit more rounding errors.
It also depends on the input: when the image is stored in [0,255] the rounding errors differ from when it is stored in [0,1].
So my question: how should an optimal, most accurate algorithm be implemented (theoretically, no code): asymmetric or symmetric pair? Input value range in [0,255] or [0,1]? And how should the result be linearly scaled before saving the log-magnitude to a file?
Edit:
my algorithm simply computes the separated asymmetric or symmetric DFT formula. The factors are decomposed into real and imaginary parts using Euler's identity, then expanded and summed up separately as real and imaginary parts:
sum_re += f_re * cos(-mode*pi*((2.0*v*y)/N)) - // mode = 1 for forward, -1
f_im * sin(-mode*pi*((2.0*v*y)/N)); // for inverse transform
// sum_im permutated in the known way and + instead of -
This grouping of the values inside cos and sin should, in my view, give the lowest rounding error (compared to e.g. cos(-mode*2*pi*v*y/N)), because the inexactly rounded transcendental pi is multiplied/divided only once instead of several times. Is that right?
The scale factor 1/(M*N) or 1/sqrt(M*N) is applied separately after each separation, outside of the innermost sum. Would it be better inside? Or applied once at the end of both separations?
For some deeper analysis, I have dropped the input->transform->save-to-file->read-from-file->transform^-1->output workflow and chosen to compare directly in double precision: input->transform->transform^-1->output.
Here are the results for a real-life 704x528 8-bit image (delta = maximum absolute difference between the real part of the input and the output):
with input inside [0,1] and asymmetric formula: delta = 2.6609e-13 (corresponds to 6.785295e-11 for [0,255] range).
with input inside [0,1] and symmetric formula: delta = 2.65232e-13 (corresponds to 6.763416e-11 for [0,255] range).
with input inside [0,255] and asymmetric formula: delta = 6.74731e-11.
with input inside [0,255] and symmetric formula: delta = 6.7871e-11.
These are not really significant differences; however, the full-range input with the asymmetric transform performs best. I think the values may get worse with 16-bit input.
But in general I see that the issues I experienced are caused more by the scaling-before-saving-to-file (or the inverse) rounding errors than by actual transformation rounding errors.
However, I am curious: which implementation of the Fourier transform is most commonly used, the symmetric or the asymmetric one? Which value range is generally used for the input: [0,1] or [0,255]? And for the usual spectra shown in log scale: is e.g. the range [0, M*N] after an asymmetric transform of a [0,1] input log-scaled directly to [0,255], or first linearly scaled to [0, 255*M*N]?
The errors you report are tiny, normal, and generally can be ignored. Simply scale your results and clamp any results outside the target interval to the endpoints.
In library implementations of FFTs (that is, FFT routines written to be used generally by diverse applications, not custom designed for a single application), little regard is given to scaling; the routine often simply returns data that has been naturally scaled by the arithmetic, with no additional multiplication operations used to adjust the scale. This is because the scale is often either irrelevant for the application (e.g., finding the frequencies with the largest energies works no matter what the scale is) or that the scale may be distributed through multiply operations and performed just once (e.g., instead of scaling in a forward transform and in an inverse transform, the application can get the same effect by explicitly scaling just once). So, since scaling is often not needed, there is no point in including it in a library routine.
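To illustrate the "scale just once" point, here is a minimal 1-D sketch (not your 2-D code): both directions are computed without any normalization, and the single combined factor 1/N is applied once after the round trip:

#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

using cd = std::complex<double>;

// Unnormalized 1-D DFT; sign = -1 for the forward transform, +1 for the inverse.
std::vector<cd> dft(const std::vector<cd>& f, int sign)
{
    const double pi = std::acos(-1.0);
    const std::size_t N = f.size();
    std::vector<cd> F(N);
    for (std::size_t u = 0; u < N; ++u) {
        cd sum(0.0, 0.0);
        for (std::size_t x = 0; x < N; ++x) {
            double angle = sign * 2.0 * pi * double(u) * double(x) / double(N);
            sum += f[x] * cd(std::cos(angle), std::sin(angle));
        }
        F[u] = sum;                                   // no 1/N or 1/sqrt(N) here
    }
    return F;
}

// Round trip with the scale applied exactly once:
//   auto F  = dft(f, -1);
//   auto f2 = dft(F, +1);
//   for (auto& v : f2) v /= double(f2.size());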
The target interval that data are scaled to depends on the application.
Regarding the question on what transform to use (logarithmic or linear) for showing spectra, I cannot advise; I do not work with visualizing spectra.
Scaling causes roundoff errors. Hence, the asymmetric pair (which scales once) is better than the symmetric one (which scales twice). Similarly, scaling once after summation is better than scaling everything before summation.
Do you run y from 0 to 2*N or from -N to +N ? Mathematically it's the same, but you have an extra bit of precision in the latter case.
BTW, what's mode doing in cos(-mode * stuff) ?
I am trying to implement a root-finding algorithm. I am using the hybrid Newton-Raphson algorithm found in Numerical Recipes, which works pretty nicely. But I have a problem with bracketing the root.
While implementing the root-finding algorithm I realised that in several cases my functions have 1 real root and all the others imaginary (several of them, usually 6 or 9). The only root I am interested in is the real one, so the problem is not there. The thing is that the function approaches the root like a cubic function, just touching the y=0 axis at a single point...
The Newton-Raphson method needs a bracket where the function values have opposite signs, and all the bracketing methods I found don't work for this specific case.
What can I do? It is pretty important to find that root in my program...
EDIT: more problems: sometimes, due to really small numerical errors, say a variation of 1e-6 in some value, the "cubic" function does NOT have that real root; it is just imaginary with a negligible imaginary part... (checked with MATLAB)
EDIT 2: Much more information about the problem.
OK, I need a root-finding algorithm.
Info I have:
The root I need to find is in [0, 1]; if there are more roots outside that interval I am not interested in them.
The root is real, there may be imaginary roots, but I don't want them.
Probably all the rest of the roots will be imaginary
The root may be a double root at that point, but I think that doesn't actually matter in numerical analysis problems
I need to use the root finding algorithm several times during the overall calculations, but the function will always be a polynomial
In one particular case of the root finding, my polynomial will be similar to a quadratic function that touches Y=0 at a single point. Example of a real case:
The coefficients may not be 100% precise, and that really slight imprecision may cause the function not to touch the Y=0 axis.
I cannot special-case this, because in other cases the polynomial may be perfectly normal and not do anything "strange".
The method I am actually using is a hybrid Newton-Raphson, which falls back to bisection when the derivative is really small (found in Numerical Recipes).
MATLAB's answer for the function in the image:
roots:
0.853553390593276 + 0.353553390593278i
0.853553390593276 - 0.353553390593278i
0.146446609406726 + 0.353553390593273i
0.146446609406726 - 0.353553390593273i
0.499999999999996 + 0.000000040142134i
0.499999999999996 - 0.000000040142134i
The function is a real example I prepared where I know that the answer I want is 0.5
Note:
I still haven't completely checked some of the answers you people have given me (thank you!); I am just trying to give all the information I already have to complete the question.
Assuming you have a one-dimensional polynomial problem (which I assume from the imaginary solutions) you can use Sturm sequences to bracket all real roots. See Sturm's theorem.
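A rough C++ sketch of that idea (coefficients stored highest degree first; it assumes the polynomial is square-free, so a repeated root like your 0.5 example would first need to be removed by dividing by the GCD with the derivative):

#include <cstddef>
#include <vector>

using Poly = std::vector<double>;                 // coefficients, highest degree first

double eval(const Poly& p, double x)              // Horner evaluation
{
    double r = 0.0;
    for (double c : p) r = r * x + c;
    return r;
}

Poly derivative(const Poly& p)
{
    std::size_t n = p.size();
    if (n < 2) return Poly{0.0};
    Poly d(n - 1);
    for (std::size_t i = 0; i + 1 < n; ++i)
        d[i] = p[i] * double(n - 1 - i);          // coefficient of x^(n-1-i) times its exponent
    return d;
}

Poly negRemainder(Poly a, const Poly& b)          // -(a mod b), as used in the Sturm chain
{
    while (a.size() >= b.size()) {
        double factor = a[0] / b[0];              // robust code would strip near-zero leading terms
        for (std::size_t i = 0; i < b.size(); ++i)
            a[i] -= factor * b[i];
        a.erase(a.begin());                       // leading term has been cancelled
    }
    for (double& c : a) c = -c;
    return a;
}

int signChanges(const std::vector<Poly>& chain, double x)
{
    int changes = 0;
    double prev = 0.0;
    for (const Poly& p : chain) {
        double v = eval(p, x);
        if (v == 0.0) continue;                   // zero values are skipped when counting
        if (prev != 0.0 && (v > 0.0) != (prev > 0.0)) ++changes;
        prev = v;
    }
    return changes;
}

// Number of distinct real roots of p in (a, b]; assumes p(a) != 0, p(b) != 0 and p square-free.
int countRealRoots(const Poly& p, double a, double b)
{
    std::vector<Poly> chain{p, derivative(p)};
    while (chain.back().size() > 1)
        chain.push_back(negRemainder(chain[chain.size() - 2], chain.back()));
    return signChanges(chain, a) - signChanges(chain, b);
}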
Welcome to the wonderful world of numerical methods. Watch your hairline; it might start receding as you pull your hair out in frustration.
First off, with numerical root finding, you are toast if you can't bracket the problem. Newton Raphson is nice for polishing off a solution once you get close, and it only works if the derivative near the root is well away from zero. You always need to have some slower technique at hand as a backup because Newton Raphson can send you off to never-never land (i.e., somewhere well outside the bracket). If your function is not a polynomial, the first thing to try is Brent's method. If your function is a polynomial, try Laguerre's method or Jenkins-Traub.
BTW, it sounds like you have a pathological problem. You shouldn't expect particularly good performance. Pathological problems are, well, pathological.
Addendum
If you are having problems with things that appear to be roots, but aren't, you need to take care in how you evaluate your function. If you do have a polynomial, form each term of the polynomial, sort by absolute value, and add smallest to largest. This produces better accuracy most of the time, but fails if you have large terms whose sum is nearly zero. If that's the case, you might want to add those canceling terms separately, add the rest smallest to largest, and then compute a grand total -- and you're still kinda screwed. That big addition that nearly cancels loses a lot of precision. There's no escape other than extended precision arithmetic.
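A small C++ sketch of that summation order (a hypothetical helper; coefficients given lowest degree first):

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Evaluate a polynomial by forming every term a_k * x^k, then adding the
// terms from smallest magnitude to largest, as suggested above.
double evalSortedByMagnitude(const std::vector<double>& coeffs, double x)
{
    std::vector<double> terms(coeffs.size());
    for (std::size_t k = 0; k < coeffs.size(); ++k)
        terms[k] = coeffs[k] * std::pow(x, double(k));   // coeffs[k] multiplies x^k
    std::sort(terms.begin(), terms.end(),
              [](double a, double b) { return std::fabs(a) < std::fabs(b); });
    double sum = 0.0;
    for (double t : terms) sum += t;
    return sum;
}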
Ander, thanks for responding to my question (about the interval); sorry for the delay in following up - I have been very busy with work. Also - before I found the additional information you've provided - I had in mind to explain quite a few things about how to handle this and was contemplating how to present them. However, I now believe your case is not too difficult and we can get at it without too much additional machinery, since you apparently have an explicit polynomial expression (coefficients for the various powers).
Let's start with a simple case, to pinpoint the approach.
Step 1.
If you have a 2nd degree polynomial, its derivative is first order and has a simple zero (which you can find by bracketing or simply by explicitly solving the equation). (Yes, I know there's a closed formula for the roots of a 2nd degree polynomial also, but for the sake of the current argument, let us forget that).
The zeros of the 2nd degree polynomial are then located one on the left side and one on the right side of the zero of the derivative. So, if you also have the interval where the roots of the original function (the 2nd degree polynomial) are to be found, you now have two intervals - left and right of the derivative zero - each with one zero.
It is important to realize that the original function is MONOTONIC on each subinterval (decreasing on one of them, increasing on the other). Therefore, simply by checking the function values at the ends of the (sub)interval you can determine whether or not they actually bracket a zero. If not, there's a multiple zero (double, in this case) exactly at the zero of the derivative IF the function is zero there (otherwise, it is a double imaginary root of which you've now found the real part).
In case the zero of the derivative lies OUTSIDE the total interval, you will have at most one root inside your interval and you need to check only that particular (sub)interval.
Step 2.
Consider now a 3rd order polynomial.
Its derivative is 2nd order.
The derivative of THAT 2nd order polynomial is again 1st order and you proceed as before to get two subintervals to find the roots of the derivative of the original function. These two roots give you THREE (at most) intervals where you will find the 3 roots of the original (3rd order) function.
And also here, you will have intervals (3) where the original function is monotonic (alternatingly increasing/decreasing), making the analysis per subinterval quite easy.
Again, zeros may coincide (2 or even all 3) and may in addition turn out to be complex-valued (i.e. having non-zero imaginary parts). The analysis of the cases is straightforward: check function values at the borders of the intervals to assess whether not there's a sign-change (function is monotonic on each subinterval) and/or whether the function is zero at one of the subinterval-borders.
Step 3.
Generalize this with the known polynomial. Let's say - your example - it is 6th order:
a) construct the 5th derivative (i.e. reducing the original to a 1st order polynomial). Compute its zero (it is at precisely 0.5 in your example). In this case you're already done, but suppose you don't realize that. So you now have 2 intervals: 0..0.5 and 0.5..1
b) construct the 4th derivative. Inspect its values at the subinterval-boundaries (0, 0.5, 1)
For each subinterval determine if it has a real zero inside. If so, you re-partition your original interval into 3 subintervals, using the two zeros found (you forget about the zero of the 5th derivative). If they coincide (at the previous cut, 0.5) you stick with that 0.5 (it doesn't matter whether you've found a true double zero of your 4th derivative there or a "double imaginary") and still have only 2 intervals, but for the sake of the argument let's say you now have 3.
c) construct the 3rd derivative and do likewise as before. You will then have 4 (at most) intervals.
d) And so on. After having processed the 2nd derivative in this fashion you have 5 (at most) intervals, and after processing the 1st derivative you have 6 intervals (or fewer...). Knowing the function is monotonic on each subinterval, you'll quickly determine in each of them whether there's a real root, as always using the known monotonicity of the function in each of the final subintervals.
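For what it's worth, here is a much simplified, self-contained C++ sketch of this recursive scheme. It only handles simple sign-change roots, and its exact-zero checks would in practice need the tolerance discussed below:

#include <cstddef>
#include <vector>

using Poly = std::vector<double>;                 // coefficients, highest degree first

double evalPoly(const Poly& p, double x)          // Horner evaluation
{
    double r = 0.0;
    for (double c : p) r = r * x + c;
    return r;
}

Poly derivPoly(const Poly& p)
{
    std::size_t n = p.size();
    if (n < 2) return Poly{0.0};
    Poly d(n - 1);
    for (std::size_t i = 0; i + 1 < n; ++i)
        d[i] = p[i] * double(n - 1 - i);
    return d;
}

// Real roots of p in [a, b]: recurse on the derivative to get breakpoints
// between which p is monotonic, then bisect each piece with a sign change.
std::vector<double> realRoots(const Poly& p, double a, double b)
{
    std::vector<double> roots;
    if (p.size() <= 2) {                          // constant or linear: solve directly
        if (p.size() == 2 && p[0] != 0.0) {
            double x = -p[1] / p[0];
            if (x >= a && x <= b) roots.push_back(x);
        }
        return roots;
    }
    std::vector<double> cuts = realRoots(derivPoly(p), a, b);
    cuts.insert(cuts.begin(), a);
    cuts.push_back(b);

    for (std::size_t i = 0; i + 1 < cuts.size(); ++i) {
        double lo = cuts[i], hi = cuts[i + 1];
        double flo = evalPoly(p, lo), fhi = evalPoly(p, hi);
        if (flo == 0.0) { roots.push_back(lo); continue; }   // needs a tolerance in practice
        if (flo * fhi > 0.0) continue;            // no sign change on this monotonic piece
        for (int it = 0; it < 100; ++it) {        // plain bisection
            double mid = 0.5 * (lo + hi);
            if (evalPoly(p, mid) * flo <= 0.0) hi = mid;
            else { lo = mid; flo = evalPoly(p, lo); }
        }
        roots.push_back(0.5 * (lo + hi));
    }
    return roots;
}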
Adding a note on numerical accuracy when evaluating a function:
A first (probably sufficient, in this case) method to reduce noise is NOT to evaluate your function in the way suggested by the original form (i.e. a6*x^6 + a5*x^5 + ...), but to rewrite it as:
a0 + x*(a1 + x*(a2 + x*(a3 + x*(a4 + x*(a5 + x*a6)))))
So, in evaluating you proceed:
tmp = a6
tmp = x*tmp + a5
tmp = x*tmp + a4
etcetera.
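In C++ that scheme is just the following (a hypothetical helper; coefficients passed lowest degree first, so the loop starts at a6 and ends at a0):

#include <cstddef>
#include <vector>

// Horner evaluation exactly as described above: tmp = a6, then repeatedly
// tmp = x*tmp + the next lower coefficient.
double horner(const std::vector<double>& coeffs, double x)   // coeffs = {a0, a1, ..., a6}
{
    double tmp = 0.0;
    for (std::size_t i = coeffs.size(); i-- > 0; )
        tmp = x * tmp + coeffs[i];
    return tmp;
}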
In case this little rewriting is not sufficient for numerical stability, you should rewrite your polynomial as (for instance) a Chebyshev polynomial expansion and evaluate that one with its recurrence relations. Both steps (getting the expansion and applying the recurrence relations for evaluation) are rather simple. I can explain if you need help, but I guess it won't be necessary here.
In all cases, you HAVE to allow for some inaccuracy, i.e. accept that a computation will, generally speaking, NEVER give you the mathematically exact function value. So the assessment whether the function is presumably zero at some point must include some "tolerance", there's no way around this, unfortunately; the best you can aim for is to minimize the noise.
Well, if your function touches zero but never crosses it, you seem to be looking for a minimum (or a maximum). In which case, you're better off telling the computer to do exactly that --- either find the root of the derivative (if you can calculate it analytically), or use a minimization routine. Then check that the function value at the minimum is 'close enough' to zero.
Just to reiterate what was already said by other people:
don't start with the Newton-Raphson method; it's almost always better to start with Brent's method or even straightforward bisection (provided you can bracket the root).
An instability where 'small numerical errors' of the order of 1e-6 have bad effects is worth investigating. Immediate suspects: mixing floats and doubles, loss of precision somewhere etc.
EDIT: So, depending on some parameters, your function has either a zero crossing, or a minimum with zero value, is this correct? In this case, what I'd do is this: use a simple and robust bracketing strategy (e.g. start from [-1, 1], multiply the endpoints by 1.1, check the signs, keep multiplying, something like this). If that succeeds, there's a zero crossing, use a root finding routine. If bracketing fails, use minimization.
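A minimal sketch of such an expanding bracket (a hypothetical helper; the 1.1 growth factor and the idea of starting from [-1, 1] are just the example values from above):

#include <functional>

// Grow [a, b] geometrically until f(a) and f(b) have opposite signs.
// Returns true if a sign change (and hence a zero crossing) was bracketed.
bool expandBracket(const std::function<double(double)>& f,
                   double& a, double& b, int maxTries = 60)
{
    double fa = f(a), fb = f(b);
    for (int i = 0; i < maxTries; ++i) {
        if (fa * fb < 0.0) return true;    // bracket found: hand off to a root finder
        a *= 1.1;
        b *= 1.1;
        fa = f(a);
        fb = f(b);
    }
    return false;                          // no sign change: fall back to minimization
}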
Using Newton-Raphson is an act of desperation. You are much better off finding the continued fraction that represents your function and calculating that. A CF will converge much faster and will produce the real root(s). Also, because the CF produces a ratio of two integers you have tight control over numeric precision and don't have to worry about accumulation of rounding errors and other similar hair-pulling-out problems.
To find the real roots of any polynomial function refer to "A Continued Fraction Algorithm for Approximating All Real Polynomial Roots" by David Rosen (1978).
------------ ADDENDUM 1 --- 11 OCT-----------------
Ok, you are solving a sextic. You have several options. The simplest is to use a Taylor approximation (say to the 3rd degree) in conjunction with Halley's method. This is much superior to Newton because it has cubic convergence and you can detect imaginary solutions. The disadvantage is that you will have rounding problems which may result in an incorrect answer.
The ideal option is to find the continued fraction that represents the monic root, because this CF will be computable as an integer ratio of any desired precision, thus eliminating the problem of rounding.
One approach to computing this CF is via the Jacobi-Perron algorithm. See the paper Hendy and Jeans: http://www.ams.org/mcom/1981-36-154/S0025-5718-1981-0606514-X/S0025-5718-1981-0606514-X.pdf. This paper shows the exact algorithm for computing cubic and quartic roots via CF approximation.
Note that if the sextic is reducible then it can converted into a quartic and quadratic: http://elib.mi.sanu.ac.rs/files/journals/tm/21/tm1124.pdf. The quartic is then solvable by the algorithm in the Hendy paper.
The general solution to generate a CF for a sextic can be done via the Rogers-Ramanujan CF. See the following paper for the method: http://arxiv.org/pdf/1111.6023v2. This will generate the CF for any sextic.
As in your case, you are interested in the real factorization of a real polynomial. One may see that all complex roots come in conjugate pairs, each of which corresponds to a real quadratic factor. By finding this real quadratic and completing the square to get the form (x-r)^2 + s you will be able to see the "real" even-order root r with an "error" given by s. If s > 0 is too large, you may discard it as probably being complex. If s < 0 is large in magnitude, then you have two well-separated real roots given by x = r ± √(-s). If s is very small then you might suspect r is a real double root and keep it.
Finding such a quadratic factor may be done using Bairstow's method, which actually applies a two-dimensional Newton method. This gives x^2 + ux + v and r = -u/2; s = v - r^2.
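A tiny sketch of that classification step, assuming some routine (e.g. Bairstow's method) has already produced the quadratic factor x^2 + u*x + v; the tolerance is a placeholder you would have to choose yourself:

#include <cmath>
#include <iostream>

// Classify the roots of the quadratic factor x^2 + u*x + v via
// completing the square: (x - r)^2 + s with r = -u/2 and s = v - r*r.
void classifyQuadraticFactor(double u, double v, double tolerance)
{
    double r = -u / 2.0;
    double s = v - r * r;
    if (s > tolerance)
        std::cout << "conjugate complex pair, real part " << r << "\n";
    else if (s < -tolerance)
        std::cout << "two real roots: " << r - std::sqrt(-s)
                  << " and " << r + std::sqrt(-s) << "\n";
    else
        std::cout << "(near) double real root at " << r << "\n";
}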