I'm looking at a presentation by Timothy Lottes in which he derives a generic tonemapper (slides 37 and following).
Although the purpose of the different parameters is explained nicely, I find it quite hard to adjust them properly. I wrote a simple script to compare different tonemappers and am having trouble finding reasonable settings for the generic tonemapper.
In general I cannot get the shoulder of the curve to behave comparably to the other operators. Maybe there is a mistake in my implementation (the original source code is in the slides).
import math

def generic(x):
    a = 1.2         # contrast
    d = 1.1         # shoulder
    mid_in = 1
    mid_out = 0.18
    hdr_max = 16
    # It seems to work better when omitting the minus
    b = (-math.pow(mid_in, a) + math.pow(hdr_max, a) * mid_out) / (math.pow(math.pow(hdr_max, a), d) - math.pow(math.pow(mid_in, a), d) * mid_out)
    c = (math.pow(math.pow(hdr_max, a), d) * math.pow(mid_in, a) - math.pow(hdr_max, a) * math.pow(math.pow(mid_in, a), d) * mid_out) / (math.pow(math.pow(hdr_max, a), d) - math.pow(math.pow(mid_in, a), d) * mid_out)
    z = math.pow(x, a)
    y = z / (math.pow(z, d) * b + c)
    return y
Has anybody experimented with this by chance?
Apparently there is a problem in the code presented in the slides. Bart Wronski gives some corrected code in the comments section of his blog post.
I have also updated the GitHub project to reflect this.
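For reference, b and c can be re-derived from the two constraints the curve is built around, y(mid_in) = mid_out and y(hdr_max) = 1. Solving that 2x2 linear system suggests the slide version is simply missing parentheses: the whole difference hdr_max^(a*d) - mid_in^(a*d) should be multiplied by mid_out in the denominator. A sketch of that derivation (my own working, not necessarily character-for-character the same as the corrected code in the blog comments):

import math

def generic_fixed(x, a=1.2, d=1.1, mid_in=1.0, mid_out=0.18, hdr_max=16.0):
    # Solve the linear system
    #   mid_in^(a*d) * b + c = mid_in^a / mid_out   (from y(mid_in) = mid_out)
    #   hdr_max^(a*d) * b + c = hdr_max^a           (from y(hdr_max) = 1)
    denom = (math.pow(hdr_max, a * d) - math.pow(mid_in, a * d)) * mid_out
    b = (-math.pow(mid_in, a) + math.pow(hdr_max, a) * mid_out) / denom
    c = (math.pow(hdr_max, a * d) * math.pow(mid_in, a)
         - math.pow(hdr_max, a) * math.pow(mid_in, a * d) * mid_out) / denom
    z = math.pow(x, a)
    return z / (math.pow(z, d) * b + c)

# Sanity check: the mid point and the white point land where they should.
print(generic_fixed(1.0))   # ~0.18
print(generic_fixed(16.0))  # ~1.0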
As part of a raytracer experiment I'm working on in my high school classes, I need to compute the four parts of a plane equation from three different points. By four parts I mean that in the equation Ax + By + Cz = D I need to find A, B, C, and D. I understand the math behind this, since it's relatively simple vector math, but my code doesn't seem to work.
The function I use to construct the Plane object from the 3 points is as follows:
Plane::Plane(Vec3 A, Vec3 B, Vec3 C)
{
    // Getting both vectors
    Vec3 AB = B - A;
    Vec3 AC = C - A;
    // Cross product
    Vec3 cr = AB.cross(AC);
    a = cr.getX();
    b = cr.getY();
    c = cr.getZ();
    d = a * A.getX() + b * B.getY() + c * C.getZ();
}
In this, Vec3 is just a vector class that holds (x, y, z), and the function names are pretty self-explanatory (I hope).
An example of what it outputs:
If I put the points (-3, 0, 1), (2, 3, 0), and (0, 2, 3) into this, I get the following results:
A = 8
B = -13
C = 1
D = -60
A, B, and C in this are correct, but D is not.
I'm not entirely certain what's wrong with the code: it sometimes gets the output completely right for certain points, sometimes gets only parts right, and sometimes gets nothing right at all, which leads me to believe there's a math mistake. Any help is appreciated.
Since in your example you get the values for A, B, and C correct, the first place to look is the calculation of D.
In your calculation of d, you use components from three different points. That is not what the equation for D says to do: you want all three components from one point.
d = a * A.getX() + b * A.getY() + c * A.getZ();
This works with any of the three points, since each of them lies on the plane.
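To make the fix concrete, here is the whole construction as a standalone Python sketch (plain tuples instead of the asker's Vec3 class), checked against the example points from the question:

def plane_from_points(p1, p2, p3):
    # Normal (A, B, C) = (p2 - p1) x (p3 - p1)
    ab = [p2[i] - p1[i] for i in range(3)]
    ac = [p3[i] - p1[i] for i in range(3)]
    a = ab[1] * ac[2] - ab[2] * ac[1]
    b = ab[2] * ac[0] - ab[0] * ac[2]
    c = ab[0] * ac[1] - ab[1] * ac[0]
    # D comes from plugging ONE point into Ax + By + Cz
    d = a * p1[0] + b * p1[1] + c * p1[2]
    return a, b, c, d

points = [(-3, 0, 1), (2, 3, 0), (0, 2, 3)]
a, b, c, d = plane_from_points(*points)
print(a, b, c, d)  # 8 -13 1 -23
for p in points:   # every point satisfies Ax + By + Cz = D
    assert a * p[0] + b * p[1] + c * p[2] == d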
Intro
We have been working on a recent project and have been looking for a suitable system to calculate some values. SymPy was recommended as a rich mathematical library; however, we have been unable to make it "work" with our project.
The specific issue we have been struggling with is that many of the values we use have been rounded numerous times and are likely susceptible to float errors. To work around this on a previous project, we used an interval-arithmetic library for JavaScript to fairly good effect. mpmath for Python appears to be similar, but SymPy not only uses mpmath, it also offers other potentially useful functions we may need in the future.
Problem
A sample equation we have been working with lately is a = b * (1 + c * d * e), and we are looking to solve for e when all other variables are known. However, some of the variables need to be represented as a range of values, since we don't know the exact value, only a small range it lies in.
Code
from sympy import *
from sympy.sets.setexpr import SetExpr
a, b, c, d, e = symbols('a b c d e')
b = 40
c = 1
d = 0.1
a = SetExpr(Interval(45.995, 46.005))
equ = Eq(b * (1 + c * d * e), a)
solveset(equ, e)
ValueError: The argument '45.995*I' is not comparable.
This was just the latest attempt; I have also tried setting domains, setting inequalities on symbols, using AccumBounds, and numerous other approaches, but I can't help thinking that we have completely overlooked something simple.
Solution
It appears that using one interval is doable with the code provided by the selected answer, but it doesn't extend to multiple symbols requiring intervals or ranges of values. It appears we will be extending the mpmath library to support additional interval functions.
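For the single-interval case, mpmath's built-in iv context already gets there if the equation is rearranged by hand (the rearrangement e = (a/b - 1)/(c*d) is mine):

from mpmath import iv

b, c, d = 40, 1, 0.1
a = iv.mpf([45.995, 46.005])  # the known range for a
e = (a / b - 1) / (c * d)     # hand-solved from a = b * (1 + c * d * e)
print(e)                      # roughly [1.49875, 1.50125]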
There is an intervalmath module in SymPy, in the plotting module for some reason. It doesn't subclass Basic, though, so it can't be used directly in an expression. We can, however, use lambdify to substitute it into an expression:
from sympy import *
from sympy.plotting.intervalmath import interval
b = 40
c = 1
d = 0.1
a, e = symbols('a, e', real=True)
equ = Eq(b * (1 + c * d * e), a)
sol_e, = solveset(equ, e)
f_e = lambdify(a, sol_e)
int_a = interval(45.995, 46.005)
int_e = f_e(a=int_a)
print(int_e)
This gives
[1.498750, 1.501250]
I don't think the intervalmath module is used much, though, so there's a good chance it might not fully work in your real problem.
Using sets is probably a better approach, and it seems that imageset can do this:
In [16]: set_e = imageset(Lambda(a, sol_e), Interval(45.995, 46.005))
In [17]: set_e
Out[17]: [1.49875, 1.50125]
I'm not sure how well this works with more than one symbol/interval though.
EDIT: For completeness I'm showing how you would use intervalmath with more than one interval:
from sympy import *
from sympy.plotting.intervalmath import interval
b = 40
d = 0.1
a, c, e = symbols('a, c, e', real=True)
equ = Eq(b * (1 + c * d * e), a)
sol_e, = solveset(equ, e)
f_e = lambdify((a, c), sol_e)
int_a = interval(45.995, 46.005)
int_c = interval(0.95, 1.05)
int_e = f_e(a=int_a, c=int_c)
print(int_e)
That gives
[1.427381, 1.580263]
This question already has answers here: Why does changing 0.1f to 0 slow down performance by 10x? (6 answers)
I am a circuit designer, not a software engineer, so I have no idea how to track down this problem.
I am working with some IIR filter code and am having problems with extremely slow execution times when I process extremely small values through the filter. To find the problem, I wrote this test code.
Normally, the loop runs in about 200 ms or so. (I didn't measure it.) But when TestCheckBox->Checked is true, it requires about 7 seconds to run. The problem lies in the shrinking of A, B, C, and D within the loop, which is exactly what happens to the values in an IIR filter after its input goes to zero.
I believe the problem is that the variables' exponents become less than -308, i.e. the values fall below the normal range of a double. A simple fix is to declare the variables as long doubles, but that isn't an easy fix in the actual code, and it doesn't seem like I should have to do this.
Any ideas why this happens and what a simple fix might be?
In case it matters, I am using C++ Builder XE3.
int j;
double A, B, C, D, E, F, G, H;
//long double A, B, C, D, E, F, G, H;  // a fix

A = (double)random(100000000) / 10000000.0 - 5.0;
B = (double)random(100000000) / 10000000.0 - 5.0;
C = (double)random(100000000) / 10000000.0 - 5.0;
D = (double)random(100000000) / 10000000.0 - 5.0;

if(TestCheckBox->Checked)
{
    A *= 1.0E-300;
    B *= 1.0E-300;
    C *= 1.0E-300;
    D *= 1.0E-300;
}

for(j = 0; j <= 1000000; j++)
{
    A *= 0.9999;
    B *= 0.9999;
    C *= 0.9999;
    D *= 0.9999;

    E = A * B + C - D;  // some exercise code
    F = A - C * B + D;
    G = A + B + C + D;
    H = A * C - B + G;

    E = A * B + C - D;
    F = A - C * B + D;
    G = A + B + C + D;
    H = A * C - B + G;

    E = A * B + C - D;
    F = A - C * B + D;
    G = A + B + C + D;
    H = A * C - B + G;
}
EDIT:
As the answers said, the cause of this problem is denormal math, something I had never heard of. Wikipedia has a pretty nice description of it, as does the MSDN article given by Sneftel.
http://en.wikipedia.org/wiki/Denormal_number
Having said this, I still can't get my code to flush denormals. The MSDN article says to do this:
_controlfp(_DN_FLUSH, _MCW_DN);
These definitions are not in the XE3 math libraries, however, so I used
controlfp(0x01000000, 0x03000000);
per the article, but this has no effect in XE3. Nor does the code suggested in the Wikipedia article.
Any suggestions?
You're running into denormal numbers (values smaller than DBL_MIN, in which the leading significand bit is zero rather than the usual implicit one). Denormals extend the range of representable floating-point numbers and are important for maintaining certain useful error bounds in FP arithmetic, but operating on them is far slower than operating on normal FP numbers, and they have lower precision. So you should try to keep all your numbers (both intermediate and final quantities) above DBL_MIN.
In order to increase performance, you can force denormals to be flushed to zero by calling _controlfp(_DN_FLUSH, _MCW_DN) (or, depending on OS and compiler, a similar function). http://msdn.microsoft.com/en-us/library/e9b52ceh.aspx
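If you want to see the penalty for yourself without touching the C++ project, a rough NumPy sketch like the following usually shows it (the slowdown is very CPU-dependent, and a build that enables flush-to-zero by default will show none):

import time
import numpy as np

def bench(start):
    a = np.full(1_000_000, start)
    t0 = time.perf_counter()
    for _ in range(100):
        a = a * 0.9999  # the same kind of slow decay as in the filter loop
    return time.perf_counter() - t0

print(bench(1.0))     # normal doubles
print(bench(1e-310))  # subnormal doubles (below DBL_MIN, about 2.2e-308)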
You've entered the realm of floating-point underflow, resulting in denormalized numbers. Depending on the hardware, operations on them likely trap into microcode or a software assist, which is much, much slower than ordinary hardware operations.
I have been attempting to translate a function from C++ to Python for a while but I cannot understand the function well enough to translate it on my own.
//C++
float Cubic::easeInOut(float t, float b, float c, float d) {
    if ((t /= d / 2) < 1) return c / 2 * t * t * t + b;
    return c / 2 * ((t -= 2) * t * t + 2) + b;
}
//Python
def rotate(t, b, c, d):
    t = t / (d / 2)
    if t < 1:
        return c / 2 * t * t * t + b
    t = t - 2
    return c / 2 * (t * t * t + 2) + b
Edit: this is what I got so far, but it doesn't return a list that rises from 0.0 to 1.0.
Has anyone ever done this in Python before?
Hint: first, simplify the C++:
struct Cubic {
    float easeInOut(float t, float b, float c, float d) {
        t = t / (d / 2);
        if (t < 1)
            return c / 2 * t * t * t + b;
        t = t - 2;
        return c / 2 * (t * t * t + 2) + b;
    }
};
Now if you can't figure out how to translate that to Python, then you need to learn more Python. I was able to translate this to Python and I don't even know Python.
Actually, now that you've posted your Python and say it's wrong, it occurs to me that the problem is probably division. In Python 2, / between two integers performs integer (floor) division, while the C++ version works entirely in floats and therefore uses true division. So if you call the function with integer arguments, subexpressions like d/2 truncate. Make sure the divisions happen in floating point, e.g. by using float constants or from __future__ import division.
Does it help if you replace all the numeric constants (e.g. 2) with their float equivalents (e.g. 2.0)?
def rotate(t, b, c, d):
    t = t / (d / 2.0)
    if t < 1.0:
        return c / 2.0 * t * t * t + b
    t = t - 2.0
    return c / 2.0 * (t * t * t + 2.0) + b
Your code is a correct translation, but it's not intended to return a list. This easing function returns a single eased value for the provided time t. It's intended to be called multiple times with a t value that varies from 0 to d, and it returns a result that varies from b to b + c in a smooth (non-linear) fashion.
You want the return value to go from 0 to 1, so you should call it with b=0.0 and c=1.0. Set d to the duration of time you want to ease over.
To get a list of eased values from 0 to 1, for t from 0 to 10, you could do something like this:
[rotate(t,0.0,1.0,10.0) for t in range(11)]
result:
[0.0, 0.004000000000000001, 0.03200000000000001, 0.108, 0.25600000000000006, 0.5, 0.744, 0.8919999999999999, 0.968, 0.996, 1.0]
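Incidentally, for this b=0, c=1 case the math collapses into a compact normalized form; the following reformulation (mine, not from the original C++) reproduces the list above:

def ease_in_out_cubic(u):
    # u = t/d, normalized time in [0, 1]; returns the eased value in [0, 1]
    return 4.0 * u ** 3 if u < 0.5 else 1.0 - 4.0 * (1.0 - u) ** 3

print([round(ease_in_out_cubic(t / 10.0), 3) for t in range(11)])
# [0.0, 0.004, 0.032, 0.108, 0.256, 0.5, 0.744, 0.892, 0.968, 0.996, 1.0]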
I am wondering if there is a C/C++ library or MATLAB technique to determine real and complex numbers using a minimization solver. Here is a code snippet showing what I would like to do. For example, suppose that I know Utilde but not the variables x and U. I want to use optimization (fminsearch) to determine x and U, given Utilde. Note that Utilde is a complex number.
x = 1.5;
U = 50 + 1i*25;
x0 = [1 20];  % starting values
Utilde = U * (1 / exp(2 * x)) * exp(1i * 2 * x);
xout = fminsearch(@(v) optim(v, Utilde), x0);

function diff = optim(v, Utilde)
    x = v(1);
    U = v(2);
    diff = abs(-(Utilde / U) + (1 / exp(2 * x)) * exp(1i * 2 * x));
end
The code above does not converge to the proper values; it returns xout = [1.7318, 88.8760]. However, if U = 50, which is not a complex number, then xout = [1.5000, 50.0000], which is correct.
Is there a way in MATLAB or C/C++ to ensure proper convergence when Utilde is a complex number? Maybe I have to change the code above?
If there isn't a way to do this natively in MATLAB, then perhaps one gist of the question is this: is there a multivariate optimization library (Nelder-Mead or a similar algorithm) that is able to work with real and complex inputs and outputs?
Yet another question is whether the function is convergent or not; I don't know if it is the algorithm or the function. Might I need to change something in the Utilde = U * (1 / exp(2 * x)) * exp(1i * 2 * x) expression to make it convergent?
The main problem here is that there is no unique solution to this optimization or parameter fitting problem. For example, looking at the expected and actual results above, Utilde is equivalent (ignoring round-off differences) for the two (x, U) pairs, i.e.
Utilde(x = 1.5, U = 50 + 25i) = Utilde(x = 1.7318, U = 88.8760)
Although I have not examined it in depth, I even suspect that for any value of x, you can find a U such that Utilde(x, U) = Utilde(x = 1.5, U = 50 + 25i).
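This is easy to check numerically; a small Python sketch (writing the exponential as exp((-2 + 2j) * x), just as the C# code below does):

import cmath

def utilde(x, U):
    # Utilde = U * (1 / exp(2x)) * exp(2ix) = U * exp((-2 + 2j) * x)
    return U * cmath.exp((-2 + 2j) * x)

print(utilde(1.5, 50 + 25j))    # the "expected" (x, U) pair
print(utilde(1.7318, 88.8760))  # the pair fminsearch returned: same Utilde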
The solution here would thus be to further constrain the parameter-fitting problem so that the solver yields a solution that can be considered acceptable. Alternatively, reformulate Utilde so that it has a unique value for any (x, U) pair.
UPDATE, AUG 1
Given reasonable starting values, it actually seems like it is sufficient to restrict x to be real-valued. Performing unconstrained non-linear optimization using the diff function formulated above, I get the following result:
x = 1.50462926953244
U = 50.6977768845879 + 24.7676554234729i
diff = 3.18731710515855E-06
However, changing the starting guess to values more distant from the desired ones does yield different solutions, so restricting x to be real-valued does not alone provide a unique solution to the problem.
I have implemented this in C#, using the BOBYQA optimizer, but the numerics should be the same as above. If you want to try this outside of MATLAB, it should also be relatively simple to turn the C# code below into C++ using the std::complex class and an (unconstrained) nonlinear C++ optimizer of your own choice. You can find some C++-compatible codes that do not require gradient computation here, and there are also various implementations available in Numerical Recipes. For example, you can access the C version of NR online here.
For reference, here are the relevant parts of my C# code:
class Program
{
    private static readonly Complex Coeff = new Complex(-2.0, 2.0);
    private static readonly Complex UTilde0 = GetUTilde(1.5, new Complex(50.0, 25.0));

    static void Main(string[] args)
    {
        double[] vars = new[] { 1.0, 25.0, 0.0 };  // xstart = 1.0, Ustart = 25.0
        BobyqaExitStatus status = Bobyqa.FindMinimum(GetObjfnValue, vars.Length, vars);
    }

    public static Complex GetUTilde(double x, Complex U)
    {
        return U * Complex.Exp(Coeff * x);
    }

    public static double GetObjfnValue(int n, double[] vars)
    {
        double x = vars[0];
        Complex U = new Complex(vars[1], vars[2]);
        return Complex.Abs(-UTilde0 / U + Complex.Exp(Coeff * x));
    }
}
The documentation for fminsearch says how to deal with complex numbers in the limitations section:
fminsearch only minimizes over the real numbers, that is, x must only consist of real numbers and f(x) must only return real numbers. When x has complex variables, they must be split into real and imaginary parts.
You can use the functions real and imag to extract the real and imaginary parts, respectively.
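Following that advice, here is a minimal sketch of the split using SciPy's Nelder-Mead implementation (my substitute for fminsearch; scipy.optimize.minimize is not part of the original MATLAB setup):

import numpy as np
from scipy.optimize import minimize

Utilde = (50 + 25j) * np.exp((-2 + 2j) * 1.5)  # the known target value

def objective(v):
    x, U = v[0], v[1] + 1j * v[2]  # U carried as separate real/imag parts
    return abs(-Utilde / U + np.exp((-2 + 2j) * x))

res = minimize(objective, x0=[1.0, 20.0, 0.0], method='Nelder-Mead')
print(res.x)  # one of the many (x, Re(U), Im(U)) triples that reproduce Utilde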
It appears that there is no easy way to do this, even if both x and U are real numbers. The equation for Utilde is not well-posed for an optimization problem, and so it must be modified.
I've tried to code up my own version of the Nelder-Mead optimization algorithm, and I have also tried Powell's method. Neither seems to work well for this problem, even when I attempted to modify these methods.