I am running an MCMC sampler which requires the calculation of the hypergeometric function at each step using scipy.special.hyp2f1().
At certain points on my grid (which I do not care about) the solutions to the hypergeometric function are quite unstable and SciPy prints the warning:
Warning! You should check the accuracy
This is rather annoying, and over thousands of samples it may well slow down my routine.
I have tried using special.errprint(0) with no luck, as well as disabling all warnings in Python using both the warnings module and the -W ignore flag.
The offending function (called from another file) is below:
from numpy import pi, hypot, real, imag
import scipy.special as special

def deflection_angle(p, xy):
    # Python 3 removed tuple parameters, so unpack explicitly
    x1, x2 = xy
    # t is assumed to be defined elsewhere (e.g. as a module-level constant)
    # Find the normalisation constant
    norm = (p.f * p.m * (p.r0 ** (t - 2.0)) / pi) ** (1.0 / t)
    # Define the complex plane
    z = x1 + 1j * x2
    # Define the radial coordinates
    r = hypot(x1, x2)
    # Truncate the radial coordinates at p.r0
    r_ = r * (r < p.r0).astype('float') + p.r0 * (r >= p.r0).astype('float')
    # Calculate the radial part
    radial = (norm ** 2 / (p.f * z)) * ((norm / r_) ** (t - 2))
    # Calculate the angular part
    h1, h2, h3 = 0.5, 1.0 - t / 2.0, 2.0 - t / 2.0
    h4 = ((1 - p.f ** 2) / p.f ** 2) * (r_ / z) ** 2
    special.errprint(0)
    angular = special.hyp2f1(h1, h2, h3, h4)
    # Assemble the deflection angle
    alpha = (- radial * angular).conjugate()
    # Separate real and imaginary parts
    return real(alpha), imag(alpha)
Unfortunately, hyp2f1 is notoriously hard to compute over some non-trivial regions of the parameter space. Many implementations silently produce inaccurate or wildly wrong results there; scipy.special at least tries hard to monitor convergence. An alternative could be to use an arbitrary-precision implementation, e.g. mpmath. But that would certainly be quite a bit slower, so MCMC users beware.
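For the record, a minimal sketch of the mpmath route (the wrapper name and the sample argument are mine, purely illustrative):

import numpy as np
import mpmath

# wrap mpmath's arbitrary-precision hyp2f1 so it broadcasts over numpy arrays;
# expect it to be orders of magnitude slower than scipy.special.hyp2f1
hyp2f1_mp = np.vectorize(lambda a, b, c, z: complex(mpmath.hyp2f1(a, b, c, z)))

print(hyp2f1_mp(0.5, 2/3., 1.5, 0.75 + 0.09j))  # illustrative argument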
EDIT: OK, this seems to be scipy-version dependent. I tried @wrwrwr's example on scipy 0.13.3, and it reproduces what you see: "Warning! You should check the accuracy" is printed regardless of the errprint status. However, doing the same with the dev version, I get:
In [12]: errprint(True)
Out[12]: 0
In [13]: hyp2f1(0.5, 2/3., 1.5, 0.09j+0.75j)
/home/br/virtualenvs/scipy_py27/bin/ipython:1: SpecialFunctionWarning: scipy.special/chyp2f1: loss of precision
#!/home/br/virtualenvs/scipy_py27/bin/python
Out[13]: (0.93934867949609357+0.15593972567482395j)
In [14]: errprint(False)
Out[14]: 1
In [15]: hyp2f1(0.5, 2/3., 1.5, 0.09j+0.75j)
Out[15]: (0.93934867949609357+0.15593972567482395j)
So, apparently it got fixed at some point between 2013 and now. You might want to upgrade your scipy version.
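If you are stuck on the old version: the message there comes from compiled (Fortran/C) code, which is why neither errprint(0) nor the Python warnings machinery catches it. A blunt workaround, sketched here (not a scipy API), is to silence the OS-level stream around the call:

import os
from contextlib import contextmanager

@contextmanager
def mute_fd(fd=1):
    # fd=1 mutes stdout; use fd=2 if your build writes the message to stderr
    saved = os.dup(fd)
    devnull = os.open(os.devnull, os.O_WRONLY)
    os.dup2(devnull, fd)
    try:
        yield
    finally:
        os.dup2(saved, fd)
        os.close(saved)
        os.close(devnull)

# usage:
# with mute_fd():
#     angular = special.hyp2f1(h1, h2, h3, h4)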
I am not practiced in SymPy manipulation.
I need to find the roots of a particular polynomial:
-4*x**(11/2) - 24*x**(9/2) - 16*x**(7/2) + 2*x**(5/2) + 16*x**5 + 23*x**4 + 5*x**3 - x**2
I verified that it has 2 real solutions, and I found one of them with the SymPy function
nsolve(mypoly, x, 1).
Why doesn't the previous step find the other one?
How can I proceed to find ALL the roots?
Thank you all for your assistance.
A.
To my knowledge, nsolve looks in the proximity of the provided initial guess to find one root for each equation.
I would plot the expression to find suitable initial guesses:
from sympy import *
from sympy.plotting import PlotGrid

x = symbols('x')
expr = -4*x**(S(11)/2) - 24*x**(S(9)/2) - 16*x**(S(7)/2) + 2*x**(S(5)/2) + 16*x**5 + 23*x**4 + 5*x**3 - x**2
p1 = plot(expr, (x, 0, 0.5), adaptive=False, n=1000, ylim=(-0.01, 0.05), show=False)
p2 = plot(expr, (x, 0, 5), adaptive=False, n=1000, ylim=(-200, 200), show=False)
PlotGrid(1, 2, p1, p2)
Now, we can do:
nsolve(expr, x, 0.2)
# out: 0.169003536680445
nsolve(expr, x, 4)
# out: 4.28968831654177
EDIT: to find all roots (even the complex ones), we can:
1. compute the derivative of the expression.
2. convert both the expression and the derivative to numerical functions with SymPy's lambdify.
3. visually inspect the expression in the complex plane to determine good initial values for the root-finding algorithm. I'm going to use this plotting module, SymPy Plotting Backend, which exposes a very handy function, plot_complex, to generate domain-coloring plots. In particular, I will plot alternating black and white stripes corresponding to the modulus.
4. use SciPy's newton method to compute the actual roots. EDIT: I just discovered that nsolve works too :)
# step 1 and 2
f = lambdify(x, expr)
f_der = lambdify(x, expr.diff(x))

# step 3
from spb import plot_complex
r = (x, -1 - 0.8j, 4.5 + 0.8j)
w = r[2].real - r[1].real
h = r[2].imag - r[1].imag
# number of discretization points; watch out for memory usage
n1 = 1500
n2 = int(h / w * n1)
plot_complex(expr, r, {"interpolation": "spline36"}, grid=False, coloring="e", n1=n1, n2=n2, size=(10, 5))
In the above picture we see circular stripes getting bigger and deforming. The center of each set of circular stripes represents a pole or a zero. But this is an easy case: there are no poles. So, from the above picture we count 7 zeros. We already know 3: the two computed above and the value 0. Let's find the others:
from scipy.optimize import newton
r1 = newton(f, x0=-0.9+0.1j, fprime=f_der)
r2 = newton(f, x0=-0.9-0.1j, fprime=f_der)
r3 = newton(f, x0=0.6+0.6j, fprime=f_der)
r4 = newton(f, x0=0.6-0.6j, fprime=f_der)
for r in (r1, r2, r3, r4):
    print(r, ": is it a zero?", expr.subs(x, r).evalf())
# out:
# (-0.9202719950522663+0.09010409402273806j) : is it a zero? -8.21787666002984e-15 + 2.06697764417957e-15*I
# (-0.9202719950522663-0.09010409402273806j) : is it a zero? -8.21787666002984e-15 - 2.06697764417957e-15*I
# (0.6323265751497729+0.6785871500619469j) : is it a zero? -2.2103533615688e-15 - 2.77549897301442e-15*I
# (0.6323265751497729-0.6785871500619469j) : is it a zero? -2.2103533615688e-15 + 2.77549897301442e-15*I
As you can see, inserting those values into the original expression gives results very, very close to zero; it is perfectly normal to see these kinds of numerical errors.
I just discovered that you can also use nsolve instead of newton to compute complex roots, which makes steps 1 and 2 unnecessary.
nsolve(expr, x, -0.9+0.1j)
# out: -0.920271995052266 + 0.0901040940227375*I
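Putting it together, a small sketch that collects all seven roots: x = 0 is a root by inspection (every term carries at least x**2), and the remaining six come from the initial guesses read off the plots above:

# x = 0 is a root by inspection; collect the other six with nsolve
guesses = [0.2, 4, -0.9 + 0.1j, -0.9 - 0.1j, 0.6 + 0.6j, 0.6 - 0.6j]
all_roots = [S.Zero] + [nsolve(expr, x, g) for g in guesses]
print(all_roots)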
I'm new to SymPy and I'm trying to use it to sum two Poisson distributions
Here's what I have so far (using jupyter notebook)
from sympy import *
from sympy.stats import *
init_printing(use_latex='mathjax')
lamda_1, lamda_2 = symbols('lamda_1, lamda_2')
n_1 = Symbol('n_1')
n_2 = Symbol('n_2')
n = Symbol('n')
#setting up distributions
N_1 = density(Poisson('N_1', lamda_1))(n_1)
N_2 = density(Poisson('N_2', lamda_2))(n_2)
display(N_1)
display(N_2)
print('setting N_2 in terms of N and N_1')
N_2 = N_2.subs(n_2,n-n_1)
display(N_2)
print("N_1 * N_2")
N = N_1 * N_2
#display(N)
Sum(N,(n_1,0,n))
#summation(N,(n_1,0,n))
Everything works fine until I try to run the summation. There are no errors; it just doesn't do anything, and Jupyter says it's running. I've let it run for 10 minutes and nothing...
When declaring symbols, include their properties: being positive, integer, nonnegative, etc. This helps SymPy decide whether some transformations are legitimate.
lamda_1, lamda_2 = symbols('lamda_1, lamda_2', positive=True)
n_1, n_2, n = symbols('n_1 n_2 n', nonnegative=True, integer=True)
Unfortunately, summation still fails because SymPy cannot come up with the key trick: multiplying and dividing by factorial(n). It seems one has to tell it to do that.
s = summation(N*factorial(n), (n_1, 0, n))/factorial(n)
print(s.simplify())
This prints
Piecewise(((lamda_1 + lamda_2)**n*exp(-lamda_1 - lamda_2)/factorial(n), ((-n >= 0) & (lamda_1/lamda_2 <= 1)) | ((-n < 0) & (lamda_1/lamda_2 <= 1))), (lamda_2**n*exp(-lamda_1 - lamda_2)*Sum(lamda_1**n_1*lamda_2**(-n_1)/(factorial(n_1)*factorial(n - n_1)), (n_1, 0, n)), True))
which is a piecewise formula full of unnecessary conditions... but if we ignore those conditions (they are just artifacts of how SymPy performed the summation), the correct result
(lamda_1 + lamda_2)**n*exp(-lamda_1 - lamda_2)/factorial(n)
is there.
Aside: avoid doing import * from both sympy and sympy.stats; there are notational clashes, such as E being 2.718... in one and the expected value in the other. from sympy.stats import density, Poisson would be better. Also, N is a built-in SymPy function and is best avoided as a variable name.
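For reference, a minimal end-to-end sketch with the cleaner imports (the same computation as above, only consolidated):

from sympy import symbols, summation, factorial, simplify
from sympy.stats import density, Poisson

lamda_1, lamda_2 = symbols('lamda_1 lamda_2', positive=True)
n_1, n = symbols('n_1 n', nonnegative=True, integer=True)

# joint term P(N_1 = n_1) * P(N_2 = n - n_1)
term = density(Poisson('N_1', lamda_1))(n_1) * density(Poisson('N_2', lamda_2))(n - n_1)

# multiply and divide by factorial(n) so the binomial theorem can kick in
s = summation(term * factorial(n), (n_1, 0, n)) / factorial(n)
print(simplify(s))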
I'm using a program to classify road signs, and I want to get the confidence of each prediction as a value between 0 and 1.
I tried to calculate the confidence and compare it with probabilities, but it did not work: there are images representing, for example, 60 km/h that get a rate below 0.9, and others (also representing 60 km/h) that get a rate above 0.9.
The same thing happens with unrecognized traffic signs: there are images that do not represent a traffic sign with a rate less than 0.9, and others with a rate higher than 0.9.
I tried this:
decision = svmob.predict(testData, true);
confidence = 1.0 / (1.0 + exp(-decision));
which I found here, but it doesn't work in OpenCV 3.0.
Can you help me, please? Then I tried this:
int classObject = decision.at<float>(currentFile) < 0.0 ? 1 : -1;
float confidence = classObject == -1 ? (1.0 / (1.0 + exp(-decision.at<float>(currentFile)))) : (1.0 - (1.0 / (1.0 + exp(-decision.at<float>(currentFile)))));
if (confidence < 0.9)
    printf("the sign is not recognized");
else
    printf("decision = %f, response = %f\n",
           decision.at<float>(0), response);
I want to know how to do it, please.
In OpenCV 3.0 we should use the interface predict(p, noArray(), cv::ml::StatModel::RAW_OUTPUT). Its effect is equivalent to predict(p, true) in OpenCV 2.4.
The OpenCV documentation explains the interface: C++: float StatModel::predict(InputArray samples, OutputArray results=noArray(), int flags=0) const
Parameters:
samples – The input samples, floating-point matrix
results – The optional output matrix of results.
flags – The optional flags, model-dependent. Some models, such as Boost and SVM, recognize the StatModel::RAW_OUTPUT flag, which makes the method return the raw results (the sum), not the class label.
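The question's code is C++, but for illustration here is a hedged Python sketch of the same idea with the cv2 bindings; svm_confidence is my name, svm is assumed to be a trained cv2.ml.SVM, samples a float32 matrix with one row per sample, and STAT_MODEL_RAW_OUTPUT is the flag constant as exposed in the Python bindings:

import numpy as np
import cv2

def svm_confidence(svm, samples):
    # RAW_OUTPUT returns the signed distance to the margin instead of the
    # class label (the OpenCV 3 counterpart of predict(p, true) in 2.4)
    _, raw = svm.predict(samples, flags=cv2.ml.STAT_MODEL_RAW_OUTPUT)
    # squash the raw decision value through a sigmoid to get a value in (0, 1)
    return 1.0 / (1.0 + np.exp(-raw.ravel()))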
I am trying to optimize parameters of a known function to fit an experimental data plot. The function is fairly involved:
y = x**2 * p**2 * g / ((c**2 - x**2)**2 + x**2 * g**2)
where x sweeps along a known set of numbers and p, g and c are the independent parameters to be optimized. Any ideas or resources that could be of assistance?
I would not recommend genetic algorithms. Instead, go for straightforward optimization. SciPy has some resources.
You haven't provided any data, so I'll just go for something that should run. Below is something to get you started; I can't know whether it works without seeing the data. Also, there is probably a way to dynamically feed objectivefunc your x and y data; that should be in the docs for scipy.optimize.minimize.
What I've done: create a function to minimize, here called objectivefunc. For that I've taken your function y = x^2 * p^2 * g / ... and transformed it to the form x^2 * p^2 * g / (...) - y = 0. Then square the left-hand side and try to minimise it. Because you will have multiple (x, y) data samples, I'd minimise the sum of the squares. Put it all in a function and pass it to minimize from scipy.
import numpy as np
from scipy.optimize import minimize

def objectivefunc(pgc):
    """Your function transformed so that it can be minimised.
    I've renamed the input pgc, so that pgc[0] is p, pgc[1] is g, pgc[2] is c.
    """
    p = pgc[0]
    g = pgc[1]
    c = pgc[2]
    x = [10, 9.4, 17]  # Some input data.
    y = [12, 42, 0.8]
    sum_ = 0
    for i in range(len(x)):
        sum_ += (x[i]**2 * p**2 * g - y[i] * ((c**2 - x[i]**2)**2 + x[i]**2 * g**2))**2
    return sum_

pgc = np.array([1.3, 0.7, 0.5])  # Supply sensible initial values
res = minimize(objectivefunc, pgc, method='nelder-mead',
               options={'xtol': 1e-8, 'disp': True})
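Alternatively, since this is a least-squares fit of a known model, scipy.optimize.curve_fit (which uses Levenberg-Marquardt by default for unconstrained problems) may be a more direct route. A sketch with the same placeholder data as above:

import numpy as np
from scipy.optimize import curve_fit

def model(x, p, g, c):
    # y = x^2 p^2 g / ((c^2 - x^2)^2 + x^2 g^2)
    return x**2 * p**2 * g / ((c**2 - x**2)**2 + x**2 * g**2)

xdata = np.array([10.0, 9.4, 17.0])  # placeholder data, as above
ydata = np.array([12.0, 42.0, 0.8])
popt, pcov = curve_fit(model, xdata, ydata, p0=[1.3, 0.7, 0.5])
print(popt)  # fitted p, g, c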
Have you tried good old Levenberg-Marquardt, as implemented in Levenberg-Marquardt.vi? If it does not suit your needs, you can try the Waptia library for LabVIEW, with one of the genetic algorithms implemented.
Executing the following code gives a plot where the last two minor grid lines are missing in both x and y. If I remove the section of the code that trims the data, or extend the range of data which is accepted (xylimit), then this can be avoided. Can anyone see what I'm doing wrong?
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
from pylab import *

l = 201
x = linspace(-l, l, 201)
y = linspace(-l, l, 201)
z = np.random.rand(l, l)

xylimit = 100
i = 0
while i < len(x[:]):
    if abs(x[i]) > xylimit:
        x = np.delete(x, i, 0)
        y = np.delete(y, i, 0)
    else:
        i += 1
i = 0
while i < len(y[:]):
    if abs(y[i]) > xylimit:
        x = np.delete(x, i, 0)
        y = np.delete(y, i, 0)
    else:
        i += 1
z = np.random.rand(len(x), len(x))

xgridlines = getp(gca(), 'xgridlines')
ygridlines = getp(gca(), 'ygridlines')
plt.minorticks_on()
plt.grid(b=True, which='both', linestyle='-')
l = plt.contourf(x, y, z, np.linspace(0, 1, 255))
plt.show()
It seems to be a bug in matplotlib's minor tick locator (matplotlib.ticker.AutoMinorLocator). The tick at 90.0 just does not exist. If you do this:
xlim(-98.5,100.0)
The tick miraculously appears. But if you use any limit which prevents the last major line from being drawn (i.e. anything below 100.0), the tick disappears.
The lack of the tick can be verified by:
>>> gca().get_xaxis().get_minorticklocs()
array([-90., -80., -70., -60., -40., -30., -20., -10., 10., 20., 30.,
40., 60., 70., 80.])
There is an ugly manual hack to get around the problem:
gca().get_xaxis().set_minor_locator(matplotlib.ticker.FixedLocator(
    array([-90., -80., -70., -60., -40., -30., -20., -10., 10., 20., 30.,
           40., 60., 70., 80., 90.])))
This unfortunately requires setting the positions by hand.
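If you don't want to type the positions out, they can be generated with the same skip-the-majors logic the locator itself uses. A sketch, assuming major ticks every 50 and minor ticks every 10, as in the plot above:

import numpy as np
import matplotlib.ticker
from matplotlib.pyplot import gca

minorstep, majorstep = 10.0, 50.0
pos = np.arange(-90.0, 90.0 + minorstep / 2., minorstep)
pos = pos[np.abs(pos % majorstep) > minorstep / 10.0]  # drop positions on major ticks
gca().get_xaxis().set_minor_locator(matplotlib.ticker.FixedLocator(pos))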
A minimal example of the bug (namespaces omitted, I use ipython+pylab):
figure()
plot([0,100], [0,100])
axis([0,99.5,0,99.5])
minorticks_on()
grid(which='both')
And the bug... It seems to be on line 1732 of ticker.py now available at github (or line 1712 in version 1.3.1). As the version may change, I'll paste it here:
if len(majorlocs) > 0:
    t0 = majorlocs[0]
    tmin = np.ceil((vmin - t0) / minorstep) * minorstep
    tmax = np.floor((vmax - t0) / minorstep) * minorstep
    locs = np.arange(tmin, tmax, minorstep) + t0
    cond = np.abs((locs - t0) % majorstep) > minorstep / 10.0
    locs = locs.compress(cond)
else:
    locs = []
The bug itself is on the line locs = .... Actually there are two bugs:
1. the use of arange in the way shown here: in my simple example tmin = 0.0, tmax = 95.0 and minorstep = 5.0; in the general case it is sheer luck whether arange will give the 95.0 or not. (Actually with integers it always works the wrong way, but with anything else it is very prone to round-off errors.)
2. even if the maths go right above (they do in the example), the last sample will be missed, because arange generates a half-open interval that excludes its stop value.
In my opinion this should be rewritten to use integers. As a temporary cure, one could replace the line with:
locs = np.arange(tmin, tmax + minorstep / 2., minorstep) + t0
This got the lines back at least in my matplotlib.
So, the short answer: find ticker.py in your Python package folders, make the small edit, and you are done.
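If editing installed files is not an option, here is a sketch of the same fix as a drop-in locator subclass. It is simplified: it always uses 5 subdivisions per major interval, whereas the stock AutoMinorLocator picks 4 or 5 depending on the major step:

import numpy as np
import matplotlib.ticker

class InclusiveAutoMinorLocator(matplotlib.ticker.AutoMinorLocator):
    def __call__(self):
        majorlocs = self.axis.get_majorticklocs()
        if len(majorlocs) < 2:
            return []
        majorstep = majorlocs[1] - majorlocs[0]
        minorstep = majorstep / 5.0  # simplification: always 5 subdivisions
        vmin, vmax = self.axis.get_view_interval()
        if vmin > vmax:
            vmin, vmax = vmax, vmin
        t0 = majorlocs[0]
        tmin = np.ceil((vmin - t0) / minorstep) * minorstep
        tmax = np.floor((vmax - t0) / minorstep) * minorstep
        # the half-step padding makes the end point inclusive (the fix above)
        locs = np.arange(tmin, tmax + minorstep / 2., minorstep) + t0
        return locs[np.abs((locs - t0) % majorstep) > minorstep / 10.0]

# usage: gca().get_xaxis().set_minor_locator(InclusiveAutoMinorLocator())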