I need to solve this inequality:
|x_n - 2/3| < 0.01, where x_n = (n^3 + n^2)^(1/3) - (n^3 - n^2)^(1/3)
import sympy as sp
from sympy.abc import x,n
epsilon = 0.01
a = 2/3
xn = (n**3 + n**2)**(1/3) - (n**3 - n**2)**(1/3)
I tried to solve it like this:
ans = sp.solve_univariate_inequality(sp.Abs(xn-a) < epsilon,n,relational=False)
but received:
NotImplementedError: The inequality, Abs((n**3 - n**2)**0.333333333333333 - (n**3 + n**2)**0.333333333333333 + 2/3) < 0.01, cannot be solved using solve_univariate_inequality.
I also tried this, but it didn't work either:
ans = sp.solve_poly_inequality(sp.Poly(xn-2/3-0.01, n, domain='ZZ'), '==')
but received:
sympy.polys.polyerrors.PolynomialError: (n**3 + n**2)**0.333333333333333 contains an element of the set of generators.
The same happened with solveset:
ans = sp.solveset(sp.Abs(xn-2/3) < 0.01, n)
ConditionSet(n, Abs((n**3 - n**2)**0.333333333333333 - (n**3 + n**2)**0.333333333333333 + 0.666666666666667) < 0.01, Complexes)
How can this inequality be solved?
from sympy import *
from sympy.abc import x,n
a = S(2) / 3
epsilon = 0.01
xn = (n**3 + n**2)**(S(1)/3) - (n**3 - n**2)**(S(1)/3)
ineq = Abs(xn-a) < epsilon
Let's verify where ineq is satisfied with plot_implicit. Note that the left-hand side of the inequality is real only for n >= 1: for 0 < n < 1, n**3 - n**2 is negative and the principal cube root is complex.
plot_implicit(ineq, (n, 0, 5), adaptive=False)
So, the inequality is satisfied for values of n greater than 3.something.
We can use a numerical approach to find the root of the equation Abs(xn - a) - epsilon = 0.
from scipy.optimize import bisect
func = lambdify([n], Abs(xn-a) - epsilon)
bisect(func, 1, 5)
# out: 3.5833149415284424
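Alternatively, SymPy alone can pin down the crossing point. Since xn approaches a = 2/3 from above for n >= 1, the absolute value can be dropped and nsolve applied to the smooth equation xn - a - epsilon = 0 (a sketch reusing the symbols defined above):

# |xn - a| equals xn - a for n >= 1, so drop the Abs and solve directly
n0 = nsolve(xn - a - epsilon, n, 3.5)
print(n0)  # about 3.5833, matching the bisect result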
I am using IPOPT via Pyomo (the AMPL interface) to solve a simple problem and am trying to validate that the primal Lagrangian gradient is zero at the solution. I'm running the following script, in which I construct what I would expect to be the gradient of the Lagrangian with respect to primal variables.
import pyomo.environ as pyo
from pyomo.common.collections import ComponentMap
m = pyo.ConcreteModel()
m.ipopt_zL_out = pyo.Suffix(direction=pyo.Suffix.IMPORT)
m.ipopt_zU_out = pyo.Suffix(direction=pyo.Suffix.IMPORT)
m.ipopt_zL_in = pyo.Suffix(direction=pyo.Suffix.EXPORT)
m.ipopt_zU_in = pyo.Suffix(direction=pyo.Suffix.EXPORT)
m.dual = pyo.Suffix(direction=pyo.Suffix.IMPORT_EXPORT)
m.v1 = pyo.Var(initialize=-2.0)
m.v2 = pyo.Var(initialize=2.0)
m.v3 = pyo.Var(initialize=2.0)
m.v1.setlb(-10.0)
m.v2.setlb(1.5)
m.v1.setub(-1.0)
m.v2.setub(10.0)
m.eq_con = pyo.Constraint(expr=m.v1*m.v2*m.v3 - 2.0 == 0)
obj_factor = 1
m.obj = pyo.Objective(
    expr=obj_factor*(m.v1**2 + m.v2**2 + m.v3**2),
    sense=pyo.minimize,
)
solver = pyo.SolverFactory("ipopt")
solver.solve(m, tee=True)
grad_lag_map = ComponentMap()
grad_lag_map[m.v1] = (
    (obj_factor*2*m.v1) + m.dual[m.eq_con]*m.v2*m.v3 +
    m.ipopt_zL_out[m.v1] + m.ipopt_zU_out[m.v1]
)
grad_lag_map[m.v2] = (
    (obj_factor*2*m.v2) + m.dual[m.eq_con]*m.v1*m.v3 +
    m.ipopt_zL_out[m.v2] + m.ipopt_zU_out[m.v2]
)
grad_lag_map[m.v3] = (
    (obj_factor*2*m.v3) + m.dual[m.eq_con]*m.v1*m.v2
)
for var, expr in grad_lag_map.items():
    print(var.name, pyo.value(expr))
According to this, however, the gradient of the Lagrangian is not zero when constructed in this way. I can get the gradient of the Lagrangian to be zero by using the following lines to construct grad_lag_map
grad_lag_map[m.v1] = (
    -(obj_factor*2*m.v1) + m.dual[m.eq_con]*m.v2*m.v3 +
    m.ipopt_zL_out[m.v1] + m.ipopt_zU_out[m.v1]
)
grad_lag_map[m.v2] = (
    -(obj_factor*2*m.v2) + m.dual[m.eq_con]*m.v1*m.v3 +
    m.ipopt_zL_out[m.v2] + m.ipopt_zU_out[m.v2]
)
grad_lag_map[m.v3] = (
    -(obj_factor*2*m.v3) + m.dual[m.eq_con]*m.v1*m.v2
)
With a minus sign in front of the objective gradient, the gradient of the Lagrangian is zero. This is surprising to me. I would not expect to see this factor of -1 for minimization problems. Can anybody confirm whether IPOPT constructs its Lagrangian with this -1 factor for minimization problems, or whether this is the artifact of some other convention I am unaware of?
This is the gradient of the Lagrangian w.r.t. x as computed in Ipopt (https://github.com/coin-or/Ipopt/blob/2b1a2f9a60fb3f8426b47edbe3b3520c7335d201/src/Algorithm/IpIpoptCalculatedQuantities.cpp#L2018-L2023):
tmp->Copy(*curr_grad_f());
tmp->AddTwoVectors(1., *curr_jac_cT_times_curr_y_c(), 1., *curr_jac_dT_times_curr_y_d(), 1.);
ip_nlp_->Px_L()->MultVector(-1., *z_L, 1., *tmp);
ip_nlp_->Px_U()->MultVector(1., *z_U, 1., *tmp);
This corresponds to Ipopt's internal representation of an NLP, which has the form

min  f(x)                  dual vars:
s.t. c(x) = 0              y_c
     d(x) - s = 0          y_d
     d_L <= s <= d_U       v_L, v_U
     x_L <= x <= x_U       z_L, z_U
The Lagrangian for Ipopt is then
f(x) + y_c c(x) + y_d (d(x) - s) + v_L (d_L-s) + v_U (s-d_U) + z_L (x_L-x) + z_U (x-x_U)
and the gradient w.r.t. x is thus
f'(x) + y_c c'(x) + y_d d'(x) - z_L + z_U
The NLP that is used by most Ipopt interfaces is

min  f(x)                  duals:
s.t. g_L <= g(x) <= g_U    lambda
     x_L <= x <= x_U       z_L, z_U
The gradient of the Lagrangian would be
f'(x) + lambda g'(x) - z_L + z_U
In your code, you have the wrong sign for z_L.
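Concretely, the fix to the script above is to flip the sign of the z_L term, not of the objective gradient. A sketch for m.v1 (the other entries are analogous):

# sign convention from above: f'(x) + lambda*g'(x) - z_L + z_U
grad_lag_map[m.v1] = (
    (obj_factor*2*m.v1) + m.dual[m.eq_con]*m.v2*m.v3
    - m.ipopt_zL_out[m.v1] + m.ipopt_zU_out[m.v1]
)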
The three (real) roots of the polynomial x^3 - 3x + 1 sum up to 0. But sympy does not seem to be able to simplify this sum of roots:
>>> from sympy import *
>>> from sympy.abc import x
>>> rr = real_roots(x**3 -3*x + 1)
>>> sum(rr)
CRootOf(x**3 - 3*x + 1, 0) + CRootOf(x**3 - 3*x + 1, 1) + CRootOf(x**3 - 3*x + 1, 2)
The functions simplify and radsimp cannot simplify this expression. The minimal polynomial, however, is computed correctly:
>>> minimal_polynomial(sum(rr))
_x
From this we can conclude that the sum is 0. Is there a direct way of making sympy simplify this sum of roots?
The following function computes the rational number equal to an algebraic term if possible:
import sympy as sp
# try to simplify an algebraic term to a rational number
def try_simplify_to_rational(expr):
    try:
        float(expr)  # does the expression evaluate to a real number?
        minPoly = sp.poly(sp.minimal_polynomial(expr))
        print('minimal polynomial:', minPoly)
        if len(minPoly.monoms()) == 1:  # minPoly == x
            return 0
        if minPoly.degree() == 1:  # minPoly == a*x + b
            a, b = minPoly.coeffs()
            return sp.Rational(-b, a)
    except TypeError:
        pass  # expression does not evaluate to a real number
    except sp.polys.polyerrors.NotAlgebraic:
        pass  # expression does not evaluate to an algebraic number
    except Exception as exc:
        print("unexpected exception:", str(exc))
    print('simplification to rational number not successful')
    return expr  # simplification not successful
See the working example:
x = sp.symbols('x')
rr = sp.real_roots(x**3 - 3*x + 1)
# sum of roots equals (-1)*coefficient of x^2, here 0
print(sp.simplify(sum(rr)))  # simplify fails: prints the unevaluated CRootOf sum
print(try_simplify_to_rational(sum(rr))) # -> 0
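The same function also recovers rational values hidden in unevaluated radical expressions, for example:

expr = (sp.sqrt(2) + 1) * (sp.sqrt(2) - 1)  # equals 1, but SymPy keeps the product unexpanded
print(try_simplify_to_rational(expr))  # minimal polynomial is _x - 1, so this prints 1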
A more elaborate function computing also simple radical expressions is proposed in sympy issue #19726.
I'm new to SymPy and I'm trying to use it to sum two Poisson distributions
Here's what I have so far (using jupyter notebook)
from sympy import *
from sympy.stats import *
init_printing(use_latex='mathjax')
lamda_1, lamda_2 = symbols('lamda_1, lamda_2')
n_1 = Symbol('n_1')
n_2 = Symbol('n_2')
n = Symbol('n')
#setting up distributions
N_1 = density(Poisson('N_1', lamda_1))(n_1)
N_2 = density(Poisson('N_2', lamda_2))(n_2)
display(N_1)
display(N_2)
print('setting N_2 in terms of N and N_1')
N_2 = N_2.subs(n_2,n-n_1)
display(N_2)
print("N_1 * N_2")
N = N_1 * N_2
#display(N)
Sum(N,(n_1,0,n))
#summation(N,(n_1,0,n))
Everything works fine until I try to run the summation. There are no errors, it just doesn't do anything, and Jupyter says it's still running. I've let it run for 10 minutes and nothing...
When declaring symbols, include their properties: being positive, integer, nonnegative, etc. This helps SymPy decide whether some transformations are legitimate.
lamda_1, lamda_2 = symbols('lamda_1, lamda_2', positive=True)
n_1, n_2, n = symbols('n_1 n_2 n', nonnegative=True, integer=True)
Unfortunately, summation still fails because SymPy cannot come up with the key trick: multiplying and dividing by factorial(n). It seems one has to tell it to do that.
s = summation(N*factorial(n), (n_1, 0, n))/factorial(n)
print(s.simplify())
This prints
Piecewise(((lamda_1 + lamda_2)**n*exp(-lamda_1 - lamda_2)/factorial(n), ((-n >= 0) & (lamda_1/lamda_2 <= 1)) | ((-n < 0) & (lamda_1/lamda_2 <= 1))), (lamda_2**n*exp(-lamda_1 - lamda_2)*Sum(lamda_1**n_1*lamda_2**(-n_1)/(factorial(n_1)*factorial(n - n_1)), (n_1, 0, n)), True))
which is a piecewise formula full of unnecessary conditions... but if we ignore those conditions (they are just artifacts of how SymPy performed the summation), the correct result
(lamda_1 + lamda_2)**n*exp(-lamda_1 - lamda_2)/factorial(n)
is there.
Aside: avoid doing import * from both sympy and sympy.stats; there are notational clashes, such as E being 2.718... versus the expected value. from sympy.stats import density, Poisson would be better. Also, N is a built-in SymPy function and is best avoided as a variable name.
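As a quick numerical sanity check (a sketch with arbitrarily chosen test values), the summed density agrees with the Poisson(lamda_1 + lamda_2) density:

# evaluate both expressions at the test point n = 4, lamda_1 = 1, lamda_2 = 2
vals = {n: 4, lamda_1: 1, lamda_2: 2}
lhs = s.subs(vals).evalf()
rhs = ((1 + 2)**4 * exp(-(1 + 2)) / factorial(4)).evalf()
print(lhs, rhs)  # both should print 0.168031...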
I'm using Newton's method to find the positions of all six roots of a sixth-order polynomial, i.e. the points where the function is zero.
I found the rough values on my graph with the code below, but I want to output the positions of all six roots. I'm thinking of using x as an array of starting values to find those positions, but I'm not sure. I'm using 1.0 for now to locate a rough value. Any suggestions from here?
import numpy as np
import matplotlib.pyplot as plt

def P(x):
    return 924*x**6 - 2772*x**5 + 3150*x**4 - 1680*x**3 + 420*x**2 - 42*x + 1

def dPdx(x):
    return 5544*x**5 - 13860*x**4 + 12600*x**3 - 5040*x**2 + 840*x - 42

accuracy = 1e-10  # was 1**-10, which equals 1.0 and stops the iteration too early
x = 1.0
xlast = float("inf")
while np.abs(x - xlast) > accuracy:
    xlast = x
    x = xlast - P(xlast)/dPdx(xlast)
print(x)

x_points = np.linspace(0, 1, 100)
y_points = P(x_points)  # P accepts arrays, so no loop is needed
plt.plot(x_points, y_points)
plt.savefig("roots.png")
plt.show()
The traditional way is to use deflation to factor out the already found roots. If you want to avoid manipulations of the coefficient array, then you have to divide the roots out.
Having found z[1],...,z[k] as root approximations, form
g(x)=(x-z[1])*(x-z[2])*...*(x-z[k])
and apply Newton's method to h(x) = f(x)/g(x), with h'(x) = f'/g - f*g'/g^2. In the Newton iteration this gives
xnext = x - f(x)/( f'(x) - f(x)*g'(x)/g(x) )
Fortunately the quotient g'/g has a simple form
g'(x)/g(x) = 1/(x-z[1])+1/(x-z[2])+...+1/(x-z[k])
So with a slight modification to the Newton step you can avoid finding the same root over again.
This all still keeps the iteration real. To get at the complex root, use a complex number to start the iteration.
Proof of concept: adding eps=1e-8j to g'(x)/g(x) allows the iteration to go complex without preventing real values. It effectively solves the equivalent problem 0 = exp(-eps*x)*f(x)/g(x).
import numpy as np
import matplotlib.pyplot as plt
def P(x):
    return 924*x**6 - 2772*x**5 + 3150*x**4 - 1680*x**3 + 420*x**2 - 42*x + 1

def dPdx(x):
    return 5544*x**5 - 13860*x**4 + 12600*x**3 - 5040*x**2 + 840*x - 42

accuracy = 1e-10
roots = []
for k in range(6):
    x = 1.0
    xlast = float("inf")
    # plot the current deflated function to visualize the remaining roots
    x_points = np.linspace(0.0, 1.0, 200)
    y_points = P(x_points) + 0j  # complex dtype, since found roots carry tiny imaginary parts
    for rt in roots:
        y_points /= (x_points - rt)
    y_points = np.clip(np.real(y_points), -1.0, 1.0)
    plt.plot(x_points, y_points, x_points, 0*y_points)
    plt.show()
    while np.abs(x - xlast) > accuracy:
        xlast = x
        corr = 1e-8j  # small imaginary shift lets the iteration leave the real axis
        for rt in roots:
            corr += 1/(xlast - rt)
        Px = P(xlast)
        dPx = dPdx(xlast)
        x = xlast - Px/(dPx - Px*corr)
    print(x)
    roots.append(x)
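As a cross-check (not part of the deflation method itself), the roots found this way can be compared against NumPy's companion-matrix root finder:

# reference: all six roots of this polynomial are real and lie in (0, 1)
coeffs = [924, -2772, 3150, -1680, 420, -42, 1]
print(np.sort(np.roots(coeffs)))
print(np.sort_complex(np.array(roots)))  # deflated-Newton roots, with tiny imaginary parts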
I am trying to integrate a second-order differential equation using scipy.integrate.odeint. My equation is as follows:

m*x[i]'' + x[i]' = (K/N) * sum(j=0 to N) of sin(x[j] - x[i])

which I have converted into two first-order ODEs as follows. In the code below, yinit is the array of the initial values x(0) and x'(0). My question is: what should the values of x(0) and x'(0) be?

x'[i] = y[i]
y'[i] = (-y[i] + (K/N) * sum(j=0 to N) of sin(x[j] - x[i])) / m
from numpy import *
from scipy.integrate import odeint

N = 50

def f(theta, t):
    global N
    x, y = theta
    m = 0.95
    K = 1.0
    fx = zeros(N, float)
    for i in range(N):
        s = 0.0
        for j in range(i+1, N):
            s = s + sin(x[j] - x[i])
        fx[i] = (-y[i] + (K*s)/N)/m
    return array([y, fx])

t = linspace(0, 10, 100, endpoint=False)

# uniformly generated random initial positions
theta = random.uniform(-180, 180, N)

# integrating function f using odeint
yinit = array([x(0), x'(0)])  # <-- what should x(0) and x'(0) be?
y = odeint(f, yinit, t)[:,0]
print(y)
You can choose whatever initial condition you want.
In your case, you decided to use a random initial condition for x for all the oscillators. You can use a random initial condition for y as well, I guess, as I did below.
There were a few errors in the above code, mostly in how to unpack x, y from theta and how to repack them at the end (see the concatenate calls in the corrected code below, including the one for yinit).
The rest are stylistic/minor changes.
from numpy import concatenate, linspace, random, mod, zeros, sin
from scipy.integrate import odeint

Nosc = 20
assert mod(Nosc, 2) == 0

def f(theta, _):
    N = theta.size // 2  # integer division: first half of theta is x, second half is y
    x, y = theta[:N], theta[N:]
    m = 0.95
    K = 1.0
    fx = zeros(N, float)
    for i in range(N):
        s = 0.0
        for j in range(i + 1, N):
            s = s + sin(x[j] - x[i])
        fx[i] = (-y[i] + (K * s) / N) / m
    return concatenate((y, fx))

t = linspace(0, 10, 50, endpoint=False)

theta = random.uniform(-180, 180, Nosc)
theta2 = random.uniform(-180, 180, Nosc)  # added initial condition for the velocities of the oscillators

yinit = concatenate((theta, theta2))
res = odeint(f, yinit, t)

X = res[:, :Nosc].T
Y = res[:, Nosc:].T
To plot the time evolution of the system, you can use something like
import matplotlib.pylab as plt

fig, ax = plt.subplots()
for displacement in X:
    ax.plot(t, displacement)
ax.set_xlabel('t')
ax.set_ylabel('x')
fig.show()
What are you modelling? At first the equation looked a bit like Kuramoto oscillators, but then I noticed you also have an x[i]'' term.
Notice how in your model, since there is no spring term in the equation (i.e. no term proportional to x(t) on the LHS), the value of x converges to an arbitrary value.
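You can see this directly in the integration output, e.g. by printing the final positions (a quick sketch using the X array from above):

# each oscillator settles at its own constant final position
print(X[:, -1])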