I am doing my physics homework and tried to simplify an expression using the Euler formula. The minimal non-working example looks like this:
from sympy import *
x, phi = symbols("x varphi", real=True)
simplify(x * (E**(I*phi) + E**(-I*phi)))
My Jupyter notebook outputs the exact same expression back, while the desired result, using the Euler formula, is 2*x*cos(varphi).
However, sympy actually knows how to use the Euler formula to represent the cosine function, because it outputs the simplified expression nicely when the x is removed:
simplify(E**(I*phi) + E**(-I*phi))
gives 2*cos(varphi).
Since the distributive property of multiplication applies to complex numbers, I don't see why sympy can't figure out the desired simplification of the first expression.
Maybe it is by design. As a workaround you can do
expr = x*(E**(I*phi) + E**(-I*phi))
expr.rewrite(cos)
which gives
2*x*cos(varphi)
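For completeness, here is the workaround as a self-contained snippet (it just consolidates the steps above):

```python
from sympy import symbols, E, I, cos

x, phi = symbols("x varphi", real=True)
expr = x * (E**(I*phi) + E**(-I*phi))

# simplify() leaves the product untouched, but rewriting the
# exponentials in terms of cos collapses it immediately
result = expr.rewrite(cos)
print(result)  # 2*x*cos(varphi)
```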
I am trying to solve the following equation for r:
from sympy import pi, S, solve, solveset, nsolve, symbols
(n_go, P_l, T, gamma_w, P_g, r, R_mol) = symbols(
'n_go, P_l, T, gamma_w, P_g, r, R_mol', real=True)
expr = -P_g + P_l - 3*R_mol*T*n_go/(4*r**3*pi) + 2*gamma_w/r
soln = solveset(expr, r, domain=S.Reals)
soln1 = solve(expr, r)
soln is of the form Complement(Intersection(FiniteSet(...))), which I really don't know what to do with.
soln1 is a list of 3 expressions, two of which are complex. In fact, if I substitute values for the symbols and compute the solutions for soln1, all are complex:
vdict = {n_go: 1e-09, P_l: 101325, T: 300, gamma_w: 0.07168596252716256, P_g: 3534.48011713030, R_mol: 8.31451457896800}
for result in soln1:
    print(result.subs(vdict).n())
returns:
-9.17942953565355e-5 + 0.000158143657514283*I
-9.17942953565355e-5 - 0.000158143657514283*I
0.000182122477993494 + 1.23259516440783e-32*I
Interestingly, substituting values first and then using solveset() or solve() gives a real result:
solveset(expr.subs(vdict), r, domain=S.Reals).n()
{0.000182122477993494}
Conversely, nsolve fails with this equation, unless the starting point contains the first 7 significant digits of the solution(!):
nsolve(expr.subs(vdict), r, 0.000182122)
ValueError: Could not find root within given tolerance. (9562985778.9619347103 > 2.16840434497100886801e-19)
It should not be that hard, here is the plot:
My questions:
Why is nsolve so useless here?
How can I use the solution returned from solveset to compute any numerical solutions?
Why can I not obtain a real solution from solve if I solve first and then substitute values?
The answer from Maelstrom is good but I just want to add a few points.
The values you substitute are all floats and with those values the polynomial is ill-conditioned. That means that the form of the expression that you substitute into can affect the accuracy of the returned results. That is one reason why substituting values into the solution from solve does not necessarily give exactly the same value that you get from substituting before calling solve.
Also before you substitute the symbols it isn't possible for solve to know which of the three roots is real. That's why you get three solutions from solve(expr, r) and only one solution from solve(expr.subs(vdict), r). The third solution which is real after the substitution is the same (ignoring the tiny imaginary part) as returned by solve after the substitution:
In [7]: soln1[2].subs(vdict).n()
Out[7]: 0.000182122477993494 + 1.23259516440783e-32⋅ⅈ
In [8]: solve(expr.subs(vdict), r)
Out[8]: [0.000182122477993494]
Because the polynomial is ill-conditioned and has a large gradient at the root nsolve has a hard time finding this root. However nsolve can find the root if given a narrow enough interval:
In [9]: nsolve(expr.subs(vdict), r, [0.0001821, 0.0001823])
Out[9]: 0.000182122477993494
Since this is essentially a polynomial your best bet is actually to convert it to a polynomial and use nroots. The quickest way to do this is using as_numer_denom although in this case that introduces a spurious root at zero:
In [26]: Poly(expr.subs(vdict).as_numer_denom()[0], r).nroots()
Out[26]: [0, 0.000182122477993494, -9.17942953565356e-5 - 0.000158143657514284⋅ⅈ, -9.17942953565356e-5 + 0.000158143657514284⋅ⅈ]
Your expr is essentially a cubic equation.
Applying the subs before or after solving should not substantially change anything.
soln
soln is of the form Complement(Intersection(FiniteSet(<3 cubic solutions>), Reals), FiniteSet(0)) i.e. a cubic solution on a real domain excluding 0.
The following should give you a simple FiniteSet back but evalf does not seem to be implemented well for sets.
print(soln.subs(vdict).evalf())
Hopefully something will be done about it soon.
1
The reason nsolve is not useful here is that the graph is almost vertical near the root. According to your graph, the gradient there is roughly 1.0e8. I don't think nsolve copes well with such steep graphs.
Plotting your substituted expression we get:
Zooming out we get:
This is a pretty wild function, and I suspect nsolve uses an epsilon that is too large to be useful in this situation. To fix this, you could provide more reasonable numbers that are closer to 1 when substituting. (Consider using different units of measurement, e.g. km/hour instead of meters/year.)
2
It is difficult to say how to deal with the output of solveset in general, because every type of set needs to be handled in a different way. It is also not entirely mathematically sensible: soln.args[0].args[0].args[0] gives the first cubic solution, but extracting it like that forgets that the solution must be real and nonzero.
You can use args or preorder_traversal to navigate the expression tree, and reading the documentation of the various set classes should help. solve and solveset need to be used "interactively", because there are many possible outputs and many ways to interpret them.
3
soln1 contains 3 solutions. All of them are technically complex (as is the nature of floats in Python), but the third solution has only a very small imaginary component. To remove these kinds of finicky artifacts, there is an argument called chop which should help:
for result in soln1:
    print(result.subs(vdict).n(chop=True))
One of the results is 0.000182122477993494 which looks like your root.
Here is an answer to the underlying question: how to compute the roots of the above equation efficiently?
Based on the suggestion by @OscarBenjamin, we can do even better and faster by using Poly and roots instead of nroots. Below, sympy computes the roots of the equation in no time for 100 different values of P_g, keeping everything else constant:
from sympy import pi, Poly, roots, solve, solveset, nsolve, nroots, symbols
(n_go, P_l, T, gamma_w, P_g, r, R_mol) = symbols(
'n_go, P_l, T, gamma_w, P_g, r, R_mol', real=True)
vdict = {pi:pi.n(), n_go:1e-09, P_l:101325, T:300, gamma_w:0.0717, R_mol: 8.31451457896800}
expr = -P_g + P_l - 3*R_mol*T*n_go/(4*r**3*pi) + 2*gamma_w/r
expr_poly = Poly(expr.as_numer_denom()[0], n_go, P_l, T, gamma_w, P_g, r, R_mol, domain='RR[pi]')
result = [roots(expr_poly.subs(vdict).subs(P_g, val)).keys() for val in range(4000,4100)]
All that remains is to check whether the solutions fulfill our conditions (real and positive). Thank you, @OscarBenjamin!
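To make that last filtering step concrete, here is a small sketch using the numbers from the original vdict (the 1e-12 tolerance for "numerically real" is an arbitrary choice):

```python
from sympy import symbols, Poly, re, im, pi

r = symbols('r', real=True)
# the expression from the question with the vdict values substituted
expr = (-3534.48011713030 + 101325
        - 3*8.31451457896800*300*1e-09/(4*r**3*float(pi))
        + 2*0.07168596252716256/r)

# multiplying by r**3 clears the denominators and leaves a cubic in r
poly = Poly((expr * r**3).expand(), r)
candidates = poly.nroots()

# keep only the numerically real, positive roots
real_positive = [re(c) for c in candidates
                 if abs(im(c)) < 1e-12 and re(c) > 0]
print(real_positive)
```

Multiplying by r**3 explicitly does essentially what as_numer_denom does, but here the constant term is nonzero, so no spurious root at zero appears.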
PS: Should I expand the topic above to include nroots and roots?
I have been using Latex2SymPy successfully for a while to handle all sorts of LaTeX inputs, but I have been running into trouble with a few functions. For instance:
from latex2sympy.process_latex import process_sympy
from sympy import *
inputLatex = '\\sin{-x}\\sin{-x}'
trigsimp(process_sympy(inputLatex))
sin(x)**2
That works great: trigsimp handled the simplification well. Now, if I try:
inputLatex = '\\sin{-x}'
trigsimp(process_sympy(inputLatex))
sin(-x)
Even though this is obviously correct, I expected trigsimp() to give me -sin(x) as an answer. In fact, if I run trigsimp straight from a sympy expression:
trigsimp(sin(-x))
-sin(x)
This is what I expect. Even running sin(-x) without the trigsimp() command returns me -sin(x). I checked the object type of my process_sympy('\\sin{-x}') call, and it is 'sin'.
I thought this might be related to the way x is transformed into a Symbol, but then I tried putting pi in the sin function:
inputLatex = '\\sin{\\pi}'
trigsimp(process_sympy(inputLatex))
sin(pi)
If I run straight sin(pi), I get 0 as an answer, with or without trigsimp().
Can any of you shed a light on this?
Although it is probably too late, you can solve this issue by calling the doit() method on the sympy expression obtained from process_sympy(inputLatex).
When you call process_sympy, it generates an unevaluated sympy expression, not a string. This expression cannot be evaluated just by calling sympify on it, since it carries an attribute telling sympy not to evaluate it.
You solved the problem by converting that expression to a string and then sympifying it, but a more direct way is to use the doit() method on the expression:
inputLatex = '\\sin{-x}'
expr = process_sympy(inputLatex)
trigsimp(expr.doit())
would do the job.
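The effect of doit() is easy to see in plain sympy; here sin(pi, evaluate=False) stands in for the kind of unevaluated expression that process_sympy returns:

```python
from sympy import sin, pi, symbols

x = symbols('x')

# mimic an unevaluated expression like the ones process_sympy produces
unevaluated = sin(pi, evaluate=False)
print(unevaluated)         # sin(pi): automatic evaluation was suppressed
print(unevaluated.doit())  # 0: doit() re-evaluates the function

# the same applies to the sin(-x) case from the question
print(sin(-x, evaluate=False).doit())  # -sin(x)
```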
I haven't solved the whole mystery yet, but I found a dirty workaround, which is to use sympify(). So if anyone else faces the same issue, here is what I did to get it to work:
inputLatex = '\\sin{-x}'
sympify(str(process_sympy(inputLatex)))
The answer now is -sin(x), what I wanted.
As for the sin(pi) thing, the issue is that process_sympy() cannot distinguish pi as a symbol from pi as a number. I did the test here: type(process_sympy('\\pi')) returns a Symbol, not a number. The same solution applies here.
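You can verify the Symbol-versus-constant distinction directly; a symbol that merely happens to be named pi never triggers the automatic evaluation:

```python
from sympy import sin, pi, Symbol

p = Symbol('pi')  # a symbol that is only *named* pi

print(sin(pi))  # 0: the constant evaluates
print(sin(p))   # sin(pi): the symbol does not
```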
It is easy to obtain such a rewrite in another CAS like Mathematica:
TrigReduce[Sin[x]^2]
(*1/2 (1 - Cos[2 x])*)
However, in Sympy, trigsimp with all methods tested returns sin(x)**2
trigsimp(sin(x)*sin(x),method='fu')
While dealing with a similar issue, reducing the order of sin(x)**6, I noticed that sympy can reduce the order of sin(x)**n for n = 2, 3, 4, 5, ... by using rewrite, expand, then rewrite again, followed by simplify, as shown here:
expr = sin(x)**6
expr.rewrite(sin, exp).expand().rewrite(exp, sin).simplify()
this returns:
-15*cos(2*x)/32 + 3*cos(4*x)/16 - cos(6*x)/32 + 5/16
That works for every power, similarly to what Mathematica does.
On the other hand, if you want to reduce sin(x)**2*cos(x), a similar strategy works. In that case you have to rewrite both cos and sin to exp and, as before, expand, rewrite, and simplify again:
(sin(x)**2*cos(x)).rewrite(sin, exp).rewrite(cos, exp).expand().rewrite(exp, sin).simplify()
that returns:
cos(x)/4 - cos(3*x)/4
The full "fu" method tries many different combinations of transformations to find "the best" result.
The individual transforms used in the Fu-routines can be used to do targeted transformations. You will have to read the documentation to learn what the different functions do, but just running through the functions of the FU dictionary identifies TR8 as your workhorse here:
>>> for f in FU.keys():
... print("{}: {}".format(f, FU[f](sin(var('x'))**2)))
...
8<---
TR8: -cos(2*x)/2 + 1/2
TR1: sin(x)**2
8<---
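Assuming a reasonably recent SymPy, TR8 can be imported from sympy.simplify.fu and applied directly:

```python
from sympy import sin, cos, symbols
from sympy.simplify.fu import TR8

x = symbols('x')

# TR8 turns products of sines/cosines into sums (power reduction)
reduced = TR8(sin(x)**2)
print(reduced)  # equivalent to 1/2 - cos(2*x)/2

# it also handles mixed products such as sin(x)**2*cos(x)
print(TR8(sin(x)**2*cos(x)))  # equivalent to cos(x)/4 - cos(3*x)/4
```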
Here is a silly way to get this job done.
trigsimp((sin(x)**2).rewrite(tan))
returns:
-cos(2*x)/2 + 1/2
also works for
trigsimp((sin(x)**3).rewrite(tan))
returns
3*sin(x)/4 - sin(3*x)/4
but it does not work for
trigsimp((sin(x)**2*cos(x)).rewrite(tan))
returns:
4*(-tan(x/2)**2 + 1)*cos(x/2)**6*tan(x/2)**2
I am trying to add 0.2 to a variable x, where x = 8, in a loop that runs 100 times. Here is the code:
>>> x = 8
>>> for i in range(100):
...     x += 0.2
...
>>> x
but every time I get a slightly different answer and the calculation is always incorrect. I read about floating point arithmetic issues and limitations, but there should be some way around this. Can I use doubles (if they exist)? I am using Python 2.7.
UPDATE:
import time

x = 1386919679
while True:
    x += 0.02
    print "xx %0.9f" % x
    b = round(x, 2)
    print "bb %0.9f" % b
    time.sleep(1)
output
xx 1386933518.586801529
bb 1386933518.589999914
xx 1386933518.606801510
bb 1386933518.609999895
xx 1386933518.626801491
bb 1386933518.630000114
Desired output
I want correct output. I know that if I just write print x it will be accurate, but my application requires printing results with 9 digits of precision. I am a newbie, so please be kind.
You can use double-precision floating point, sure. You're already using it by default.
As for a way around it:
x += 0.2 * 100
I know that sounds facile, but the solution to floating point imprecision is not setting FLOATING_POINT_IMPRECISION = False. This is a fundamental limitation of the representation, and has no general solution, only specific ones (and patterns which apply to groups of specific situations).
There's also a rational number type which can exactly store 0.2, but it's not worth considering for most real-world use cases.
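For the record, the rational type mentioned above is fractions.Fraction from the standard library (available in 2.7 as well); it stores 0.2 exactly, at the cost of speed:

```python
from fractions import Fraction

x = Fraction(8)
for _ in range(100):
    x += Fraction(1, 5)  # exactly 0.2 at every step, no rounding

print(x)         # 28
print(float(x))  # 28.0
```

If what you actually need is fixed-decimal printing rather than exact arithmetic, the decimal module is the other standard option.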