SymPy cannot solve the equation cos(x) = -1/cosh(x)

In Wolfram|Alpha, one can solve cos λ = -1/cosh λ:
λ = ± 1.87510406871196...
λ = ± 4.69409113297417...
λ = ± 7.85475743823761...
λ = ± 10.9955407348755...
Why does cos(x) = -1/cosh(x) not work in SymPy?
I tried this:
from sympy import *
x = symbols('x', real=True)
eq = cos(x) + 1/cosh(x)
ans = solve(eq)
print(ans)
# NotImplementedError: multiple generators [cos(x), exp(x)]
# No algorithms are implemented to solve equation cos(x) + 1/(exp(x)/2 + exp(-x)/2)
--------------
Edit (2018/08/21): graphing calculator plots of cos(x) vs. -1/cosh(x) and of cos(x)*cosh(x) vs. -1:
https://www.numberempire.com/graphingcalculator.php?functions=cos(x)%2C-1%2Fcosh(x)&xmin=0&xmax=10&ymin=-1.5&ymax=1.5&var=x
https://www.numberempire.com/graphingcalculator.php?functions=cos(x)*cosh(x)%2C-1&xmin=-10&xmax=10&ymin=-1.5&ymax=1.5&var=x

"Sym" in SymPy stands for symbolic. Did WolframAlpha find a symbolic solution? No, it did not; because there isn't one. So, SymPy did not find one, either.
What you got from WolframAlpha is a numeric solution. To get those, there are other Python libraries, most notably SciPy.
However, SymPy can get you numeric solutions too, by calling mpmath under the hood. This is done with nsolve. It takes a second argument, initial point of the search for solution, and returns one solution.
>>> nsolve(eq, 0)
7.85475743823761
If you want more, try multiple starting points:
>>> {nsolve(eq, n) for n in range(-10, 10)}
{4.69409113297418, -1.87510406871196, 7.85475743823761, -7.85475743823761, 1.87510406871196, -10.9955407348755, -4.69409113297418}
Here I tried 20 starting points; some roots were repeated, hence the use of a set to eliminate the duplicates.
There are infinitely many solutions; whatever tool is used, you'll only get several of those. But for large x, 1/cosh(x) is effectively 0, so the roots are approximately the same as cos(x) = 0, which are pi/2 + pi*k, any integer k.
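As a quick check of that approximation (a small sketch reusing eq from above; the printed values are approximate):
from sympy import cos, cosh, nsolve, pi, symbols
x = symbols('x', real=True)
eq = cos(x) + 1/cosh(x)
# Seed the search near pi/2 + pi*k; the larger the root, the closer it sits to that value.
roots = [nsolve(eq, float(pi/2 + pi*k)) for k in range(5)]
print(roots)  # roughly 1.875, 4.694, 7.855, 10.996, 14.137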

Related

Sympy performance issue with solving exp related equation

I have an equation: f = (100*E**(-x*1600) - 50). It takes a very long time to solve for x, but another online solver takes only a few ms. What can I do to boost the performance?
raw code:
f = (100*E**(-x*1600)-50)
solve(f,x)
Expected result:
If you just want the real solution (and not the many symbolic solutions), use nsolve. The bisect solver is appropriate for this function, which has a steep gradient near the root.
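For instance, something along these lines (a sketch, not necessarily the original answer's exact call; the bracket (0, 1) is an assumption chosen because f changes sign on it):
from sympy import E, nsolve, symbols
x = symbols('x')
f = 100*E**(-x*1600) - 50
# f(0) = 50 > 0 and f(1) < 0, so bisection on (0, 1) brackets the real root.
sol = nsolve(f, x, (0, 1), solver='bisect')
print(sol)  # approximately 0.000433216987849966, i.e. log(2)/1600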
If you want a symbolic solution (but not the many imaginary ones) then solveset or the lower level _invert can be used:
>>> from sympy import solveset, var, E, Reals
>>> from sympy.solvers.solvers import _invert
>>> var('x')
x
>>> eq = 100*E**(-x*1600) - 50
>>> solveset(eq, x, Reals)
{log(2)/1600}
>>> _invert(eq, x)
(log(2)/1600, x)

How many FLOPs are there in calculating a factorial using math.factorial(n) in python

I am trying to understand how many FLOPs there are if I use a certain algorithm to find the exponential approximated sum, especially if I use math.factorial(n) in Python. I understand FLOPs for binary operations, but is factorial also counted as a binary operation here within a function? Not being a computer science major, I have some difficulties with these. My code looks like this:
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import math
x = input("please enter a number for which you want to run the exponential summation e^{x}")
N = input("Please enter an integer before which term you want to truncate your summation")
function = math.exp(x)
exp_sum = 0.0
abs_err = 0.0
rel_err = 0.0
for n in range(0, N):
    factorial = math.factorial(n)  # How many FLOPs here?
    power = x**n                   # calculates N-1 times
    nth_term = power/factorial     # calculates N-1 times
    exp_sum = exp_sum + nth_term   # calculates N-1 times
abs_err = abs(function - exp_sum)
rel_err = abs(abs_err)/abs(function)
Please help me understand this. I might also be wrong about the other FLOPs!
According to that SO answer and to the C source code, in Python 2.7 math.factorial(n) uses a naive algorithm, so it takes about n operations: factorial(n) = 1*2*3*...*n.
A small mistake regarding the rest is that for n in range(0, N) will loop N times, not N-1 times (it runs from n=0 to n=N-1).
A final note is that counting FLOPs may not be representative of the algorithm's real-world performance, especially in Python, which is an interpreted language and tends to hide most of its inner workings behind clever syntax that links to compiled C code (e.g. exp_sum + nth_term is actually exp_sum.__add__(nth_term)).
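To make those two counting points concrete, here is a rough sketch with a hand-rolled operation counter (naive_factorial is a hypothetical helper that mirrors what the C implementation does; it is not math.factorial itself):
def naive_factorial(n, ops):
    # Naive product 1*2*...*n; count one multiplication per step.
    result = 1
    for k in range(2, n + 1):
        result *= k
        ops['mul'] += 1
    return result

ops = {'mul': 0, 'pow': 0, 'div': 0, 'add': 0}
x, N = 2.0, 10
exp_sum = 0.0
for n in range(N):  # runs N times, n = 0 .. N-1
    factorial = naive_factorial(n, ops)
    power = x**n
    ops['pow'] += 1  # x**n itself costs on the order of n multiplications internally
    exp_sum += power / factorial
    ops['div'] += 1
    ops['add'] += 1
print(ops)  # 'mul' ends up at 0 + 0 + 1 + 2 + ... + (N-2) = 36 for N = 10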

Fitting a Gaussian, getting a straight line. Python 2.7

As my title suggests, I'm trying to fit a Gaussian to some data and I'm just getting a straight line. I've been looking at these other discussions, Gaussian fit for Python and Fitting a gaussian to a curve in Python, which seem to suggest basically the same thing. I can make the code in those discussions work fine for the data they provide, but it won't work for my data.
My code looks like this:
import numpy as np
import pylab as plb
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy import asarray as ar, exp

y = y - y[0]  # to make it go to zero on both sides; y is the data array given below
x = range(len(y))
max_y = max(y)
n = len(y)
mean = sum(x*y)/n
sigma = np.sqrt(sum(y*(x-mean)**2)/n)
# Someone on a previous post seemed to think this needed to have the sqrt.
# Tried it without as well, made no difference.

def gaus(x, a, x0, sigma):
    return a*exp(-(x-x0)**2/(2*sigma**2))
popt,pcov = curve_fit(gaus,x,y,p0=[max_y,mean,sigma])
# It was suggested in one of the other posts I looked at to make the
# first element of p0 be the maximum value of y.
# I also tried it as 1, but that did not work either
plt.plot(x,y,'b:',label='data')
plt.plot(x,gaus(x,*popt),'r:',label='fit')
plt.legend()
plt.title('Fig. 3 - Fit for Time Constant')
plt.xlabel('Time (s)')
plt.ylabel('Voltage (V)')
plt.show()
The data I am trying to fit is as follows:
y = np.array([ 6.95301373e+12, 9.62971320e+12, 1.32501876e+13,
1.81150568e+13, 2.46111132e+13, 3.32321345e+13,
4.45978682e+13, 5.94819771e+13, 7.88394616e+13,
1.03837779e+14, 1.35888594e+14, 1.76677210e+14,
2.28196006e+14, 2.92781632e+14, 3.73133045e+14,
4.72340762e+14, 5.93892782e+14, 7.41632194e+14,
9.19750269e+14, 1.13278296e+15, 1.38551838e+15,
1.68291212e+15, 2.02996957e+15, 2.43161742e+15,
2.89259207e+15, 3.41725793e+15, 4.00937676e+15,
4.67187762e+15, 5.40667931e+15, 6.21440313e+15,
7.09421973e+15, 8.04366842e+15, 9.05855930e+15,
1.01328502e+16, 1.12585509e+16, 1.24257598e+16,
1.36226443e+16, 1.48356404e+16, 1.60496345e+16,
1.72482199e+16, 1.84140400e+16, 1.95291969e+16,
2.05757166e+16, 2.15360187e+16, 2.23933053e+16,
2.31320228e+16, 2.37385276e+16, 2.42009864e+16,
2.45114362e+16, 2.46427484e+16, 2.45114362e+16,
2.42009864e+16, 2.37385276e+16, 2.31320228e+16,
2.23933053e+16, 2.15360187e+16, 2.05757166e+16,
1.95291969e+16, 1.84140400e+16, 1.72482199e+16,
1.60496345e+16, 1.48356404e+16, 1.36226443e+16,
1.24257598e+16, 1.12585509e+16, 1.01328502e+16,
9.05855930e+15, 8.04366842e+15, 7.09421973e+15,
6.21440313e+15, 5.40667931e+15, 4.67187762e+15,
4.00937676e+15, 3.41725793e+15, 2.89259207e+15,
2.43161742e+15, 2.02996957e+15, 1.68291212e+15,
1.38551838e+15, 1.13278296e+15, 9.19750269e+14,
7.41632194e+14, 5.93892782e+14, 4.72340762e+14,
3.73133045e+14, 2.92781632e+14, 2.28196006e+14,
1.76677210e+14, 1.35888594e+14, 1.03837779e+14,
7.88394616e+13, 5.94819771e+13, 4.45978682e+13,
3.32321345e+13, 2.46111132e+13, 1.81150568e+13,
1.32501876e+13, 9.62971320e+12, 6.95301373e+12,
4.98705540e+12])
I would show you what it looks like, but apparently I don't have enough reputation points...
Anyone got any idea why it's not fitting properly?
Thanks for your help :)
The importance of the initial guess, p0 in curve_fit's default argument list, cannot be stressed enough.
Notice that the docstring mentions that
[p0] If None, then the initial values will all be 1
So if you do not supply it, it will use an initial guess of 1 for all parameters you're trying to optimize for.
The choice of p0 affects the speed at which the underlying algorithm changes the guess vector p0 (ref. the documentation of least_squares).
When you look at the data you have, you'll notice that the maximum and the mean (mu_0) of the Gaussian-like dataset y are
about 2.4e16 and 49 respectively. With the peak value so large, the algorithm would need to make drastic changes to its initial guess to reach that large value.
When you supply a good initial guess to the curve fitting algorithm, convergence is more likely to occur.
Using your data, you can supply a good initial guess for the peak_value, the mean and sigma, by writing them like this:
y = np.array([...]) # starting from the original dataset
x = np.arange(len(y))
peak_value = y.max()
mean = x[y.argmax()] # observation of the data shows that the peak is close to the center of the interval of the x-data
sigma = mean - np.where(y > peak_value * np.exp(-.5))[0][0] # when x is sigma in the gaussian model, the function evaluates to a*exp(-.5)
popt,pcov = curve_fit(gaus, x, y, p0=[peak_value, mean, sigma])
print(popt) # prints: [ 2.44402560e+16 4.90000000e+01 1.20588976e+01]
Note that in your code, for the mean you take sum(x*y)/n, which is strange, because this would modulate the Gaussian by a polynomial of degree 1 (it multiplies the Gaussian with a monotonically increasing line of constant slope) before taking the mean. That will offset the mean value of y (in this case to the right). A similar remark can be made for your calculation of sigma.
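If you prefer moment-based starting values instead of the argmax-based ones above, the conventional weighted estimates divide by sum(y) rather than by n (a sketch, continuing from the x and y defined in the snippet above):
# Weighted mean and spread of x, using y as (unnormalized) weights.
mean = np.sum(x * y) / np.sum(y)
sigma = np.sqrt(np.sum(y * (x - mean)**2) / np.sum(y))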
Final remark: the histogram of y will not resemble a Gaussian, as y is already a Gaussian. The histogram will merely bin (count) values into different categories (answering the question "how many datapoints in y reach a value between [a, b]?").

Solve an equation using a python numerical solver in numpy

I have an equation, as follows:
R - ((1.0 - np.exp(-tau))/(1.0 - np.exp(-a*tau))) = 0.
I want to solve for tau in this equation using a numerical solver available within numpy. What is the best way to go about this?
The values for R and a in this equation vary for different implementations of this formula, but are fixed at particular values when it is to be solved for tau.
In conventional mathematical notation, your equation is R - (1 - exp(-tau)) / (1 - exp(-a*tau)) = 0.
The SciPy fsolve function searches for a point at which a given expression equals zero (a "zero" or "root" of the expression). You'll need to provide fsolve with an initial guess that's "near" your desired solution. A good way to find such an initial guess is to just plot the expression and look for the zero crossing.
#!/usr/bin/python
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve
# Define the expression whose roots we want to find
a = 0.5
R = 1.6
func = lambda tau : R - ((1.0 - np.exp(-tau))/(1.0 - np.exp(-a*tau)))
# Plot it
tau = np.linspace(-0.5, 1.5, 201)
plt.plot(tau, func(tau))
plt.xlabel("tau")
plt.ylabel("expression value")
plt.grid()
plt.show()
# Use the numerical solver to find the roots
tau_initial_guess = 0.5
tau_solution = fsolve(func, tau_initial_guess)
print "The solution is tau = %f" % tau_solution
print "at which the value of the expression is %f" % func(tau_solution)
You can rewrite the equation as R*x**a - x + 1 - R = 0, where x = exp(-tau). Note that:
For integer a and non-zero R you will get solutions in the complex plane;
there are analytical solutions for a = 0, 1, ..., 4 (see here);
so in general you may have one, several, or no real solutions, and some or all of them may be complex. You can easily throw scipy.optimize.root at this equation, but no numerical method is guaranteed to find all of the solutions.
To solve in the complex space:
import numpy as np
from scipy.optimize import root
def poly(xs, R, a):
    # xs holds the real and imaginary parts of the complex unknown x
    x = complex(*xs)
    err = R * x**a - x + 1 - R
    return [err.real, err.imag]
root(poly, x0=[0, 0], args=(1.2, 6))
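Since the substitution x = exp(-tau) turns the equation into a polynomial when a is an integer, numpy.roots will give you all of its roots at once (a sketch for R = 1.2, a = 6; tau is then recovered as -log(x), which is complex in general):
import numpy as np

R, a = 1.2, 6
# Coefficients of R*x**a - x + (1 - R), highest power first.
coeffs = np.zeros(a + 1)
coeffs[0] = R
coeffs[-2] = -1.0
coeffs[-1] = 1.0 - R
xs = np.roots(coeffs)
# x = 1 (i.e. tau = 0) is always a root of the polynomial, but it corresponds to the
# 0/0 point of the original expression, so discard it when reading the results.
taus = -np.log(xs.astype(complex))
print(taus)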

Computation of Kullback-Leibler (KL) distance between text-documents using numpy

My goal is to compute the KL distance between the following text documents:
1)The boy is having a lad relationship
2)The boy is having a boy relationship
3)It is a lovely day in NY
I first of all vectorised the documents in order to easily apply numpy
1)[1,1,1,1,1,1,1]
2)[1,2,1,1,1,2,1]
3)[1,1,1,1,1,1,1]
I then applied the following code for computing KL distance between the texts:
import numpy as np
import math
from math import log
v = [[1,1,1,1,1,1,1], [1,2,1,1,1,2,1], [1,1,1,1,1,1,1]]
c = v[0]

def kl(p, q):
    p = np.asarray(p, dtype=np.float)
    q = np.asarray(q, dtype=np.float)
    return np.sum(np.where(p != 0, (p-q) * np.log10(p / q), 0))

for x in v:
    KL = kl(x, c)
    print KL
Here is the result of the above code: [0.0, 0.602059991328, 0.0].
Texts 1 and 3 are completely different, but the distance between them is 0, while texts 1 and 2, which are highly related, have a distance of 0.602059991328. This isn't accurate.
Does anyone has an idea of what I'm not doing right with regards to KL? Many thanks for your suggestions.
Though I hate to add another answer, there are two points here. First, as Jaime pointed out in the comments, KL divergence (or distance; according to the documentation below, they are the same) is designed to measure the difference between probability distributions. This basically means that what you pass to the function should be two array-likes, the elements of each of which sum to 1.
Second, scipy apparently does implement this, with a naming scheme more related to the field of information theory. The function is "entropy":
scipy.stats.entropy(pk, qk=None, base=None)
http://docs.scipy.org/doc/scipy-dev/reference/generated/scipy.stats.entropy.html
From the docs:
If qk is not None, then compute a relative entropy (also known as Kullback-Leibler divergence or Kullback-Leibler distance) S = sum(pk * log(pk / qk), axis=0).
A bonus of this function is that it will normalize the vectors you pass if they do not sum to 1 (though this means you have to be careful with the arrays you pass, i.e. how they are constructed from the data).
Hope this helps; at least a library provides it, so you don't have to code your own.
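For example (a sketch using the count vectors from the question; entropy normalizes them to probabilities internally and uses the natural log by default):
import numpy as np
from scipy.stats import entropy

p = np.array([1, 1, 1, 1, 1, 1, 1], dtype=float)  # counts for text 1
q = np.array([1, 2, 1, 1, 1, 2, 1], dtype=float)  # counts for text 2
# KL divergence D(p || q); note it is not symmetric, entropy(p, q) != entropy(q, p) in general.
print(entropy(p, q))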
After a bit of googling to understand the KL concept, I think that your problem is due to the vectorization: you're comparing the number of appearances of different words. You should either link each column index to one word, or use a dictionary:
# The boy is having a lad relationship It lovely day in NY
1)[1 1 1 1 1 1 1 0 0 0 0 0]
2)[1 2 1 1 1 0 1 0 0 0 0 0]
3)[0 0 1 0 1 0 0 1 1 1 1 1]
Then you can use your kl function.
To automatically vectorize to a dictionary, see How to count the frequency of the elements in a list? (collections.Counter is exactly what you need). Then you can loop over the union of the keys of the dictionaries to compute the KL distance, as sketched below.
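A sketch of that approach with collections.Counter (the word order in the shared vocabulary is arbitrary, but it must be the same for every document):
from collections import Counter

doc1 = "The boy is having a lad relationship".split()
doc3 = "It is a lovely day in NY".split()
c1, c3 = Counter(doc1), Counter(doc3)
vocab = sorted(set(c1) | set(c3))    # union of the two vocabularies
v1 = [c1[w] for w in vocab]          # 0 where a word does not occur
v3 = [c3[w] for w in vocab]
Keep in mind that zero counts in q make the raw KL formula divide by zero, so some smoothing (e.g. adding 1 to every count) is usually applied before computing it.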
A potential issue might be in your NumPy definition of KL. Read the Wikipedia page for the formula: http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence
Note that you multiply (p-q) by the log result. According to the KL formula, this should be only p:
return np.sum(np.where(p != 0,(p) * np.log10(p / q), 0))
That may help...
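Putting the two fixes together (a sketch): normalize the count vectors to probabilities and keep only p in front of the log:
import numpy as np

def kl(p, q):
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()  # turn counts into probability distributions
    return np.sum(np.where(p != 0, p * np.log10(p / q), 0))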