I want to define a variable that is less than 1 but greater than 0. I don't know if that can be done from the beginning. I have something like this:
delta = var(r'\delta', positive=True)
I'm using Jupyter.
Using your definition of delta, let n = 1/(1 + delta) (or delta/(1 + delta)) and it will be as you have described.
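For instance, a minimal sketch of that construction (the symbols call and the is_positive checks are just my illustration):
from sympy import symbols, simplify

delta = symbols('delta', positive=True)  # delta > 0 by assumption
n = 1 / (1 + delta)                      # then 0 < n < 1

print(n.is_positive)                # True: n > 0
print(simplify(1 - n).is_positive)  # True: 1 - n = delta/(1 + delta) > 0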
I am not practiced in SymPy manipulation. I need to find the roots of a particular polynomial:
-4*x**(11/2) - 24*x**(9/2) - 16*x**(7/2) + 2*x**(5/2) + 16*x**5 + 23*x**4 + 5*x**3 - x**2
I verified that there are 2 real solutions, and I found one of them with SymPy's
nsolve(mypoly, x, 1)
Why doesn't that call find the other one? How can I proceed to find ALL the roots?
Thank you all for your assistance.
A.
To my knowledge, nsolve looks in the proximity of the provided initial guess and finds one root for each equation.
I would plot the expression to find suitable initial guesses:
from sympy import *
from sympy.plotting import PlotGrid

x = symbols("x")
expr = -4*x**(S(11)/2) - 24*x**(S(9)/2) - 16*x**(S(7)/2) + 2*x**(S(5)/2) + 16*x**5 + 23*x**4 + 5*x**3 - x**2
p1 = plot(expr, (x, 0, 0.5), adaptive=False, n=1000, ylim=(-0.01, 0.05), show=False)
p2 = plot(expr, (x, 0, 5), adaptive=False, n=1000, ylim=(-200, 200), show=False)
PlotGrid(1, 2, p1, p2)
Now, we can do:
nsolve(expr, x, 0.2)
# out: 0.169003536680445
nsolve(expr, x, 4)
# out: 4.28968831654177
EDIT: to find all the roots (even the complex ones), we can:
1. compute the derivative of the expression.
2. convert both the expression and the derivative to numerical functions with SymPy's lambdify.
3. visually inspect the expression in the complex plane to determine good initial values for the root-finding algorithm. I'm going to use this plotting module, SymPy Plotting Backend, which exposes a very handy function, plot_complex, to generate domain-coloring plots. In particular, I will plot alternating black and white stripes corresponding to the modulus.
4. use SciPy's newton method to compute the actual roots. EDIT: I just discovered that nsolve works too :)
# steps 1 and 2: numerical functions for the expression and its derivative
f = lambdify(x, expr)
f_der = lambdify(x, expr.diff(x))
# step 3: domain-coloring plot over a rectangle of the complex plane
from spb import plot_complex
r = (x, -1 - 0.8j, 4.5 + 0.8j)
w = r[2].real - r[1].real
h = r[2].imag - r[1].imag
# number of discretization points; watch out for memory usage
n1 = 1500
n2 = int(h / w * n1)
plot_complex(expr, r, {"interpolation": "spline36"}, grid=False, coloring="e", n1=n1, n2=n2, size=(10, 5))
In the above picture we see circular stripes getting bigger and deforming. The center of each set of circular stripes marks a pole or a zero. But this is an easy case: there are no poles. So, from the above picture we count 7 zeros. We already know 3 of them: the two computed above and the value 0. Let's find the others:
from scipy.optimize import newton
r1 = newton(f, x0=-0.9+0.1j, fprime=f_der)
r2 = newton(f, x0=-0.9-0.1j, fprime=f_der)
r3 = newton(f, x0=0.6+0.6j, fprime=f_der)
r4 = newton(f, x0=0.6-0.6j, fprime=f_der)
for r in (r1, r2, r3, r4):
    print(r, ": is it a zero?", expr.subs(x, r).evalf())
# out:
# (-0.9202719950522663+0.09010409402273806j) : is it a zero? -8.21787666002984e-15 + 2.06697764417957e-15*I
# (-0.9202719950522663-0.09010409402273806j) : is it a zero? -8.21787666002984e-15 - 2.06697764417957e-15*I
# (0.6323265751497729+0.6785871500619469j) : is it a zero? -2.2103533615688e-15 - 2.77549897301442e-15*I
# (0.6323265751497729-0.6785871500619469j) : is it a zero? -2.2103533615688e-15 + 2.77549897301442e-15*I
As you can see, inserting those values into the original expression gives values very, very close to zero. It is perfectly normal to see this kind of numerical error.
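If those residuals bother you, note that evalf accepts a chop=True flag that replaces tiny real and imaginary parts with exact zeros (standard SymPy, shown here on r1 from above):
print(expr.subs(x, r1).evalf(chop=True))
# out: 0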
I just discovered that you can also use nsolve instead of newton to compute complex roots. This makes steps 1 and 2 unnecessary.
nsolve(expr, x, -0.9+0.1j)
# out: -0.920271995052266 + 0.0901040940227375*I
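As an aside that goes beyond the original answer: every exponent in expr is a multiple of 1/2, so substituting u = sqrt(x) turns it into an ordinary degree-11 polynomial whose roots can all be computed at once. A sketch, reusing x and expr from above; because of the branch choice of sqrt, every u-root has to be verified after mapping back with x = u**2:
u = symbols("u")
poly_u = Poly(expr.subs(sqrt(x), u), u)  # degree-11 polynomial in u
u_roots = poly_u.nroots()                # all 11 complex roots at once
# keep only u-roots whose x = u**2 really satisfies expr = 0
# (roots with Re(u) < 0 are artifacts of the branch choice)
x_roots = [r**2 for r in u_roots
           if abs(complex(expr.subs(x, r**2).evalf())) < 1e-8]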
I'm using sympy.dsolve to solve a simple ODE for a decay chain.
The answer I get for different decay rates (e.g. lambda_1 > lambda_2) is wrong. After substituting C1 = 0, I get a simple exponential
-N_0*lambda_1*exp(-lambda_1*t)/(lambda_1 - lambda_2)
instead of the correct answer, which contains
(exp(-lambda_1*t) - exp(-lambda_2*t)).
What am I doing wrong?
Here is my code:
import sympy as sp

sp.var('lambda_1 lambda_2 t')
sp.var('N_0')
daughter = sp.Function('N2', positive=True)(t)
stage1 = N_0*sp.exp(-lambda_1*t)
eq = sp.Eq(daughter.diff(t), stage1*lambda_1 - daughter*lambda_2)
sp.dsolve(eq, daughter)
Your differential equation is (using shorter variable identifiers)
y' = A*N0*exp(-A*t) - B*y
Apply the integrating factor exp(B*t) to get the equivalent
(exp(B*t)*y(t))' = A*N0*exp((B-A)*t)
Integrate to get
exp(B*t)*y(t) = A*N0*exp((B-A)*t)/(B-A) + C
y(t) = A*N0*exp(-A*t)/(B-A) + C*exp(-B*t)
which is exactly what the solver computed.
Did you plug in C1 = 0, expecting to get a solution such that y(0) = 0? That's not how it works. C1 is an arbitrary constant in the formula, setting it to 0 is no guarantee that the expression will evaluate to 0 when t = 0.
Here is a correct approach, step by step:
sol1 = sp.dsolve(eq, daughter)
This returns a Piecewise because it's not known whether the two lambdas are equal:
Eq(N2(t), (C1 + N_0*lambda_1*Piecewise((t, Eq(lambda_2, lambda_1)), (-exp(lambda_2*t)/(lambda_1*exp(lambda_1*t) - lambda_2*exp(lambda_1*t)), True)))*exp(-lambda_2*t))
We can clarify that they are not:
sol2 = sol1.subs(Eq(lambda_2, lambda_1), False)
getting
Eq(N2(t), (C1 - N_0*lambda_1*exp(lambda_2*t)/(lambda_1*exp(lambda_1*t) - lambda_2*exp(lambda_1*t)))*exp(-lambda_2*t))
Next, we need C1 such that the right-hand side turns into 0 when t = 0. So, take the right-hand side, plug in 0 for t, and solve for C1:
C1 = sp.Symbol('C1')
C1val = sp.solve(sol2.rhs.subs(t, 0), C1, dict=True)[0][C1]
(It's not necessary to include dict=True, but I like it because it enforces uniform output from solve: a list of dictionaries.)
By the way, C1val is now N_0*lambda_1/(lambda_1 - lambda_2). Put it in:
sol3 = sol2.subs(C1, C1val).simplify()
and there you have it:
Eq(N2(t), N_0*lambda_1*(exp(lambda_1*t) - exp(lambda_2*t))*exp(-t*(lambda_1 + lambda_2))/(lambda_1 - lambda_2))
The expression is equivalent to N_0*lambda_1*(exp(-lambda_2*t) - exp(-lambda_1*t))/(lambda_1 - lambda_2), although SymPy seems loath to combine the exponentials here.
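As a side note, newer SymPy versions can apply the initial condition for you through the ics argument of dsolve, which condenses the steps above into one call (the Piecewise substitution may still be needed when the lambdas are not known to be distinct):
sol = sp.dsolve(eq, daughter, ics={daughter.subs(t, 0): 0})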
I am trying to optimize parameters for a known function to fit an experimental data plot. The function is fairly involved:
y = x**2 * p**2 * g / ((c**2 - x**2)**2 + x**2 * g**2)
where x sweeps along a known set of numbers and p, g and c are the independent parameters to be optimized. Any ideas or resources that could be of assistance?
I would not recommend genetic algorithms. Instead, go for straightforward optimization.
SciPy has some resources.
You haven't provided any data, so I'll just go for something that should run. Below is something to get you started; I can't know if it works without seeing the data. Also, there is probably a way to dynamically feed objectivefunc your x and y data; that should be in the docs for scipy.optimize.minimize.
What I've done: create a function to minimize, here called objectivefunc. For that I've taken your function y = x**2 * p**2 * g / (...) and transformed it into the form x**2 * p**2 * g / (...) - y = 0. Then square the left-hand side and try to minimise it. Because you will have multiple (x, y) data samples, I'd minimise the sum of the squares. Put it all in a function and pass it to minimize from SciPy.
import numpy as np
from scipy.optimize import minimize

def objectivefunc(pgc):
    """Your function transformed so that it can be minimised.
    I've named the input pgc, so that pgc[0] is p, pgc[1] is g, pgc[2] is c.
    """
    p, g, c = pgc
    x = [10, 9.4, 17]  # Some input data.
    y = [12, 42, 0.8]
    sum_ = 0
    for i in range(len(x)):
        sum_ += (x[i]**2 * p**2 * g - y[i] * ((c**2 - x[i]**2)**2 + x[i]**2 * g**2))**2
    return sum_

pgc = np.array([1.3, 0.7, 0.5])  # Supply sensible initial values.
res = minimize(objectivefunc, pgc, method='nelder-mead',
               options={'xatol': 1e-8, 'disp': True})
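As an alternative sketch: scipy.optimize.curve_fit performs the least-squares fit directly, without writing the objective by hand. This assumes the model y = x**2 * p**2 * g / ((c**2 - x**2)**2 + x**2 * g**2) reconstructed from the objective above, and the same placeholder data:
import numpy as np
from scipy.optimize import curve_fit

def model(x, p, g, c):
    # assumed model: y = x^2 p^2 g / ((c^2 - x^2)^2 + x^2 g^2)
    return x**2 * p**2 * g / ((c**2 - x**2)**2 + x**2 * g**2)

xdata = np.array([10.0, 9.4, 17.0])  # placeholder data, as above
ydata = np.array([12.0, 42.0, 0.8])

popt, pcov = curve_fit(model, xdata, ydata, p0=[1.3, 0.7, 0.5])
print(popt)  # fitted p, g, c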
Have you tried good old Levenberg-Marquardt as implemented in Levenberg-Marquardt.vi? If it does not suit your needs, you can try the Waptia library for LabVIEW, which has several genetic algorithms implemented.
Guys,
I need your help!
Matlab: Plot this continuous-time signal in Matlab.
1. Define and plot the following continuous-time signal, and choose a time interval that is relevant for this signal:
x(t) = exp(-t)u(t) + exp(-t)(exp(2t-4) - 1)u(t-2) - exp(t-4)u(t-4)
2. For the signal x(t) above, define y(t) = 1 + 2x(-2t+3) by reusing the corresponding amplitude vector defined in question #1 and defining an appropriate time vector. Plot x(t) and y(t) on the same plot. Note that you are not allowed to recompute the signal; you have to use your knowledge about shifting, scaling, etc. (Hint: fliplr() or flipud()).
Question #2 really stops me from continuing: how can I get y(t) = 1 + 2x(-2t+3) without recomputing the signal? I know how to make it work by recomputing the signal, but the assignment requires me not to. Below is my code:
figure;
t = -5:0.01:5;
y = exp(-t).* heaviside(t) + (exp(-t) .* (exp((2*t)-4)-1) .* heaviside(t-2)) - (exp(t-4) .* heaviside(t-4));
t2 = (-2*t) + 3;
y2 = exp(-t2).* heaviside(t2) + (exp(-t2) .* (exp((2*t2)-4)-1) .* heaviside(t2-2)) - (exp(t2-4) .* heaviside(t2-4));
y3 = 1 + (2*y2);
plot(t,y,'g');
hold on;
plot(t,y3,'r')
And I tried to code it without recomputing the signal, as below:
figure;
clear all
t = -5:0.01:5;
y = exp(-t).* heaviside(t) + (exp(-t) .* (exp((2*t)-4)-1) .* heaviside(t-2)) - (exp(t-4) .* heaviside(t-4));
plot(t,y,'g');
hold on;
t2 = -t.* 2 + 3;
y2 = 1 + 2 * y;
plot(t2,y2,'r');
Without recomputing, the graph is scaled in size by 2 and moved left by 3, which is not the same as above, so I recognized my code is wrong. Also, the assignment states that we need to use fliplr() or flipud(), but I don't know where to use these functions.
Please give me some help. Thank you!
plot((t+3)/2,1+2*fliplr(y),'k')
The 1+2*y you already had. You need fliplr to get x(-t) instead of x(t). Then you need to invert the relation on t: x(2t) "moves" twice as fast as x(t), so if you plot the same data it has to be plotted on a half-size interval, hence the /2; the +3 accounts for the time shift.
I've been wrestling with this issue for a week, and I just need some guidance on the math part of it. If I could just understand the math behind it, I could piece together the functions to make it work. The assignment is:
Design and develop a C++ program for calculating e(n) when delta <= 0.000001
e(n-1) = 1 + 1/1! + 1/2! + 1/3! + 1/4! + … + 1/(n-1)!
e(n) = 1 + 1/1! + 1/2! + 1/3! + 1/4! + … + 1/(n)!
delta = e(n) – e(n-1)
You do not have any input to the program. Your output should be something like this:
N = 2 e(1) = 2 e(2) = 2.5 delta = 0.5
N = 3 e(2) = 2.5 e(3) = 2.565 delta = 0.065
...
You must use recursive function calls.
My first issue is the math and the variables that would contain the values:
the delta, e(n), and e(n-1) variables must be doubles.
If e(n) = 1 + 1/1! = 2, then e(n-1) must equal 1, which means delta = 1 (that's my thinking, anyway). I'm just not sure of the math behind the 0.5 delta the first time and the 0.065 in the second iteration.
Can someone point me in the right direction on this problem?
Thank you,
T
From the Wikipedia link, you can see that e = lim(n→∞) (1 + 1/1! + 1/2! + … + 1/n!).
I will not explain the notion of limits here, but what this basically means is that, if we define a function e where e(n) = 1 + 1/1! + 1/2! + 1/3! + 1/4! + … + 1/(n)! (which is the function given in your problem), we are able to approximate the real value of the constant e.
The higher n is, the closer we get to e.
If you look closely at the function, you can see that each time we add a term that is no larger than the previous one: 1 >= 1/1! >= 1/2! >= … >= 1/n!
That basically means that every time we increase n we get closer to e, but we slow down along the way.
The real value of e is 2.71828...
In the first step, e(0) = 1: we are 1.71828... away from the real value.
In the second step, e(1) = 2: we are 0.71828... away, 1 closer.
In the third step, e(2) = 2.5: we are now 0.21828... away, 0.5 closer.
As you can see, we are getting there, but the closer we get, the slower we move. Now let's say that at each step we want to know how much closer we have moved compared to the previous value.
We then simply compute e(n) - e(n-1). This is basically what the delta means.
At some point, we are moving so slowly that it no longer makes sense to keep going. We are almost standing still. At this point, we decide that our approximation is close enough to e.
In your case, the problem sets the minimum progression speed to 0.000001.
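To make the stopping rule concrete, here is a minimal sketch in Python (the assignment itself asks for C++ with recursive calls; this only illustrates the math):
from math import factorial

def e(n):
    # recursive partial sum: e(n) = 1 + 1/1! + ... + 1/n!
    if n == 0:
        return 1.0
    return e(n - 1) + 1.0 / factorial(n)

n = 1
while e(n) - e(n - 1) > 0.000001:  # delta = e(n) - e(n-1)
    n += 1
print(n, e(n))  # first n whose delta is <= 0.000001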
Here is a solution:
delta = e(n) - e(n-1) = 1/n!
delta < 0.000001 requires n! > 1000000
Since 9! = 362880 and 10! = 3628800, the smallest n that works is n = 10.
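A quick numerical check of where 1/n! drops below the threshold:
from math import factorial

for n in range(8, 12):
    print(n, 1 / factorial(n))
# 1/9! is about 2.8e-06 and 1/10! about 2.8e-07,
# so n = 10 is the first value with delta < 0.000001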