I'm a high school student, and I'm writing a report about velocity profiles.
I don't know much about differential equations or Python, but I have to use both of them.
I'm trying to derive the velocity from the equation ma = mg - kv, and then calculate the acceleration a and displacement s from v.
I calculated v successfully, but I have a few questions.
from sympy import *
init_printing()
%matplotlib inline

m, g, k, t = symbols('m g k t')
v = Function('v')

# m*dv/dt = m*g - k*v
deq = Eq(m*v(t).diff(t), m*g - k*v(t))
eq = dsolve(deq, v(t))

# try to fix the integration constant from the solution at t = 0
C1 = Symbol('C1')
C1_ic = solve(eq.rhs.subs({t: 0}), C1)[0]
r = expand(eq.subs({C1: C1_ic}))
The simple way of calculating C1 doesn't work here. The initial condition is
v(0) = 0
so I wrote
eq = dsolve( deq, ics={v(0):0})
but it gives the same result as
eq = dsolve( deq, v(t) )
How do I calculate the acceleration and draw a graph? I tried this code, but it doesn't work:
a = diff(r, t)
r = dsolve( a, v(t))
r.subs({m:1, g:9.8, k:1})
plot( r , (t,0,100))
I don't get the same result from eq = dsolve( deq, ics={v(0):0}). Also, you should declare m, g and k with positive=True:
In [50]: m, g, k = symbols('m g k', positive=True)
In [51]: t = Symbol('t')
In [52]: v = Function('v')
In [53]: deq = Eq( m*v(t).diff(t), m*g - k*v(t) )
In [54]: deq
Out[54]: Eq(m*Derivative(v(t), t), g*m - k*v(t))
In [55]: dsolve(deq, v(t))
Out[55]: Eq(v(t), (g*m + exp(k*(C1 - t)/m))/k)
In [56]: dsolve(deq, v(t), ics={v(0):0})
Out[56]: Eq(v(t), (g*m + exp(k*(-t + m*log(g)/k + m*log(m)/k + I*pi*m/k)/m))/k)
In [57]: sol = dsolve(deq, v(t), ics={v(0):0}).rhs
In [58]: sol.expand()
Out[58]: g*m/k - g*m*exp(-k*t/m)/k
In [59]: factor_terms(sol.expand())
Out[59]: g*m*(1 - exp(-k*t/m))/k
You can compute and plot the acceleration like this:
In [62]: sol = factor_terms(sol.expand())
In [64]: a = sol.diff(t)
In [65]: a = sol.diff(t).subs({m:1, g:9.8, k:1})
In [66]: a
Out[66]: 9.8*exp(-t)
In [67]: plot(a, (t, 0, 100))
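For reference, the whole computation can be collected into one plain script (a minimal sketch assembled from the session above; the numbers m=1, g=9.8, k=1 and the plot range are the ones already used):

from sympy import Eq, Function, Symbol, dsolve, factor_terms, plot, symbols

m, g, k = symbols('m g k', positive=True)
t = Symbol('t')
v = Function('v')

# m*dv/dt = m*g - k*v, with v(0) = 0
deq = Eq(m*v(t).diff(t), m*g - k*v(t))
sol = factor_terms(dsolve(deq, v(t), ics={v(0): 0}).rhs.expand())  # g*m*(1 - exp(-k*t/m))/k

# the acceleration is the time derivative of the velocity
a = sol.diff(t)  # g*exp(-k*t/m)

nums = {m: 1, g: 9.8, k: 1}
plot(sol.subs(nums), (t, 0, 100))  # velocity levels off at g*m/k
plot(a.subs(nums), (t, 0, 100))    # acceleration decays to zero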
How to model the following problem with Pyomo and the Gurobi solver?
This is a non-convex problem and cannot be modeled as a QCQP. (From the code below, the problem is: minimize 0.5*||L*V - Delta||_F^2 over V, subject to K*V = P and, for each row o_i of O, ||o_i*V||^2 = l_i.) So I want to solve it with Pyomo and the Gurobi solver. The most difficult part is the matrix algebra (Pyomo does not support it yet).
Many thanks!
Here is my own answer. Please let me know if there are any mistakes. Thanks!
import numpy as np
import pyomo.environ as pyo

## get your params (left as placeholders here)
L, Delta = ...
K, P = ...
O, l = ...
ones3 = np.ones((3, 1))
model = pyo.ConcreteModel()
model.r = pyo.RangeSet(0,2) # 3
model.s = pyo.RangeSet(0,0) # 1
model.n = pyo.RangeSet(0,len(L)-1) # n
model.m = pyo.RangeSet(0,len(K)-1) # m
model.q = pyo.RangeSet(0,len(O)-1) # q
def param_rule(I, J, mat):
    return dict(((i, j), mat[i, j]) for i in I for j in J)
model.L = pyo.Param(model.n, model.n, initialize=param_rule(model.n, model.n, L)) # L
model.Delta = pyo.Param(model.n, model.r, initialize=param_rule(model.n, model.r, Delta)) # Delta
model.K = pyo.Param(model.m, model.n, initialize=param_rule(model.m, model.n, K)) # K
model.P = pyo.Param(model.m, model.r, initialize=param_rule(model.m, model.r, P)) # P
model.O = pyo.Param(model.q, model.n, initialize=param_rule(model.q, model.n, O)) # O
model.ones3 = pyo.Param(model.r, model.s, initialize=param_rule(model.r, model.s, ones3)) # ones3
model.l = pyo.Param(model.q, initialize=dict(((i),l[i,0]) for i in model.q)) # l
model.V = pyo.Var(model.n, model.r, initialize=param_rule(model.n, model.r, Delta)) # V
def ObjRule(model):
    return 0.5*sum((sum(model.L[i,k]*model.V[k,j] for k in model.n) - model.Delta[i,j])**2 for i in model.n for j in model.r)
model.obj = pyo.Objective(rule=ObjRule, sense=pyo.minimize)
model.constraints = pyo.ConstraintList()
# constraint 1: K V = P
for i in model.m:
    for j in model.r:
        model.constraints.add(sum(model.K[i,k] * model.V[k,j] for k in model.n) == model.P[i,j])
# constraint 2: squared row norms of O V equal l
for i in model.q:
    model.constraints.add(sum(sum(model.O[i,k] * model.V[k,j] for k in model.n)**2 for j in model.r) == model.l[i])
opt = pyo.SolverFactory('gurobi')
opt.options['nonconvex'] = 2
solution = opt.solve(model)
V_opt = np.array([pyo.value(model.V[i,j]) for i in model.n for j in model.r]).reshape(len(model.n),len(model.r))
obj_values = pyo.value(model.obj)
print("optimum point: \n {} ".format(V_opt))
print("optimal objective: {}".format(obj_values))
I want to specify an N x M matrix where N >> M, use the Woodbury identity,
and have SymPy simplify the matrix algebra.
Is this possible, or is there something similar?
Is there a way in SymPy to simplify matrix algebra, say with the Woodbury identity?
Yes, it's possible to apply the Woodbury identity. In the code below, the identity is applied to (I + Phi^T * Phi)^(-1).
The particular case used is (I+UV)^-1 = I - U*(I+VU)^-1*V.
from sympy.simplify.simplify import nc_simplify
from sympy import *

N, M, sigma = symbols(r"N M \sigma")
Phi = MatrixSymbol("Phi", N, M)
In, Im = Identity(N), Identity(M)

f = (Im + Phi.transpose() @ Phi).inverse()
f = f @ Phi.transpose()
f = Phi @ f
f = In - sigma**(-2)*f
f = sigma**(-2)*Phi.transpose() @ f
f = f @ Phi

def apply_woodbury_1(e, U, V):
    # The Woodbury identity
    #
    #   (I+UV)^-1 = I - U * (I+VU)^-1 * V
    #
    # Below, P is the LHS and Q the RHS.
    UV_dim = (U * V).shape
    I1 = Identity(UV_dim[0])
    VU_dim = (V * U).shape
    I2 = Identity(VU_dim[0])
    P = (I1 + U*V)**(-1)
    Q = I1 - U * ((I2 + V*U)**(-1)) * V
    return e.replace(P, Q)

display(f)
f = apply_woodbury_1(f, Phi.transpose(), Phi)
display(f)
f = nc_simplify(f.expand())
display(f)
Output: the three rendered matrix expressions (images omitted here).
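As a quick numeric spot-check of the identity itself (a sketch of my own, not part of the original answer), with a random matrix standing in for Phi:

import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 2
A = rng.standard_normal((n, m))  # plays the role of Phi

# (I + U V)^-1 == I - U (I + V U)^-1 V, with U = A^T and V = A
lhs = np.linalg.inv(np.eye(m) + A.T @ A)
rhs = np.eye(m) - A.T @ np.linalg.inv(np.eye(n) + A @ A.T) @ A
print(np.allclose(lhs, rhs))  # True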
A simple problem of entropy maximization in statistical mechanics is formulated as follows.
The goal is to maximize the entropy function (LaTeX is still missing on Stack Exchange?):
H = - sum_x P_x ln P_x
subject to the following constraints: the normalization constraint
1 = sum_x P_x
and the average-energy constraint
U = sum_x E_x P_x
where x runs over 1,2,...,n. E_x represents the energy of the system when it is in microscopic state x and P_x is the probability for the system to be in microscopic state x.
The solution to such a problem can be obtained by the method of Lagrange multipliers. In this context, it works as follows...
Firstly, the Lagrangian is defined as
L = H + a( 1 - sum_x P_x ) + b( U - sum_x P_x E_x )
Here, a and b are the Lagrange multipliers. The Lagrangian L is a function of a, b and the probabilities P_x for x=1,2,...,n. The term a( 1 - sum_x P_x ) corresponds to the normalization constraint and the term b( U - sum_x P_x E_x ) to the average-energy constraint.
Secondly, the partial derivatives of L with respect to a, b and the P_x for the different x=1,2,...,n are calculated. These result in
dL/da = 1 - sum_x P_x
dL/db = U - sum_x E_x P_x
dL/dP_x = dH/dP_x - a - b E_x
        = - ln P_x - 1 - a - b E_x
Thirdly, we find the solution by equating these derivatives to zero. This makes sense since there are 2+n equations and 2+n unknowns: the P_x, a and b. Setting dL/dP_x = 0 gives P_x = exp( -1 - a - b E_x ), and the normalization constraint fixes the prefactor exp( -1 - a ) = 1/Z, so the solution reads
P_x = exp( - b E_x ) / Z
where
Z = sum_x exp( - b E_x )
and b is implicitly determined by the relation
U = sum_x P_x E_x = ( 1 / Z ) sum_x exp( -b E_x ) E_x
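A minimal SymPy sketch (my own check, for a single generic state) confirming that this closed form satisfies the stationarity condition once a is eliminated through normalization:

import sympy as sy

a, b, Ex, Z = sy.symbols('a b E_x Z', positive=True)
Px = sy.exp(-b*Ex)/Z  # the claimed solution

# stationarity condition dL/dP_x = -ln(P_x) - 1 - a - b*E_x
stat = -sy.log(Px) - 1 - a - b*Ex

# normalization fixes exp(-1 - a) = 1/Z, i.e. a = ln(Z) - 1
print(sy.simplify(sy.expand_log(stat.subs(a, sy.log(Z) - 1))))  # -> 0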
Now that the "mathematical" problem is defined, let's state the "computational" problem (which is the one I want to ask about here).
The computational problem is the following: I would like to reproduce the previous derivation in SymPy.
The idea is to automate the process so that, eventually, I can attack similar but more complicated problems.
I have already made some progress. Still, I think I haven't used the best approach. This is my solution.
# Let's attempt to derive these analytical results using SymPy.
import sympy as sy
import sympy.tensor as syt

# Here, n is introduced to specify an abstract range for x and y.
n = sy.symbols('n', integer=True)
a, b = sy.symbols('a b')       # the Lagrange multipliers
x = syt.Idx('x', n)            # index x for P_x
y = syt.Idx('y', n)            # index y for P_y; required to take derivatives according to SymPy rules
P = syt.IndexedBase('P')       # the unknowns P_x
E = syt.IndexedBase('E')       # the knowns E_x; each being the energy of state x
U = sy.Symbol('U')             # mean energy

# Entropy
H = sy.Sum(-P[x]*sy.log(P[x]), x)

# Lagrangian
L = H + a*(1 - sy.Sum(P[x], x)) + b*(U - sy.Sum(E[x]*P[x], x))

# Let's compute the derivatives
dLda = sy.diff(L, a)
dLdb = sy.diff(L, b)
dLdPy = sy.diff(L, P[y])

# These look like
print(dLda)
# -Sum(P[x], (x, 0, n - 1)) + 1
print(dLdb)
# U - Sum(E[x]*P[x], (x, 0, n - 1))
print(dLdPy)
# -a*Sum(KroneckerDelta(x, y), (x, 0, n - 1)) - b*Sum(KroneckerDelta(x, y)*E[x], (x, 0, n - 1)) + Sum(-log(P[x])*KroneckerDelta(x, y) - KroneckerDelta(x, y), (x, 0, n - 1))

# The following approach does not work
tmp = dLdPy.doit()
print(tmp)
# -a*Piecewise((1, 0 <= y), (0, True)) - b*Piecewise((E[y], 0 <= y), (0, True)) + Piecewise((-log(P[y]) - 1, 0 <= y), (0, True))
print(sy.solve(tmp, P[y]))
# []

# Hence, we try an ad-hoc procedure
Px = sy.Symbol('Px')
Ex = sy.Symbol('Ex')
tmp2 = dLdPy.doit().subs(P[y], Px).subs(E[y], Ex).subs(y, 0)
print(tmp2)
# -Ex*b - a - log(Px) - 1
Px = sy.solve(tmp2, Px)
print(Px)
# [exp(-Ex*b - a - 1)]
Is there a "better" way to proceed? In particular, I don't like the idea of substituting P[y] with Px and E[y] with Ex in order to solve the equations. Why can't the equations be solved directly in terms of P[y]?
Part of my assignment is to implement gradient descent to find the best approximation of the values c_1, c_2 and r_1 for the function
y = c_1*exp(r_1*x) + (c_1*x)^3 - (c_2*x)^2.
Given is only a list of 30 y-values corresponding to x = 0, 1, ..., 29. I am implementing this in Enthought Canopy like this:
First I start with random values:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as pyplt
c1 = -0.1
c2 = 0.1
r1 = 0.1
x = np.linspace(0,29,30) #start,stop,numitems
y = c1*np.exp(r1*x) + (c1*x)**3.0 - (c2*x)**2.0
pyplt.plot(x,y)
values_x = np.linspace(0,29,30)
values_y = np.array([0.2, -0.142682939241718, -0.886680607211679, -2.0095087143494, -3.47583798747496, -5.24396052331554, -7.2690008846359, -9.50451068338581, -11.9032604272567, -14.4176327390446, -16.9998176236069, -19.6019094345634, -22.1759550265352, -24.6739776668383, -27.0479889096801, -29.2499944927101, -31.2319972651608, -32.945998641919, -34.3439993255969, -35.3779996651013, -35.9999998336943, -36.161999917415, -35.8159999589895, -34.9139999796348, -33.4079999898869, -31.249999994978, -28.3919999975061, -24.7859999987616, -20.383999999385, -15.1379999996945])
pyplt.plot(values_x,values_y)
The squared error is quite high:
def Error(y, y0):
    return 1.0*sum((y - y0)**2.0)

print(Error(y, values_y))
Now, to implement gradient descent, I derived the partial derivatives with respect to c_1, c_2 and r_1 and implemented the descent loop:
step_size = 0.0000005
accepted_Error = 50
dc1 = c1
dc2 = c2
dr1 = r1
y0 = values_y
previous_Error = 100000
left = True

for _ in range(1000):
    gc1 = 2.0*sum((y - dc1*np.exp(dr1*x) - (dc1*x)**3 + (dc2*x)**2) * (-1*np.exp(dr1*x) - 3*(dc1**2)*(x**3)))
    gc2 = 2.0*sum((y - dc1*np.exp(dr1*x) - (dc1*x)**3 + (dc2*x)**2) * (2*dc2*(x**2)))
    gr1 = 2.0*sum((y - dc1*np.exp(dr1*x) - (dc1*x)**3 + (dc2*x)**2) * (-1*dc1*x*np.exp(dr1*x)))
    dc1 = dc1 - step_size*gc1
    dc2 = dc2 - step_size*gc2
    dr1 = dr1 - step_size*gr1
    y1 = dc1*np.exp(dr1*x) + (dc1*x)**3.0 - (dc2*x)**2.0
    current_Error = Error(y0, y1)
    if current_Error > accepted_Error:
        print(current_Error)
    else:
        break
    if current_Error > previous_Error:
        print(current_Error)
        print("DIVERGING")
        break
    if current_Error == previous_Error:
        print("CAN'T IMPROVE")
        break
    previous_Error = current_Error
However, the error is not improving at all, even when I vary the step size. Is there a mistake in my code?
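As an aside, the hand-derived gradients can be sanity-checked symbolically; a minimal SymPy sketch (my own, not part of the assignment) differentiating the per-point squared error:

import sympy as sp

c1, c2, r1, x, y0 = sp.symbols('c1 c2 r1 x y0')
f = c1*sp.exp(r1*x) + (c1*x)**3 - (c2*x)**2
E = (y0 - f)**2  # squared error for one data point

print(sp.factor(sp.diff(E, c1)))  # compare with gc1
print(sp.factor(sp.diff(E, c2)))  # compare with gc2
print(sp.factor(sp.diff(E, r1)))  # compare with gr1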
import sympy as sp

t = sp.Symbol("t")
y = sp.Function("y")
v = sp.Function("v")

sol = sp.dsolve(
    [y(t).diff(t) - v(t),
     v(t).diff(t) + 2*y(t)],
    ics={y(0): 2, v(0): 0.45})
sol
gives:
[y(t) == C1*sin(sqrt(2)*t) + C2*cos(sqrt(2)*t),
v(t) == sqrt(2)*C1*cos(sqrt(2)*t) - sqrt(2)*C2*sin(sqrt(2)*t)]
Why are C1 and C2 not calculated?
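A possible workaround (a sketch of my own, assuming the installed SymPy ignores ics for systems and returns the general solution as a list of Eq, as shown above) is to solve for the constants explicitly, just as the first question on this page did for C1:

import sympy as sp

t = sp.Symbol("t")
y = sp.Function("y")
v = sp.Function("v")

# general solution with free constants C1 and C2
gen = sp.dsolve([y(t).diff(t) - v(t), v(t).diff(t) + 2*y(t)])

C1, C2 = sp.symbols("C1 C2")
consts = sp.solve(
    [gen[0].rhs.subs(t, 0) - 2,      # y(0) = 2
     gen[1].rhs.subs(t, 0) - 0.45],  # v(0) = 0.45
    [C1, C2],
)
particular = [eq.subs(consts) for eq in gen]
print(particular)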