Is it possible to create indexed functions in SymPy, like f_i(t), which might be used in a product or sum, e.g. Σ f_i(t)?
import sympy as sp
f = sp.Function('f')
i = sp.symbols('i', integer=True)
t = sp.symbols('t', real=True)
sp.Indexed(f, i)(t)
The above code produces the following error:
TypeError:
The base can only be replaced with a string, Symbol, IndexedBase or an
object with a method for getting items (i.e. an object with a
`__getitem__` method).
Assuming that you just want graphically pleasing output, you can use the following:
import sympy as sp
class f(sp.Function):
    name = 'f'

    def _latex(self, printer=None):
        # Print f(i, t, ...) as f_{i}(t, ...): the first argument becomes
        # the subscript and the remaining arguments go inside the parentheses.
        a = [printer.doprint(arg) for arg in self.args]
        return r'{}_{{{}}}\left('.format(self.name, a[0]) + ','.join(a[1:]) + r'\right)'
i = sp.symbols('i', integer=True)
t = sp.symbols('t', real=True)
f(i,t)
sp.Sum(f(i,t),(i,0,sp.oo))
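As a quick check of the custom printer (a sketch; the strings in the comments are what the _latex method above should produce):
sp.latex(f(i, t))                          # f_{i}\left(t\right)
sp.latex(sp.Sum(f(i, t), (i, 0, sp.oo)))   # \sum_{i=0}^{\infty} f_{i}\left(t\right)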
I would like to replace values in a sympy NDimArray.
I have the following code
import sympy as sp
import numpy as np
e = sp.MatrixSymbol('e',3,3)
E = sp.Matrix(e)
# Make E symmetric
E[1,0] = E[0,1]
E[2,0] = E[0,2]
E[2,1] = E[1,2]
result = sp.tensorproduct(E,E)
E_tst = np.random.rand(3,3)
E_tst[1,0] = E_tst[0,1]
E_tst[2,0] = E_tst[0,2]
E_tst[2,1] = E_tst[1,2]
resultNumeric = np.tensordot(E_tst,E_tst,axes=0)
check = resultNumeric - result.as_mutable().subs({E: sp.Matrix(E_tst)})
I get the error AttributeError: 'MutableDenseNDimArray' object has no attribute 'subs'.
How can I replace the symbols in an NDimArray?
Unfortunately, MutableDenseNDimArray doesn't inherit from Basic, whereas ImmutableDenseNDimArray does, so some attributes, such as subs, are not available on the mutable variant. Don't ask me about this design decision.
However, you can achieve the same result by creating a substitution dictionary:
# substitution dictionary
d = {k: v for k, v in zip(list(E), list(sp.Matrix(E_tst)))}
check = resultNumeric - result.subs(d)
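To verify the substitution numerically, you can convert check (now purely numeric) to a NumPy array via nested lists; this verification step is my addition, not part of the original answer:
import numpy as np
check_arr = np.array(check.tolist(), dtype=float)
print(np.allclose(check_arr, 0))  # should print True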
I am trying to parallelize a penalized linear model using the multiprocessing library in Python.
I created a function that solves my model:
from __future__ import division
import numpy as np
from cvxpy import *
def lm_lasso_solver(x, y, lambda1):
    n = x.shape[0]
    m = x.shape[1]
    lambda1_param = Parameter(sign="positive")
    betas_var = Variable(m)
    response = dict(model='lm', penalization='l')
    response["parameters"] = {"lambda_vector": lambda1}
    lasso_penalization = lambda1_param * norm(betas_var, 1)
    lm_penalization = 0.5 * sum_squares(y - x * betas_var)
    objective = Minimize(lm_penalization + lasso_penalization)
    problem = Problem(objective)
    lambda1_param.value = lambda1
    try:
        problem.solve(solver=ECOS)
    except:
        try:
            problem.solve(solver=CVXOPT)
        except:
            problem.solve(solver=SCS)
    beta_sol = np.asarray(betas_var.value).flatten()
    response["solution"] = beta_sol
    return response
In this function, x is the matrix of predictors and y is the response variable; lambda1 is the parameter to be tuned, and hence the one I want to parallelize over. I saved this script in a Python file called "ms.py".
Then I created another python file called "parallelization.py" and in that file I defined the following:
import multiprocessing as mp
import ms
import functools
def myFunction(x, y, lambda1):
    pool = mp.Pool(processes=mp.cpu_count())
    results = pool.map(functools.partial(ms.lm_lasso_solver, x=x, y=y), lambda1)
    return results
The idea was then to execute, in the Python interpreter:
from sklearn.datasets import load_boston
boston = load_boston()
x = boston.data
y = boston.target
runfile('parallelization.py')
lambda_vector = np.array([1,2,3])
myFunction(x, y, lambda_vector)
But when I do this, I get a TypeError telling me that lm_lasso_solver() got multiple values for argument 'x'.
The problem is on the line:
results = pool.map(functools.partial(ms.lm_lasso_solver, x=x, y=y), lambda1)
You are calling functools.partial() with x and y as keyword arguments, but pool.map() then passes each element of lambda1 as the first positional argument, which collides with the x you have already bound. Bind x and y positionally instead, so that the mapped value falls through to the lambda1 slot:
results = pool.map(functools.partial(ms.lm_lasso_solver, x, y), lambda1)
or simply use the apply_async() method of the pool object:
results = pool.apply_async(ms.lm_lasso_solver, args=[x, y, lambda1])
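Note that apply_async() returns an AsyncResult, so you would call .get() on it to retrieve the actual return value. For completeness, here is a minimal sketch of the corrected map-based version; the pool shutdown calls are my addition:
import multiprocessing as mp
import functools
import ms

def myFunction(x, y, lambda1):
    pool = mp.Pool(processes=mp.cpu_count())
    results = pool.map(functools.partial(ms.lm_lasso_solver, x, y), lambda1)
    pool.close()   # no more tasks will be submitted
    pool.join()    # wait for the workers to finish
    return results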
I am trying to save the solutions of this optimization problem, but they are NoneType. When I try to convert them to float I get this error:
float() argument must be a string or a number, not 'NoneType'
This is strange, because in the solution printed by results.write(), x1 is 6.57142857142857.
from coopr.pyomo import *
from pyomo.opt import SolverFactory

def create_model(N=[], M=[], c={}, a={}, b={}):
    model = ConcreteModel()
    model.x = Var(N, within=NonNegativeReals)
    model.obj = Objective(expr=sum(c[i]*model.x[i] for i in N))

    def con_rule(model, m):
        return sum(a[i,m]*model.x[i] for i in N) >= b[m]

    model.con = Constraint(M, rule=con_rule)
    return model

model = create_model(N=[1,2], M=[1,2], c={1:1, 2:2},
                     a={(1,1):5, (2,1):4, (1,2):7, (2,2):3},
                     b={1:11, 2:46})
#model.pprint()
instance = model.create()
#instance.pprint()
opt = SolverFactory("glpk")
results = opt.solve(instance, load_solutions=False)
results.write()
x_1 = float(model.x[1].value)
#x_2 = float(model.x[2].value or 0)
First, model.create() is deprecated in the most recent version of Pyomo. I believe it has been renamed to model.create_instance().
Second, you are solving the instance object returned from model.create(), which is a different object from model. Therefore, you should be accessing the .value attribute of variables on the instance object and not the model object.
Third, you are starting from a ConcreteModel, which means there is no need to call model.create() (or model.create_instance()). This is simply creating an unnecessary copy of what is already a "concrete instance". I.e., you could send the model object to the solver and the code accessing .value would work as is.
The create_instance method is only necessary when you start from an AbstractModel, where you then typically pass the name of some .dat file to it.
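Putting the three points together, a minimal sketch of the corrected tail of the script (my rewrite, not verbatim from the question): since the model is already concrete, solve it directly and let the solver load the solution into it, rather than passing load_solutions=False:
opt = SolverFactory("glpk")
results = opt.solve(model)   # solutions are loaded into model by default
results.write()
x_1 = float(model.x[1].value)
x_2 = float(model.x[2].value)
Also note that with load_solutions=False, as in the question, the variable values are never loaded anywhere, which is why .value is None; in recent Pyomo you would load them explicitly with instance.solutions.load_from(results).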
I am currently coding a Multiple Gradient Descent algorithm in which I use kriged functions.
My problem is that I can't find out how to obtain the gradient of the kriged function (I tried to use linearize, but I can't figure out how to make it work).
from __future__ import print_function
from six import moves
from random import shuffle
import sys
import numpy as np
from numpy import linalg as LA
import math
from openmdao.braninkm import F, G, DF, DG
from openmdao.api import Group, Component,IndepVarComp
from openmdao.api import MetaModel
from openmdao.api import KrigingSurrogate, FloatKrigingSurrogate
def rand_lhc(b, k):
    # Generate a random Latin hypercube of 2*b points in k dimensions,
    # scaled to lie within [-1.2, 1.2).
    arr = np.zeros((2*b, k))
    row = list(moves.xrange(-b, b))
    for i in moves.xrange(k):
        shuffle(row)
        arr[:, i] = row
    return arr/b*1.2
class TrigMM(Group):
    ''' FloatKriging gives responses as floats '''
    def __init__(self):
        super(TrigMM, self).__init__()

        # Create meta_model for f_x as the response
        F_mm = self.add("F_mm", MetaModel())
        F_mm.add_param('X', val=np.array([0., 0.]))
        F_mm.add_output('f_x:float', val=0., surrogate=FloatKrigingSurrogate())
        # F_mm.add_output('df_x:float', val=0., surrogate=KrigingSurrogate().linearize)
        #F_mm.linearize('X', 'f_x:float')
        #F_mm.add_output('g_x:float', val=0., surrogate=FloatKrigingSurrogate())
        print('init ok')
        self.add('p1', IndepVarComp('X', val=np.array([0., 0.])))
        self.connect('p1.X', 'F_mm.X')

        # Create meta_model for g_x as the response
        G_mm = self.add("G_mm", MetaModel())
        G_mm.add_param('X', val=np.array([0., 0.]))
        G_mm.add_output('g_x:float', val=0., surrogate=FloatKrigingSurrogate())
        #G_mm.add_output('df_x:float', val=0., surrogate=KrigingSurrogate().linearize)
        #G_mm.linearize('X', 'g_x:float')
        self.add('p2', IndepVarComp('X', val=np.array([0., 0.])))
        self.connect('p2.X', 'G_mm.X')
from openmdao.api import Problem
prob = Problem()
prob.root = TrigMM()
prob.setup()
u=4
v=3
# training with a Latin hypercube
prob['F_mm.train:X'] = rand_lhc(20,2)
prob['G_mm.train:X'] = rand_lhc(20,2)
#prob['F_mm.train:X'] = rand_lhc(10,2)
#prob['G_mm.train:X'] = rand_lhc(10,2)
#prob['F_mm.linearize:X'] = rand_lhc(10,2)
#prob['G_mm.linearize:X'] = rand_lhc(10,2)
datF=[]
datG=[]
datDF=[]
datDG=[]
for i in range(len(prob['F_mm.train:X'])):
    datF.append(F(np.array([prob['F_mm.train:X'][i]]), u))
    #datG.append(G(np.array([prob['F_mm.train:X'][i]]), v))
data_trainF = np.fromiter(datF, np.float)

for i in range(len(prob['G_mm.train:X'])):
    datG.append(G(np.array([prob['G_mm.train:X'][i]]), v))
data_trainG = np.fromiter(datG, np.float)
prob['F_mm.train:f_x:float'] = data_trainF
#prob['F_mm.train:g_x:float'] = data_trainG
prob['G_mm.train:g_x:float'] = data_trainG
Are you going to be writing a Multiple Gradient Descent driver? If so, then OpenMDAO calculates the gradient from a param to an output at the Problem level using the calc_gradient method.
If you take a look at the source code for the pyoptsparse driver:
https://github.com/OpenMDAO/OpenMDAO/blob/master/openmdao/drivers/pyoptsparse_driver.py
The _gradfunc method is a callback function that returns the gradient of the constraints and objectives with respect to the design variables. The MetaModel component has built-in analytic gradients for all (I think) of our surrogates, so you don't even have to declare any there.
If this isn't what you are trying to do, then I may need a little more information about your application.
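As a rough sketch only (the variable names come from the model above, and I'm assuming the OpenMDAO 1.x Problem.calc_gradient signature here), computing the surrogate's gradient would look something like:
# after prob.setup() and prob.run()
J = prob.calc_gradient(['p1.X'], ['F_mm.f_x:float'], return_format='array')
print(J)  # d(f_x)/d(X), supplied by MetaModel's built-in analytic gradients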
I have to do something like this.
import theano as th
import theano.tensor as T
import numpy as np

x, y = T.dscalars('x', 'y')
z = np.matrix([[x*y, x-y], [x/y, x**2/(2*y)]])
f = th.function([x, y], z)  # causes error
# next come calculations like f(2, 1)*f(3, 2)*some_matrix
I know this is not valid code, as th.function doesn't support returning such objects. Is there an efficient way to do this without returning all the elements of the matrix individually and casting them into an np.matrix?
The problem with your approach is that z needs to be a list of theano variables, not a numpy matrix.
You can achieve the same result using:
z1, z2, z3, z4 = x*y, x-y, x/y, x**2/(2*y)
f = th.function([x, y], [z1, z2, z3, z4])

def createz(z1, z2, z3, z4):
    # reassemble the four scalar results into a 2x2 numpy matrix
    return np.matrix([[z1, z2], [z3, z4]])

print(createz(*f(1, 2)))
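Alternatively (a variation of my own, not part of the original answer), T.stack can assemble the four scalar expressions into one symbolic tensor, so the compiled function returns a single 2x2 array directly:
zm = T.stack([x*y, x-y, x/y, x**2/(2*y)]).reshape((2, 2))
g = th.function([x, y], zm)
print(g(1, 2))  # one (2, 2) numpy array in a single call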