How do I use solve_congruence? - sympy

Please tell me how to keep the decimal point out of the result. I want integers.
from sympy import *
from sympy.ntheory.modular import solve_congruence
var('x,y,k,kk')
def mySolve(myA,myB,myC):
    xo=solve_congruence((0, myA),(-myC,-myB))[0]/myA
    yo=solve(myA*xo+myB*y+myC,y)[0]
    return xo,yo
print("#",mySolve(5**4,-2**4,-1))
# (1.0, 39.0000000000000) ?????????????????????????????
Sage Cell server--->OK
var('x,y,k,kk')
def mySolve(myA,myB,myC):
    xo=crt([0,-myC],[myA,-myB])/myA
    yo=solve(myA*xo+myB*y+myC,y)[0].rhs()
    return xo,yo
print("#",mySolve(5**4,-2**4,-1))
# (1, 39)
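A likely explanation, offered here as an assumption: solve_congruence seems to return plain Python ints, and in Python 3 the / operator on two ints is true division, so solve_congruence(...)[0]/myA produces a float that then propagates into solve(). A minimal sketch of the SymPy version using floor division to keep everything integral:
from sympy import var, solve
from sympy.ntheory.modular import solve_congruence

var('y')

def mySolve(myA, myB, myC):
    # use // instead of /: the congruence forces the result to be a
    # multiple of myA, so floor division keeps it an exact integer
    xo = solve_congruence((0, myA), (-myC, -myB))[0] // myA
    yo = solve(myA*xo + myB*y + myC, y)[0]
    return xo, yo

print("#", mySolve(5**4, -2**4, -1))
# expected: # (1, 39)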


Custom multivariate Dirichlet priors in pymc3

Is it possible to create custom multivariate distributions in PyMC3? In the following, I have tried to create a linear transformation of a Dirichlet distribution. All variants on this have returned numerous errors, perhaps to do with Theano data types? Any help would be gratefully appreciated.
import numpy as np
import pymc3 as pymc
import theano.tensor as tt
# data
n = 5
prior_params = np.ones(n - 1) / (n - 1)
mx = np.array([[0.25, 0.5  , 0.75, 1.],
               [0.25, 0.333, 0.25, 0.],
               [0.25, 0.167, 0.  , 0.],
               [0.25, 0.   , 0.  , 0.]])
# Note that the matrix mx takes the unit simplex into the unit simplex.
# custom log-likelihood
def generate_function(mx, prior_params):
    def log_trunc_dir(x):
        return pymc.Dirichlet.dist(a=prior_params).logp(mx.dot(x.T)).eval()
    return log_trunc_dir

# model
with pymc.Model() as simple_model:
    x = pymc.Dirichlet('x', a=np.ones(n - 1))
    q = pymc.DensityDist('q', generate_function(mx, prior_params), observed={'x': x})
Thanks to significant help from the PyMC3 development community, I can post the
following working example of a customised Dirichlet prior in PyMC3.
import pymc3 as pm
import numpy as np
import scipy.special as special
import theano.tensor as tt
import matplotlib.pyplot as plt
n = 4
with pm.Model() as model:
    prior = np.ones(n) / n

    def dirich_logpdf(value=prior):
        return -n * special.gammaln(1/n) + (-1 + 1/n) * tt.log(value).sum()

    stick = pm.distributions.transforms.StickBreaking()
    probs = pm.DensityDist('probs', dirich_logpdf, shape=n,
                           testval=np.array(prior), transform=stick)
    data = np.array([5, 7, 1, 0])
    sfs_obs = pm.Multinomial('sfs_obs', n=np.sum(data), p=probs, observed=data)

with model:
    step = pm.Metropolis()
    trace = pm.sample(100000, tune=10000, step=step)

print('MLE = ', data / np.sum(data))
print(pm.summary(trace))
pm.traceplot(trace, [probs])
plt.show()

Django Sum Aggregation from Int to Float

So I save my fees in cents but want to display them in euros. The conversion problem is that it first divides the integer by 100 and then converts to a float; it should be the other way around. How can I do that?
The following code should illustrate my problem:
>>> from django.db.models import F, FloatField, Sum
>>> from payment.models import Payment
>>>
>>> paymentlist = Payment.objects.all()
>>> result = paymentlist.annotate(
... fees_in_cents=Sum(F('fees'), output_field=FloatField())).annotate(
... fees_in_euro=Sum(F('fees')/100, output_field=FloatField()))
>>>
>>> print(result[0].fees_in_cents, result[0].fees_in_euro)
120.0 1.0
fees_in_euro should obviously be 1.20
Divide the fees value by 100.0 so the database performs floating-point division instead of integer division.
Example:
result = paymentlist.annotate(
    fees_in_cents=Sum(F('fees'), output_field=FloatField())).annotate(
    fees_in_euro=Sum(F('fees') / 100.0, output_field=FloatField())
)
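If you want the "convert first, then divide" order to be explicit, one alternative (a sketch, assuming Django 1.10+ where Cast is available in django.db.models.functions) is to cast the column to a float before dividing:
from django.db.models import FloatField, Sum
from django.db.models.functions import Cast

# cast the integer cents column to float first, then divide,
# so the division itself happens in floating point
result = paymentlist.annotate(
    fees_in_euro=Sum(Cast('fees', FloatField()) / 100.0,
                     output_field=FloatField())
)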

ValueError: This solver needs samples of at least 2 classes in the data, but the data contains only one class: 0.0

I have applied Logistic Regression on the train set after splitting the data set into test and train sets, but I got the above error. I tried to work it out, and when I tried to print my response vector y_train in the console it printed integer values like 0 or 1, but when I wrote it to a file I found the values were floats like 0.0 and 1.0. If that's the problem, how can I overcome it?
lenreg = LogisticRegression()
print y_train[0:10]
y_train.to_csv(path='ytard.csv')
lenreg.fit(X_train, y_train)
y_pred = lenreg.predict(X_test)
print metrics.accuracy_score(y_test, y_pred)
The stack trace is as follows:
Traceback (most recent call last):
  File "/home/amey/prog/pd.py", line 82, in <module>
    lenreg.fit(X_train, y_train)
  File "/usr/lib/python2.7/dist-packages/sklearn/linear_model/logistic.py", line 1154, in fit
    self.max_iter, self.tol, self.random_state)
  File "/usr/lib/python2.7/dist-packages/sklearn/svm/base.py", line 885, in _fit_liblinear
    " class: %r" % classes_[0])
ValueError: This solver needs samples of at least 2 classes in the data, but the data contains only one class: 0.0
Meanwhile I've come across this link, which was unanswered. Is there a solution?
The problem here is that your y_train vector, for whatever reason, only has zeros. It is actually not your fault, and it's kind of a bug (I think). The classifier needs 2 classes or else it throws this error.
It makes sense: if your y_train vector only has zeros (i.e. only one class), then the classifier doesn't really need to do any work, since all predictions should just be the one class.
In my opinion the classifier should still complete, just predict the one class (all zeros in this case), and throw a warning, but it doesn't. It throws the error instead.
A way to check for this condition is like this:
import numpy as np

lenreg = LogisticRegression()
print y_train[0:10]
y_train.to_csv(path='ytard.csv')

# all ones (sum == length) or all zeros (sum == 0) means only one class
if np.sum(y_train) in [len(y_train), 0]:
    print "all one class"
    # do something else
else:
    # OK to proceed
    lenreg.fit(X_train, y_train)
    y_pred = lenreg.predict(X_test)
    print metrics.accuracy_score(y_test, y_pred)
To overcome the problem more easily, I would recommend just including more samples in your test set, like 100 or 1000 instead of 10.
I had the same problem using learning_curve:
train_sizes, train_scores, test_scores = learning_curve(estimator,
    X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes,
    scoring="f1", random_state=RANDOM_SEED, shuffle=True)
Add the shuffle parameter, which will randomize the sets.
This doesn't prevent the error from happening, but it's a way to increase the chances of having both classes in the subsets used by the function.
I found it to be because only 1's or 0's wound up in my y_test, since my sample size was really small. Try changing your test_size value.
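If the split itself is the culprit, another common option (sketched here on the assumption that your scikit-learn's train_test_split accepts the stratify argument; X and y are placeholders for your data) is a stratified split, which keeps both classes in the train and test sets:
from sklearn.model_selection import train_test_split

# stratifying on the labels preserves the class proportions in both
# subsets, so neither split can end up containing a single class
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)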
# python3
import numpy as np
from sklearn.svm import LinearSVC
def upgrade_to_work_with_single_class(SklearnPredictor):
    class UpgradedPredictor(SklearnPredictor):
        def __init__(self, *args, **kwargs):
            self._single_class_label = None
            super().__init__(*args, **kwargs)

        @staticmethod
        def _has_only_one_class(y):
            return len(np.unique(y)) == 1

        def _fitted_on_single_class(self):
            return self._single_class_label is not None

        def fit(self, X, y=None):
            if self._has_only_one_class(y):
                self._single_class_label = y[0]
            else:
                super().fit(X, y)
            return self

        def predict(self, X):
            if self._fitted_on_single_class():
                return np.full(X.shape[0], self._single_class_label)
            else:
                return super().predict(X)

    return UpgradedPredictor
LinearSVC = upgrade_to_work_with_single_class(LinearSVC)
Or the hard way (more correct):
import numpy as np
from sklearn.svm import LinearSVC
from copy import deepcopy, copy
from functools import wraps
def copy_class(cls):
    copy_cls = type(f'{cls.__name__}', cls.__bases__, dict(cls.__dict__))
    for name, attr in cls.__dict__.items():
        try:
            hash(attr)
        except TypeError:
            # Assume lack of __hash__ implies mutability. This is NOT
            # a bullet proof assumption but good in many cases.
            setattr(copy_cls, name, deepcopy(attr))
    return copy_cls

def upgrade_to_work_with_single_class(SklearnPredictor):
    SklearnPredictor = copy_class(SklearnPredictor)
    original_init = deepcopy(SklearnPredictor.__init__)
    original_fit = deepcopy(SklearnPredictor.fit)
    original_predict = deepcopy(SklearnPredictor.predict)

    @staticmethod
    def _has_only_one_class(y):
        return len(np.unique(y)) == 1

    def _fitted_on_single_class(self):
        return self._single_class_label is not None

    @wraps(SklearnPredictor.__init__)
    def new_init(self, *args, **kwargs):
        self._single_class_label = None
        original_init(self, *args, **kwargs)

    @wraps(SklearnPredictor.fit)
    def new_fit(self, X, y=None):
        if self._has_only_one_class(y):
            self._single_class_label = y[0]
        else:
            original_fit(self, X, y)
        return self

    @wraps(SklearnPredictor.predict)
    def new_predict(self, X):
        if self._fitted_on_single_class():
            return np.full(X.shape[0], self._single_class_label)
        else:
            return original_predict(self, X)

    setattr(SklearnPredictor, '_has_only_one_class', _has_only_one_class)
    setattr(SklearnPredictor, '_fitted_on_single_class', _fitted_on_single_class)
    SklearnPredictor.__init__ = new_init
    SklearnPredictor.fit = new_fit
    SklearnPredictor.predict = new_predict
    return SklearnPredictor
LinearSVC = upgrade_to_work_with_single_class(LinearSVC)
You can find the indexes of the first (or any) occurrence of each of the classes, concatenate those samples on top of the arrays, and delete them from their original positions; that way there will be at least one instance of each class in the training set.
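A minimal sketch of that idea (assuming X and y are NumPy arrays; the helper name is made up for illustration):
import numpy as np

def move_one_of_each_class_to_front(X, y):
    # take the first occurrence of every class ...
    first_idx = [np.where(y == c)[0][0] for c in np.unique(y)]
    # ... and put those samples first, keeping the rest in their order,
    # so any contiguous training slice contains all classes
    rest_idx = [i for i in range(len(y)) if i not in first_idx]
    order = first_idx + rest_idx
    return X[order], y[order]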
This error is related to the dataset you are using: it contains only one class (for example 1/benign), whereas it must contain two classes (1 and 0, or Benign and Attack).

write a recursive function to find hermite polynomials

I have the recurrence relation for the Hermite polynomials:
H_{n+1}(x) = 2x H_n(x) − 2n H_{n−1}(x), n ≥ 1,
H_0(x) = 1, H_1(x) = 2x.
I need to write def hermite(x,n) for any Hermite polynomial H_n(x) using Python 2.7
and make a plot of H_5(x) on the interval x ∈ [−1, 1].
Recursion is trivial here since the formula gives it. Just a small trap: you compute H_n(x), not H_{n+1}(x), so subtract 1 from all occurrences of n:
def hermite(x,n):
    if n==0:
        return 1
    elif n==1:
        return 2*x
    else:
        return 2*x*hermite(x,n-1)-2*(n-1)*hermite(x,n-2)
small test:
for i in range(0,5):
    print(hermite(1,i))
1
2
2
-4
-20
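The question also asks for a plot of H_5(x) on [−1, 1]; a minimal sketch using the recursive hermite above (the recursion works elementwise on a NumPy array, so the whole grid can be evaluated at once):
import numpy as np
import matplotlib.pyplot as plt

# evaluate the recursive hermite(x, n) defined above on a grid in [-1, 1]
xvals = np.linspace(-1.0, 1.0, 200)
plt.plot(xvals, hermite(xvals, 5), label="H5(x)")
plt.xlabel("x")
plt.legend()
plt.show()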
import math
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import hermite
def HERMITE(X,N):
    HER = hermite(N)
    sn = HER(X)
    return sn

xvals = np.linspace(-1.0,1.0,1000)
for n in np.arange(0,7,1):
    sol = HERMITE(xvals,n)
    plt.plot(xvals,sol,"-.",label = "n = " + str(n),linewidth=2)
plt.xticks(fontsize=14,fontweight="bold")
plt.yticks(fontsize=14,fontweight="bold")
plt.grid()
plt.legend()
plt.show()
import math
import numpy as np
import matplotlib.pyplot as plt
def HER(x,n):
    if n==0:
        return 1.0 + 0.0*x
    elif n==1:
        return 2.0*x
    else:
        return 2.0*x*HER(x,n-1) -2.0*(n-1)*HER(x,n-2)

xvals = np.linspace(-np.pi,np.pi,1000)
for N in np.arange(0,7,1):
    sol = HER(xvals,N)
    plt.plot(xvals,sol,label = "n = " + str(N))
plt.xticks(fontsize=14,fontweight="bold")
plt.yticks(fontsize=14,fontweight="bold")
plt.grid()
plt.legend()
plt.show()
#Code is working perfectly fine
Feel free to ask any questions...

Gradient of kriged function in Openmdao

I am currently coding a Multiple Gradient Descent algorithm, where I use kriged functions.
My problem is that I can't find out how to obtain the gradient of the kriged function (I tried to use linearize but I don't know how to make it work).
from __future__ import print_function
from six import moves
from random import shuffle
import sys
import numpy as np
from numpy import linalg as LA
import math
from openmdao.braninkm import F, G, DF, DG
from openmdao.api import Group, Component,IndepVarComp
from openmdao.api import MetaModel
from openmdao.api import KrigingSurrogate, FloatKrigingSurrogate
def rand_lhc(b, k):
    # Calculates a random Latin hypercube set of n points in k dimensions within [0,n-1]^k hypercube.
    arr = np.zeros((2*b, k))
    row = list(moves.xrange(-b, b))
    for i in moves.xrange(k):
        shuffle(row)
        arr[:, i] = row
    return arr/b*1.2
class TrigMM(Group):
    ''' FloatKriging gives responses as floats '''
    def __init__(self):
        super(TrigMM, self).__init__()

        # Create meta_model for f_x as the response
        F_mm = self.add("F_mm", MetaModel())
        F_mm.add_param('X', val=np.array([0., 0.]))
        F_mm.add_output('f_x:float', val=0., surrogate=FloatKrigingSurrogate())
        # F_mm.add_output('df_x:float', val=0., surrogate=KrigingSurrogate().linearize)
        #F_mm.linearize('X', 'f_x:float')
        #F_mm.add_output('g_x:float', val=0., surrogate=FloatKrigingSurrogate())
        print('init ok')
        self.add('p1', IndepVarComp('X', val=np.array([0., 0.])))
        self.connect('p1.X','F_mm.X')

        # Create meta_model for g_x as the response
        G_mm = self.add("G_mm", MetaModel())
        G_mm.add_param('X', val=np.array([0., 0.]))
        G_mm.add_output('g_x:float', val=0., surrogate=FloatKrigingSurrogate())
        #G_mm.add_output('df_x:float', val=0., surrogate=KrigingSurrogate().linearize)
        #G_mm.linearize('X', 'g_x:float')
        self.add('p2', IndepVarComp('X', val=np.array([0., 0.])))
        self.connect('p2.X','G_mm.X')
from openmdao.api import Problem
prob = Problem()
prob.root = TrigMM()
prob.setup()
u=4
v=3
# training with a Latin hypercube
prob['F_mm.train:X'] = rand_lhc(20,2)
prob['G_mm.train:X'] = rand_lhc(20,2)
#prob['F_mm.train:X'] = rand_lhc(10,2)
#prob['G_mm.train:X'] = rand_lhc(10,2)
#prob['F_mm.linearize:X'] = rand_lhc(10,2)
#prob['G_mm.linearize:X'] = rand_lhc(10,2)

datF=[]
datG=[]
datDF=[]
datDG=[]
for i in range(len(prob['F_mm.train:X'])):
    datF.append(F(np.array([prob['F_mm.train:X'][i]]),u))
    #datG.append(G(np.array([prob['F_mm.train:X'][i]]),v))
data_trainF=np.fromiter(datF,np.float)

for i in range(len(prob['G_mm.train:X'])):
    datG.append(G(np.array([prob['G_mm.train:X'][i]]),v))
data_trainG=np.fromiter(datG,np.float)

prob['F_mm.train:f_x:float'] = data_trainF
#prob['F_mm.train:g_x:float'] = data_trainG
prob['G_mm.train:g_x:float'] = data_trainG
Are you going to be writing a Multiple Gradient Descent driver? If so, then OpenMDAO calculates the gradient from a param to an output at the Problem level using the calc_gradient method.
If you take a look at the source code for the pyoptsparse driver:
https://github.com/OpenMDAO/OpenMDAO/blob/master/openmdao/drivers/pyoptsparse_driver.py
The _gradfunc method is a callback function that returns the gradient of the constraints and objectives with respect to the design variables. The Metamodel component has built-in analytic gradients for all (I think) of our surrogates, so you don't even have to declare any there.
If this isn't what you are trying to do, then I may need a little more information about your application.
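For reference, a minimal sketch of what that might look like for the model in the question (an assumption on my side: this is the OpenMDAO 1.x API, where Problem.calc_gradient takes lists of param and output names after the problem has been run):
# hypothetical usage sketch, assuming OpenMDAO 1.x and the TrigMM setup above
prob.run()
J = prob.calc_gradient(['p1.X'], ['F_mm.f_x:float'], return_format='array')
print('dF/dX =', J)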