I am defining two custom functions in Sympy, called phi and Phi. I know that Phi(x)+Phi(-x) == 1. How do I provide Sympy with this simplification rule? Can I specify this in my class definition?
Here is what I've done so far:
from sympy import Function
from sympy.core.function import ArgumentIndexError

class phi(Function):
    nargs = 1

    def fdiff(self, argindex=1):
        if argindex == 1:
            return -1*self.args[0]*phi(self.args[0])
        else:
            raise ArgumentIndexError(self, argindex)

    @classmethod
    def eval(cls, arg):
        # The function is even, so try to pull out factors of -1
        if arg.could_extract_minus_sign():
            return cls(-arg)
class Phi(Function):
    nargs = 1

    def fdiff(self, argindex=1):
        if argindex == 1:
            return phi(self.args[0])
        else:
            raise ArgumentIndexError(self, argindex)
For the curious, phi and Phi represent the Gaussian PDF and CDF, respectively. These are implemented in sympy.stats. But, in my case, it's easier to interpret results in terms of phi and Phi.
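For reference, and assuming the standard normal convention that the fdiff rules above imply, the closed forms would be (phi_ref and Phi_ref are just illustrative names):
from sympy import exp, sqrt, pi, erf, symbols

x = symbols('x')
phi_ref = exp(-x**2 / 2) / sqrt(2*pi)    # standard normal PDF
Phi_ref = (1 + erf(x / sqrt(2))) / 2     # standard normal CDF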
Based upon the comment by Stelios, Phi(x) should return 1-Phi(-x) if x is negative. Therefore, I modified Phi as follows:
class Phi(Function):
    nargs = 1

    def fdiff(self, argindex=1):
        if argindex == 1:
            return phi(self.args[0])
        else:
            raise ArgumentIndexError(self, argindex)

    @classmethod
    def eval(cls, arg):
        # Phi(x) + Phi(-x) == 1
        if arg.could_extract_minus_sign():
            return 1 - cls(-arg)
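With that eval rule in place, a quick sanity check (assuming the class definitions above) should behave like this:
from sympy import symbols

x = symbols('x')
print(Phi(-x))           # expected: 1 - Phi(x)
print(Phi(x) + Phi(-x))  # the Phi(x) terms cancel, leaving 1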
I am working on a small optimization model with some disjunctions. The way I did it in a concrete model worked well:
from pyomo.environ import *
m = ConcreteModel()
m.d1 = Disjunct()
m.d2 = Disjunct()
m.d1.sub1 = Disjunct()
m.d1.sub2 = Disjunct()
m.d1.disj = Disjunction(expr=[m.d1.sub1, m.d1.sub2])
m.disj = Disjunction(expr=[m.d1, m.d2])
But now I have transferred the concrete model into an abstract formulation. I was able to get everything working by flattening instead of nesting the disjunctions. The way I did it was like this:
# Disjunct 1
def _op_mode1(self, op_mode, t):
    m = op_mode.model()
    op_mode.c1 = po.Constraint(expr=m.x[t] == True)

# Disjunct 2
def _op_mode2(self, op_mode, t):
    m = op_mode.model()
    op_mode.c1 = po.Constraint(expr=m.x[t] == False)

# Disjunction 1
def _op_modes(self, m, t):
    return [m.mode1[t], m.mode2[t]]

# Adding components
self.model.del_component("mode1")
self.model.del_component("mode1_index")
self.model.add_component("mode1", pogdp.Disjunct(self.model.T, rule=self._op_mode1))
self.model.del_component("mode2")
self.model.del_component("mode2_index")
self.model.add_component("mode2", pogdp.Disjunct(self.model.T, rule=self._op_mode2))
self.model.del_component("modes")
self.model.del_component("modes_index")
self.model.add_component("modes", pogdp.Disjunction(self.model.T, rule=self._op_modes))
As I previously mentioned, this works fine. But I haven't found any way to nest the disjunctions; Pyomo always complains about the second layer of disjuncts, such as "sub1".
Could anybody give me a hint?
Many greetings
Joerg
The issue with the latest model above is that you are declaring m.d1 and m.d2 for each element of m.T, but they overwrite each other each time since they have the same name. You should be seeing warning messages logged for this. So if you uncomment your pprint of the model, you'll see that you only have the last ones you declared (with constraints on x[10]). So the first 9 Disjunctions in m.disjunction_ are disjunctions of Disjuncts that do not exist. The simplest fix for this is to give the disjuncts unique names when you declare them:
import pyomo.environ as pyo
import pyomo.gdp as pogdp
model = pyo.ConcreteModel()
model.T = pyo.RangeSet(0, 10)
model.x=pyo.Var(model.T,bounds=(-2, 10))
model.y=pyo.Var(model.T,bounds=(20, 30))
# This was also a duplicate declaration:
#model.disjunction_ = pogdp.Disjunction(model.T)
def d1(m, t):
    disj = pogdp.Disjunct()
    disj.c1 = pyo.Constraint(expr=m.x[t] <= 10)
    m.add_component('d1_%s' % t, disj)
    return disj

def d2(m, t):
    disj = pogdp.Disjunct()
    disj.c1 = pyo.Constraint(expr=m.x[t] >= 10)
    m.add_component('d2_%s' % t, disj)
    return disj

# sum x,y
def obj_rule(m):
    return pyo.quicksum(pyo.quicksum([m.x[t] + m.y[t]], linear=False) for t in m.T)

model.obj = pyo.Objective(rule=obj_rule)

def _op_mode_test(m, t):
    disj1 = d1(m, t)
    disj2 = d2(m, t)
    return [disj1, disj2]

model.disjunction_ = pogdp.Disjunction(model.T, rule=_op_mode_test)
However, it would be cleaner (and probably easier down the line) to index the Disjuncts by m.T as well, since that's basically what the unique names are doing.
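A minimal sketch of that indexed version might look like the following (the component names d1/d2 and the example constraints are illustrative):
import pyomo.environ as pyo
import pyomo.gdp as pogdp

model = pyo.ConcreteModel()
model.T = pyo.RangeSet(0, 10)
model.x = pyo.Var(model.T, bounds=(-2, 10))

def _d1(d, t):
    # the Disjunct block being populated is passed as the first argument
    m = d.model()
    d.c1 = pyo.Constraint(expr=m.x[t] <= 10)
model.d1 = pogdp.Disjunct(model.T, rule=_d1)

def _d2(d, t):
    m = d.model()
    d.c1 = pyo.Constraint(expr=m.x[t] >= 10)
model.d2 = pogdp.Disjunct(model.T, rule=_d2)

model.disjunction_ = pogdp.Disjunction(
    model.T, rule=lambda m, t: [m.d1[t], m.d2[t]])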
Block (and hence Disjunct) rules are passed the block (or disjunct) to be populated as the first argument. So, an "abstract" equivalent to your concrete model might look something like this:
model = AbstractModel()

@model.Disjunct()
def d1(d):
    # populate the `d` disjunct (i.e., `model.d1`) here
    pass

@model.Disjunct()
def d2(d):
    @d.Disjunct()
    def sub1(sd):
        # populate the 'sub1' disjunct here
        pass

    @d.Disjunct()
    def sub2(sd):
        # populate the 'sub2' disjunct here
        pass

    d.disj = Disjunction(expr=[d.sub1, d.sub2])

model.disj = Disjunction(expr=[model.d1, model.d2])
There is a more fundamental question as to why you are converting your model over to "abstract" form. Pyomo Abstract models were mostly devised to be familiar to people coming from modeling in AMPL. They will work with block-structured models, but since AMPL was never really designed with blocks in mind, block-oriented Abstract models tend to be unnecessarily cumbersome.
Here is our new model:
import pyomo.environ as pyo
import pyomo.gdp as pogdp
model = pyo.ConcreteModel()
model.T = pyo.RangeSet(0,10)
model.x=pyo.Var(model.T,bounds=(-2, 10))
model.y=pyo.Var(model.T,bounds=(20, 30))
model.disjunction_=pogdp.Disjunction(model.T)
def d1(m, t):
    m.d1 = pogdp.Disjunct()
    m.d1.c1 = pyo.Constraint(expr=m.x[t] <= 10)

def d2(m, t):
    m.d2 = pogdp.Disjunct()
    m.d2.c1 = pyo.Constraint(expr=m.x[t] >= 10)

# sum x,y
def obj_rule(m):
    return pyo.quicksum(pyo.quicksum([m.x[t] + m.y[t]], linear=False) for t in m.T)

model.obj = pyo.Objective(rule=obj_rule)

def _op_mode_test(m, t):
    d1(m, t)
    d2(m, t)
    return [m.d1, m.d2]

model.disjunction_ = pogdp.Disjunction(model.T, rule=_op_mode_test)
#model.pprint()
pyo.TransformationFactory('gdp.bigm').apply_to(model)
solver = pyo.SolverFactory('baron')
solver.solve(model)
print(pyo.value(model.obj))
I think it has something to do with the RangeSet. For a single step it works, but with more than one step it throws an error: AttributeError: 'NoneType' object has no attribute 'component'
It would be great if you could have a look at it.
Many thanks
Here is the code, which works fine with the bigm transformation but not with mbigm or hull:
import pyomo.environ as pyo
import pyomo.gdp as pogdp
model = pyo.ConcreteModel()
model.T = pyo.RangeSet(2)
model.x=pyo.Var(model.T,bounds=(1, 10))
model.y=pyo.Var(model.T,bounds=(1, 100))
def _op_mode_sub(m, t):
    m.disj1[t].sub1 = pogdp.Disjunct()
    m.disj1[t].sub1.c1 = pyo.Constraint(expr=m.y[t] == 60)
    m.disj1[t].sub2 = pogdp.Disjunct()
    m.disj1[t].sub2.c1 = pyo.Constraint(expr=m.y[t] == 100)
    return [m.disj1[t].sub1, m.disj1[t].sub2]

def _op_mode(m, t):
    m.disj2[t].c1 = pyo.Constraint(expr=m.y[t] >= 3)
    m.disj2[t].c2 = pyo.Constraint(expr=m.y[t] <= 5)
    return [m.disj1[t], m.disj2[t]]

model.disj1 = pogdp.Disjunct(model.T)
model.disj2 = pogdp.Disjunct(model.T)
model.disjunction1sub = pogdp.Disjunction(model.T, rule=_op_mode_sub)
model.disjunction1 = pogdp.Disjunction(model.T, rule=_op_mode)

def obj_rule(m):
    return pyo.quicksum(pyo.quicksum([m.x[t] + m.y[t]], linear=False) for t in m.T)

model.obj = pyo.Objective(rule=obj_rule)
model.pprint()
gdp_relax=pyo.TransformationFactory('gdp.bigm')
gdp_relax.apply_to(model)
solver = pyo.SolverFactory('glpk')
solver.solve(model)
print(pyo.value(model.obj))
I applied Logistic Regression to the training set after splitting the data set into test and train sets, but I got the above error. I tried to work it out, and when I printed my response vector y_train in the console it showed integer values like 0 or 1. But when I wrote it to a file, I found that the values were floats like 0.0 and 1.0. If that's the problem, how can I overcome it?
lenreg = LogisticRegression()
print y_train[0:10]
y_train.to_csv(path='ytard.csv')
lenreg.fit(X_train, y_train)
y_pred = lenreg.predict(X_test)
print metrics.accuracy_score(y_test, y_pred)
The stack trace is as follows:
Traceback (most recent call last):
  File "/home/amey/prog/pd.py", line 82, in <module>
    lenreg.fit(X_train, y_train)
  File "/usr/lib/python2.7/dist-packages/sklearn/linear_model/logistic.py", line 1154, in fit
    self.max_iter, self.tol, self.random_state)
  File "/usr/lib/python2.7/dist-packages/sklearn/svm/base.py", line 885, in _fit_liblinear
    " class: %r" % classes_[0])
ValueError: This solver needs samples of at least 2 classes in the data, but the data contains only one class: 0.0
Meanwhile, I've come across the link, which was unanswered. Is there a solution?
The problem here is that your y_train vector, for whatever reason, only has zeros. It is actually not your fault, and it's kind of a bug (I think). The classifier needs 2 classes or else it throws this error.
It makes sense: if your y_train vector only has zeros (i.e. only 1 class), then the classifier doesn't really need to do any work, since all predictions should just be the one class.
In my opinion the classifier should still complete, just predict the one class (all zeros in this case) and throw a warning, but it doesn't. It throws the error instead.
A way to check for this condition is like this:
import numpy as np

lenreg = LogisticRegression()

print y_train[0:10]
y_train.to_csv(path='ytard.csv')

# if the sum equals the length (all ones) or zero (all zeros), only one class is present
if np.sum(y_train) in [len(y_train), 0]:
    print "all one class"
    # do something else
else:
    # OK to proceed
    lenreg.fit(X_train, y_train)
    y_pred = lenreg.predict(X_test)
    print metrics.accuracy_score(y_test, y_pred)
To overcome the problem more easily, I would recommend just including more samples in your test set, like 100 or 1000 instead of 10.
I had the same problem using learning_curve:
train_sizes, train_scores, test_scores = learning_curve(
    estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes,
    scoring="f1", random_state=RANDOM_SEED, shuffle=True)
Add the shuffle parameter, which will randomize the sets.
This doesn't prevent the error from happening, but it's a way to increase the chances of having both classes in the subsets used by the function.
I found it was because only 1's or 0's wound up in my y_test, since my sample size was really small. Try changing your test_size value.
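Along the same lines, you can ask scikit-learn to stratify the split so that both classes show up in the training and test sets. A sketch, assuming X and y hold the full feature matrix and labels (older scikit-learn versions expose train_test_split from sklearn.cross_validation instead of sklearn.model_selection):
from sklearn.model_selection import train_test_split

# stratify=y keeps the 0/1 proportions in both splits, so even a small
# test_size is unlikely to leave y_train or y_test with a single class
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)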
# python3
import numpy as np
from sklearn.svm import LinearSVC
def upgrade_to_work_with_single_class(SklearnPredictor):
    class UpgradedPredictor(SklearnPredictor):
        def __init__(self, *args, **kwargs):
            self._single_class_label = None
            super().__init__(*args, **kwargs)

        @staticmethod
        def _has_only_one_class(y):
            return len(np.unique(y)) == 1

        def _fitted_on_single_class(self):
            return self._single_class_label is not None

        def fit(self, X, y=None):
            if self._has_only_one_class(y):
                self._single_class_label = y[0]
            else:
                super().fit(X, y)
            return self

        def predict(self, X):
            if self._fitted_on_single_class():
                return np.full(X.shape[0], self._single_class_label)
            else:
                return super().predict(X)

    return UpgradedPredictor

LinearSVC = upgrade_to_work_with_single_class(LinearSVC)
or the hard way (arguably more correct):
import numpy as np
from sklearn.svm import LinearSVC
from copy import deepcopy, copy
from functools import wraps
def copy_class(cls):
    copy_cls = type(f'{cls.__name__}', cls.__bases__, dict(cls.__dict__))
    for name, attr in cls.__dict__.items():
        try:
            hash(attr)
        except TypeError:
            # Assume lack of __hash__ implies mutability. This is NOT
            # a bullet proof assumption but good in many cases.
            setattr(copy_cls, name, deepcopy(attr))
    return copy_cls

def upgrade_to_work_with_single_class(SklearnPredictor):
    SklearnPredictor = copy_class(SklearnPredictor)
    original_init = deepcopy(SklearnPredictor.__init__)
    original_fit = deepcopy(SklearnPredictor.fit)
    original_predict = deepcopy(SklearnPredictor.predict)

    @staticmethod
    def _has_only_one_class(y):
        return len(np.unique(y)) == 1

    def _fitted_on_single_class(self):
        return self._single_class_label is not None

    @wraps(SklearnPredictor.__init__)
    def new_init(self, *args, **kwargs):
        self._single_class_label = None
        original_init(self, *args, **kwargs)

    @wraps(SklearnPredictor.fit)
    def new_fit(self, X, y=None):
        if self._has_only_one_class(y):
            self._single_class_label = y[0]
        else:
            original_fit(self, X, y)
        return self

    @wraps(SklearnPredictor.predict)
    def new_predict(self, X):
        if self._fitted_on_single_class():
            return np.full(X.shape[0], self._single_class_label)
        else:
            return original_predict(self, X)

    setattr(SklearnPredictor, '_has_only_one_class', _has_only_one_class)
    setattr(SklearnPredictor, '_fitted_on_single_class', _fitted_on_single_class)
    SklearnPredictor.__init__ = new_init
    SklearnPredictor.fit = new_fit
    SklearnPredictor.predict = new_predict
    return SklearnPredictor

LinearSVC = upgrade_to_work_with_single_class(LinearSVC)
You can find the indexes of the first (or any) occurrence of each of the classes, concatenate those rows on top of the training arrays, and delete them from their original positions; that way there will be at least one instance of each class in the training set.
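A rough NumPy sketch of that idea (the function and variable names are illustrative, X and y are assumed to be arrays, and for brevity it only prepends the rows to the training set rather than also deleting them from their original positions):
import numpy as np

def ensure_all_classes_in_train(X, y, X_train, y_train):
    """Prepend one example of every class that is missing from y_train."""
    for cls in np.unique(y):
        if cls not in y_train:
            idx = np.where(y == cls)[0][0]             # first occurrence of this class
            X_train = np.vstack([X[idx:idx + 1], X_train])
            y_train = np.concatenate([[cls], y_train])
    return X_train, y_train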
This error is related to the dataset you are using: the dataset contains only one class, for example 1/Benign, whereas it must contain two classes, 1 and 0 or Benign and Attack.
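A quick way to confirm what the labels actually contain (just a one-off check):
import numpy as np

# should report two classes; if it shows only one, the split (or the data file itself) is the problem
print(np.unique(y_train, return_counts=True))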
After seeing the nice implementation of the "ampl car example" in the Pyomo repository, I would like to keep extending the problem with new features and constraints, but I have run into the following problems during development. Is someone able to fix them?
1) Added a new "electric car" constraint: now the acceleration is limited by adhesion up to a certain speed, and then a constant-power model is used. I am not able to implement this constraint the way I would think; it is commented out in the code, but Pyomo complains that a constraint is related to a variable (now Umax depends on the car speed).
2) Added new comfort acceleration and jerk constraints. They seem to be working right, but it would be nice if a Pyomo guru reviewed them and told me whether they are really implemented in the correct way.
3) About the last point, in order to reduce verbosity: is there any way to combine accelerationL and accelerationU into a single constraint? Same for jerkL and jerkU.
4) The last feature is a speed limit constraint divided into two steps. Again, I am not able to get it working, so it is commented out in the code. Does anybody dare to fix it?
# Ampl Car Example (Extended)
#
# Shows how to convert a minimize final time optimal control problem
# to a format pyomo.dae can handle by removing the time scaling from
# the ContinuousSet.
#
# min tf
# dx/dt = v
# dv/dt = u - R*v^2
# x(0)=0; x(tf)=L
# v(0)=0; v(tf)=0
# -3 <= u <= 1 (engine constraint)
#
#        { v <= 7m/s ===> u < 1
# u <= {                              (electric car constraint)
#        { v > 7m/s  ===> u < 1*7/v
#
# -1.5 <= dv/dt <= 0.8 (comfort constraint -> smooth driving)
# -0.5 <= d2v/dt2 <= 0.5 (comfort constraint -> jerk)
# v <= Vmax (40 kmh[0-500m] + 25 kmh(500-1000m])
from pyomo.environ import *
from pyomo.dae import *
m = ConcreteModel()
m.R = Param(initialize=0.001) # Friction factor
m.L = Param(initialize=1000.0) # Final position
m.T = Param(initialize=50.0) # Estimated time
m.aU = Param(initialize=0.8) # Acceleration upper bound
m.aL = Param(initialize=-1.5) # Acceleration lower bound
m.jU = Param(initialize=0.5) # Jerk upper bound
m.jL = Param(initialize=-0.5) # Jerk lower bound
m.NFE = Param(initialize=100) # Number of finite elements
'''
def _initX(m, i):
return m.x[i] == i*m.L/m.NFE
def _initV(m):
return m.v[i] == m.L/50
'''
m.tf = Var()
m.tau = ContinuousSet(bounds=(0,1)) # Unscaled time
m.t = Var(m.tau) # Scaled time
m.x = Var(m.tau, bounds=(0,m.L))
m.v = Var(m.tau, bounds=(0,None))
m.u = Var(m.tau, bounds=(-3,1), initialize=0)
m.dt = DerivativeVar(m.t)
m.dx = DerivativeVar(m.x)
m.dv = DerivativeVar(m.v)
m.da = DerivativeVar(m.v, wrt=(m.tau, m.tau))
m.obj = Objective(expr=m.tf)
def _ode1(m, i):
    if i == 0:
        return Constraint.Skip
    return m.dt[i] == m.tf
m.ode1 = Constraint(m.tau, rule=_ode1)

def _ode2(m, i):
    if i == 0:
        return Constraint.Skip
    return m.dx[i] == m.tf * m.v[i]
m.ode2 = Constraint(m.tau, rule=_ode2)

def _ode3(m, i):
    if i == 0:
        return Constraint.Skip
    return m.dv[i] == m.tf*(m.u[i] - m.R*m.v[i]**2)
m.ode3 = Constraint(m.tau, rule=_ode3)

def _accelerationL(m, i):
    if i == 0:
        return Constraint.Skip
    return m.dv[i] >= m.aL*m.tf
m.accelerationL = Constraint(m.tau, rule=_accelerationL)

def _accelerationU(m, i):
    if i == 0:
        return Constraint.Skip
    return m.dv[i] <= m.aU*m.tf
m.accelerationU = Constraint(m.tau, rule=_accelerationU)

def _jerkL(m, i):
    if i == 0:
        return Constraint.Skip
    return m.da[i] >= m.jL*m.tf**2
m.jerkL = Constraint(m.tau, rule=_jerkL)

def _jerkU(m, i):
    if i == 0:
        return Constraint.Skip
    return m.da[i] <= m.jU*m.tf**2
m.jerkU = Constraint(m.tau, rule=_jerkU)

'''
def _electric(m, i):
    if i == 0:
        return Constraint.Skip
    elif value(m.v[i]) <= 7:
        return m.a[i] <= 1
    else:
        return m.v[i] <= 1*7/m.v[i]
m.electric = Constraint(m.tau, rule=_electric)
'''

'''
def _speed(m, i):
    if i == 0:
        return Constraint.Skip
    elif value(m.x[i]) <= 500:
        return m.v[i] <= 40/3.6
    else:
        return m.v[i] <= 25/3.6
m.speed = Constraint(m.tau, rule=_speed)
'''

def _initial(m):
    yield m.x[0] == 0
    yield m.x[1] == m.L
    yield m.v[0] == 0
    yield m.v[1] == 0
    yield m.t[0] == 0
m.initial = ConstraintList(rule=_initial)
discretizer = TransformationFactory('dae.finite_difference')
discretizer.apply_to(m, nfe=value(m.NFE), wrt=m.tau, scheme='BACKWARD')
#discretizer = TransformationFactory('dae.collocation')
#discretizer.apply_to(m, nfe=value(m.NFE), ncp=4, wrt=m.tau, scheme='LAGRANGE-RADAU')
solver = SolverFactory('ipopt')
solver.solve(m,tee=True)
print("final time = %6.2f" %(value(m.tf)))
t = []
x = []
v = []
a = []
u = []
for i in m.tau:
    t.append(value(m.t[i]))
    x.append(value(m.x[i]))
    v.append(3.6*value(m.v[i]))
    a.append(10*value(m.u[i] - m.R*m.v[i]**2))
    u.append(10*value(m.u[i]))
import matplotlib.pyplot as plt
plt.plot(x, v, label='v (km/h)')
plt.plot(x, a, label='a (dm/s2)')
plt.plot(x, u, label='u (dm/s2)')
plt.xlabel('distance')
plt.grid('on')
plt.legend()
plt.show()
Thanks a lot in advance,
Pablo
(1) You should not think of Pyomo constraint rules as callbacks that are used by the solver. Think of them instead as functions that generate a container of constraint objects, called once for each index when the model is constructed. This means it is invalid to branch on a variable in an if statement unless you really only want its initial value to define the constraint expression. There are ways to express what I think you are trying to do, but they involve introducing binary variables into the problem, in which case you can no longer use Ipopt.
(2) Can't really provide any help. Syntax looks fine.
(3) Pyomo allows you to return double-sided inequality expressions (e.g., L <= f(x) <= U) from constraint rules, but they cannot involve variable expressions in the L and U locations. It doesn't look like the constraints you are referring to can be combined into this form; a generic sketch of the double-sided form with constant bounds is shown after this list.
(4) See (1)
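For completeness, the double-sided form mentioned in (3) looks like this when the bounds are constants (a generic sketch, not tied to the car model):
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.I = pyo.RangeSet(3)
m.v = pyo.Var(m.I)

# a (lower, body, upper) tuple gives a single ranged constraint,
# but the lower/upper slots must be constants, not variable expressions
def _band(m, i):
    return (-1.5, m.v[i], 0.8)
m.band = pyo.Constraint(m.I, rule=_band)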
What would be the best approach to create a type that is a saturated integer in Python?
i.e.:
v = SaturatedInteger(0, 100)
# That should create an integer that will always be in between 0 and 100,
# and have all default operations
v += 68719
print v #Should print '100'.
I can think of inheriting from the int type, but where should the saturating logic be implemented then?
If you need a new (quick and dirty) class for it, I would implement it as follows.
class SaturatedInteger:
    def __init__(self, val, lo, hi):
        self.real, self.lo, self.hi = val, lo, hi

    def __add__(self, other):
        return min(self.real + other.real, self.hi)

    def __sub__(self, other):
        return max(self.real - other.real, self.lo)
    ...
Add as many of the other operators in the docs as you feel you will need (and their 'r' variants).
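For instance, a couple more operators following the same clamp-to-bounds pattern might look like this (just a sketch, to be added inside the class):
    def __mul__(self, other):
        # clamp on both sides, since a product can overshoot either bound
        return max(min(self.real * other.real, self.hi), self.lo)
    __rmul__ = __mul__

    # addition is commutative, so the reflected variant can simply reuse __add__
    __radd__ = __add__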
By storing the value in the instance name real, you can do your arithmetic with regular integers, floats, etc. too:
a = SaturatedInteger(60, 0, 100)
print(a)
60
print(a+30)
90
print(a+40)
100
print(a+50.)
100
print(a-70.)
0
print(a+a)
100
Though, of course, you only add the real part if you're adding a complex number to your SaturatedInteger, so watch out. (For a much more complete and robust version, @jonrsharpe's answer is the way to go.)
In general, I would implement this using a @property to protect an instance's value attribute, and then emulate a numeric type, rather than inheriting from int:
class SaturatedInteger(object):
    """Emulates an integer, but with a built-in minimum and maximum."""
    def __init__(self, min_, max_, value=None):
        self.min = min_
        self.max = max_
        self.value = min_ if value is None else value

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_val):
        self._value = min(self.max, max(self.min, new_val))

    @staticmethod
    def _other_val(other):
        """Get the value from the other object."""
        if hasattr(other, 'value'):
            return other.value
        return other

    def __add__(self, other):
        new_val = self.value + self._other_val(other)
        return SaturatedInteger(self.min, self.max, new_val)

    __radd__ = __add__

    def __eq__(self, other):
        return self.value == self._other_val(other)


if __name__ == '__main__':
    v = SaturatedInteger(0, 100)
    v += 68719
    assert v == 100
    assert 123 + v == 100
I've only implemented __add__, __radd__ and __eq__, but you can probably see how the rest could be built out as required. You might want to think about what happens when two SaturatedIntegers are used together - should the result have e.g. min(self.min, other.min) as its own min?
I wrote a sample class that has an add function:
class SatInt:
    def __init__(self, low, up):
        self.lower = low
        self.upper = up
        self.value = 0

    def add(self, n):
        if n + self.value > self.upper:
            self.value = self.upper
        else:
            self.value = self.value + n

x = SatInt(0, 100)
x.add(32718)
print(x.value)
100
Situation:
Python 2.7 code that contains a number of "yield" statements. But the specs have changed.
Each yield calls a function that used to always return a value. Now the result is sometimes a value that should be yielded, but sometimes no value should be yielded.
Dumb Example:
BEFORE:
def always(x):
    return 11 * x

def do_stuff():
    # ... other code; each yield is buried inside an if or other flow construct ...
    # ...
    yield always(1)
    # ...
    yield always(6)
    # ...
    yield always(5)

print( list( do_stuff() ) )
=>
[11, 66, 55]
AFTER (if I could use Python 3, but that is not currently an option):
def maybe(x):
    """ only keep odd value; returns list with 0 or 1 elements. """
    result = 11 * x
    return [result] if bool(result & 1) else []

def do_stuff():
    # ...
    yield from maybe(1)
    # ...
    yield from maybe(6)
    # ...
    yield from maybe(5)
=>
[11, 55]
AFTER (in Python 2.7):
def maybe(x):
    """ only keep odd value; returns list with 0 or 1 elements. """
    result = 11 * x
    return [result] if bool(result & 1) else []

def do_stuff():
    # ...
    for x in maybe(1): yield x
    # ...
    for x in maybe(6): yield x
    # ...
    for x in maybe(5): yield x
NOTE: In the actual code I am translating, the "yields" are buried inside various flow-control constructs. And the "maybe" function has two parameters, and is more complex.
MY QUESTION:
Observe that each call to "maybe" returns either 1 value to yield, or 0 values to yield.
(It would be fine to change "maybe" to return the value, or to return None when there is no value, if that helps.)
Given this 0/1 situation, is there any more succinct way to code this?
If as you say you can get away with returning None, then I'd leave the code as it was in the first place:
def maybe(x):
    """ only keep odd value; returns either element or None """
    result = 11 * x
    if result & 1: return result

def do_stuff():
    yield maybe(1)
    yield maybe(6)
    yield maybe(5)
but use a wrapped version instead which tosses the Nones, like:
def do_stuff_use():
    return (x for x in do_stuff() if x is not None)
You could even wrap the whole thing up in a decorator, if you wanted:
import functools

def yield_not_None(f):
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        return (x for x in f(*args, **kwargs) if x is not None)
    return wrapper

@yield_not_None
def do_stuff():
    yield maybe(1)
    yield maybe(6)
    yield maybe(5)
after which
>>> list(do_stuff())
[11, 55]