How to add to a set in Pyomo

I am trying to run this code (only the relevant part is shown):
Master = AbstractModel()
Master.intIndices = Set(initialize=INTVARS)
Master.constraintSet = Set(initialize=CONS)
Master.conIndices = Set(initialize=CONVARS)
Master.intPartList = Set()
Master.dualVarSet = Master.constraintSet * Master.intPartList
Master.theta = Var(domain=Reals, bounds = (None, None))
Master.intVars = Var(Master.intIndices, domain=NonNegativeIntegers, bounds=(0, 10))
Master.dualVars = Var(Master.dualVarSet, domain=Reals, bounds = (None, None))
max_iters = 1000
opt = SolverFactory("couenne")
for i in range(max_iters):
    Master.intPartList.add(i)
but it shows me this error on the last line:
RuntimeError: Cannot access add on AbstractOrderedSimpleSet 'intPartList' before it has been constructed (initialized).
Can somebody help me?

You are not initializing Master.intPartList with any data, so you can't update it like that. However, if you make your model a concrete model and supply an initialization for your set, you can:
In [11]: from pyomo.environ import *
In [12]: m2 = ConcreteModel()
In [13]: m2.X = Set(initialize=[1,])
In [14]: m2.X.add(2)
Out[14]: 1
In [15]: m2.X.pprint()
X : Size=1, Index=None, Ordered=Insertion
    Key  : Dimen : Domain : Size : Members
    None :     1 :    Any :    2 : {1, 2}
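Applied to the question's code, here is a minimal sketch (INTVARS, CONS and CONVARS are placeholders standing in for the asker's data, which isn't shown): in a ConcreteModel every component is constructed as soon as it is attached to the model, so an empty Set can be grown inside the loop.
from pyomo.environ import *

# placeholder data standing in for the asker's INTVARS / CONS / CONVARS
INTVARS, CONS, CONVARS = [1, 2], ['c1'], ['v1']

Master = ConcreteModel()                     # components built immediately
Master.intIndices = Set(initialize=INTVARS)
Master.constraintSet = Set(initialize=CONS)
Master.conIndices = Set(initialize=CONVARS)
Master.intPartList = Set()                   # constructed (empty), so .add() works

max_iters = 1000
for i in range(max_iters):
    Master.intPartList.add(i)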

TensorFlow train function with multiple layers

I am new to TensorFlow and trying to understand it. I managed to create a one-layer model, but I would now like to add two more layers. How can I make my train function work? I would like to train it with hundreds of values of X and Y. I implemented all the values I need (the weight and bias of each layer), but I don't understand how to use them in my train function. And once it is trained, how can I use it, as I do in the last part of the code?
import tensorflow as tf
import numpy as np
print("TensorFlow version: {}".format(tf.__version__))
print("Eager execution: {}".format(tf.executing_eagerly()))
x = np.array([
    [10, 10, 30, 20],
], dtype='float32')  # float32 so multiplication with the float32 weights works
y = np.array([[10, 1, 1, 1]], dtype='float32')
class Model(object):
    def __init__(self, x, y):
        # get random values.
        self.W = tf.Variable(tf.random.normal((len(x), len(x[0]))))
        self.b = tf.Variable(tf.random.normal((len(y),)))
        self.W1 = tf.Variable(tf.random.normal((len(x), len(x[0]))))
        self.b1 = tf.Variable(tf.random.normal((len(y),)))
        self.W2 = tf.Variable(tf.random.normal((len(x), len(x[0]))))
        self.b2 = tf.Variable(tf.random.normal((len(y),)))

    def __call__(self, x):
        out1 = tf.multiply(x, self.W) + self.b
        out2 = tf.multiply(out1, self.W1) + self.b1
        last_layer = tf.multiply(out2, self.W2) + self.b2
        # Input_Layer = self.W * x + self.b
        return last_layer
def loss(predicted_y, desired_y):
    return tf.reduce_sum(tf.square(predicted_y - desired_y))
optimizer = tf.optimizers.Adam(0.1)
def train(model, inputs, outputs):
    with tf.GradientTape() as t:
        current_loss = loss(model(inputs), outputs)
    grads = t.gradient(current_loss, [model.W, model.b])
    optimizer.apply_gradients(zip(grads, [model.W, model.b]))
    print(current_loss)
model = Model(x, y)
for i in range(10000):
    train(model, x, y)
for i in range(3):
    InputX = np.array([
        [float(input()), float(input()), float(input()), float(input())],
    ], dtype='float32')  # input() returns strings; convert before multiplying
    returning = tf.math.multiply(
        InputX, model.W, name=None
    )
    print("I think that output can be:", returning)
Just add new variables to the list:
grads = t.gradient(current_loss, [model.W, model.b, model.W1, model.b1, model.W2, model.b2])
optimizer.apply_gradients(zip(grads, [model.W, model.b, model.W1, model.b1, model.W2, model.b2]))
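For completeness, a minimal sketch of the full train step with all six variables (same names as in the question's code):
def train(model, inputs, outputs):
    # every layer's variables must appear in BOTH the gradient call and
    # the optimizer update, otherwise W1/b1/W2/b2 are never trained
    variables = [model.W, model.b, model.W1, model.b1, model.W2, model.b2]
    with tf.GradientTape() as t:
        current_loss = loss(model(inputs), outputs)
    grads = t.gradient(current_loss, variables)
    optimizer.apply_gradients(zip(grads, variables))
    print(current_loss)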

[Pyomo] How to generate constraints via a loop

Hi all,
I want to create a set of constraints, as follows:
Pg[0] <= Pmax[0]
Pg[1] <= Pmax[1]
Pg[2] <= Pmax[2]
Pg[3] <= Pmax[3]
Pg[4] <= Pmax[4]
and my code is as follows:
import pyomo.environ as pyo
model = pyo.ConcreteModel()
Ngen = 5
Pmax = [40, 170, 520, 200, 100]
model.Ngen = pyo.Set(dimen=Ngen)
model.Pg = pyo.Var(model.Ngen)
def Co1(model):
    return ((model.Pg[ii] for ii in model.Ngen) <= (Pmax[jj] for jj in range(Ngen)))
model.Co1 = pyo.Constraint(rule=Co1)
but the Python console tells me:
" TypeError: '<=' not supported between instances of 'generator' and 'generator' "
How do I correct this?
And another question: if Pmax is not a list but a NumPy ndarray, will anything be different?
thanks a lot!
The clue is in the fact that your constraint is defined over a set. Pyomo allows you to pass args to the Constraint constructor that define the set(s) of that constraint.
Therefore your final line should look something like:
model.Co1 = pyo.Constraint(model.Ngen, rule=Co1)
The second issue is the constraint expression itself. Now that this is defined with respect to a Set, you need only express the constraint in terms of a given index (say i), because the rule also expects arguments pertaining to each index of each Set of the Constraint:
def Co1(model, i):
    return model.Pg[i] <= Pmax[i]
Finally, the Set you're defining is of size=5, not dimen=5. Sets are usually dimen=1 unless you're working with say arc links of a network in which case they would be of dimension 2. Since your set is defined over integers, the easiest way to add this is via RangeSet which defines all the integers from 1 to N:
model.Ngen = pyo.RangeSet(Ngen)
Given this, the only thing to change is that the list Pmax is 0-indexed and so our constraint will need to account for i being offset by one:
import pyomo.environ as pyo
model = pyo.ConcreteModel()
Ngen = 5
Pmax = [40, 170, 520, 200, 100]
model.Ngen = pyo.RangeSet(Ngen)
model.Pg = pyo.Var(model.Ngen)
def Co1(model, i):
    return model.Pg[i] <= Pmax[i - 1]
model.Co1 = pyo.Constraint(model.Ngen, rule = Co1)
model.pprint()
# 1 RangeSet Declarations
# Ngen : Dim=0, Dimen=1, Size=5, Domain=Integers, Ordered=True, Bounds=(1, 5)
# Virtual
#
# 1 Var Declarations
# Pg : Size=5, Index=Ngen
# Key : Lower : Value : Upper : Fixed : Stale : Domain
# 1 : None : None : None : False : True : Reals
# 2 : None : None : None : False : True : Reals
# 3 : None : None : None : False : True : Reals
# 4 : None : None : None : False : True : Reals
# 5 : None : None : None : False : True : Reals
#
# 1 Constraint Declarations
# Co1 : Size=5, Index=Ngen, Active=True
# Key : Lower : Body : Upper : Active
# 1 : -Inf : Pg[1] : 40.0 : True
# 2 : -Inf : Pg[2] : 170.0 : True
# 3 : -Inf : Pg[3] : 520.0 : True
# 4 : -Inf : Pg[4] : 200.0 : True
# 5 : -Inf : Pg[5] : 100.0 : True
#
# 3 Declarations: Ngen Pg Co1
NumPy arrays can be indexed in exactly the same way as lists, so the above constraint will still work fine if you change Pmax to np.array([40, 170, 520, 200, 100]).

Is it possible to find all integer solutions?

I want to get all integer solutions within a limited time. Is that possible?
This is a linear integer constraint satisfaction problem, which can be solved efficiently by OR-Tools' CP-SAT solver. I've modified their example to solve your problem in Python:
from ortools.sat.python import cp_model

class VarArraySolutionPrinter(cp_model.CpSolverSolutionCallback):
    """Print intermediate solutions."""

    def __init__(self, variables):
        cp_model.CpSolverSolutionCallback.__init__(self)
        self.__variables = variables
        self.__solution_count = 0

    def on_solution_callback(self):
        self.__solution_count += 1
        for v in self.__variables:
            print('%s=%i' % (v, self.Value(v)), end=' ')
        print()

    def solution_count(self):
        return self.__solution_count

def SearchForAllSolutionsSampleSat():
    """Showcases calling the solver to search for all solutions."""
    # Creates the model.
    model = cp_model.CpModel()
    p = [1, 2, 3, 4]
    ceq = 30
    cgeq = 2
    N = len(p)
    # Creates the variables
    x = [model.NewIntVar(0, 100, f'x{i}') for i in range(N)]
    # Create the constraints.
    model.Add(sum([xi * pi for xi, pi in zip(x, p)]) == ceq)
    model.Add(sum(x) >= cgeq)
    # Create a solver and solve.
    solver = cp_model.CpSolver()
    solution_printer = VarArraySolutionPrinter(x)
    status = solver.SearchForAllSolutions(model, solution_printer)
    print('Status = %s' % solver.StatusName(status))
    print('Number of solutions found: %i' % solution_printer.solution_count())

SearchForAllSolutionsSampleSat()
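As for the "limited time" part of the question: CP-SAT can cap the search via its max_time_in_seconds parameter. A minimal sketch, reusing the model and solution_printer built above:
solver = cp_model.CpSolver()
solver.parameters.max_time_in_seconds = 10.0  # stop enumerating after 10 s
status = solver.SearchForAllSolutions(model, solution_printer)
# status should be OPTIMAL if the search space was exhausted, and
# FEASIBLE if the time limit cut the enumeration short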

How can I make an indicator function with Pyomo?

I'm looking to create a simple indicator variable in Pyomo. Assuming I have a variable x, this indicator function would take the value 1 if x > 0, and 0 otherwise.
Here's how I've tried to do it:
model = ConcreteModel()
model.A = Set(initialize=[1,2,3])
model.B = Set(initialize=['J', 'K'])
model.x = Var(model.A, model.B, domain = NonNegativeIntegers)
model.ix = Var(model.A, model.B, domain = Binary)
def ix_indicator_rule(model, a, b):
    return model.ix[a, b] == int(model.x[a, b] > 0)
model.ix_constraint = Constraint(model.A, model.B,
                                 rule = ix_indicator_rule)
The error message I get is along the lines of Avoid this error by using Pyomo-provided math functions, which according to this link are found at pyomo.environ...but I'm not sure how to do this. I've tried using validate_PositiveValues(), like this:
def ix_indicator_rule(model, a, b):
    return model.ix[a, b] == validate_PositiveValues(model.x[a, b])
model.ix_constraint = Constraint(model.A, model.B,
                                 rule = ix_indicator_rule)
with no luck. Any help is appreciated!
You can achieve this with a "big-M" constraint, like this:
model = ConcreteModel()
model.A = Set(initialize=[1, 2, 3])
model.B = Set(initialize=['J', 'K'])
# m must be larger than largest allowed value of x, but it should
# also be as small as possible to improve numerical stability
model.m = Param(initialize=1e9)
model.x = Var(model.A, model.B, domain=NonNegativeIntegers)
model.ix = Var(model.A, model.B, domain=Binary)
# force model.ix to be 1 if model.x > 0
def ix_indicator_rule(model, a, b):
    return model.x[a, b] <= model.ix[a, b] * model.m
model.ix_constraint = Constraint(
    model.A, model.B, rule=ix_indicator_rule
)
But note that the big-M constraint is one-sided: in this example it forces model.ix on when model.x > 0, but doesn't force it off when model.x == 0. You can achieve the latter (but not the former) by flipping the inequality to model.x[a, b] >= model.ix[a, b] * model.m. But you can't do both in the same model. Generally you just pick the version that suits your model; e.g., if setting model.ix to 1 worsens your objective function, then you would pick the version shown above, and the solver will take care of setting model.ix to 0 whenever it can.
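That said, because model.x is integer-valued in this example, the two directions can be linked by pairing the big-M upper bound with a unit lower bound. A sketch of that pattern (it does not carry over to continuous x, which is what the caveat above is about):
# together these make ix == 1 exactly when x > 0, for integer x
def ix_on_rule(model, a, b):
    return model.x[a, b] <= model.ix[a, b] * model.m   # x > 0 forces ix = 1
model.ix_on = Constraint(model.A, model.B, rule=ix_on_rule)

def ix_off_rule(model, a, b):
    return model.x[a, b] >= model.ix[a, b]             # x == 0 forces ix = 0
model.ix_off = Constraint(model.A, model.B, rule=ix_off_rule)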
Pyomo also offers disjunctive programming features (see here and here) which may suit your needs. And the cplex solver offers indicator constraints, but I don't know whether Pyomo supports them.
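For reference, a minimal sketch of the disjunctive route with pyomo.gdp, using a single scalar x for brevity; the gdp.bigm transformation generates the big-M constraints for you, and m.on.indicator_var plays the role of ix:
from pyomo.environ import *
from pyomo.gdp import Disjunct, Disjunction

m = ConcreteModel()
m.x = Var(domain=NonNegativeIntegers, bounds=(0, 1000))

m.on = Disjunct()
m.on.c = Constraint(expr=m.x >= 1)    # the "x > 0" branch (x is integer)
m.off = Disjunct()
m.off.c = Constraint(expr=m.x == 0)   # the "x == 0" branch
m.choice = Disjunction(expr=[m.on, m.off])

# reformulate the disjunction into MIP constraints before solving
TransformationFactory('gdp.bigm').apply_to(m)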
I ended up using the Piecewise function and doing something like this:
DOMAIN_PTS = [0,0,1,1000000000]
RANGE_PTS = [0,0,1,1]
model.ix_constraint = Piecewise(
    model.A, model.B,
    model.ix, model.x,
    pw_pts=DOMAIN_PTS,
    pw_repn='INC',
    pw_constr_type = 'EQ',
    f_rule = RANGE_PTS,
    unbounded_domain_var = True)
def objective_rule(model):
    return sum(model.ix[a,b] for a in model.A for b in model.B)
model.objective = Objective(rule = objective_rule, sense=minimize)
It seems to work okay.

Error in readNetFromTensorflow in C++

I'm new to deep learning. As a first step I created and trained a model in Python with Keras and froze it with this code:
def export_model(MODEL_NAME, input_node_name, output_node_name):
    tf.train.write_graph(K.get_session().graph_def, 'out', \
        MODEL_NAME + '_graph.pbtxt')
    tf.train.Saver().save(K.get_session(), 'out/' + MODEL_NAME + '.chkp')
    freeze_graph.freeze_graph('out/' + MODEL_NAME + '_graph.pbtxt', None, \
        False, 'out/' + MODEL_NAME + '.chkp', output_node_name, \
        "save/restore_all", "save/Const:0", \
        'out/frozen_' + MODEL_NAME + '.pb', True, "")
    input_graph_def = tf.GraphDef()
    with tf.gfile.Open('out/frozen_' + MODEL_NAME + '.pb', "rb") as f:
        input_graph_def.ParseFromString(f.read())
    output_graph_def = optimize_for_inference_lib.optimize_for_inference(
        input_graph_def, [input_node_name], [output_node_name],
        tf.float32.as_datatype_enum)
    with tf.gfile.FastGFile('out/opt_' + MODEL_NAME + '.pb', "wb") as f:
        f.write(output_graph_def.SerializeToString())
Its output:
checkpoint
Model.chkp.data-00000-of-00001
Model.chkp.index
Model.chkp.meta
Model_graph.pbtxt
frozen_Model.pb
opt_Model.pb
When I want to read the net in OpenCV C++ with readNetFromTensorflow:
String weights = "frozen_Model.pb";
String pbtxt = "Model_graph.pbtxt";
dnn::Net cvNet = cv::dnn::readNetFromTensorflow(weights, pbtxt);
this produces the following errors:
OpenCV(4.0.0-pre) Error: Unspecified error (FAILED: ReadProtoFromBinaryFile(param_file, param). Failed to parse GraphDef file: frozen_Model.pb) in cv::dnn::ReadTFNetParamsFromBinaryFileOrDie, file D:\LIBS\OpenCV-4.00\modules\dnn\src\tensorflow\tf_io.cpp, line 44
and
OpenCV(4.0.0-pre) Error: Assertion failed (const_layers.insert(std::make_pair(name, li)).second) in cv::dnn::experimental_dnn_v4::`anonymous-namespace'::addConstNodes, file D:\LIBS\OpenCV-4.00\modules\dnn\src\tensorflow\tf_importer.cpp, line 555
How to fix this error?
Amin, may I ask you to try saving the graph in test mode:
K.backend.set_learning_phase(0) # <--- This setting makes all the following layers work in test mode
model = Sequential(name = MODEL_NAME)
model.add(Conv2D(filters = 128, kernel_size = (5, 5), activation = 'relu',name = 'FirstLayerConv2D_No1',input_shape = (Width, Height, image_channel)))
...
model.add(Dropout(0.25))
model.add(Dense(100, activation = 'softmax', name = 'endNode'))
# Create a graph definition (with no weights)
sess = K.backend.get_session()
sess.as_default()
tf.train.write_graph(sess.graph.as_graph_def(), "", 'graph_def.pb', as_text=False)
Then freeze your checkpoint files with the newly created graph_def.pb using the freeze_graph.py script (do not forget the --input_binary flag).
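Done from Python, that freeze step would look something like the sketch below, reusing the freeze_graph import from the question; input_binary is the third positional argument (True now, because graph_def.pb is binary), and 'endNode/Softmax' is the output node name the question passes to export_model:
freeze_graph.freeze_graph('graph_def.pb', None,
    True, 'out/' + MODEL_NAME + '.chkp', 'endNode/Softmax',
    "save/restore_all", "save/Const:0",
    'out/frozen_' + MODEL_NAME + '.pb', True, "")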
Part of the code (create the model, train it, and call export_model):
train_batch = gen.flow_from_directory(path + 'Train', target_size = (Width, Height), shuffle = False, color_mode = color_mode,
                                      batch_size = batch_size_train, class_mode = 'categorical')
.
.
X_train, Y_train = next(train_batch)
.
.
X_train = X_train.reshape(X_train.shape).astype('float32')
.
.
model = Sequential(name = MODEL_NAME)
model.add(Conv2D(filters = 128, kernel_size = (5, 5), activation = 'relu',name = 'FirstLayerConv2D_No1',input_shape = (Width, Height, image_channel)))
model.add(Conv2D(filters = 128, kernel_size = (3, 3), activation = 'relu'))
model.add(MaxPool2D(pool_size = (2, 2)))
model.add(BatchNormalization())
.
.
.
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(200, activation = 'tanh'))
model.add(BatchNormalization())
model.add(Dropout(0.25))
model.add(Dense(100, activation = 'softmax', name = 'endNode'))
model.compile(loss = 'categorical_crossentropy',
              optimizer = SGD(lr = 0.01, momentum = 0.9), metrics = ['accuracy'])
history = model.fit(X_train, Y_train, batch_size = batch_size_fit, epochs = epoch, shuffle = True,
                    verbose = 1, validation_split = .1, validation_data = (X_test, Y_test))
export_model(MODEL_NAME, "FirstLayerConv2D_No1/Relu", "endNode/Softmax")
When you're writing the graph in Python you need to do the following steps:
with tf.Session(graph=tf.Graph()) as sess:
    # 1. Load saved model
    saved_model = tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], SAVED_MODEL_PATH)
    # 2. Convert variables to constants
    inference_graph_def = tf.graph_util.convert_variables_to_constants(sess, saved_model.graph_def, OUTPUT_NODE_NAMES)
    # 3. Optimize for inference
    optimized_graph_def = optimize_for_inference_lib.optimize_for_inference(inference_graph_def,
                                                                            INPUT_NODE_NAMES,
                                                                            OUTPUT_NODE_NAMES,
                                                                            tf.float32.as_datatype_enum)
    # 4. Save .pb file
    tf.train.write_graph(optimized_graph_def, MODEL_DIR, 'model_name.pb', as_text=False)
    # 5. Transform graph
    transforms = [
        'strip_unused_nodes(type=float, shape=\"1,128,128,3\")',
        'remove_nodes(op=PlaceholderWithDefault)',
        'remove_device',
        'sort_by_execution_order'
    ]
    transformed_graph_def = TransformGraph(optimized_graph_def,
                                           INPUT_NODE_NAMES,
                                           OUTPUT_NODE_NAMES,
                                           transforms)
    # 6. Remove constant nodes and attributes
    for i in reversed(range(len(transformed_graph_def.node))):
        if transformed_graph_def.node[i].op == "Const":
            del transformed_graph_def.node[i]
            continue  # the node is gone; don't touch its attributes below
        for attr in ['T', 'data_format', 'Tshape', 'N', 'Tidx', 'Tdim',
                     'use_cudnn_on_gpu', 'Index', 'Tperm', 'is_training', 'Tpaddings']:
            if attr in transformed_graph_def.node[i].attr:
                del transformed_graph_def.node[i].attr[attr]
    # 7. Save .pbtxt file
    tf.train.write_graph(transformed_graph_def, MODEL_DIR, 'model_name.pbtxt', as_text=True)
Plus, if you have special nodes such as Flatten, you need to remove and rename some nodes manually.
More info here.