How to use pyomo along with gurobi feasRelax?

In gurobipy we can compute the feasibility relaxation with:
m = Model()
m.feasRelax(1, False, False, True)
m.optimize()
Is there a way to do this in Pyomo with Gurobi?
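One possible direction is Pyomo's persistent Gurobi interface, which keeps an underlying gurobipy model that can be manipulated directly. The following is an untested sketch: it relies on the private _solver_model attribute of gurobi_persistent (which may change between Pyomo versions) and uses gurobipy's feasRelaxS, the four-argument simplified form of feasRelax:
import pyomo.environ as pyo

model = pyo.ConcreteModel()
# ... build variables, constraints, and an objective here ...

opt = pyo.SolverFactory('gurobi_persistent')
opt.set_instance(model)

# _solver_model is the underlying gurobipy Model held by the persistent
# interface (a private attribute, so treat this as an assumption).
grb = opt._solver_model
grb.feasRelaxS(1, False, False, True)  # simplified feasibility relaxation
grb.optimize()

# Load the relaxed solution back into the Pyomo variables.
opt.load_vars()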

Related

Guidance on documenting Pyomo model formulations

Does anyone have suggestions or best practices for how to document the formulation of Pyomo models (mainly constraints) directly within the code? Ideally, I'd like to find a way to write the constraint math in LaTeX in the docstring of the helper functions that I use to define constraints and then have those docstrings picked up by Sphinx autodoc.
Currently, I'm using the @decorator style of writing Pyomo components, such as:
import pyomo.environ as pyo

class Model:
    def __init__(self, **kwargs):
        self.model = pyo.ConcreteModel()
        self.model.T = pyo.RangeSet(1, 24)
        self.model.x = pyo.Var(self.model.T, within=pyo.NonNegativeReals)

        @self.model.Constraint()
        def Example_Constraint(model):
            r"""This is the docstring that I would ideally be able to `autodoc`

            .. math ::
                \sum_{t \in T} x_{t} \geq 100
            """
            return sum(model.x[t] for t in model.T) >= 100
However, Sphinx cannot "see" inner functions, such as the functions used to construct Example_Constraint within the __init__ method.
Options:
Move away from the @decorator syntax and have the helper functions as instance, static, or class methods that are called in the __init__ method (e.g., self.model.Test_Constraint = pyo.Constraint(self.model.T, rule=self.test_constraint_rule)). The downside of this approach (and one of the reasons we originally adopted the @decorator syntax) is that it feels like duplication to define both the Pyomo component (self.model.Example_Constraint) and a separate Python function to construct it (self.test_constraint_rule); a rough sketch of this option is shown after this list.
I've seen @GiorgioBalestrieri's package https://github.com/GiorgioBalestrieri/pyomo-sphinx-docs, which works by pulling Pyomo components' doc attributes. This seems like an OK option, but not ideal, given that it's an experiment and light on formatting options.
Are there other approaches that I'm not aware of?
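For reference, a minimal untested sketch of the first option (the name example_constraint_rule is a placeholder, not from the original post); because the rule is an ordinary method, Sphinx autodoc can pick up its docstring:
import pyomo.environ as pyo

class Model:
    def __init__(self, **kwargs):
        self.model = pyo.ConcreteModel()
        self.model.T = pyo.RangeSet(1, 24)
        self.model.x = pyo.Var(self.model.T, within=pyo.NonNegativeReals)
        # Component definition and rule are now separate pieces, which is the
        # duplication mentioned above.
        self.model.Example_Constraint = pyo.Constraint(rule=self.example_constraint_rule)

    @staticmethod
    def example_constraint_rule(model):
        r"""Require the summed variable values to be at least 100.

        .. math ::
            \sum_{t \in T} x_{t} \geq 100
        """
        return sum(model.x[t] for t in model.T) >= 100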

Google AI platform custom prediction routines with multiple inputs, how to read json inputs

For creating a custom prediction routine with a Keras (TensorFlow 2.1) model, I am having trouble figuring out what form the JSON inputs come in and how to read them in the predictor class for multiple inputs. All of the custom prediction routine examples in the documentation use simple flat single-input lists. If, for example, we send in our inputs as:
{"instances": [
{
"event_type_input": [1, 2, 20],
"event_dwelltime_input": [1.368, 0.017, 0.0],
"rf_input": [1.2, -2.8]},
{
"event_type_input": [14, 40, 20],
"event_dwelltime_input": [1.758, 13.392, 0.0],
"rf_input": [1.29, -2.87]}
]}
How should we ingest the incoming JSON in our predictor class?
import numpy as np

class MyPredictor(object):
    def __init__(self, model):
        self.model = model

    def predict(self, instances, **kwargs):
        inputs = np.array(instances)
        # The above example from the docs is wrong for multiple inputs.
        # What should our inputs be to get them into the right shape
        # for our Keras model?
        outputs = self.model.predict(inputs)
        return outputs.tolist()
Our JSON inputs to Google AI Platform are a list of dictionaries. However, for a Keras model, our inputs need to be in a different shape, like the following:
inputs = {
    "event_type_input": np.array([[1, 2, 20], [14, 40, 20]]),
    "event_dwelltime_input": np.array([[1.368, 0.017, 0.0], [1.758, 13.392, 0.0]]),
    "rf_input": np.array([[1.2, -2.8], [1.29, -2.87]]),
}
model.predict(inputs)
Am I right that the thing to do, then, is just to reshape the instances? My only confusion is that when using the TensorFlow framework (instead of a custom prediction routine), it handles predicting on the JSON input fine, and I thought all the TensorFlow framework does is call the .predict method on the instances (unless there is indeed some under-the-hood reshaping of the data; I couldn't find a source describing exactly what happens).
Main question: How should we write our predictor class to take in the instances such that we can run the model.predict method on it?
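(As an aside, a minimal untested sketch of that reshaping inside the predictor, assuming the instances arrive exactly as in the JSON above and that the Keras input layers are named accordingly:)
import numpy as np

class MyPredictor(object):
    def __init__(self, model):
        self.model = model

    def predict(self, instances, **kwargs):
        # Turn the list of per-instance dicts into a dict of batched arrays,
        # keyed by the input names used in the JSON payload.
        inputs = {name: np.array([instance[name] for instance in instances])
                  for name in instances[0]}
        outputs = self.model.predict(inputs)
        return outputs.tolist()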
I would suggest creating a new Keras Model and exporting it.
Create a separate Input layer for each of the three inputs to the new Model (with the name of the Input being the name in your JSON struct). Then, in this Model, reshape the inputs, and borrow the weights/structure from your trained model, and export the new model. Something like this:
from tensorflow import keras

trained_model = keras.models.load_model(...)  # trained model

input1 = keras.Input(..., name='event_type_input')
input2 = keras.Input(..., name='event_dwelltime_input')
input3 = keras.Input(..., name='rf_input')

export_inputs = keras.layers.concatenate([input1, input2, input3])
# Reshape to what the first hidden layer of your trained model expects.
reshaped_inputs = keras.layers.Lambda(...)(export_inputs)
layer1 = trained_model.get_layer(index=1)(reshaped_inputs)
layer2 = trained_model.get_layer(index=2)(layer1)  # etc. ...
...  # chain the remaining layers, ending in export_output
exportModel = keras.Model([input1, input2, input3], export_output)
exportModel.save(...)

How to save features into a file in keras?

I have a trained weight matrix. I would like to extract the features at each and every layer and store them in a file. How could I do that? Thanks.
Have a look at the Keras FAQ
One simple way is to create a new Model that will output the layers
that you are interested in:
from keras.models import Model

model = ...  # create the original model
layer_name = 'my_layer'
intermediate_layer_model = Model(inputs=model.input,
                                 outputs=model.get_layer(layer_name).output)
intermediate_output = intermediate_layer_model.predict(data)
Alternatively, you can build a Keras function that will return the
output of a certain layer given a certain input, for example:
from keras import backend as K

get_3rd_layer_output = K.function([model.layers[0].input],
                                  [model.layers[3].output])
layer_output = get_3rd_layer_output([X])[0]
Similarly, you could build a Theano or TensorFlow function directly.
Note that if your model has a different behavior in training and
testing phase (e.g. if it uses Dropout, BatchNormalization, etc.),
you will need to pass the learning phase flag to your function:
get_3rd_layer_output = K.function([model.layers[0].input, K.learning_phase()],
                                  [model.layers[3].output])

# output in test mode = 0
layer_output = get_3rd_layer_output([X, 0])[0]

# output in train mode = 1
layer_output = get_3rd_layer_output([X, 1])[0]
Then you just need to store your predictions in a file, using e.g. np.save('filename.npy', intermediate_output).
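A rough sketch that extends this to every layer and writes everything to a single file (untested; assumes data is the input batch and that each layer has a single output):
import numpy as np
from keras.models import Model

# One model whose outputs are the outputs of every layer of the original model.
all_layers_model = Model(inputs=model.input,
                         outputs=[layer.output for layer in model.layers])
features = all_layers_model.predict(data)

# Store one array per layer in a single .npz file, keyed by layer name.
np.savez('features.npz', **{layer.name: feat
                            for layer, feat in zip(model.layers, features)})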

SolverFactory.solve summary option

Pyomo solver invocation can be achieved by command line usage or from a Python script.
How does the command-line call with the summary flag
pyomo solve model.py input.dat --solver=glpk --summary
translate to, e.g., the use of the SolverFactory class in a Python script?
Specifically, in the following example, how can one specify a summary option? Is it an (undocumented?) argument to SolverFactory.solve?
from pyomo.opt import SolverFactory
import pyomo.environ
from model import model
opt = SolverFactory('glpk')
instance = model.create_instance('input.dat')
results = opt.solve(instance)
The --summary option is specific to the pyomo command. It is not a solver option. I believe all it really does is execute the line
pyomo.environ.display(instance)
after the solve, which you can easily add to your script. A more direct way of querying the solution is just to access the value of model variables or the objective by "evaluating" them. E.g.,
instance.some_objective()
instance.some_variable()
instance.some_indexed_variable[0]()
or
pyomo.environ.value(instance.some_objective)
pyomo.environ.value(instance.some_variable)
pyomo.environ.value(instance.some_indexed_variable)
I prefer the former, but the latter is more appropriate if you are accessing the values of immutable, indexed Param objects. Also, note that variables have a .value attribute that you can access directly (and update if you want to provide a warmstart).
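For example, a quick sketch of that last point (note: whether the warmstart keyword is accepted depends on the solver; glpk in particular may not support it):
# Read a variable's current value directly.
print(instance.some_variable.value)

# Set it before re-solving to provide a warm start (solver permitting).
instance.some_variable.value = 42.0
results = opt.solve(instance, warmstart=True)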
By default, the --summary command option stores a 'results' file in JSON format in the directory of your model.
You can achieve the same result by adding the following to your code:
results = opt.solve(instance, load_solutions=True)
results.write(filename='results.json', format='json')
or:
results = opt.solve(instance)
instance.solutions.store_to(results)
results.write(filename='results.json', format='json')

How do I convert a CloudML Alpha model to a SavedModel?

In the alpha release of CloudML's online prediction service, the format for exporting a model was:
inputs = {"x": x, "y_bytes": y}
g.add_to_collection("inputs", json.dumps(inputs))
outputs = {"a": a, "b_bytes": b}
g.add_to_collection("outputs", json.dumps(outputs))
I would like to convert this to a SavedModel without retraining my model. How can I do that?
We can convert this to a SavedModel by importing the old model, creating the Signatures, and re-exporting it. This code is untested, but something like this should work:
import json
import tensorflow as tf
from tensorflow.contrib.session_bundle import session_bundle

# Import the "old" model
session, _ = session_bundle.load_session_bundle_from_path(export_dir)

# Define the inputs and the outputs for the SavedModel.
# (If the collections stored tensor *names* rather than tensors, resolve them
# first with session.graph.get_tensor_by_name(...).)
old_inputs = json.loads(tf.get_collection('inputs')[0])
inputs = {name: tf.saved_model.utils.build_tensor_info(tensor)
          for name, tensor in old_inputs.items()}

old_outputs = json.loads(tf.get_collection('outputs')[0])
outputs = {name: tf.saved_model.utils.build_tensor_info(tensor)
           for name, tensor in old_outputs.items()}

signature = tf.saved_model.signature_def_utils.build_signature_def(
    inputs=inputs,
    outputs=outputs,
    method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
)

# Save out the converted model
b = tf.saved_model.builder.SavedModelBuilder(new_export_dir)
b.add_meta_graph_and_variables(session,
                               [tf.saved_model.tag_constants.SERVING],
                               signature_def_map={'serving_default': signature})
b.save()
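As a quick sanity check (a sketch using the TF 1.x loader API, with new_export_dir as above), the converted model can be loaded back to confirm the serving signature is present:
# Load the converted SavedModel into a fresh graph/session and list its signatures.
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], new_export_dir)
    print(list(meta_graph.signature_def.keys()))  # should include 'serving_default'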