Receiving .rc suffix from CPLEX when using Pyomo with the NL / ASL solver interface

I would like to obtain .rc or .urc suffixes for my variables from the cplex solver, using Pyomo with the NL / ASL interface. This interface is generally faster than the default cplex interface for my models. However I can't seem to get the NL interface to return these suffixes. If I use the cplex solver with default options, I get values for the rc suffix. However, if I use solver_io='nl' or set the solver to 'cplexamp' (which I think does the same thing), then I get no rc values. (I am able to get duals, but not rc's.)
Here's some example code:
from pyomo.environ import *
from pyomo.opt import SolverFactory
def show_rc(m, *args, **kwargs):
    opt = SolverFactory(*args, **kwargs)
    results = opt.solve(m, suffixes=['rc'])
    m.solutions.load_from(results)
    m.rc.pprint()
m = ConcreteModel()
m.X = Var(bounds=(0, 1))
m.obj = Objective(rule=lambda m: 3.14 * m.X, sense=maximize)
m.rc = Suffix(direction=Suffix.IMPORT, datatype=Suffix.FLOAT)
show_rc(m, "cplex") # has value 3.14
show_rc(m, "cplex", solver_io="nl") # no value returned
show_rc(m, "cplexamp") # no value returned
The documentation specifically mentions getting reduced costs via a suffix, and the .rc suffix seems to be the standard place for this in AMPL, but I'm having no luck reading this via Pyomo's NL interface. Can anyone point me in the right direction?

Unfortunately, the cplexamp executable does not return reduced costs in the solution file (I just checked). I suppose AMPL must be computing these using the dual solution that is returned. I would open up a ticket on GitHub. Perhaps we can add that functionality to our ASL interface.
In terms of speed, you should try out Pyomo's Python-based interface to Cplex (solver_io="python"). This is usually much faster as it does not require any file I/O. You will need to install the Cplex-Python bindings before you can use that interface through Pyomo. If you can "import cplex", then it should be good to go.
Edit: I forgot to mention that solver_io="python" does return reduced costs for Cplex.
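For completeness, a minimal sketch of that suggestion, mirroring the show_rc helper from the question (this assumes the CPLEX Python bindings are installed, i.e. "import cplex" works):
# minimal sketch, assuming the CPLEX Python bindings are available
opt = SolverFactory("cplex", solver_io="python")
results = opt.solve(m, suffixes=['rc'])
m.solutions.load_from(results)
m.rc.pprint()  # expected to show the reduced cost (3.14 in the toy model above)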

Related

loading a graph from .meta file from Tensorflow in c++ for inference

I have trained some models using tensorflow 1.5.1 and I have the checkpoints for those models (including .ckpt and .meta files). Now I want to do inference in c++ using those files.
In python, I would do the following to save and load the graph and the checkpoints.
for saving:
images = tf.placeholder(...)  # the input layer
# ... the graph definition ...
output = tf.nn.softmax(net)  # the output layer
tf.add_to_collection('images', images)
tf.add_to_collection('output', output)
For inference, I restore the graph and the checkpoint, then retrieve the input and output layers from the collections like so:
meta_file = './models/last-100.meta'
ckpt_file = './models/last-100'
with tf.Session() as sess:
    saver = tf.train.import_meta_graph(meta_file)
    saver.restore(sess, ckpt_file)
    images = tf.get_collection('images')[0]  # get_collection returns a list
    output = tf.get_collection('output')[0]
    outputTensors = sess.run(output, feed_dict={images: np.array(an_image)})
Now, assuming that I did the saving in Python as usual, how can I restore and run inference in C++ with code as simple as the Python version?
I have found examples and tutorials, but they are for TensorFlow versions 0.7 and 0.12, and the same code doesn't work for version 1.5. I found no tutorials for restoring models with the C++ API on the TensorFlow website.
For the sake of this thread, I will rephrase my comment into an answer.
Posting a full example would require either a CMake setup or placing the file in a specific directory to run Bazel. I favor the first way, and since covering all the parts would burst the limits of this post, I would like to point to a complete implementation in C99, C++, and Go without Bazel, which I tested for TF > v1.5.
Loading a graph in C++ is not much more difficult than in Python, provided you have already compiled TensorFlow from source.
Starting with an MWE that creates a very simple network graph is always a good idea to figure out how things work:
import tensorflow as tf
x = tf.placeholder(tf.float32, shape=[1, 2], name='input')
output = tf.identity(tf.layers.dense(x, 1), name='output')
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver = tf.train.Saver(tf.global_variables())
    saver.save(sess, './exported/my_model')
There are probably tons of answers here on SO about this part. So I just let it stay here without further explanation.
Loading in Python
Before doing things in another language, we should first do it properly in Python -- in the sense that we then only need to rewrite the same steps in C++.
Restoring is also very easy in Python:
import tensorflow as tf
with tf.Session() as sess:
    # load the computation graph
    loader = tf.train.import_meta_graph('./exported/my_model.meta')
    sess.run(tf.global_variables_initializer())
    loader.restore(sess, './exported/my_model')
    x = tf.get_default_graph().get_tensor_by_name('input:0')
    output = tf.get_default_graph().get_tensor_by_name('output:0')
But it is not that helpful, as most of these API endpoints do not exist in the C++ API (yet?). An alternative version would be:
import tensorflow as tf
with tf.Session() as sess:
    metaGraph = tf.train.import_meta_graph('./exported/my_model.meta')
    restore_op_name = metaGraph.as_saver_def().restore_op_name
    restore_op = tf.get_default_graph().get_operation_by_name(restore_op_name)
    filename_tensor_name = metaGraph.as_saver_def().filename_tensor_name
    sess.run(restore_op, {filename_tensor_name: './exported/my_model'})
    x = tf.get_default_graph().get_tensor_by_name('input:0')
    output = tf.get_default_graph().get_tensor_by_name('output:0')
Hang on: you can always use print(dir(object)) to discover properties like restore_op_name and filename_tensor_name.
Restoring a model is a TensorFlow operation like any other: we just call this operation and provide the path (a string tensor) as an input. We can even write our own restore helper:
def restore(sess, metaGraph, fn):
    restore_op_name = metaGraph.as_saver_def().restore_op_name  # u'save/restore_all'
    restore_op = tf.get_default_graph().get_operation_by_name(restore_op_name)
    filename_tensor_name = metaGraph.as_saver_def().filename_tensor_name  # u'save/Const'
    sess.run(restore_op, {filename_tensor_name: fn})
Even though this looks strange, it greatly helps when doing the same thing in C++.
Loading in C++
Starting with the usual stuff
#include <tensorflow/core/public/session.h>
#include <tensorflow/core/public/session_options.h>
#include <tensorflow/core/protobuf/meta_graph.pb.h>
#include <string>
#include <iostream>
typedef std::vector<std::pair<std::string, tensorflow::Tensor>> tensor_dict;
int main(int argc, char const *argv[]) {
  const std::string graph_fn = "./exported/my_model.meta";
  const std::string checkpoint_fn = "./exported/my_model";
  // prepare session
  tensorflow::Session *sess;
  tensorflow::SessionOptions options;
  TF_CHECK_OK(tensorflow::NewSession(options, &sess));
  // here we will put our loading of the graph and weights
  return 0;
}
You should be able to compile this by either putting it into the TensorFlow repo and using Bazel, or by simply following the instructions here to use CMake.
We need to load the MetaGraphDef that tf.train.import_meta_graph reads on the Python side. This can be done by:
tensorflow::MetaGraphDef graph_def;
TF_CHECK_OK(ReadBinaryProto(tensorflow::Env::Default(), graph_fn, &graph_def));
In C++, reading a graph from a file is not the same as importing a graph in Python. We still need to create the graph in a session:
TF_CHECK_OK(sess->Create(graph_def.graph_def()));
Looking at the strange Python restore function above:
restore_op_name = metaGraph.as_saver_def().restore_op_name
restore_op = tf.get_default_graph().get_operation_by_name(restore_op_name)
filename_tensor_name = metaGraph.as_saver_def().filename_tensor_name
we can write the equivalent piece in C++:
const std::string restore_op_name = graph_def.saver_def().restore_op_name();
const std::string filename_tensor_name = graph_def.saver_def().filename_tensor_name();
Having this in place, we just run the operation by
TF_CHECK_OK(sess->Run(feed_dict,          // inputs
                      {},                 // output_tensor_names (we do not need them)
                      {restore_op_name},  // target_node_names
                      nullptr));          // outputs (there are no outputs this time)
Creating the feed_dict (here it just maps filename_tensor_name to a string tensor holding the checkpoint path, as in the Python version above) is probably a post on its own, and this answer is already long enough; it only covers the most important parts. Again, I would like to point to the complete implementation in C99, C++, and Go without Bazel, which I tested for TF > v1.5. It is not that hard -- it just can get very long in the case of the plain C version.

Pyomo: Setting variable branching priorities

In Chapter 16 of the documentation, the following is said:
Exporting information to a solver or algorithm to aid in solving a mathematical program (e.g., warm-starting information, variable branching priorities).
Yet I cannot find an example of how variable priorities can be set (for compatible solvers), nor can I find anything about this in the source code.
Since setting variable priorities is solver-specific: with which solvers does this work? More specifically, how can this be done with CPLEX or Gurobi? And might it also work for open-source solvers?
This is now possible with the CPLEX LP solver using a fork of pyomo.
m = ConcreteModel()
m.x = Var(domain=Integers)
m.s = RangeSet(10)
m.y = Var(m.s, domain=Integers)
m.o = Objective(expr=m.x + quicksum(m.y[i] for i in m.s), sense=minimize)
m.c = Constraint(expr=m.x >= 1)
m.c2 = Constraint(expr=quicksum(m.y[i] for i in m.s) >= 10)
m.priority = Suffix(direction=Suffix.EXPORT, datatype=Suffix.INT)
m.direction = Suffix(direction=Suffix.EXPORT, datatype=Suffix.INT) # this is optional
m.priority.set_value(m.x, 1)
m.priority.set_value(m.y, 2)
m.direction.set_value(m.y, BranchDirection.down)
m.direction.set_value(m.y[10], 1)
with SolverFactory('cplex', solver_io='lp') as opt:
    opt.solve(m, priorities=True)
You should now see the effect of the priorities reflected in the logs of your CPLEX run.
To run this currently you'll need to install a specific dev branch of pyomo as follows:
pip install git+https://github.com/Pyomo/pyomo.git#refs/pull/1300/merge
This should be merged into the master dev version of pyomo, and make it into a pyomo release in due time.
Pyomo has a system for passing options to the solvers. For the most part, these are passed exactly as specified, so you would look at the Gurobi or CPLEX documentation for what the accepted keywords and values are.
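To make that mechanism concrete, here is a minimal sketch; the option names are standard Gurobi parameters used only as examples, not something Pyomo itself defines:
from pyomo.environ import SolverFactory
opt = SolverFactory('gurobi')
# these names are passed through to the solver verbatim; check the Gurobi docs for valid keywords
opt.options['MIPGap'] = 0.01     # relative MIP gap tolerance
opt.options['TimeLimit'] = 600   # seconds
results = opt.solve(m, tee=True)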

BitwiseXOR for Tensorflow

My project requires a new layer, which needs a new tensor operator that computes the bitwise XOR between the input x and a constant key k.
E.g. for x = 4 (bit form: 100) and k = 7 (111), bitwise_xor(x, k) is expected to be 3 (011).
As far as I know, TensorFlow only has a logical XOR operator for the bool type. Luckily, TensorFlow can be extended with new ops. I read the document at https://www.tensorflow.org/extend/adding_an_op and get the basic idea, but that is still far from an implementation, maybe because of my lack of C++ knowledge. Any suggestions for implementing the new operator would be helpful, so that I can use it to build the new layers.
If you don't want to implement your own C++ op, you can try with tf.py_func which allows you to define a python function that operates on numpy arrays and is then used as a Tensorflow operation in the graph.
For your problem you can use numpy's bitwise_xor():
import tensorflow as tf
import numpy as np
t1 = tf.constant([2,4,6], dtype=tf.int64)
t2 = tf.constant([1,3,5], dtype=tf.int64)
t_xor = tf.py_func(np.bitwise_xor, [t1, t2], tf.int64, stateful=False)
with tf.Session() as sess:
    val = sess.run(t_xor)
    print(val)
which prints [3,7,3] as expected.
Please be aware of the known limitations of this function (taken from the link above):
N.B. The tf.py_func() operation has the following known limitations:
The body of the function (i.e. func) will not be serialized in a GraphDef. Therefore, you should not use this function if you need to serialize your model and restore it in a different environment.
The operation must run in the same address space as the Python program that calls tf.py_func(). If you are using distributed TensorFlow, you must run a tf.train.Server in the same process as the program that calls tf.py_func() and you must pin the created operation to a device in that server (e.g. using with tf.device():).

custom kernel in e1071

I am currently trying to tune the svm function in the e1071 package for R. My input is genomic data (that is, each attribute takes a value in the set {-1, 0, 1}), and none of the four kernels currently offered in the package is really suited to this kind of data --- I would like to use the Hamming distance as my kernel instead.
The svm function, it seems, is written in C++. I have downloaded the source via
download.packages(pkgs = "e1071",
                  destdir = ".",
                  type = "source")
I found the svm.cpp file containing the code for the function and the corresponding kernel portion, where I can potentially add my own custom kernel. Has anyone tried doing this? Is it possible? Once I've finished modifying svm.cpp (provided I figure out how..), how do I make the package "see" the modified file?
You can modify an existing kernel. I changed the return statement of the radial kernel to make my changes; you can try that approach.

Preloading for Importr in Django

Is there a way to preload libraries for the R instance that rpy2 talks to? I am spending 25-30% of my response time (about .5s per chart) in importr calls to lattice or grdevices, and would like to cut down if possible.
Code snippet:
from uuid import uuid4
from django.core.files import File
import rpy2.robjects as robjects
from rpy2.robjects.packages import importr

grdevices = importr('grDevices')
importr('lattice')
imagefile = File(open('1d_%s.png' % str(uuid4()), 'w'))
grdevices.png(file=imagefile.name, type='cairo', width=400, height=350)
# y_lab, x_lab and the R-side variables yvec, xvec, labels are set up elsewhere in the view
rcmd = """
print(
    xyplot(yvec ~ xvec, labels=labels, type=c('p', 'r'),
           ylab='%s', xlab='%s'
    )
)""" % (y_lab, x_lab)
robjects.r(rcmd)
grdevices.dev_off()
imagefile.close()
If I do not invoke importr('lattice'), robjects.r chokes on the xyplot(...) call I make later. Can I use R_PROFILE or R_ENVIRON_USER to speed up the lattice and grDevices loading?
importr is a pretty high-level function, trading performance for ease of use. It does a lot besides just loading an R package: it also maps all R objects in that package to Python (rpy2) objects. That effort is wasted when calling importr('lattice') in your script if the result is not used.
Besides that, importing packages in R itself is not without cost (for larger R packages with S4 class definitions, this can be noticeable when the script is short). rpy2 can't do much about this.
Using R environment variables such as R_PROFILE is possible, but until very recently this was not enabled by default. How to enable it is covered on SO (here).
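A minimal sketch of what that could look like, assuming your rpy2/R setup actually honors the startup profile (the profile path and its contents are hypothetical):
import os
# hypothetical profile file containing e.g.:  library(lattice); library(grDevices)
os.environ['R_PROFILE'] = '/path/to/rpy2_profile.R'
import rpy2.robjects as robjects  # the embedded R is initialized here and may read the profile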
Now, here importr is taking "only" 25% of the response time, so optimization efforts focused on it cannot make the response more than 25% faster (and that is a very optimistic limit). Interpolating data into a string and evaluating it as R code is also not optimal (as warned in the rpy2 documentation). Consider calling the R function through rpy2 directly, passing the data as anything exporting the buffer interface, for example.
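As a rough sketch of that last suggestion -- calling lattice's xyplot through rpy2 instead of interpolating a string of R code -- with made-up data, following the Formula/importr pattern from the rpy2 documentation:
# sketch only: the data values are made up
import rpy2.robjects as robjects
from rpy2.robjects import Formula, FloatVector
from rpy2.robjects.packages import importr

lattice = importr('lattice')
grdevices = importr('grDevices')
rprint = robjects.r['print']  # lattice plots must be explicitly print()-ed in R

fml = Formula('y ~ x')
fml.environment['x'] = FloatVector([1.0, 2.0, 3.0])
fml.environment['y'] = FloatVector([2.1, 3.9, 6.2])

grdevices.png(file='example.png', type='cairo', width=400, height=350)
rprint(lattice.xyplot(fml, type=robjects.StrVector(['p', 'r']),
                      ylab='y label', xlab='x label'))
grdevices.dev_off()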