Pyomo input data from model.display(filename)

I've solved a model and output the results to filename
from pyomo.environ import *
model = ConcreteModel()
# declared variables
...
# solved model
...
# display results
model.display(filename)
Now, this program has finished running. I'd like to do some post-processing of the results in filename. Is there an easy way to read filename and put all the solution information back into model for post-processing of the solution?
I'm trying to plot many of the variables that I have solved for with matplotlib. I'd like to separate the "solution of the model" code from the "post-processing of the model" code, because I'd like to be able to post-process the model in many different ways that I won't be able to decide on until after the solve. So, I'd like to solve model, call model.display(filename), read all the data from filename back into the Pyomo model, and do some plotting of the results.
I am currently writing my own parser for filename, but I wanted to know if there is an available method with pyomo to do this.

A good way to do what you want is to pickle (i.e., serialize) the model after solving it; subsequent programs can then restore the model and use it. For some discussion of pickling a Pyomo model, see this Stack Overflow post:
How to save (pickle) a model instance in pyomo
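A minimal sketch of that approach (assuming model has been built and solved as in the question; model.x is a hypothetical variable):

import pickle
from pyomo.environ import value

# In the solve script, after the solver has loaded results into the model:
with open('model.pkl', 'wb') as f:
    pickle.dump(model, f)

# In a separate post-processing script:
with open('model.pkl', 'rb') as f:
    model = pickle.load(f)

# Solution values are available again, e.g. for matplotlib plotting:
print(value(model.x))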


How to save and restore a tf.estimator.Estimator model with export_savedmodel?

I started using TensorFlow recently and I'm trying to get used to tf.estimator.Estimator objects. I would like to do something a priori quite natural: after having trained my classifier, i.e. an instance of tf.estimator.Estimator (with the train method), I would like to save it to a file (whatever the extension) and then reload it later to predict the labels for some new data. Since the official documentation recommends using the Estimator APIs, I guess something as important as that should be implemented and documented.
I saw on some other page that the method to do that is export_savedmodel (see the official documentation), but I simply don't understand the documentation. There is no explanation of how to use this method. What is the serving_input_fn argument? I never encountered it in the Creating Custom Estimators tutorial or in any of the other tutorials I read. By doing some googling, I discovered that around a year ago the estimators were defined using another class (tf.contrib.learn.Estimator), and it looks like tf.estimator.Estimator reuses some of the previous APIs. But I can't find clear explanations in the documentation about it.
Could someone please give me a toy example? Or explain to me how to define/find this serving_input_fn?
And then how would one load the trained classifier again?
Thank you for your help!
Edit: I discovered that one doesn't necessarily need to use export_savedmodel to save the model; it is actually done automatically. If we later define a new estimator with the same model_dir argument, it will automatically restore the previous estimator, as explained here.
As you figured out, the estimator automatically saves and restores the model for you during training. export_savedmodel might be useful if you want to deploy your model to the field (for example, providing the best model for TensorFlow Serving).
Here is a simple example:
def serving_input_fn():
    # At serving time the dataset pipeline is replaced by a placeholder
    # that receives the raw input features.
    inputs = {'features': tf.placeholder(tf.float32, [None, 128, 128, 3])}
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)

est.export_savedmodel(export_dir_base=FLAGS.export_dir,
                      serving_input_receiver_fn=serving_input_fn)
Basically, serving_input_fn is responsible for replacing the dataset pipeline with a placeholder. At deployment time you can feed data to this placeholder as the input to your model for inference or prediction.
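For the loading side, a hedged sketch (TF 1.x; my_model_fn, my_input_fn, the paths, and batch_of_images are placeholders, not from the question):

# 1) Checkpoint restore: recreating an estimator with the same model_dir
#    automatically restores the latest checkpoint.
est = tf.estimator.Estimator(model_fn=my_model_fn, model_dir='/tmp/my_model')
predictions = est.predict(input_fn=my_input_fn)

# 2) Loading an exported SavedModel directly for inference:
predict_fn = tf.contrib.predictor.from_saved_model('/tmp/exported/1513701267')
result = predict_fn({'features': batch_of_images})  # key matches serving_input_fn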

Load the GoogleNews-vectors-negative300.bin and predict_output_word

I tried to load GoogleNews-vectors-negative300.bin and call the predict_output_word method.
I tested three ways, but each failed; the code and error for each are shown below.
import gensim
from gensim.models import Word2Vec
The first:
I first used this line:
model = Word2Vec.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
print(model.wv.predict_output_word(['king', 'man'], topn=10))
error:
DeprecationWarning: Deprecated. Use gensim.models.KeyedVectors.load_word2vec_format instead.
The second:
Then I tried:
model = gensim.models.KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
print(model.wv.predict_output_word(['king', 'man'], topn=10))
error:
AttributeError: 'Word2VecKeyedVectors' object has no attribute 'predict_output_word'
The third:
model = gensim.models.Word2Vec.load('GoogleNews-vectors-negative300.bin')
print(model.wv.predict_output_word(['king', 'man'], topn=10))
error:
_pickle.UnpicklingError: invalid load key, '3'.
I read the document at
https://radimrehurek.com/gensim/models/word2vec.html
but I still have no idea which namespace predict_output_word lives in.
Can anybody help?
Thanks.
The GoogleNews set of vectors is just the raw vectors – without a full trained model (including internal weights). So it:
can't be loaded as a fully-functional gensim Word2Vec model
can be loaded as a lookup-only KeyedVectors, but that object alone doesn't have the data or protocols necessary for further model training or other functionality
Google hasn't released the full model that was used to create the GoogleNews vector set.
Note also that the predict_output_word() function in gensim should be considered an experimental curiosity. It doesn't work in hierarchical-softmax models (because there it's not as simple to generate ranked predictions), and it doesn't quite match the same context-window weighting as is used during training.
Predicting words isn't really the point of the word2vec algorithm – many implementations don't offer any interface for making individual word-predictions outside of the sparse bulk training process. Rather, word2vec uses the exercise of (sloppily) trying to make predictions to train word-vectors that turn out to be useful for other, non-word-prediction, purposes.
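To make the distinction concrete, a minimal sketch (gensim 3.x assumed; my_corpus is a placeholder iterable of token lists):

from gensim.models import KeyedVectors, Word2Vec

# The GoogleNews file can only be loaded as lookup-only vectors;
# similarity queries work, but predict_output_word() is unavailable.
kv = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
print(kv.most_similar('king', topn=5))

# predict_output_word() lives on a full Word2Vec model trained with
# negative sampling, e.g. one you train yourself.
model = Word2Vec(my_corpus, negative=5, hs=0)
print(model.predict_output_word(['king', 'man'], topn=10))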

Django: how do I create a model dynamically

How do I create a model dynamically upon uploading a csv file? I have done the part where it can read the csv file.
This doc explains very well how to dynamically create models at runtime in Django. It also links to an example of doing so.
However, as you will see after looking at the document, this is quite complex and cumbersome. I would not recommend it; it is quite likely you can determine ahead of time a model flexible enough to handle the CSV. That would be much better practice, since dynamically changing your database schema while the application is running is a recipe for a ton of bugs.
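For reference, the core idea from that doc is to build the model class at runtime with type(); a minimal sketch (the model and field names here are hypothetical):

from django.db import models

attrs = {
    '__module__': 'myapp.models',  # required so Django can register the class
    'name': models.CharField(max_length=255),
}
# Equivalent to declaring the class statically in models.py
DynamicModel = type('DynamicModel', (models.Model,), attrs)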
I understand that you want to create new schemas on the fly based on the fields in a CSV. While that's a valid use case and could be the absolute right call, I doubt it; it lends itself to a data model for a single-tenant SaaS application that could have goofy performance and migration issues.
I'd try Mongo or some other NoSQL solution, as others have mentioned. But a simpler approach may be a modified star schema implemented in SQL. In this case you create a dimension table that stores each header, then create an instance of each data element with a foreign key to its dimension, recording the value for that dimension; two hypothetical models for this are sketched below.
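For concreteness, the two models might look like this (a hypothetical sketch matching the pseudocode below):

from django.db import models

class Dimension(models.Model):
    # One row per CSV header
    name = models.CharField(max_length=255, unique=True)

class DimensionRecord(models.Model):
    # One row per cell, keyed to its header
    dimension = models.ForeignKey(Dimension, on_delete=models.CASCADE)
    value = models.TextField()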
If you read the CSV with csv.DictReader, the pseudocode would look something like this:
from csv import DictReader

for row in DictReader(file):
    for k in row.keys():
        # Look up the dimension for this header, creating it if missing
        try:
            dim = Dimension.objects.get(name=k)
        except Dimension.DoesNotExist:
            dim = Dimension(name=k)
            dim.save()
        # Record this cell's value against its dimension
        DimensionRecord(dimension=dim, value=row[k]).save()
Obviously you could handle reading the headers better and trap errors when dimensions already exist, but this is an example of how you could dynamically load CSVs with variable headers into a SQL database.

Save CPLEX preprocessing/aggregation

I'm using the C++ CPLEX API to model MILP problems. CPLEX "simplifies" my models before solving them (e.g., via the aggregator, MILP presolve, substitutions, etc.). When I use the exportModel method of the IloCplex class, it only writes the original model.
Is it possible to save the reduced model?
Thank you for your help
It is not possible to do this using the C++ API (you don't have access to the presolved model via the object-oriented Concert layers). You can do it programmatically with the C Callable Library or the Python API. Alternatively, you can do it manually with the interactive optimizer, like so:
CPLEX> read model.sav
CPLEX> write model.lp
CPLEX> write presolved.pre
CPLEX> read presolved.pre
CPLEX> write presolved.lp
This example assumes that you've exported your original model in SAV format. The write presolved.pre step saves the presolved model in CPLEX's binary PRE format; reading that file back in and writing again converts it to human-readable LP format, so after following those steps you end up with presolved.lp (the presolved model in LP format). If you wanted to do it programmatically (using one of the APIs above), you'd follow the same steps.

Django - How to pass dynamic models between pages

I have made a Django app that creates models and database tables on the fly. This is, as far as I can tell, the only viable way of doing what I need. The problem is how to pass a dynamically created model between pages.
I can think of a few ways of doing such but they all sound horrible. The methods I can think of are:
Use global variables within views.py. This seems like a horrible hack and is likely to cause conflicts if there are multiple simultaneous users.
Pass a reference in the URL and use some eval hackery to try to find the model again. This is probably stupid, as the model could potentially be garbage-collected en route.
Use a place-holder app. This seems like a bad idea due to conflicts between multiple users.
Have an invisible form that posts the model when a link is clicked. Again, very hacky.
Is there a good way of doing this, and if not, is one of these methods more viable than the others?
P.S. In case it helps: my app receives data (as a JSON string) from a pre-existing database and caches it locally (i.e. on the webserver), creating an appropriate model and table on the fly. The idea is then to present this data and do various filtering and drill-downs on it without placing undue strain on the main database (as each query returns a few hundred results out of a database of hundreds of millions of data points). W.r.t. option 3, the tables are named based on a hash of the query and a timestamp, whereas a place-holder app would have a predetermined name.
Thanks,
jhoyla
EDITED TO ADD: Thanks guys, I have now solved this problem. I ended up using both answers together to arrive at a complete solution. As I can only accept one, I am going to accept the contenttypes one; sadly I don't have the reputation to give upvotes yet, but if/when I do I will endeavor to return and upvote appropriately.
The solution in its totality:
from django.contrib.contenttypes.models import ContentType
from django.http import Http404

def view_a(request):
    model = create_model(...)
    # Store the model's ContentType in the session; it can be turned
    # back into the model class in a later view.
    request.session['model'] = ContentType.objects.get_for_model(model)
    ...

def view_b(request):
    ctmodel = request.session.get('model', None)
    if not ctmodel:
        raise Http404
    # Recover the actual model class from the ContentType
    model = ctmodel.model_class()
    ...
My first thought would be to use content types and to pass the type/model information via the URL.
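For example, a hypothetical sketch of the URL-based variant (app_label and model_name would come from the URLconf):

from django.contrib.contenttypes.models import ContentType

def view_b(request, app_label, model_name):
    # Look the model class up again from the identifiers in the URL
    ct = ContentType.objects.get(app_label=app_label, model=model_name)
    model = ct.model_class()
    ...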
You could also use Django's sessions framework, e.g.
def view_a(request):
    your_model = request.session.get('your_model', None)
    if isinstance(your_model, YourModel):
        your_model.name = 'something_else'
        request.session['your_model'] = your_model
    ...

def view_b(request):
    your_model = request.session.get('your_model', None)
    ...
You can store almost anything in the session dictionary, and managing it is also easy:
del request.session['your_model']