Accessing constraints/objective - Pyomo

I'm using a toolbox that internally calls Pyomo to solve an optimization problem. I am interested in accessing the Pyomo model it constructs so I can modify the constraints/objective. So my question is, suppose I get the following output:
Problem:
- Name: unknown
  Lower bound: 6250.0
  Upper bound: 6250.0
  Number of objectives: 1
  Number of constraints: 11
  Number of variables: 8
  Number of nonzeros: 17
  Sense: minimize
Solver:
- Status: ok
  Termination condition: optimal
  Statistics:
    Branch and bound:
      Number of bounded subproblems: 0
      Number of created subproblems: 0
  Error rc: 0
  Time: 0.015599727630615234
Solution:
- number of solutions: 0
  number of solutions displayed: 0
So the problem solves fine and I get a solution, but the modeling was done internally by another tool. Is it possible to access the constraints/objective so that I can freely modify the optimization model in Pyomo syntax?
I tried to call the model and got something like this:
<pyomo.core.base.PyomoModel.ConcreteModel at xxxxxxx>

It sounds like network.model is the Pyomo model and yes, if you have access to it, you can modify it. Try printing the model or individual model components using the pprint() method:
network.model.pprint()
network.model.con1.pprint()
or you can loop over specific types of components in the model:

from pyomo.environ import Constraint

for c in network.model.component_objects(Constraint):
    print(c)  # prints the name of every constraint on the model
The easiest way to modify the objective is to find the existing one, deactivate it, and add a new one:

from pyomo.environ import Objective

network.model.obj.deactivate()
network.model.new_obj = Objective(expr=network.model.x**2)
There are ways to modify constraints/objectives in place but they are not well-documented. I'd recommend using Python's dir() function and playing around with some of the component attributes and methods to get a feel for how they work.
dir(network.model.obj)
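
For instance, to peek at a constraint's pieces before changing anything, the body, lower, and upper attributes are a good starting point. A minimal sketch, assuming the constraints are indexed in the usual way:

from pyomo.environ import Constraint

for c in network.model.component_objects(Constraint):
    for index in c:  # loop over the indices of this constraint
        con = c[index]
        print(con.body)   # the constraint expression
        print(con.lower, con.upper)  # bounds (None on an unbounded side)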

Related

UserWarning in pymc3: what does reparameterize mean?

I built a pymc3 model using the DensityDist distribution. I have four parameters, of which 3 use Metropolis and one uses NUTS (this is automatically chosen by pymc3). However, I get two different UserWarnings:
1. Chain 0 contains a number of diverging samples after tuning. If increasing target_accept does not help, try to reparameterize.
May I know what reparameterize means here?
2. The acceptance probability in chain 0 does not match the target. It is , but should be close to 0.8. Try to increase the number of tuning steps.
Digging through a few examples, I used 'random_seed', 'discard_tuned_samples', 'step = pm.NUTS(target_accept=0.95)' and so on, and got rid of these user warnings. But I couldn't find details of how these parameter values should be decided. I am sure this must have been discussed in various contexts, but I am unable to find solid documentation for it. I was doing trial and error as below.
with patten_study:
    # SEED = 61290425  # 51290425
    step = pm.NUTS(target_accept=0.95)
    trace = pm.sample(step=step)  # pm.sample(4000, tune=10000, step=step, discard_tuned_samples=False, random_seed=SEED)
I need to run this on different datasets, so I am struggling to fix these parameter values for each dataset I am using. Is there any way to supply these values, or to check the outcome (whether there are any user warnings), try other values, and run it in a loop?
Pardon me if I am asking something stupid!
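
To make the idea concrete, here is a rough sketch of the loop I have in mind; the target_accept schedule is arbitrary, and it assumes the messages are ordinary Python UserWarnings, as they appear to be:

import warnings
import pymc3 as pm

for ta in (0.8, 0.9, 0.95, 0.99):  # arbitrary schedule of target_accept values
    with patten_study:
        with warnings.catch_warnings(record=True) as caught:
            warnings.simplefilter("always")
            step = pm.NUTS(target_accept=ta)
            trace = pm.sample(step=step)
    if not any(issubclass(w.category, UserWarning) for w in caught):
        break  # stop at the first setting that samples without warnings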
In this context, re-parametrization basically means finding a different but equivalent model that is easier to compute. There are many things you can do, depending on the details of your model:
- Instead of using a Uniform distribution, use a Normal distribution with a large variance.
- Change from a centered hierarchical model to a non-centered one (see the sketch below).
- Replace a Gaussian with a Student-T.
- Model a discrete variable as a continuous one.
- Marginalize variables, like in this example.
Whether these changes make sense or not is something you should decide based on your knowledge of the model and problem.
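
For the centered vs. non-centered case, a minimal sketch with a made-up toy hierarchical model (all names and priors here are illustrative, not from the question):

import pymc3 as pm

# centered parameterization: theta is drawn directly from Normal(mu, sigma)
with pm.Model() as centered:
    mu = pm.Normal("mu", mu=0, sd=5)
    sigma = pm.HalfNormal("sigma", sd=5)
    theta = pm.Normal("theta", mu=mu, sd=sigma, shape=8)

# non-centered parameterization: draw a standardized variable and rescale it
with pm.Model() as non_centered:
    mu = pm.Normal("mu", mu=0, sd=5)
    sigma = pm.HalfNormal("sigma", sd=5)
    theta_raw = pm.Normal("theta_raw", mu=0, sd=1, shape=8)
    theta = pm.Deterministic("theta", mu + sigma * theta_raw)

Both define the same distribution over theta, but the non-centered version usually gives NUTS an easier geometry to sample and tends to produce fewer divergences.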

On loading the saved Keras sequential model, my test data gives low accuracy in the beginning

I am creating a simple Sequential Keras model which will take 10k inputs in batches of 100. Each input has 3 columns and the corresponding output is the sum of that row.
The Sequential model has 2 layers: LSTM (stateful=True) and Dense.
Now, after compiling and fitting the model, I save it to a 'model.h5' file.
Then I read the saved model and call model.predict with test data (size=10k, batch_size=100).
Problem: the prediction doesn't work properly for the first 400-500 inputs, while for the rest it works perfectly fine with very low val_loss.
Case 1: I make the LSTM layer stateless (i.e. stateful=False).
In this case Keras produces very accurate outputs for all the test data.
Case 2: Instead of saving and then reading again, I call model.predict directly on the model just created; then all the outputs are accurate.
But I need stateful=True, and I also want to save my model and resume work on it later.
1. Is there any way to solve this?
2. Also, when I am providing test data, how is the model's accuracy increasing? (The first 400-500 tests give inaccurate results and the rest are pretty accurate.)
Your problem seems to come from losing the hidden states of your cells. During model building they might be reset, and this might cause the problem.
So, although it's a little bit cumbersome, you could also save and load the states of your network.
How to save (assuming that the i-th layer is a recurrent one):
import numpy

hidden_state = model.layers[i].states[0].eval()
cell_state = model.layers[i].states[1].eval()  # the cell state is states[1], not states[0]
numpy.save("some name", hidden_state)
numpy.save("some other name", cell_state)
Now you can reload the hidden state; here you can read how to set the hidden state in a layer.
Of course, it's best to pack all of these methods into some kind of object, e.g. class constructor/loader methods.
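
For completeness, a minimal sketch of the loading side under the same assumptions (the backend exposes set_value, and numpy.save has appended a .npy extension to the placeholder file names):

import numpy
from keras import backend as K

hidden_state = numpy.load("some name.npy")
cell_state = numpy.load("some other name.npy")
K.set_value(model.layers[i].states[0], hidden_state)
K.set_value(model.layers[i].states[1], cell_state)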

Simple prediction model for multiple features

I am new to prediction models. I am currently using Python 2.7 and sklearn. I would like to know a simple model that combines many features to predict one target.
To make it clearer: let's say I have 4 arrays of size 10: A, B, C, and Y. I would like to use the values of A, B, and C to predict the values of Y.
Thank you
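
For illustration, the kind of setup being asked about might look like the following sketch, using scikit-learn's LinearRegression as one arbitrary choice of simple model and made-up data:

import numpy as np
from sklearn.linear_model import LinearRegression

# hypothetical data: three feature arrays and one target, each of size 10
A, B, C = np.random.rand(10), np.random.rand(10), np.random.rand(10)
Y = A + 2 * B - C  # made-up relationship for the example

X = np.column_stack([A, B, C])  # shape (10, 3): one column per feature
model = LinearRegression()
model.fit(X, Y)
predictions = model.predict(X)  # predicted values of Y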

SegNet results of train set (test via test_segmentation.py)

I ran SegNet on my own dataset (following the SegNet tutorial) and I see great results via test_segmentation.py.
My problem is that I want to see the real net results and not test_segmentation's own colorisation (via classes).
For example, if I have trained the net with 2 classes, then after training I want to see not just 2 colors (as we see with the classes) but the real net segmentation ([0.22, 0.19, 0.3, ...]), lighter and darker as the net sees it.
I hope that I explained myself well. Thanks for helping.
You could use a Python script to achieve what you want. Take a look at this script.
The line out = out['argmax'] extracts the raw output, so you can get a segmentation map with 'lighter and darker' values as you wanted.
When you say the 'real' net color segmentation, I will assume that you mean the probability maps. Effectively, the last layer will have one map for every class, and if you check the function predict in inference.py, they take the argmax, i.e. the channel (which represents the class) with the highest probability. If you want to get these maps, you just have to get the data without computing the argmax; something like:
predicted = net.blobs['prob'].data
I solved it. The solution is to set cmin and cmax to range from 0 to 1 in the scipy saving method, for example:
scipy.misc.toimage(output, cmin=0.0, cmax=1.0).save('/path/.../image.png')
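
Putting the two answers together, a sketch of dumping one image per class probability map (assuming net.blobs['prob'].data has shape (batch, num_classes, height, width), as is typical for a Caffe softmax output):

import scipy.misc

predicted = net.blobs['prob'].data  # assumed shape: (batch, num_classes, H, W)
for k in range(predicted.shape[1]):
    prob_map = predicted[0, k]  # probability map for class k of the first image
    scipy.misc.toimage(prob_map, cmin=0.0, cmax=1.0).save('class_%d.png' % k)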

AIMMS artificial variable constraint

I have difficulty formulating the constraints correctly. A dumbed-down version of the problem:
There are 12 time units and 3 products. The demand $d_{i,t}$ for product $i$ at time $t$ is known in advance, and the resources $r_{i,t}$ (all 8; product $i$ uses the non-$i$ resources) necessary for product $i$ at time $t$ are known as well. We need to minimize holding costs $h_i$ by deciding how much of product $i$ to produce at time $t$; this quantity is called $x_{i,t}$. The starting inventory for every product is 6. To help, I introduce the stock level $s_{i,t}$. This equates to the following formulation:
I got this working using the Excel solver, but I need to do this in AIMMS. The stock variable $s$ is giving me issues: I can't get it to work with an if statement conditioning on $t = 1$, nor do I know how to split it into two constraints, given that the first iteration of the second constraint would need to reference the first constraint.
You can specify the index domain in your constraint attributes as follows:
(t,i) | t > 1
An if t > 1 statement should work if the set of time instances is a subset of the integers. If not, you should use ord(t) > 1, i.e.
if ord(t) > 1 then
    Your_Constraint
endif
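
For comparison, the same first-period conditioning is straightforward to express in Pyomo (a rough sketch with made-up demand data and the starting inventory of 6 from the question):

from pyomo.environ import (ConcreteModel, RangeSet, Var, Constraint,
                           NonNegativeReals)

model = ConcreteModel()
model.T = RangeSet(1, 12)  # 12 time units
model.I = RangeSet(1, 3)   # 3 products
model.x = Var(model.I, model.T, domain=NonNegativeReals)  # production
model.s = Var(model.I, model.T, domain=NonNegativeReals)  # stock level

d = {(i, t): 2 for i in model.I for t in model.T}  # made-up demand data

def stock_rule(m, i, t):
    if t == 1:
        # the first period draws on the starting inventory of 6
        return m.s[i, t] == 6 + m.x[i, t] - d[i, t]
    # later periods reference the previous period's stock
    return m.s[i, t] == m.s[i, t - 1] + m.x[i, t] - d[i, t]

model.StockBalance = Constraint(model.I, model.T, rule=stock_rule)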