Pyomo - Gurobi Persistent - Changing parameter values between solves

I'm trying to use Gurobi in persistent mode with Pyomo.
I've modeled (using Pyomo blocks) an energy system with a battery, and I solve the energy/power balance at every time step, updating the battery SOC_INIT before each solve.
It works fine when I use Gurobi in non-persistent mode.
With gurobi_persistent, I update the SOC_INIT Param and re-solve. Nothing happens: the solver does nothing and keeps all variable values from the previous solve.
Roughly, what I have:
The SOC_INIT Param is mutable.
SOC = SOC_INIT + energy exchanged during the timestep (SOC is a Var).
I've tried solver.add_var/solver.update_var on the SOC variable, and changed SOC_INIT from a Param to a Var.
What am I missing? The documentation about this is not very helpful...
Would someone share a proper way to do this, i.e., updating some Params/Vars between solves? A code snippet would be nice :)
Thanks for your help.
Max.
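
A minimal sketch of the pattern the persistent interface usually needs (the model and constraint names here are invented for illustration): a persistent solver does not pick up a change to a mutable Param on its own, so every constraint built from that Param has to be removed and re-added before the next solve.

import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.SOC_INIT = pyo.Param(initialize=0.5, mutable=True)
m.SOC = pyo.Var()
m.E = pyo.Var(bounds=(-1.0, 1.0))  # energy exchanged during the timestep
m.soc_balance = pyo.Constraint(expr=m.SOC == m.SOC_INIT + m.E)
m.obj = pyo.Objective(expr=m.E, sense=pyo.maximize)

solver = pyo.SolverFactory('gurobi_persistent')
solver.set_instance(m)

for t in range(3):
    solver.solve()
    m.SOC_INIT = pyo.value(m.SOC)  # carry the SOC over to the next timestep
    # the persistent model does not see the new Param value by itself:
    # re-register every constraint that was built from SOC_INIT
    solver.remove_constraint(m.soc_balance)
    solver.add_constraint(m.soc_balance)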

First get the number of solutions:
n_of_solutions = m.SolCount
Then loop over each solution:
for solution in range(n_of_solutions):
    m.setParam(GRB.Param.SolutionNumber, solution)
    # do something
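
For completeness, a small sketch of what "do something" might look like with gurobipy (this assumes m is a gurobipy Model, as the SolCount attribute implies): once SolutionNumber is set, each variable's value in that pool solution is read through the Xn attribute.

from gurobipy import GRB

n_of_solutions = m.SolCount
for solution in range(n_of_solutions):
    m.setParam(GRB.Param.SolutionNumber, solution)
    pool_obj = m.PoolObjVal                  # objective value of this pool solution
    values = [v.Xn for v in m.getVars()]     # variable values in this pool solution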

Related

Weird interaction between measures

I'm trying to create a measure which can give me the first ("primo") date value for a given timeframe, e.g. the start date of the current month, the current year, YTD, etc. I'm using calculation groups, but I've boiled the problem down to this.
In this example I'm trying to find the value of the start date of the current YTD period, which should be 01/01/2022:
primotest =
VAR primoDateValue =
    TOTALYTD(
        FIRSTDATE( Dato[FuldDato] ),
        Dato[FuldDato]
    )
// in my actual measure I have to filter primoDateValue by the [level2] column,
// which is why it is important to include it in this example
VAR randomMeasure =
    CALCULATE(
        1,
        'Dim funktionsbudgetter'[level2] = "this doesn't matter"
    )
RETURN
    primoDateValue
For some unknown and really weird reason, randomMeasure interacts with primoDateValue:
If 'Dim funktionsbudgetter'[level2] = "this doesn't matter" is included in randomMeasure, the result is incorrect.
If I remove 'Dim funktionsbudgetter'[level2] = "this doesn't matter", the value is correct.
My data model looks like this: (diagram not included)
I'm guessing that I have some kind of problem with my data model, but whatever that problem is, I can't think of a single scenario that would cause an interaction between primoDateValue and randomMeasure.
I'm hoping that some of you can think up a scenario that would cause this weird interaction between the measures, so that I can figure out where the problem actually is.
Okay, so I figured out the problem, or actually problems.
It turns out that it is a combination of using time intelligence functions (e.g. TOTALYTD), calculation groups (CG) and composite models.
Basically, you should be really careful when combining CG and composite models.
In short, you can't use remote data with a locally defined CG and vice versa: it will straight up ignore the CG item and just return the normal measure.
You can read more about it here: sqlbi - guide
I also found that the time intelligence functions don't always play nice with composite models, which is also the problem in my "boiled-down" measure in my OP.
I don't know exactly why; it works in other scenarios.
For now I can just rewrite my measure to avoid TI functions.

How to know if the optimization problem is infeasible or not? Pyomo Warning: Problem may be infeasible

Pyomo can find a solution, but it gives this warning:
WARNING: Loading a SolverResults object with a warning status into
    model=(SecondCD);
    message from solver=Ipopt 3.11.1: Converged to a locally infeasible point. Problem may be infeasible.
How do I know if the problem is infeasible or not?
This Pyomo model optimizes a farm's allocation of inputs across crops:
from pyomo.environ import *

model = AbstractModel()
model.Crops = Set()    # set Crops := cereal rapes maize ;
model.Inputs = Set()   # set Inputs := land labor capital fertilizer ;
model.b = Param(model.Inputs)  # parameters of the Cobb-Douglas production function
model.x = Var(model.Crops, model.Inputs, initialize=100, within=NonNegativeReals)

def production_function(model, i):
    return prod(model.x[i, j]**model.b[j] for j in model.Inputs)

model.Q = Expression(model.Crops, rule=production_function)
...
instance = model.create_instance(data="SecondCD.dat")
opt = SolverFactory("ipopt")
opt.options["tol"] = 1E-64
results = opt.solve(instance, tee=True)  # solves and updates instance
instance.display()
If I set b >= 1 (e.g. param b := land 1 labor 1 capital 1 fertilizer 1 ;), Pyomo can find an optimal solution.
But if I set b < 1 (e.g. param b := land 0.1 labor 0.1 capital 0.1 fertilizer 0.1 ;) and set opt.options["tol"] = 1E-64, Pyomo finds a solution but gives that warning.
I expected an optimal solution, but the actual result is the warning mentioned above.
The message you get (message from solver=Ipopt 3.11.1: Converged to a locally infeasible point. Problem may be infeasible.) doesn't mean that the problem is necessarily infeasible. A non-linear solver will typically give you a local optimum, and the path taken to reach a solution is a very important part of finding a "better" local optimum. When you tried with another starting point, you found a feasible solution, and that is proof that your problem is feasible.
Finding the global optimum instead of a local optimum is a little bit harder. One way is to check whether your problem is convex: if it is, there is only one local optimum, and that local optimum is the global optimum. This can be done mathematically; see https://math.stackexchange.com/a/1707213/470821 and http://www.princeton.edu/~amirali/Public/Teaching/ORF523/S16/ORF523_S16_Lec7_gh.pdf from a quick Google search. If your problem is not convex, you can try to show that it has few local optima and that they can be found easily with good starting points. Failing that, you should consider more advanced techniques, each with its pros and cons. For example, you can generate a set of starting solutions to make sure you cover the whole feasible domain of your problem, or use meta-heuristics to help you find a better starting solution.
I am also sure that Ipopt has some tools to help tackle this problem of finding a good starting solution that improves the resulting local optimum.
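
To illustrate the multi-start idea, here is a rough sketch building on the question's instance and x (the objective name obj is an assumption, and maximization is assumed): solve from several random initial points and keep the best local optimum that Ipopt accepts.

import random
import pyomo.environ as pyo
from pyomo.opt import TerminationCondition

opt = pyo.SolverFactory("ipopt")
best_obj, best_point = None, None
for trial in range(20):
    for idx in instance.x:  # randomize the starting point of every x[i, j]
        instance.x[idx].value = random.uniform(1.0, 1000.0)
    results = opt.solve(instance, load_solutions=False)
    if results.solver.termination_condition == TerminationCondition.optimal:
        instance.solutions.load_from(results)
        obj_val = pyo.value(instance.obj)  # 'obj' is an assumed objective name
        if best_obj is None or obj_val > best_obj:  # assuming maximization
            best_obj = obj_val
            best_point = {idx: instance.x[idx].value for idx in instance.x}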

What are hp.Discrete and hp.RealInterval? Can I include more values in hp.RealInterval instead of just 2?

I am using hyperparameter tuning with the HParams Dashboard in TensorFlow 2.0-beta0, as suggested here: https://www.tensorflow.org/tensorboard/r2/hyperparameter_tuning_with_hparams
I am confused by step 1 and could not find any better explanation. My questions relate to the following lines:
HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([16, 32]))
HP_DROPOUT = hp.HParam('dropout', hp.RealInterval(0.1, 0.2))
HP_OPTIMIZER = hp.HParam('optimizer', hp.Discrete(['adam', 'sgd']))
My question:
I want to try more dropout values instead of just two (0.1 and 0.2). If I write more values into it, it throws the error 'maximum 2 arguments can be given'. I tried to look for documentation but could not find anything about where these hp.Discrete and hp.RealInterval functions come from.
Any help would be appreciated. Thank you!
Good question. The notebook tutorial is lacking in many respects. At any rate, here is how you do it at a certain resolution res:
for dropout_rate in tf.linspace(
    HP_DROPOUT.domain.min_value,
    HP_DROPOUT.domain.max_value,
    res,
):
By looking at the implementation, to me it really doesn't seem to be grid search but Monte Carlo/random search (note: this is not 100% correct, please see my edit below).
So on every iteration a random float from that real interval is chosen.
If you want grid-search behavior, just use Discrete. That way you can even mix and match grid search with random search, pretty cool!
Edit, 27th of July '22 (based on the comment of #dpoiesz):
Just to make it a little clearer: concrete values are sampled from the intervals, and those values are then added to the grid dimension, so grid search is performed using them.
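
To answer the original question directly, a minimal sketch: hp.Discrete accepts a list of any length, so you can enumerate exactly the dropout values you want and grid-search over them.

from tensorboard.plugins.hparams import api as hp

# unlike hp.RealInterval's (min, max) pair, hp.Discrete takes a list of any length
HP_DROPOUT = hp.HParam('dropout', hp.Discrete([0.1, 0.15, 0.2, 0.25, 0.3]))

for dropout_rate in HP_DROPOUT.domain.values:
    print(dropout_rate)  # train and log one run per value here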
RealInterval is a (min, max) pair from which the hparam picks a number.
Here is a link to the implementation, for better understanding.
The thing is, as currently implemented there does not seem to be any difference between the two unless you call the sample_uniform method.
Note that tf.linspace breaks the sample code mentioned above when saving the current value.
See https://github.com/tensorflow/tensorboard/issues/2348, in particular OscarVanL's comment about his quick & dirty workaround.
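
A sketch of the kind of workaround discussed in that issue (my reading of it, not OscarVanL's exact code): tf.linspace yields tf.Tensor values, which the hparams plugin cannot serialize, so cast each one to a plain Python float before logging.

import tensorflow as tf

for dropout_rate in tf.linspace(0.1, 0.2, 5):
    hparams = {'dropout': float(dropout_rate)}  # cast the tensor to avoid the save error
    # ...log hparams and run the trial here...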

UserWarning in pymc3: What does reparameterize mean?

I built a pymc3 model using the DensityDist distribution. I have four parameters, out of which 3 use Metropolis and one uses NUTS (this is automatically chosen by pymc3). However, I get two different UserWarnings:
1. Chain 0 contains number of diverging samples after tuning. If increasing target_accept does not help try to reparameterize.
May I know what reparameterize means here?
2. The acceptance probability in chain 0 does not match the target. It is , but should be close to 0.8. Try to increase the number of tuning steps.
Digging through a few examples, I used random_seed, discard_tuned_samples, step = pm.NUTS(target_accept=0.95) and so on, and got rid of these UserWarnings. But I couldn't find details on how these parameter values should be decided. I am sure this might have been discussed in various contexts, but I am unable to find solid documentation on it. I was doing trial and error as below:
with patten_study:
    # SEED = 61290425  # 51290425
    step = pm.NUTS(target_accept=0.95)
    trace = pm.sample(step=step)  # 4000, tune=10000, step=step, discard_tuned_samples=False, random_seed=SEED
I need to run these on different datasets, so I am struggling to fix these parameter values for each dataset I am using. Is there a way to set these values, check the outcome (whether there are any UserWarnings, then try other values), and run it in a loop?
Pardon me if I am asking something stupid!
In this context, re-parametrization basically means finding a different but equivalent model that is easier to compute. There are many things you can do, depending on the details of your model:
Instead of using a Uniform distribution, use a Normal distribution with a large variance.
Change from a centered hierarchical model to a non-centered one (see the sketch below).
Replace a Gaussian with a Student-T.
Model a discrete variable as a continuous one.
Marginalize variables, like in this example.
Whether these changes make sense or not is something that you should decide based on your knowledge of the model and problem.
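
To make the non-centered idea concrete, a minimal sketch (the hierarchical model and shapes here are illustrative, not taken from the question): both models describe the same distribution for theta, but the second has a geometry that NUTS samples with far fewer divergences.

import pymc3 as pm

# centered form: theta is sampled directly, which often produces divergences
with pm.Model() as centered:
    mu = pm.Normal('mu', 0.0, 5.0)
    sigma = pm.HalfNormal('sigma', 5.0)
    theta = pm.Normal('theta', mu=mu, sigma=sigma, shape=8)

# non-centered form: sample a standard normal and rescale it deterministically
with pm.Model() as non_centered:
    mu = pm.Normal('mu', 0.0, 5.0)
    sigma = pm.HalfNormal('sigma', 5.0)
    theta_raw = pm.Normal('theta_raw', 0.0, 1.0, shape=8)
    theta = pm.Deterministic('theta', mu + sigma * theta_raw)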

Enforce no-delay schedule IBM OPL (CPLEX)

I created a schedule in IBM OPL:
dvar sequence schedule in all(j in Jobs) job[j];
If the CP module generates a solution, the solution is sometimes not a non-delay solution. This is not allowed in my case, so I want to enforce a non-delay schedule.
I tried different formulations in the subject to section...
forall(t in Jobs)
  if (t > 1)
    startOf(job[t]) == endOf(job[t-1]);
... but these fail (obviously) when job t-1 is not followed by job t.
Can anyone give me a hint on how to solve this problem?
Kind regards,
Franz
You should try to use endAtStart (an OPL constraint to restrict the relative positions of interval variables).
Regards
endAtStart will only work if your jobs are always ready to start when the preceding job finishes. If this is not the case, you will receive an error.
A better solution would be to handle the no-delay requirement in the objective. What you are trying to achieve is to start every new job as soon as possible, so you can for example use the staticLex function:
minimize staticLex(startOf(job[1]), startOf(job[2]),...);
However, this expression can become very long, and you would need to hardcode the number of intervals.
A little workaround would be to assign the intervals decreasing weighting factors:
minimize sum(j in Jobs) startOf(job[j])*100^-ord(Jobs,j);
I hope this works for you!
Jan