Pyomo with Gurobi solver: max time limit termination criterion not working

I am trying to solve an optimization problem in pyomo by using gurobipy. Given the size of the problem, I would like to set a time limit of 100 seconds as a termination criterion. Although I specified it in the solver options, as follows, it seems to be completely ignored.
opt = SolverFactory("gurobi", solver_io="python", maxTimeLimit=100)
results = opt.solve(model)

Gurobi's time limit parameter is named TimeLimit, not maxTimeLimit. The time limit is an option passed at solve time, not as part of the instantiation of the solver. This is because you may want to solve the same model for a specified amount of time and then re-solve it for a different amount of time:
opt.solve(model, options={'TimeLimit': 100})
opt.solve(model, options={'TimeLimit': 1000})
See this Gurobi documentation page for the names of the parameters.

Related

How can I retrieve the time used and relative MIP gap after optimizing with GLPK in Pyomo?

I am using a concrete model with Pyomo (using GLPK) where two optimizations are run for each day of the year, but during test runs (just a couple of days) the MIP gap is sometimes very high (around 8%) after the time limit. I wish to store (in a dataframe) the time used for each optimization and the MIP gap that was reached, so I can get an idea of a) how long the average optimization takes, and b) how close the results are to optimality. How can I retrieve this data? I haven't been able to find a way to do this.
The code and the data used are too long to share, but this is how I am calling the solver and giving a time limit and gap criteria:
model = create_model(parameters_a, parameters_b)
solver = SolverFactory('glpk')
solver.options["mipgap"] = 0.01
solver.options["tmlim"] = 1000
solver.solve(model, tee=True, symbolic_solver_labels=False)
Python has various utilities for timing, which may be useful. You can also inspect the results object returned by the solve() call:
import time
start = time.time()
results = solver.solve(model, tee=True)
print(results)
end = time.time()
print(end - start)
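Beyond printing the whole results object, Pyomo's results object exposes bounds (results.problem.lower_bound and results.problem.upper_bound for MIPs), from which you can compute the relative gap yourself. A minimal sketch of the bookkeeping, with made-up bound values standing in for what you would read off each results object (the helper names and the row format are my own, not Pyomo API):

```python
# Sketch: compute the relative MIP gap from the bounds a solve reports
# and collect one row per optimization. The bound and timing values
# below are illustrative placeholders, not real solver output.

def mip_gap(lower_bound, upper_bound):
    """One common definition of the relative gap: |ub - lb| / |ub|."""
    if upper_bound == 0:
        return float("inf")
    return abs(upper_bound - lower_bound) / abs(upper_bound)

rows = []

def record_solve(day, lower_bound, upper_bound, elapsed):
    """Append one bookkeeping row per solve; convert to a dataframe later."""
    rows.append({
        "day": day,
        "gap": mip_gap(lower_bound, upper_bound),
        "seconds": elapsed,
    })

# e.g. after each solver.solve(model) call:
record_solve(1, lower_bound=95.0, upper_bound=100.0, elapsed=12.3)
print(rows[0]["gap"])  # 0.05
```

The list of dicts can be handed directly to pandas.DataFrame(rows) at the end of the yearly loop.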

UserWarning in pymc3: what does "reparameterize" mean?

I built a pymc3 model using the DensityDist distribution. I have four parameters, of which three use Metropolis and one uses NUTS (this is chosen automatically by pymc3). However, I get two different UserWarnings:
1. Chain 0 contains number of diverging samples after tuning. If increasing target_accept does not help try to reparameterize.
May I know what "reparameterize" means here?
2. The acceptance probability in chain 0 does not match the target. It is , but should be close to 0.8. Try to increase the number of tuning steps.
Digging through a few examples, I used random_seed, discard_tuned_samples, step = pm.NUTS(target_accept=0.95), and so on, and got rid of these warnings. But I couldn't find details of how these parameter values should be chosen. I am sure this has been discussed in various contexts, but I am unable to find solid documentation for it. I was doing trial and error as below.
with patten_study:
    # SEED = 61290425  # 51290425
    step = pm.NUTS(target_accept=0.95)
    trace = pm.sample(step=step)  # 4000, tune=10000, discard_tuned_samples=False, random_seed=SEED
I need to run these on different datasets, so I am struggling to fix these parameter values for each dataset I am using. Is there any way to supply these values, check the outcome (whether there are any user warnings, and then try other values), and run it in a loop?
Pardon me if I am asking something stupid!
In this context, re-parametrization basically means finding a different but equivalent model that is easier to compute. There are many things you can do, depending on the details of your model:
- Instead of using a Uniform distribution, use a Normal distribution with a large variance.
- Change from a centered hierarchical model to a non-centered one.
- Replace a Gaussian with a Student-T.
- Model a discrete variable as a continuous one.
- Marginalize variables, like in this example.
Whether these changes make sense or not is something you should decide, based on your knowledge of the model and problem.
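To make the non-centered idea concrete: instead of sampling theta directly from Normal(mu, sigma), you sample a standard-normal theta_raw and shift and scale it. The two forms define the same distribution, but the second is often much easier for NUTS to explore in hierarchical models. A stdlib-only numerical sketch (not pymc3 code) of that equivalence:

```python
# Sketch of the non-centered reparameterization: theta ~ Normal(mu, sigma)
# versus theta = mu + sigma * theta_raw with theta_raw ~ Normal(0, 1).
# Both produce the same distribution, up to Monte Carlo noise.
import random
import statistics

random.seed(0)
mu, sigma = 3.0, 2.0
n = 100_000

centered = [random.gauss(mu, sigma) for _ in range(n)]
noncentered = [mu + sigma * random.gauss(0.0, 1.0) for _ in range(n)]

# Means and standard deviations of the two draws agree closely.
print(round(statistics.mean(centered), 1), round(statistics.mean(noncentered), 1))
print(round(statistics.stdev(centered), 1), round(statistics.stdev(noncentered), 1))
```

In a pymc3 model the same trick means declaring the standard-normal variable and building the scaled one with pm.Deterministic, so the sampler only ever sees the well-behaved geometry.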

How to estimate test execution using TestLink

I have gone through many articles, and all of them suggest creating custom fields/keywords in TestLink to estimate the time for sprint execution.
Articles like:
http://www.softwaretestingconcepts.com/testlink-using-custom-fields-and-keywords-for-effective-testing
Is there any alternative approach or scientific method to estimate your sprint execution accurately?
I have found one article proposing the following method:
Number of Test Cases = (Number of Function Points) × 1.2
Source: http://www.tutorialspoint.com/estimation_techniques/estimation_techniques_testing.htm
What should the approach to estimating your execution cycle be? Currently I am doing it based on my experience with my project. It is working fine, but management wants a concrete mechanism for it. Please suggest and share your experience.
I have added Time Estimate and Actual Time from the option below: [screenshot omitted]
Below is the result of the above setting: [screenshot omitted]
I am not able to get this field's data in a report. I also need the total estimate, and then a comparison between actual and estimated time.
Any help would be appreciated.
TestLink provides built-in support to record Estimated Exec. time (defined at test case creation) and Actual Execution time (recorded at test case execution).
Hopefully you can build up your requirement based on this feature.

Using predict for ancillary parameters in maximum likelihood model in Stata

I wanted to know whether I can use the predict option for ancillary parameters in a maximum likelihood program, as follows (I estimated lnsigma, so sigma is the ancillary parameter in the model):
predict lnsigma, eq(lnsigma)
gen sigma=exp(lnsigma)
I would also like to know whether the above can be used for a heteroscedastic model.
Thank you in advance.
That sounds correct. I would be more explicit by typing predict lnsigma, xb eq(lnsigma). This way your code will not break if someone later decides to write a prediction program for your estimation program and sets the default to something other than the linear prediction.
You can also do it in one line:
predictnl sigma = exp(xb(#2))
This assumes that lnsigma is the second equation in your model; if it is the third equation, replace xb(#2) with xb(#3). predictnl is also an easy way of using the delta method to get standard errors and confidence intervals for sigma.
I assume this is your own Stata program. If that is true, then you also have a third option: you can create your own prediction program, which Stata's predict command will recognize. You can find some useful tricks on how to do that here: http://www.stata.com/help.cgi?_pred_se

OpenCV Neural Network train one iteration at a time

The only way I know to train a multilayer neural network in OpenCV is:
CvANN_MLP network;
....
network.train(input, output, Mat(), Mat(), params, flags);
But this does not print any meaningful debug output (e.g. iteration count, current error, ...); the program just sits there until training finishes. This is very troublesome when the dataset is gigabytes in size, since there is no way to see the progress.
How do I train the network one iteration at a time, or print out some debug while training?
Problem not solved, but question answered: it is impossible as far as current OpenCV versions are concerned.
Are you setting the UPDATE_WEIGHTS flag?
You can test the error yourself by having the ANN predict the result vector for each sample in the training set.
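The suggestion above amounts to running the network's predict on every training sample and comparing the outputs with the targets. A minimal, library-free sketch of that error computation, with made-up numbers standing in for the rows you would actually get back from the network:

```python
# Sketch: mean squared error between predicted output vectors and the
# training targets. The `preds` values are hypothetical stand-ins for
# what network.predict would return for each training sample.
def mean_squared_error(predictions, targets):
    total = 0.0
    count = 0
    for pred_row, target_row in zip(predictions, targets):
        for p, t in zip(pred_row, target_row):
            total += (p - t) ** 2
            count += 1
    return total / count

preds = [[0.9, 0.1], [0.2, 0.8]]    # hypothetical network outputs
targets = [[1.0, 0.0], [0.0, 1.0]]  # one-hot training targets
print(round(mean_squared_error(preds, targets), 3))  # 0.025
```

Running this check between short training bursts (e.g. with UPDATE_WEIGHTS and a small iteration limit) gives you the progress readout that train() itself does not print.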
According to http://opencv.willowgarage.com/documentation/cpp/ml_neural_networks.html#cvann-mlp-train, the params parameter is of type CvANN_MLP_TrainParams. This class contains a term_crit property (a termination criteria structure, see http://opencv.willowgarage.com/documentation/cpp/basic_structures.html) which controls when the training function terminates: after a given number of iterations, when a given epsilon condition is fulfilled, or some combination of both. I have not used the training function myself, so I can't know the exact code you'd need, but something like this should limit the number of training cycles:
CvANN_MLP_TrainParams params = CvANN_MLP_TrainParams();
params.term_crit.type = CV_TERMCRIT_ITER; // terminate on iteration count
params.term_crit.max_iter = 1;            // stop after a single iteration
network.train(input, output, Mat(), Mat(), params, flags);
Like I said, I haven't worked with OpenCV, but having read the documentation, something like this should work.
Your answer lies in the source code. If you want some output after every x epochs, add it to the source code, in this loop:
https://github.com/opencv/opencv/blob/9787ab598b6609a6ca6652a12441d741cb15f695/modules/ml/src/ann_mlp.cpp#L941
When they made OpenCV, they had to strike a balance between user customizability and ease of use. Ultimately, you have the power to do whatever you want by editing the source code.