Is there any way to adjust the tolerances for the 'ipopt' optimizer that is used in Pyomo?
The default tolerance for Ipopt is 1E-8. This can be changed from Pyomo using the solver-specific 'tol' option. For example:
from pyomo.environ import SolverFactory

solver = SolverFactory('ipopt')
solver.options['tol'] = 1E-5
solver.solve(model)
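For a self-contained check that the option is actually picked up, you can also set Ipopt's print_user_options option so the solver echoes the options it received (the small model below is made up just for illustration):
import pyomo.environ as pyo
m = pyo.ConcreteModel()
m.x = pyo.Var(initialize=1.0)
m.obj = pyo.Objective(expr=(m.x - 2)**2)
solver = pyo.SolverFactory('ipopt')
solver.options['tol'] = 1E-5
solver.options['print_user_options'] = 'yes'  # ask Ipopt to echo the options it was given
results = solver.solve(m, tee=True)           # tee=True streams the Ipopt log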
I am computing the solution to a dynamic non-linear optimization problem that I set up using the pyomo library. I use a ConcreteModel, with an objective function and several constraints, all time-indexed.
My objective function takes the form of a ScalarObjective (I am solving a dynamic general equilibrium problem in which I seek to maximize total welfare). I would like to compute the gradient of the objective, evaluated at the optimum, with respect to one of the model's variables at a given period t. My problem is a discrete-time problem.
I have tried many different options, asking AI chatbots for help (both You Chat and ChatGPT), but every solution I'm given is incorrect -- on this topic the AI chatbots seem to know very little.
I feel that some method in the library pyomo.dae could be of help, but I haven't found a solution yet. Could anyone help me, please?
You can do this using Pyomo's differentiate function. Here is a toy example:
import pyomo.environ as pyo
from pyomo.core.expr.calculus.derivatives import differentiate
m = pyo.ConcreteModel()
m.x = pyo.Var()
m.con = pyo.Constraint(expr=m.x<=10)
m.obj = pyo.Objective(expr=m.x**2)
pyo.SolverFactory('ipopt').solve(m)
print(pyo.value(m.x))
# -1.2528349584581178e-10
# Evaluate the derivative at current value of m.x
ddx = differentiate(m.obj, wrt=m.x)
print(ddx)
# -2.5056699169162357e-10
# Return derivative expression
ddx2 = differentiate(m.obj, wrt=m.x, mode='sympy')
print(ddx2)
# 2.0*x
You can read more about this function here: https://github.com/Pyomo/pyomo/blob/main/pyomo/core/expr/calculus/derivatives.py#L31
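For a time-indexed model like the one you describe, you can pass an individual indexed variable as wrt. Here is a minimal sketch along the same lines (the set, variable, and objective below are made up for illustration, not taken from your model):
import pyomo.environ as pyo
from pyomo.core.expr.calculus.derivatives import differentiate
# Hypothetical discrete-time model: T is the set of periods, c the decision path
m = pyo.ConcreteModel()
m.T = pyo.RangeSet(0, 3)
m.c = pyo.Var(m.T, initialize=1.0)
m.obj = pyo.Objective(expr=sum(0.95**t * pyo.log(m.c[t]) for t in m.T), sense=pyo.maximize)
# Derivative of the objective with respect to the variable at period t=2,
# evaluated at the current variable values (e.g. the optimum after a solve)
d_obj_dc2 = differentiate(m.obj.expr, wrt=m.c[2])
print(pyo.value(d_obj_dc2))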
Pyomo can find a solution, but it gives this warning:
WARNING: Loading a SolverResults object with a warning status into
model=(SecondCD);
message from solver=Ipopt 3.11.1\x3a Converged to a locally infeasible point. Problem may be infeasible.
How do I know if the problem is infeasible or not?
This Pyomo model optimizes a farm's input allocation decision.
model.Crops = Set() # set Crops := cereal rapes maize ;
model.Inputs = Set() # set Inputs := land labor capital fertilizer;
model.b = Param(model.Inputs) # Parameters in CD production function
model.x = Var(model.Crops, model.Inputs, initialize = 100, within=NonNegativeReals)
def production_function(model, i):
    return prod(model.x[i, j]**model.b[j] for j in model.Inputs)
model.Q = Expression(model.Crops, rule=production_function)
...
instance = model.create_instance(data="SecondCD.dat")
opt = SolverFactory("ipopt")
opt.options["tol"] = 1E-64
results = opt.solve(instance, tee=True) # solves and updates instance
instance.display()
If I set b >= 1 (e.g. param b := land 1 labor 1 capital 1 fertilizer 1), Pyomo can find the optimal solution;
but if I set b < 1 (e.g. param b := land 0.1 labor 0.1 capital 0.1 fertilizer 0.1) and set opt.options["tol"] = 1E-64, Pyomo finds a solution but gives that warning.
I expect an optimal solution, but the actual result gives the warning mentioned above.
The message you get (message from solver=Ipopt 3.11.1\x3a Converged to a locally infeasible point. Problem may be infeasible.) doesn't mean that the problem is necessarily infeasible. A non-linear solver will typically give you a local optimum, and the path taken to reach the solution is a very important part of finding a "better" local optimum. When you tried with another starting point, you found a feasible solution, and that is proof that your problem is feasible.
Now, finding the global optimum instead of a local optimum is a little bit harder. One way is to check whether your problem is convex: if it is, there is only one local optimum, and that local optimum is the global optimum. This can be done mathematically; see https://math.stackexchange.com/a/1707213/470821 and http://www.princeton.edu/~amirali/Public/Teaching/ORF523/S16/ORF523_S16_Lec7_gh.pdf (from a quick Google search). If you find that your problem is not convex, you can try to show that there are only a few local optima and that they can be found easily with good starting points. Finally, if that can't be done, you should consider more advanced techniques, each with its pros and cons. For example, you can generate a set of starting solutions to make sure you cover the whole feasible domain of your problem (a sketch of this idea is below). Another option is to use metaheuristic methods to help you find a better starting solution.
Also, I am sure that Ipopt has some tools to help tackle this problem of finding a good starting solution that improves the resulting local optimum.
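If you want to experiment with the multi-start idea, a rough sketch in Pyomo could look like the function below (this is not your model: it assumes an objective component named model.obj that is being maximized, and an indexed variable var whose starting values you want to randomize):
import random
import pyomo.environ as pyo
from pyomo.opt import TerminationCondition
def multistart_solve(model, var, n_starts=10, lo=1.0, hi=1000.0):
    # Re-solve from several random starting points and keep the best solution found
    solver = pyo.SolverFactory('ipopt')
    best_obj, best_point = None, None
    for _ in range(n_starts):
        for idx in var:
            var[idx].value = random.uniform(lo, hi)  # new starting point
        results = solver.solve(model)
        if results.solver.termination_condition == TerminationCondition.optimal:
            obj = pyo.value(model.obj)               # assumes the objective is named 'obj'
            if best_obj is None or obj > best_obj:   # maximization assumed
                best_obj = obj
                best_point = {idx: var[idx].value for idx in var}
    return best_obj, best_point
For your model that would be called as something like multistart_solve(instance, instance.x).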
I am trying to predict whether a particular service ticket raised by client needs a code change.
I have training data.
I have around 17k data points with problem description and tag (Y for code change required and N for no code change)
I did TF-IDF and it gave me 27k features. So I tried to fit RandomForestClassifier (sklearn python) with this 17k x 27k matrix.
I am getting very low scores on test set while training accuracy is very high.
Precision on train set: 89%
Precision on test set: 21%
Can someone suggest any workarounds?
I am using this model now:
sklearn.ensemble.RandomForestClassifier(n_jobs=3, n_estimators=100, class_weight='balanced', max_features=None, oob_score=True)
Please help!
EDIT:
I have 11k training data points with 900 positives (skewed). I tried LinearSVC-based sparsification, but it didn't work as well as TruncatedSVD (Latent Semantic Indexing). max_features=None performs better on the test set than the default.
I have also tried SVM, logistic regression (L2 and L1), and ExtraTrees. RandomForest is still working best.
Right now I am getting 92% precision on positives, but recall is only 3%.
Any other suggestions would be appreciated!
Update:
Feature engineering helped a lot. I pulled features out of the air (length in characters, length in words, their difference, their ratio, day of week the problem was reported, day of month, etc.) and now I am at 19-20% recall with >95% accuracy.
Food for thought: what about using averaged word2vec vectors as deep features for the free text, instead of tf-idf or bag of words?
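To make that concrete, the kind of averaging I have in mind looks roughly like this (toy corpus; the API names assume gensim 4.x):
import numpy as np
from gensim.models import Word2Vec
# Toy corpus standing in for the ticket descriptions
docs = [["server", "crashed", "after", "deploy"],
        ["user", "cannot", "login"],
        ["request", "new", "report", "field"]]
# Train (or load) a word2vec model, then average the word vectors per document
w2v = Word2Vec(sentences=docs, vector_size=50, min_count=1, window=3)
def doc_vector(tokens, model):
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)
X = np.vstack([doc_vector(d, w2v) for d in docs])
print(X.shape)  # (3, 50)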
[edited]
Random forest handles having more features than data points just fine. RF is used, for example, in microarray studies with roughly a 100:5,000 data-point-to-feature ratio, or in single-nucleotide polymorphism (SNP) studies with roughly a 5,000:500,000 ratio.
I do disagree with the diagnosis provided by @ncfirth, but the suggested treatment of variable selection may help anyway.
Your default random forest is not badly overfitted. It is just not meaningful to pay any attention to non-cross-validated training-set prediction performance for an RF model, because any sample will end up in the terminal nodes/leaves it has itself defined. But the overall ensemble model is still robust.
[edit] If you changed max_depth or min_samples_split, the training precision would probably drop, but that is not the point. The non-cross-validated training error/precision of a random forest model (or many other ensemble models) simply does not estimate anything useful.
[Before the edit I confused max_features with n_estimators; sorry, I mostly use R]
Setting max_features=None is not random forest, but rather 'bagged trees'. You may benefit from a somewhat lower max_features, which improves regularization and speed, or maybe not. I would try lowering max_features to somewhere between 27000/3 and sqrt(27000), the typical optimal range.
You may achieve better test-set prediction performance through feature selection. You can run one RF model, keep the top ~5-50% most important features, and then re-run the model with only those features. The L1 ("lasso") variable selection that ncfirth suggests may also be a viable solution.
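A rough sketch of that two-pass approach in scikit-learn (the dummy matrix below just stands in for your TF-IDF features):
import numpy as np
from sklearn.ensemble import RandomForestClassifier
# Dummy data standing in for the TF-IDF matrix and the Y/N labels
X = np.random.random((500, 2000))
y = np.random.randint(0, 2, size=500)
# First pass: fit once and rank the features by importance
rf = RandomForestClassifier(n_estimators=100, class_weight='balanced', n_jobs=3)
rf.fit(X, y)
# Keep, say, the top 10% most important features
k = int(0.10 * X.shape[1])
top_idx = np.argsort(rf.feature_importances_)[::-1][:k]
X_reduced = X[:, top_idx]
# Second pass: refit on the reduced feature set
rf_small = RandomForestClassifier(n_estimators=100, class_weight='balanced', n_jobs=3)
rf_small.fit(X_reduced, y)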
Your metric of prediction performance, precision, may not be optimal in the case of unbalanced data, or if the costs of false negatives and false positives are quite different.
If your test set is still predicted much worse than the out-of-bag cross-validated training set, you may have problems with the i.i.d. assumptions that any supervised ML model relies on, or you may need to wrap the entire data processing in an outer cross-validation loop to avoid over-optimistic estimation of prediction performance due to, e.g., the variable selection step.
Seems like you've overfit on your training set. Basically, the model has learned noise in the data rather than the signal. There are a few ways to combat this, but it seems fairly obvious that your model has overfit because of the incredibly large number of features you're feeding it.
EDIT:
It seems I was perhaps too quick to jump to the conclusion of overfitting; however, this may still be the case (left as an exercise to the reader!). Either way, feature selection may still improve the generalisability and reliability of your model.
A good place to start for removing features in scikit-learn would be here. Using sparsity is a fairly common way to perform feature selection:
from sklearn.svm import LinearSVC
from sklearn.feature_selection import SelectFromModel
import numpy as np
# Create some data
X = np.random.random((1800, 2700))
# Boolean labels as the y vector
y = np.random.random(1800)
y = y > 0.5
y = y.astype(bool)
lsvc = LinearSVC(C=0.05, penalty="l1", dual=False).fit(X, y)
model = SelectFromModel(lsvc, prefit=True)
X_new = model.transform(X)
print(X_new.shape)
This returns a new matrix of shape (1800, 640). You can tune the number of features selected by altering the C parameter (called the penalty parameter in scikit-learn, but sometimes called the sparsity parameter).
I am performing some machine learning tasks using SVM. I suspect the data is non-linear so I also included the RBF kernel. I found that SVM with RBF kernel is MUCH worse than linear SVM. I wonder if I did something wrong with my classifier parameter specifications.
My code as follows:
from sklearn.svm import LinearSVC
from sklearn.svm import SVC
svm1 = LinearSVC() # performs the best, similar to logistic regression results which is expected
svm2 = LinearSVC(class_weight="auto") # performs somewhat worse than svm1
svm3 = SVC(kernel='rbf', random_state=0, C=1.0, cache_size=4000, class_weight='balanced') # performs way worse than svm1; takes the longest processing time
svm4 = SVC(kernel='rbf', random_state=0, C=1.0, cache_size=4000) # this is the WORST of all, the classifier simply picks the majority class
With RBF try tuning your C and gamma parameters. Scikit-learn's grid search will help you.
Here is an example to get you started:
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

svc = SVC(...)
params = {"C": [0.1, 1, 10], "gamma": [0.1, 0.01, 0.001]}
grid_search = GridSearchCV(svc, params)
grid_search.fit(X, y)
The following paper is a good guide for SVM users.
A Practical Guide to Support Vector Classification
http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf
In a nutshell, three points are essential to let SVM perform correctly.
(1) feature preparation (feature scaling, feature categorization)
(2) parameter tuning (coarse and fine-grained cross validation)
(3) kernel selection (#features vs #instances)
The basic idea for (3) is to select the linear kernel if #features >> #instances. With a small number of instances, SVMs with non-linear kernels can overfit easily.
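Putting (1) and (2) together in scikit-learn might look something like the sketch below (toy data; written against a recent scikit-learn, so the import paths may differ from older versions):
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import make_classification
# Toy data standing in for the real feature matrix
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
# Scale the features, then tune C and gamma of the RBF kernel by cross-validation
pipe = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
params = {'svc__C': [0.1, 1, 10, 100], 'svc__gamma': [1, 0.1, 0.01, 0.001]}
search = GridSearchCV(pipe, params, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)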
I wanted to know whether I can use the predict option for ancillary parameters in a maximum likelihood program as follows (I estimated lnsigma, so sigma is the ancillary parameter in the model):
predict lnsigma, eq(lnsigma)
gen sigma=exp(lnsigma)
I would also like to know whether the above can be used for a heteroskedastic model.
Thank you in advance.
That sounds correct. I would be more explicit by typing predict lnsigma, xb eq(lnsigma). This way your code will not break when someone later decides to write a prediction program for your estimation program and sets the default to something different from the linear prediction.
You can also do it in one line:
predictnl sigma = exp(xb(#2))
This assumes that lnsigma is the second equation in your model. If it is the third equation, you replace xb(#2) with xb(#3). predictnl is also an easy way of using the delta method to get standard errors and confidence intervals for sigma.
I assume this is your own Stata program. If that is true, then you also have a third option: you can create your own prediction program, which Stata's predict command will recognize. You can find some useful tricks on how to do that here: http://www.stata.com/help.cgi?_pred_se