So the problem is as follows:
I am working on an algorithm that takes as an argument whether the optimization problem is a maximization or a minimization. In the optimization model, the objective is defined like this:
model.obj = pyo.Objective(expr=sum(model.c[i]*model.x[i] for i in model.i),
                          sense=pyo.maximize)
And the output from pprint() looks like this:
obj : Size=1, Index=None, Active=True
Key : Active : Sense : Expression
None : True : maximize : 1.2*x[1] + 2*x[2]
How can I retrieve the element "maximize" from model.obj?
It appears that the pyo.Objective model object holds the attribute you are looking for. After a little spelunking in IPython:
In [1]: import pyomo.environ as pyo
In [2]: m = pyo.ConcreteModel()
In [3]: m.obj = pyo.Objective(expr=1, sense=pyo.maximize)
In [4]: m.pprint()
1 Objective Declarations
obj : Size=1, Index=None, Active=True
Key : Active : Sense : Expression
None : True : maximize : 1.0
1 Declarations: obj
In [5]: m.obj.sense
Out[5]: -1
In [6]: m.obj = pyo.Objective(expr=1, sense=pyo.minimize)
WARNING: Implicitly replacing the Component attribute obj (type=<class
'pyomo.core.base.objective.ScalarObjective'>) on block unknown with a new
Component (type=<class 'pyomo.core.base.objective.ScalarObjective'>). This
is usually indicative of a modelling error. To avoid this warning, use
block.del_component() and block.add_component().
In [7]: m.obj.sense
Out[7]: 1
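As the transcript shows, pyo.maximize corresponds to -1 and pyo.minimize to 1, so a small sketch of branching on the sense in your algorithm (reusing the model m from above) could be:
import pyomo.environ as pyo

# compare against the Pyomo constants rather than the raw integers
if m.obj.sense == pyo.maximize:
    print('this is a maximization problem')
elif m.obj.sense == pyo.minimize:
    print('this is a minimization problem')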
I tried to solve a differential equation that contains a logical condition in Gekko. I know that Gekko does not like these things, but I assumed that a simple if3() function to switch between the two given expressions (R1, R2) would find some solution. Here is a simple example code that failed with "Solution not found".
import numpy as np
from gekko import GEKKO
import matplotlib.pyplot as plt
A=1
B=1e-5
C=2
D=0.01
G=1
y0=120
m = GEKKO() # create GEKKO model
nt = 101
m.time = np.linspace(0,100,nt) # time points
y = m.Var(y0) # create GEKKO variable
R1 = m.Intermediate(-(y+A-A/(y/C+1)**(B/D))/G*(D*y+D*C))
R2 = m.Intermediate(-0.1*y)
z = m.Var() # This way - Solution not found
z = m.if3(y-60,R1,R2) #
m.Equation(y.dt()== z) #
#m.Equation(y.dt()== R1)
#m.Equation(y.dt()== R2)
# solve ODE
m.options.IMODE = 4
m.options.NODES = 5
m.solve(disp=False)
# plot results
plt.plot(m.time,y)
plt.xlabel('time')
plt.ylabel('y(t)')
Unfortunately, it seems I have not yet figured out how to overcome these problems by reading the article about Logical Conditions in Optimization.
Best Regards,
Radovan
The posted script gives the error:
Exception: #error: Solution Not Found
To see additional details about the error, turn on the display with disp=True. The additional output indicates that the problem is infeasible.
Number of state variables: 2800
Number of total equations: - 2000
Number of slack variables: - 800
---------------------------------------
Degrees of freedom : 0
----------------------------------------------
Dynamic Simulation with APOPT Solver
----------------------------------------------
Iter: 1 I: -1 Tm: 13.07 NLPi: 9 Dpth: 0 Lvs: 0 Obj: 0.00E+00 Gap: NaN
Warning: no more possible trial points and no integer solution
Maximum iterations
---------------------------------------------------
Solver : APOPT (v1.0)
Solution time : 13.116200000000001 sec
Objective : 0.
Unsuccessful with error code 0
---------------------------------------------------
Creating file: infeasibilities.txt
Use command apm_get(server,app,'infeasibilities.txt') to retrieve file
#error: Solution Not Found
A couple of issues with the model:
z = m.Var() is not needed because the z = m.if3() assignment defines the variable.
IMODE=6 is needed because there are slack variables in the problem, and an optimization mode is needed to solve it.
Try if2() instead of if3(). It seems to work better with the MPCC form than with the mixed-integer form.
The APOPT solver performs better for this problem. Try m.options.SOLVER=1.
import numpy as np
from gekko import GEKKO
import matplotlib.pyplot as plt
A=1; B=1e-5; C=2; D=0.01; G=1
y0=120
m = GEKKO(remote=False) # create GEKKO model
nt = 101
m.time = np.linspace(0,50,nt) # time points
y = m.Var(y0) # create GEKKO variable
R1 = m.Intermediate(-(y+A-A/(y/C+1)**(B/D))/G*(D*y+D*C))
R2 = m.Intermediate(-0.1*y)
z = m.if2(y-60,0,1)
m.Equation(y.dt()== (1-z)*R1+z*R2)
# solve ODE
m.options.IMODE = 6
m.options.SOLVER = 1
m.options.NODES = 5
m.solve(disp=True)
# plot results
plt.subplot(2,1,1)
plt.plot(m.time,y)
plt.plot([0,50],[60,60],'r--')
plt.ylabel('y(t)'); plt.grid()
plt.subplot(2,1,2)
plt.plot(m.time,z)
plt.ylabel('z(t)'); plt.grid()
plt.xlabel('time')
plt.show()
The alternative to using an if statement in the model is to integrate in a loop and check the condition y>60 to switch to the other model.
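A minimal sketch of that loop-based alternative (plain NumPy with an explicit Euler step, not Gekko; the step size dt is an assumption):
import numpy as np
import matplotlib.pyplot as plt

A=1; B=1e-5; C=2; D=0.01; G=1
dt = 0.01
t = np.arange(0, 50+dt, dt)
y = np.empty_like(t)
y[0] = 120

for k in range(1, len(t)):
    yk = y[k-1]
    if yk >= 60:
        dydt = -0.1*yk                                    # R2 branch (y >= 60)
    else:
        dydt = -(yk+A-A/(yk/C+1)**(B/D))/G*(D*yk+D*C)     # R1 branch (y < 60)
    y[k] = yk + dt*dydt  # explicit Euler step

plt.plot(t, y)
plt.xlabel('time'); plt.ylabel('y(t)')
plt.show()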
I have a pyomo ConcreteModel() that I solve repeatedly within another stochastic optimization process, where one or more parameters of the model are changed between solves.
The basic process can be described as follows:
# model is created as a pyomo.ConcreteModel()
for i in range(0, 10):
    # change some parameter on the model
    opt = SolverFactory('gurobi', solver_io='lp')
    # how can I check here if the changed model/lp-file is valid?
    results = opt.solve(model)
Now I get an error for some cases where the model and LP file (see gist) seem to contain NaN values:
ERROR: Solver (gurobi) returned non-zero return code (1)
ERROR: Solver log: Academic license - for non-commercial use only Error
reading LP format file /tmp/tmp8agg07az.pyomo.lp at line 1453 Unrecognized
constraint RHS or sense Neighboring tokens: " <= nan c_u_x1371_: +1 x434
<= nan "
Unable to read file Traceback (most recent call last):
File "<stdin>", line 5, in <module> File
"/home/cord/.anaconda3/lib/python3.6/site-
packages/pyomo/solvers/plugins/solvers/GUROBI_RUN.py", line 61, in
gurobi_run
model = read(model_file)
File "gurobi.pxi", line 2652, in gurobipy.read
(../../src/python/gurobipy.c:127968) File "gurobi.pxi", line 72, in
gurobipy.gurobi.read (../../src/python/gurobipy.c:125753)
gurobipy.GurobiError: Unable to read model Freed default Gurobi
environment
Of course, the first idea would be to prevent setting these NaN values in the first place. But I don't know why they occur, and I want to figure out when the model breaks due to an invalid structure caused by NaNs.
I know that I can catch the solver status and termination criterion from the SolverFactory() object. But the error obviously occurs somewhere before the solving process due to the invalid changed values.
How can I catch these kinds of errors for different solvers before solving, i.e. check whether the model/LP file is valid before applying a solver? Is there some method, e.g. check_model(), which returns True or False depending on whether the model is valid, or something similar?
Thanks in advance!
If you know that the error is taking place when the parameter values are being changed, then you could test to see whether the sum of all relevant parameter values is a valid number. After all, NaN + 3 = NaN.
Since you are getting NaN, I am going to guess that you are importing parameter values from an Excel spreadsheet using pandas? If so, there is a way to convert all the NaNs to a default number.
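For example, a quick sketch of the pandas side (the DataFrame contents and the default value 0 are assumptions, just for illustration):
import numpy as np
import pandas as pd

# hypothetical parameter table (imagine it was read from Excel)
df = pd.DataFrame({'cost': [1.0, np.nan, 3.0]})
df = df.fillna(0)   # replace every NaN with a default value before building the model
print(df)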
Code example for parameter check:
>>> from pyomo.environ import *
>>> m = ConcreteModel()
>>> m.p1 = Param(initialize=1)
>>> m.p2 = Param(initialize=2)
>>> for p in m.component_data_objects(ctype=Param):
... print(p.name)
...
p1
p2
>>> import numpy
>>> m.p3 = Param(initialize=numpy.nan)
>>> import math
>>> math.isnan(value(sum(m.component_data_objects(ctype=Param))))
True
Indexed, Mutable Parameters:
>>> from pyomo.environ import *
>>> m = ConcreteModel()
>>> m.i = RangeSet(2)
>>> m.p = Param(m.i, initialize={1: 1, 2:2}, mutable=True)
>>> import math
>>> import numpy
>>> math.isnan(value(sum(m.component_data_objects(ctype=Param))))
False
>>> m.p[1] = numpy.nan
>>> math.isnan(value(sum(m.component_data_objects(ctype=Param))))
True
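Putting this together, one sketch of a reusable check (the name model_has_nan is just illustrative) that could run inside the loop before opt.solve(model):
import math
from pyomo.environ import Param, value

def model_has_nan(model):
    # scan every Param value on the model and flag NaNs before solving
    for p in model.component_data_objects(ctype=Param):
        if math.isnan(value(p)):
            return True
    return False

# usage inside the loop from the question (sketch):
# if model_has_nan(model):
#     raise ValueError('NaN parameter detected, skipping solve')
# results = opt.solve(model)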
For the problem formulation
import pyomo.environ as pe
model = pe.AbstractModel()
model.I = pe.Set()
model.p = pe.Param(model.I)
model.create_instance("input.dat")
and the input.dat
set I := 1 2 3 ;
param p :=
1 0.1
2 0.2
3 0.3
;
param q :=
1 1.1
2 2.2
3 3.3
;
The following error is shown
AttributeError: 'AbstractModel' object has no attribute 'q'
How to silence create_instance in this case? The model is fully specified. The "excess" data (parameter q in this case) is needed for another model and the models share this input.dat. I could go with a try/except for the AttributeError and just carry on I guess, but then I would need to guard each create_instance call. I looked for a "skip_undefined" kwarg or similar in the documentation. Is there another preferred way to handle this situation?
According to the documentation, if you load your data using the load method of the DataPortal class, the parameters not used by the model are omitted.
Therefore you may try:
from pyomo.environ import *
data = DataPortal()
model = AbstractModel()
data.load(filename='./input.dat')
model.I = Set()
model.p = Param(model.I)
instance = model.create_instance(data)
I need help in understanding the accuracy and dataset output format for a deep learning model.
I did some deep learning training based on this site: https://machinelearningmastery.com/deep-learning-with-python2/
I did the examples for the pima-indian-diabetes dataset and the iris flower dataset. I trained on the pima-indian-diabetes dataset using the script from this tutorial: http://machinelearningmastery.com/tutorial-first-neural-network-python-keras/
Then I trained on the iris-flower dataset using the script below.
# import packages
import numpy
from pandas import read_csv
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from keras.utils import np_utils
from sklearn.model_selection import cross_val_score, KFold
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import Pipeline
from keras.callbacks import ModelCheckpoint
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load dataset
dataframe = read_csv("iris_2.csv", header=None)
dataset = dataframe.values
X = dataset[:,0:4].astype(float)
Y = dataset[:,4]
# encode class value as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
### one-hot encoder ###
dummy_y = np_utils.to_categorical(encoded_Y)
# define base model
def baseline_model():
    # create model
    model = Sequential()
    model.add(Dense(4, input_dim=4, init='normal', activation='relu'))
    model.add(Dense(3, init='normal', activation='sigmoid'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    model_json = model.to_json()
    with open("iris.json", "w") as json_file:
        json_file.write(model_json)
    model.save_weights('iris.h5')
    return model
estimator = KerasClassifier(build_fn=baseline_model, nb_epoch=1000, batch_size=6, verbose=0)
kfold = KFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(estimator, X, dummy_y, cv=kfold)
print("Accuracy: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
Everything worked fine until I decided to try another dataset from this link: https://archive.ics.uci.edu/ml/datasets/Glass+Identification
At first I trained this new dataset using the pima-indian-diabetes dataset script as the example and changed the values of the X and Y variables to this
dataset = numpy.loadtxt("glass.csv", delimiter=",")
X = dataset[:,0:10]
Y = dataset[:,10]
and also changed the neuron layers to this
model = Sequential()
model.add(Dense(10, input_dim=10, init='uniform', activation='relu'))
model.add(Dense(10, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
The result produced an accuracy of 32.71%.
Then I changed the output column of this dataset, which is originally integer (1~7), to strings (a~g), and used the example script for the iris-flower dataset with some modifications
import numpy
from pandas import read_csv
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
seed = 7
numpy.random.seed(seed)
dataframe = read_csv("glass.csv", header=None)
dataset = dataframe.values
X = dataset[:,0:10].astype(float)
Y = dataset[:,10]
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
def create_baseline():
    model = Sequential()
    model.add(Dense(10, input_dim=10, init='normal', activation='relu'))
    model.add(Dense(1, init='normal', activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    model_json = model.to_json()
    with open("glass.json", "w") as json_file:
        json_file.write(model_json)
    model.save_weights('glass.h5')
    return model
estimator = KerasClassifier(build_fn=create_baseline, nb_epoch=1000, batch_size=10, verbose=0)
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(estimator, X, encoded_Y, cv=kfold)
print("Baseline: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
I did not use the 'dummy_y' variable, following this tutorial: http://machinelearningmastery.com/binary-classification-tutorial-with-the-keras-deep-learning-library/
I checked that this dataset uses letters as the output, and thought that maybe I could reuse that script to train the new glass dataset that I modified.
This time the results look like this:
Baseline : 68.42% (3.03%)
From the article, the 68% and 3% are the mean and standard deviation of the model accuracy.
My first question is: when do I use integers or letters as the output column? And is this kind of accuracy result common when we tamper with the dataset, e.g. by changing the output from integers to strings/letters?
My second question is: how do I know how many neurons to put in each layer? Is it related to which backend I use when compiling the model (TensorFlow or Theano)?
Thank you in advance.
First question
It doesn't matter, as you can see here:
Y = range(10)
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
print encoded_Y
Y = ['a', 'b', 'c', 'd', 'e', 'f','g','h','i','j']
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
print encoded_Y
results:
[0 1 2 3 4 5 6 7 8 9]
[0 1 2 3 4 5 6 7 8 9]
Which means that your classifier sees exactly the same labels.
Second question
There is no absolutely correct answer for this question, but for sure it does not depend on your backend.
You should experiment with different numbers of neurons, numbers of layers, types of layers, and all other network parameters in order to understand which architecture is best for your problem.
With experience you will develop both a good intuition for which parameters work better for which types of problems and a good method for experimentation.
The best rule of thumb (assuming you have the dataset required to sustain such a strategy) I've heard is "Make your network as large as you can until it overfits, add regularization until it does not overfit, repeat".
Part by part. First, if your output includes values in [0, 5], it is impossible to obtain them using the sigmoid activation: the sigmoid function has a range of (0, 1). You could use a linear activation (i.e. no activation), but I think that would be a bad approach because your problem is not to estimate a continuous value.
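As a quick standalone illustration of that range (plain NumPy, not part of the model):
import numpy as np

x = np.array([-100.0, 0.0, 100.0])
sigmoid = 1.0/(1.0 + np.exp(-x))
print(sigmoid)   # roughly [0.0, 0.5, 1.0] -- always strictly inside (0, 1)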
Second, the question you should ask yourself is not so much about the type of data you are using (in the sense of how you store the information): is it a string? An int? A float? That does not matter. What you have to ask is what kind of problem you are trying to solve.
In this case, the problem should not be treated as a regression (estimating a continuous value), because your outputs are categorical: numbers, but categorical. What you really want is to classify between the types of glass (the class attribute).
For a classification problem, the following configuration is "normally" used:
The class is encoded with one-hot encoding. It is nothing more than a vector of zeros with a single one in the position of the corresponding class.
For instance: class 3 (counting from 0) with 6 classes -> [0, 0, 0, 1, 0, 0] (as many entries as classes you have).
As you can see, we no longer have a single output: your model must have as many outputs as your Y has classes (6). That way the last layer should have as many neurons as classes: Dense(classes, ...).
You also want the output to be the probability of belonging to each class, that is: p(y = class_0), ..., p(y = class_n). For this the softmax activation is used, which ensures that the sum of all the probabilities is 1.
You have to change the loss to categorical_crossentropy so that it can work together with the softmax, and use the metric categorical_accuracy.
seed = 7
numpy.random.seed(seed)
dataframe = read_csv("glass.csv", header=None)
dataset = dataframe.values
X = dataset[:,0:10].astype(float)
Y = dataset[:,10]
encoder = LabelEncoder()
encoder.fit(Y)
from keras.utils import to_categorical
encoded_Y = to_categorical(encoder.transform(Y))
def create_baseline():
    model = Sequential()
    model.add(Dense(10, input_dim=10, init='normal', activation='relu'))
    model.add(Dense(encoded_Y.shape[1], init='normal', activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['categorical_accuracy'])
    model_json = model.to_json()
    with open("glass.json", "w") as json_file:
        json_file.write(model_json)
    model.save_weights('glass.h5')
    return model
model = create_baseline()
model.fit(X, encoded_Y, epochs=1000, batch_size=100)
The number of neurons does not depend on the backend you use.
But it is true that you will never get exactly the same results. That's because there are several stochastic processes within a network: initialization, dropout (if you use it), batch order, etc.
What is known is that increasing the number of neurons per dense layer makes the model more complex, and therefore it has more potential to represent your problem, but it is harder to train and more expensive in both time and computation. So you always have to look for a balance.
At the moment there is no clear evidence on which is better:
expand the number of neurons per layer, or
add more layers.
Some models use one architecture and others the other.
Using this architecture you get the following result:
Epoch 1000/1000
214/214 [==============================] - 0s 17us/step - loss: 0.0777 - categorical_accuracy: 0.9953
I have been trying to solve this for days, and although I have found a similar problem here: How can i vectorize list using sklearn DictVectorizer, the solution there is overly simplified.
I would like to fit some features into a logistic regression model to predict 'chinese' or 'non-chinese'. I have a raw_name from which I will extract two features: 1) the last name itself, and 2) a list of substrings of the last name; for example, 'Chan' will give ['ch', 'ha', 'an']. But it seems DictVectorizer doesn't take a list type as part of the dictionary. Following the link above, I tried to create a function list_to_dict, which successfully returns some dict elements,
{'substring=co': True, 'substring=or': True, 'substring=rn': True, 'substring=ns': True}
but I have no idea how to incorporate that into the my_dict = ... before applying the DictVectorizer.
# coding=utf-8
import pandas as pd
from pandas import DataFrame, Series
import numpy as np
import nltk
import re
import random
from random import randint
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction import DictVectorizer
lr = LogisticRegression()
dv = DictVectorizer()
# Get csv file into data frame
data = pd.read_csv("V2-1_2000Records_Processed_SEP2015.csv", header=0, encoding="utf-8")
df = DataFrame(data)
# Pandas data frame shuffling
df_shuffled = df.iloc[np.random.permutation(len(df))]
df_shuffled.reset_index(drop=True)
# Assign X and y variables
X = df.raw_name.values
y = df.chineseScan.values
# Feature extraction functions
def feature_full_last_name(nameString):
    try:
        last_name = nameString.rsplit(None, 1)[-1]
        if len(last_name) > 1:  # do not accept names with only 1 character
            return last_name
        else: return None
    except: return None

def feature_twoLetters(nameString):
    placeHolder = []
    try:
        for i in range(0, len(nameString)):
            x = nameString[i:i+2]
            if len(x) == 2:
                placeHolder.append(x)
        return placeHolder
    except: return []

def list_to_dict(substring_list):
    try:
        substring_dict = {}
        for i in substring_list:
            substring_dict['substring='+str(i)] = True
        return substring_dict
    except: return None
list_example = ['co', 'or', 'rn', 'ns']
print list_to_dict(list_example)
# Transform format of X variables, and spit out a numpy array for all features
my_dict = [{'two-letter-substrings': feature_twoLetters(feature_full_last_name(i)),
            'last-name': feature_full_last_name(i), 'dummy': 1} for i in X]
print my_dict[3]
Output:
{'substring=co': True, 'substring=or': True, 'substring=rn': True, 'substring=ns': True}
{'dummy': 1, 'two-letter-substrings': [u'co', u'or', u'rn', u'ns'], 'last-name': u'corns'}
Sample data:
Raw_name chineseScan
Jack Anderson non-chinese
Po Lee chinese
If I have understood correctly, you want a way to encode list values in order to have a feature dictionary that DictVectorizer can use. (One year too late, but) something like this can be used, depending on the case:
my_dict_list = []
for i in X:
    # create a new feature dictionary
    feat_dict = {}
    # add the features that are straightforward
    feat_dict['last-name'] = feature_full_last_name(i)
    feat_dict['dummy'] = 1
    # for the features that have a list of values, iterate over the values and
    # create a custom feature for each value
    for two_letters in feature_twoLetters(feature_full_last_name(i)):
        # make sure the naming is unique enough so that no other feature
        # unrelated to this will have the same name/key
        feat_dict['two-letter-substrings-' + two_letters] = True
    # save it to the feature dictionary list that will be used in DictVectorizer
    my_dict_list.append(feat_dict)
print my_dict_list
from sklearn.feature_extraction import DictVectorizer
dict_vect = DictVectorizer(sparse=False)
transformed_x = dict_vect.fit_transform(my_dict_list)
print transformed_x
Output:
[{'dummy': 1, u'two-letter-substrings-er': True, 'last-name': u'Anderson', u'two-letter-substrings-on': True, u'two-letter-substrings-de': True, u'two-letter-substrings-An': True, u'two-letter-substrings-rs': True, u'two-letter-substrings-nd': True, u'two-letter-substrings-so': True}, {'dummy': 1, u'two-letter-substrings-ee': True, u'two-letter-substrings-Le': True, 'last-name': u'Lee'}]
[[ 1. 1. 0. 1. 0. 1. 0. 1. 1. 1. 1. 1.]
[ 1. 0. 1. 0. 1. 0. 1. 0. 0. 0. 0. 0.]]
Another thing you could do (but I don't recommend it), if you don't want to create as many features as there are values in your lists, is something like this:
# sorting the values would be a good idea
feat_dict[frozenset(feature_twoLetters(feature_full_last_name(i)))] = True
# or
feat_dict[" ".join(feature_twoLetters(feature_full_last_name(i)))] = True
but the first one means that you can't have any duplicate values, and probably neither makes a good feature, especially if you need fine-tuned and detailed ones. Also, they reduce the chance of two rows sharing the same combination of two-letter substrings, so the classification probably won't do well.
Output:
[{'dummy': 1, 'last-name': u'Anderson', frozenset([u'on', u'rs', u'de', u'nd', u'An', u'so', u'er']): True}, {'dummy': 1, 'last-name': u'Lee', frozenset([u'ee', u'Le']): True}]
[{'dummy': 1, 'last-name': u'Anderson', u'An nd de er rs so on': True}, {'dummy': 1, u'Le ee': True, 'last-name': u'Lee'}]
[[ 1. 0. 1. 1. 0.]
[ 0. 1. 1. 0. 1.]]