So I installed pyomo, glpk, and ipopt with Anaconda.
When I run the example code here: https://pyomo.readthedocs.io/en/stable/contributed_packages/mindtpy.html
from pyomo.environ import *
model = ConcreteModel()
model.x = Var(bounds=(1.0,10.0),initialize=5.0)
model.y = Var(within=Binary)
model.c1 = Constraint(expr=(model.x-3.0)**2 <= 50.0*(1-model.y))
model.c2 = Constraint(expr=model.x*log(model.x)+5.0 <= 50.0*(model.y))
model.objective = Objective(expr=model.x, sense=minimize)
SolverFactory('mindtpy').solve(model, mip_solver='glpk', nlp_solver='ipopt',tee=True)
model.objective.display()
model.display()
model.pprint()
I get output indicating that the binary variable has apparently taken an infeasible value:
python minlpex.py
INFO: ---Starting MindtPy---
INFO: Original model has 2 constraints (2 nonlinear) and 0 disjunctions, with
2 variables, of which 1 are binary, 0 are integer, and 1 are continuous.
INFO: NLP 1: Solve relaxed integrality
INFO: NLP 1: OBJ: 1.0 LB: 1.0 UB: inf
INFO: ---MindtPy Master Iteration 0---
INFO: MIP 1: Solve master problem.
WARNING: Empty constraint block written in LP format - solver may error
Traceback (most recent call last):
File "minlpex.py", line 13, in <module>
op.SolverFactory('mindtpy').solve(model, mip_solver='glpk', nlp_solver='ipopt',tee=True)
File "/anaconda3/envs/py36/lib/python3.6/site-packages/pyomo/contrib/mindtpy/MindtPy.py", line 370, in solve
MindtPy_iteration_loop(solve_data, config)
File "/anaconda3/envs/py36/lib/python3.6/site-packages/pyomo/contrib/mindtpy/iterate.py", line 30, in MindtPy_iteration_loop
handle_master_mip_optimal(master_mip, solve_data, config)
File "/anaconda3/envs/py36/lib/python3.6/site-packages/pyomo/contrib/mindtpy/mip_solve.py", line 62, in handle_master_mip_optimal
config)
File "/anaconda3/envs/py36/lib/python3.6/site-packages/pyomo/contrib/gdpopt/util.py", line 199, in copy_var_list_values
v_to.set_value(value(v_from, exception=False))
File "/anaconda3/envs/py36/lib/python3.6/site-packages/pyomo/core/base/var.py", line 173, in set_value
if valid or self._valid_value(val):
File "/anaconda3/envs/py36/lib/python3.6/site-packages/pyomo/core/base/var.py", line 185, in _valid_value
"domain %s" % (val, type(val), self.domain))
ValueError: Numeric value `0.22709088987977885` (<class 'float'>) is not in domain Binary
I was a little confused: since this is the code provided in the documentation, I would not expect it to error like this. So I feel like I'm messing something up, or am I missing some required library?
Thanks a lot.
Looks like something must be wrong with the conda pyomo or ipopt install.
When I reinstalled ipopt using pip and compiled pyomo from the GitHub source, everything worked fine.
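As a quick sanity check after reinstalling (my own suggestion, not part of the original answer), you can ask Pyomo whether it can actually find working glpk and ipopt executables:
# Hedged sanity check: ask Pyomo whether it can locate the glpk and ipopt solvers.
from pyomo.environ import SolverFactory

for name in ('glpk', 'ipopt'):
    print(name, 'available:', SolverFactory(name).available(exception_flag=False))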
So I'm tackling this machine-learning problem (from a previous Kaggle competition, for practice: https://www.kaggle.com/c/nyc-taxi-trip-duration) and I'm trying to use XGBoost, but I'm getting an error that I have no clue how to tackle. I searched on Google and Stack Overflow but couldn't find anyone with a similar problem.
I'm using Python 2.7 with the Spyder IDE through Anaconda, and I'm on Windows 10. I did have some trouble installing the xgboost package, so I won't completely rule out the idea that it could be an installation error. However, I'm also doing a Udemy course on ML and I was able to use xgboost just fine there with a small dataset, using the same functions.
Code
The code is pretty simple:
... import libraries
# import dataset
dataset = pd.read_csv('data/merged.csv')
y = dataset['trip_duration'].values
del dataset['trip_duration'], dataset["id"], dataset['distance']
X = dataset.values
# Split dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25)
# fit XGBoost to training set
classifier = XGBClassifier()
classifier.fit(X_train, y_train)
Output
However, it spits out the following error:
In [1]: classifier.fit(X_train, y_train)
Traceback (most recent call last):
File "<ipython-input-44-f44724590846>", line 1, in <module>
classifier.fit(X_train, y_train)
File "C:\Users\MortZ\Anaconda3\lib\site-packages\xgboost\sklearn.py", line 464, in fit
verbose_eval=verbose)
File "C:\Users\MortZ\Anaconda3\lib\site-packages\xgboost\training.py", line 204, in train
xgb_model=xgb_model, callbacks=callbacks)
File "C:\Users\MortZ\Anaconda3\lib\site-packages\xgboost\training.py", line 74, in _train_internal
bst.update(dtrain, i, obj)
File "C:\Users\MortZ\Anaconda3\lib\site-packages\xgboost\core.py", line 819, in update
_check_call(_LIB.XGBoosterUpdateOneIter(self.handle, iteration, dtrain.handle))
WindowsError: [Error -529697949] Windows Error 0xE06D7363
I don't really know how to interpret this, so any help would be much appreciated.
Thanks in advance
MortZ
Well, after struggling for a few days I managed to find a solution.
A friend of mine told me xgboost is known to have problems with Python 2.7, so I upgraded to 3.6. This didn't entirely solve my problem, but it gave me a new error:
OSError: [WinError 541541187] Windows Error 0x20474343
After some digging I found a solution to this. The fit function I was trying to use was the source of the problem (although it did work on a different dataset, so I'm not entirely sure why...).
Solution
change
classifier = XGBClassifier()
classifier.fit(X_train, y_train)
to
import xgboost as xgb

dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)
watchlist = [(dtrain, 'train'), (dtest, 'test')]
xgb_pars = {'min_child_weight': 1, 'eta': 0.5, 'colsample_bytree': 0.9,
            'max_depth': 6, 'subsample': 0.9, 'lambda': 1., 'nthread': -1,
            'booster': 'gbtree', 'silent': 1, 'eval_metric': 'rmse',
            'objective': 'reg:linear'}
model = xgb.train(xgb_pars, dtrain, 10, watchlist, early_stopping_rounds=2,
                  maximize=False, verbose_eval=1)
print('Modeling RMSLE %.5f' % model.best_score)
I guess the error is because you are using XGBClassifier instead of XGBRegressor for a regression problem.
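If that is the issue, a minimal sketch of the regressor-based alternative would look like this (my own illustration; X_train, y_train, and X_test are the arrays from the question):
# Minimal sketch: trip_duration is continuous, so use the regression wrapper.
from xgboost import XGBRegressor

regressor = XGBRegressor()
regressor.fit(X_train, y_train)        # same call pattern as XGBClassifier
predictions = regressor.predict(X_test)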
I am looking to call an rPy2 function with multiple input parameters. The R function I am trying to use is write.csv, which takes several input parameters, and I need to specify more than one of them.
If I use it without the optional parameters row.names and column.names, it works like this:
r("write.csv")(d,file='myfilename.csv')
For my requirements, I must issue this command with the optional parameters row.names and column.names. So, I tried:
r('write.csv')(d, file='myfilename.csv', row.names=FALSE, column.names=FALSE)
but I got this error message:
File "/home/UserName/test.py", line 12
r("write.csv")(d,file='myfilename.csv',row.names=FALSE, column.names=FALSE)
SyntaxError: keyword can't be an expression
[Finished in 0.0s with exit code 1]
[shell_cmd: python -u "/home/UserName/test.py"]
[dir: /home/UserName]
[path: /home/UserName/bin:/home/UserName/.local/bin:/usr/local/sbin:/usr/local/bin:
.../usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin]
How can I achieve write.csv with row.names=FALSE and column.names=FALSE, in rPy2?
You can use Python's ** to pass keyword arguments whose names are not valid Python identifiers.
See the note here: http://rpy2.readthedocs.io/en/version_2.8.x/robjects_functions.html#callable
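A minimal sketch of that approach (assuming `d` is the data frame from the question, and that rpy2 converts the Python booleans to R logicals, which recent versions do): the R argument name goes in as a dictionary key, so its dot never has to be a Python identifier.
# Hedged sketch: pass dotted R argument names through a ** dictionary.
from rpy2.robjects import r

r('write.csv')(d, **{'file': 'myfilename.csv', 'row.names': False})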
One of my mistakes was that I should have replaced . by _, as shown in the docs here:
from rpy2.robjects.packages import importr
base = importr('base')
base.rank(0, na_last = True)
so I would analogously need row_names = False. However, the . in write.csv() still remained, so this only solved part of the question. OK, so I tried a few things to get an answer:
Generating sample data:
from rpy2.robjects import r, globalenv
from rpy2.robjects import IntVector, DataFrame
d = {'a': IntVector((1,2,3)), 'b': IntVector((4,5,6))}
dataf = DataFrame(d)
Attempts follow - 1. did not work, 2. and 3. did work:
1.
r('write_csv')(x=dataf,file='testing.csv',row_names=False)
Traceback (most recent call last):
File "C:\Users\UserName\FileD\test.py", line 18, in <module>
r('write_csv')(x=dataf,file='testing.csv',row_names=False)
File "C:\Python27\lib\site-packages\rpy2\robjects\__init__.py", line 321, in __call__
res = self.eval(p)
File "C:\Python27\lib\site-packages\rpy2\robjects\functions.py", line 178, in __call__
return super(SignatureTranslatedFunction, self).__call__(*args, **kwargs)
File "C:\Python27\lib\site-packages\rpy2\robjects\functions.py", line 106, in __call__
res = super(Function, self).__call__(*new_args, **new_kwargs)
rpy2.rinterface.RRuntimeError: Error in eval(expr, envir, enclos) : object 'write_csv'
..not found
Error in eval(expr, envir, enclos) : object 'write_csv' not found
2.
r('''
write_csv <- function(x,verbose=FALSE)
write.csv(x,file='testing.csv',row.names=FALSE)
''')
r['write_csv'](dataf)
3.
globalenv['dataf'] = dataf
r("write.csv(dataf,file='testing2.csv',row.names=FALSE)")
I was really hoping attempt 1 would have worked. It seemed I had reproduced the example in the docs, base.rank(0, na_last = True), but I think something might have still been missing.
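For the record, a likely explanation (my reading of the error message, not something confirmed in the thread): r('write_csv') evaluates the string as R code and looks for an R object literally named write_csv, which does not exist; the dot-to-underscore translation shown in the docs applies to Python keyword argument names, not to names inside strings passed to r(). Keeping the real R name and passing the dotted argument through ** sidesteps both problems:
# Hedged sketch (uses `dataf` from the sample data above): keep the real R
# function name inside r(), and pass the dotted argument name via **.
r('write.csv')(dataf, **{'file': 'testing3.csv', 'row.names': False})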
Facing a new issue with PySNMP 4.3.3 on Python 2.7.13, with SNMP GET and GETNEXT:
The same code works fine with PySNMP 4.3.2.
I am actually only observing the issue when a virtualenv is involved.
Inside the virtualenv, the issue is reproduced even if I downgrade PySNMP from 4.3.3 to 4.3.2.
Can someone please tell me what I am missing?
TypeError: setComponentByType() got multiple values for keyword argument 'verifyConstraints'
>>> from pysnmp.hlapi import *
>>> errorIndication, errorStatus, errorIndex, varBinds = next(
... getCmd(SnmpEngine(),
... CommunityData('public'),
... UdpTransportTarget(('127.0.0.1', 161)),
... ContextData(),
... ObjectType(ObjectIdentity('SNMPv2-MIB','sysDescr', 0)))
... )
Traceback (most recent call last):
File "<stdin>", line 6, in <module>
File "/home/sourav/MyWorkSpace/tempproject_1/lib/python2.7/site-packages/pysnmp/hlapi/asyncore/sync/cmdgen.py", line 111, in getCmd
lookupMib=options.get('lookupMib', True)))
File "/home/sourav/MyWorkSpace/tempproject_1/lib/python2.7/site-packages/pysnmp/hlapi/asyncore/cmdgen.py", line 131, in getCmd
options.get('cbFun'), options.get('cbCtx'))
File "/home/sourav/MyWorkSpace/tempproject_1/lib/python2.7/site-packages/pysnmp/entity/rfc3413/cmdgen.py", line 214, in sendVarBinds
v2c.apiPDU.setVarBinds(reqPDU, varBinds)
File "/home/sourav/MyWorkSpace/tempproject_1/lib/python2.7/site-packages/pysnmp/proto/api/v1.py", line 136, in setVarBinds
varBindList.getComponentByPosition(idx), varBind
File "/home/sourav/MyWorkSpace/tempproject_1/lib/python2.7/site-packages/pysnmp/proto/api/v1.py", line 43, in setOIDVal
verifyConstraints=False)
TypeError: setComponentByType() got multiple values for keyword argument 'verifyConstraints'
>>>
The same thing happens with the old-style API:
>>> from pysnmp.entity.rfc3413.oneliner import cmdgen
>>> cmdGen = cmdgen.CommandGenerator()
>>> errorIndication, errorStatus, errorIndex, varBindTable = cmdGen.nextCmd(cmdgen.CommunityData('public'),cmdgen.UdpTransportTarget(('127.0.0.1', 161), timeout=60, retries=3),cmdgen.MibVariable('SNMPv2-MIB','sysDescr',0))
In the link below, please refer to the last two comments:
https://github.com/home-assistant/home-assistant/issues/5790
Packages used:
appdirs==1.4.2
packaging==16.8
ply==3.10
pyasn1==0.2.3
pycryptodome==3.4.5
pyparsing==2.1.10
pysmi==0.0.7
pysnmp==4.3.3
six==1.10.0
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Update:
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
After downgrading pyasn1 from 0.2.3 to 0.1.9, it seems the code is working as usual. But the problem is that a fresh install of PySNMP 4.3.3 pulls in pyasn1==0.2.3, and that combination fails.
That's an unfortunate regression in the pyasn1/pysnmp interaction.
You could fix it by downgrading pyasn1 to 0.2.2, taking pysnmp from git master, or waiting a little while until a fixed pysnmp release comes out.
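To see which combination a given virtualenv actually resolves, here is a small diagnostic of my own (using setuptools' pkg_resources, not anything from pysnmp itself):
# Hedged check: print the pyasn1 and pysnmp versions installed in this environment.
import pkg_resources

for pkg in ('pyasn1', 'pysnmp'):
    print(pkg, pkg_resources.get_distribution(pkg).version)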
After installing scikit-learn version 0.14.1 from source with 'sudo python setup.py install', I tested the package with 'nosetests sklearn --exe' and received the following information:
==================================================================================
/home/elkan/Downloads/MS2PIP/scikit-learn/sklearn/feature_selection/selector_mixin.py:7: DeprecationWarning: sklearn.feature_selection.selector_mixin.SelectorMixin has been renamed sklearn.feature_selection.from_model._LearntSelectorMixin, and this alias will be removed in version 0.16
DeprecationWarning)
/home/elkan/Downloads/MS2PIP/scikit-learn/sklearn/pls.py:7: DeprecationWarning: This module has been moved to cross_decomposition and will be removed in 0.16
"removed in 0.16", DeprecationWarning)
.......S................../home/elkan/Downloads/MS2PIP/scikit-learn/sklearn/cluster/hierarchical.py:746: DeprecationWarning: The Ward class is deprecated since 0.14 and will be removed in 0.17. Use the AgglomerativeClustering instead.
"instead.", DeprecationWarning)
.........../usr/lib/python2.7/dist-packages/numpy/distutils/system_info.py:1423: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
.............................................../home/elkan/Downloads/MS2PIP/scikit-learn/sklearn/manifold/spectral_embedding_.py:226: UserWarning: Graph is not fully connected, spectral embedding may not work as expected.
warnings.warn("Graph is not fully connected, spectral embedding"
..................................SS..............S.................................................../home/elkan/Downloads/MS2PIP/scikit-learn/sklearn/utils/extmath.py:83: NonBLASDotWarning: Data must be of same type. Supported types are 32 and 64 bit float. Falling back to np.dot.
'Falling back to np.dot.', NonBLASDotWarning)
....................../home/elkan/Downloads/MS2PIP/scikit-learn/sklearn/decomposition/fastica_.py:271: UserWarning: Ignoring n_components with whiten=False.
warnings.warn('Ignoring n_components with whiten=False.')
..................../home/elkan/Downloads/MS2PIP/scikit-learn/sklearn/utils/extmath.py:83: NonBLASDotWarning: Data must be of same type. Supported types are 32 and 64 bit float. Falling back to np.dot.
'Falling back to np.dot.', NonBLASDotWarning)
....................................S................................../home/elkan/Downloads/MS2PIP/scikit-learn/sklearn/externals/joblib/test/test_func_inspect.py:134: UserWarning: Cannot inspect object <functools.partial object at 0xbdebf04>, ignore list will not work.
nose.tools.assert_equal(filter_args(ff, ['y'], (1, )),
FAIL: Check that gini is equivalent to mse for binary output variable
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/home/elkan/Downloads/MS2PIP/scikit-learn/sklearn/tree/tests/test_tree.py", line 301, in test_importances_gini_equal_mse
assert_almost_equal(clf.feature_importances_, reg.feature_importances_)
File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 452, in assert_almost_equal
return assert_array_almost_equal(actual, desired, decimal, err_msg)
File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 800, in assert_array_almost_equal
header=('Arrays are not almost equal to %d decimals' % decimal))
File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 636, in assert_array_compare
raise AssertionError(msg)
AssertionError:
Arrays are not almost equal to 7 decimals
(mismatch 70.0%)
x: array([ 0.2925143 , 0.27676187, 0.18835709, 0.04181255, 0.03699054,
0.01668818, 0.03661717, 0.03439216, 0.04422749, 0.03163866])
y: array([ 0.29599052, 0.27676187, 0.19146823, 0.03837769, 0.03699054,
0.01811955, 0.0362238 , 0.03439216, 0.04137032, 0.03030531])
>> raise AssertionError('\nArrays are not almost equal to 7 decimals\n\n(mismatch 70.0%)\n x: array([ 0.2925143 , 0.27676187, 0.18835709, 0.04181255, 0.03699054,\n 0.01668818, 0.03661717, 0.03439216, 0.04422749, 0.03163866])\n y: array([ 0.29599052, 0.27676187, 0.19146823, 0.03837769, 0.03699054,\n 0.01811955, 0.0362238 , 0.03439216, 0.04137032, 0.03030531])')
----------------------------------------------------------------------
Ran 3950 tests in 150.890s
FAILED (SKIP=19, failures=1)
==================================================================================
The Python version is 2.7.3 and the OS is 32-bit.
So, what might the problem be?
Thanks.
It's a numerical precision discrepancy on 32-bit platforms. You can safely ignore it: the failing test checks the values of the clf.feature_importances_ attribute of a random forest, and those values usually do not need to be precise to be useful (they are used to interpret which features contribute most to the RF model).
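To illustrate the point, here is a small sketch (my own example, not from the scikit-learn test suite) of how feature_importances_ is normally consumed, namely for ranking features, where tiny 32-bit rounding differences do not matter:
# Hedged sketch: rank features by importance; exact decimals are not the point.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

ranking = np.argsort(clf.feature_importances_)[::-1]   # most important first
for idx in ranking[:5]:
    print("feature %d: importance %.3f" % (idx, clf.feature_importances_[idx]))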
I am using the sklearn 0.14 module in Python to create a decision tree. I was hoping to use OneHotEncoder to convert some of the features into categorical (one-hot encoded) features. According to the documentation, I should be able to provide an array of indices to indicate which features should be converted. However, trying the following code:
import numpy
from sklearn import preprocessing

xs = [[64, 15230], [3, 67673], [16, 43678]]
encoder = preprocessing.OneHotEncoder(n_values='auto', categorical_features=[1], dtype=numpy.integer)
encoder.fit(xs)
I receive the following error:
Traceback (most recent call last):
  File "C:\Users\sara\Documents\Shipping Project\PythonSandbox\CarrierDecisionTree.py", line 35, in <module>
    encoder.fit(xs)
  File "C:\Python27\lib\site-packages\sklearn\preprocessing\data.py", line 892, in fit
    self.fit_transform(X)
  File "C:\Python27\lib\site-packages\sklearn\preprocessing\data.py", line 944, in fit_transform
    self.categorical_features, copy=True)
  File "C:\Python27\lib\site-packages\sklearn\preprocessing\data.py", line 795, in _transform_selected
    return sparse.hstack((X_sel, X_not_sel))
  File "C:\Python27\lib\site-packages\scipy\sparse\construct.py", line 417, in hstack
    return bmat([blocks], format=format, dtype=dtype)
  File "C:\Python27\lib\site-packages\scipy\sparse\construct.py", line 532, in bmat
    dtype = upcast( *tuple([A.dtype for A in blocks[block_mask]]) )
  File "C:\Python27\lib\site-packages\scipy\sparse\sputils.py", line 53, in upcast
    raise TypeError('no supported conversion for types: %r' % (args,))
TypeError: no supported conversion for types: (dtype('int32'), dtype('S6'))
If instead I provide the array [0, 1] to categorical_features, it works correctly and converts both features properly. The same correct behavior occurs when using 'all' for categorical_features. However, I only want the second feature converted, not the first. I understand I could do this manually by converting one feature at a time, but I was hoping to use all the beauty of OneHotEncoder, as I will be using many more features later on.
Posting as an answer, for the record:
TypeError: no supported conversion for types: (dtype('int32'), dtype('S6'))
means that something in the real xs (not the one shown in the code snippet) is a string: dtype('S6') is NumPy's length-six string type.
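A quick way to confirm that (a sketch of my own, run against the real xs rather than the three-row sample):
# Hedged check: if strings are present anywhere, NumPy upcasts the whole array
# to a string dtype such as 'S6', which is what the TypeError is complaining about.
import numpy as np

arr = np.asarray(xs)
print(arr.dtype)   # a bytes ('S...') or unicode ('<U...') dtype means strings sneaked in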