Problems with the CPLEX Python API on a Mac - python-2.7

I have read many posts about similar problems, but none of them solves mine. Although I have been following this blog exactly, I still get the following error when I try to run one of the example Python source files:
Traceback (most recent call last):
File "facility.py", line 25, in <module>
import cplex
File "/Users/sb/Applications/IBM/ILOG/CPLEX_Studio1251/cplex/python/x86_darwin/cplex/__init__.py", line 43, in <module>
import callbacks
File "/Users/sb/Applications/IBM/ILOG/CPLEX_Studio1251/cplex/python/x86_darwin/cplex/callbacks.py", line 48, in <module>
from _internal._aux_functions import apply_freeform_two_args, apply_freeform_one_arg
File "/Users/sb/Applications/IBM/ILOG/CPLEX_Studio1251/cplex/python/x86_darwin/cplex/_internal/__init__.py", line 22, in <module>
import _list_array_utils
File "/Users/sb/Applications/IBM/ILOG/CPLEX_Studio1251/cplex/python/x86_darwin/cplex/_internal/_list_array_utils.py", line 13, in <module>
import _pycplex as CPX
File "/Users/sb/Applications/IBM/ILOG/CPLEX_Studio1251/cplex/python/x86_darwin/cplex/_internal/_pycplex.py", line 19, in <module>
_pycplex_platform = swig_import_helper()
File "/Users/sb/Applications/IBM/ILOG/CPLEX_Studio1251/cplex/python/x86_darwin/cplex/_internal/_pycplex.py", line 15, in swig_import_helper
_mod = imp.load_module('_pycplex_platform', fp, pathname, description)
File "/Users/sb/Applications/IBM/ILOG/CPLEX_Studio1251/cplex/python/x86_darwin/cplex/_internal/_pycplex_platform.py", line 23, in <module>
from cplex._internal.py1013_cplex1251 import *
ImportError: dlopen(/Users/sb/Applications/IBM/ILOG/CPLEX_Studio1251/cplex/python/x86_darwin/cplex/_internal/py1013_cplex1251.so, 2): no suitable image found. Did find:
/Users/sb/Applications/IBM/ILOG/CPLEX_Studio1251/cplex/python/x86_darwin/cplex/_internal/py1013_cplex1251.so: mach-o, but wrong architecture
Unfortunately I am not familiar with ~/.bash_profile beyond what is posted in the link above.
Can someone please help me out here?

A possible solution is to copy the cplex directory manually into your installed site-packages directory (you may need to use sudo).
From your stacktrace I see that you have installed cplex into
/Users/sb/Applications/IBM/ILOG/CPLEX_Studio1251/cplex/python/x86_darwin/
First, run the following in the interactive shell (I assume you use Python 2.7):
import site; site.getsitepackages()
See How do I find the location of my Python site-packages directory? for details about this step.
This will give you the site-packages directory into which you need to copy the "cplex" directory. I assume it is /Library/Python/2.7/site-packages (from here) on a Mac. Then run:
sudo cp -r ./cplex /Library/Python/2.7/site-packages/
This manually sets up cplex as an importable package for your Python installation. You should then be able to import cplex from the Python interactive shell.
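Since the error says "mach-o, but wrong architecture", it is also worth confirming that your Python interpreter and the CPLEX extension were built for the same architecture. A quick check (the .so path is taken from your traceback):
file /Users/sb/Applications/IBM/ILOG/CPLEX_Studio1251/cplex/python/x86_darwin/cplex/_internal/py1013_cplex1251.so
python -c "import platform; print platform.architecture()"
If the library is 32-bit (i386) while your Python runs 64-bit, or vice versa, you can try starting Python in the matching mode (e.g. arch -i386 python) or use a CPLEX build that matches your interpreter; the x86_darwin directory name suggests a 32-bit build.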

Related

Cannot run TensorFlow compiled with MKL

I've compiled the latest TensorFlow on Ubuntu 16.04 with CUDA and MKL like so:
bazel build --config=opt --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" --config=mkl --config=cuda //tensorflow/tools/pip_package:build_pip_package
And now when I try to run it, I get an error saying that one of Intel's libraries can't be found. I've also found people installing a different DNN framework who struggle with this (https://github.com/PaddlePaddle/Paddle/issues/3213), and an Intel doc (https://software.intel.com/en-us/articles/intel-mkl-dnn-part-1-library-overview-and-installation) that, as far as I understand it, says these files should become available once you follow its directions. I followed those directions and everything seemed to work, but in reality the libmklml_intel.so and libiomp5.so files were not added to /usr/local/lib.
>>> import tensorflow
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/tensorflow/__init__.py", line 24, in <module>
from tensorflow.python import *
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 52, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 41, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
ImportError: libmklml_intel.so: cannot open shared object file: No such file or directory
Edit:
Actually, the libs were located in /mkl-dnn/external/mklml_lnx_2018.0.20170720/lib after I cloned the mkl-dnn git repository and followed the directions in https://software.intel.com/en-us/articles/intel-mkl-dnn-part-1-library-overview-and-installation
So I copied them into /usr/local/lib and added that folder to my .bashrc:
export LIBRARY_PATH=/usr/local/lib:$LIBRARY_PATH
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
and reloaded .bashrc
source ~/.bashrc
And now TensorFlow works.
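As a sanity check that the loader now resolves the MKL libraries, you can run ldd against TensorFlow's native module (the exact .so path is an assumption based on the traceback and may differ between TensorFlow versions):
ldd /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow_internal.so | grep -i mkl
Each MKL entry should show a resolved path instead of "not found".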

Unable To Run IPython nbconvert From a Python 2.7 Virtual Environment

I have a Python 2.7 virtual environment with IPython installed (Ubuntu 16.04.2 LTS (Xenial)).
When I'm working in the virtual environment (after running source venv/bin/activate in a bash shell from the parent directory of the virtual environment), I have no problem converting my Jupyter notebook from the bash shell like so:
ipython nbconvert --to html --execute my_notes.ipynb --stdout > /tmp/report.html
But when I’m trying to call that command from fabric’s task using subprocess:
command = ['ipython', 'nbconvert', '--to', 'html', '--execute', notebook_path, '--stdout']
output = subprocess.check_output(command,
                                 cwd=os.environ['PYTHONPATH'],
                                 env=os.environ.copy())
It always fails with this exception, for which I cannot find a reason:
Traceback (most recent call last):
File "/opt/backend/venv/bin/ipython", line 7, in <module>
from IPython import start_ipython
File "/opt/backend/venv/local/lib/python2.7/site-packages/IPython/__init__.py", line 48, in <module>
from .core.application import Application
File "/opt/backend/venv/local/lib/python2.7/site-packages/IPython/core/application.py", line 25, in <module>
from IPython.core import release, crashhandler
File "/opt/backend/venv/local/lib/python2.7/site-packages/IPython/core/crashhandler.py", line 28, in <module>
from IPython.core import ultratb
File "/opt/backend/venv/local/lib/python2.7/site-packages/IPython/core/ultratb.py", line 119, in <module>
from IPython.core import debugger
File "/opt/backend/venv/local/lib/python2.7/site-packages/IPython/core/debugger.py", line 46, in <module>
from pdb import Pdb as OldPdb
File "/usr/lib/python2.7/pdb.py", line 59, in <module>
class Pdb(bdb.Bdb, cmd.Cmd):
AttributeError: 'module' object has no attribute 'Cmd'
More info to save your time.
I've tried:
Using the same paths for PYTHONPATH that I got from the PyCharm run/debug configuration.
Using nbconvert as a Python library, following this documentation.
Calling os.system("ipython nbconvert…").
Wrapping the working command (ipython nbconvert…) in a shell script and calling it via subprocess.check_output and os.system.
Getting drunk and banging my head against a brick wall.
And I always end up with that cursed exception.
Reposting as an answer for completeness:
There was a file called cmd.py somewhere where Python was finding it as an importable module. This was shadowing the cmd module in the standard library, which is used by pdb, which IPython imports. When pdb tried to subclass a class from cmd, that class wasn't there. Moving cmd.py out of the way lets it find the cmd module it needs.
This is an unfortunate annoyance with Python - lots of short words are already used as module names, and using them for your own files produces crashes with a wide range of different errors.
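To track down this kind of shadowing, you can ask Python which file it actually imports for the clashing name. Run this from the same environment in which the failure occurs:
>>> import cmd
>>> print cmd.__file__
If the printed path is not under the standard library (e.g. /usr/lib/python2.7/cmd.pyc), that is the shadowing file; move or rename it (and delete any stale .pyc next to it) so the real module is found again.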

How to make a Python package with numpy, pandas, scipy, and sklearn inside?

I want to make a Python package bundling numpy, pandas, scipy, and sklearn, so I can take it to any Linux machine without installing Python, but I came across this:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/data1/sigmoidguo/TOOLS/python27/lib/python2.7/site-packages/numpy/__init__.py", line 180, in <module>
from . import add_newdocs
File "/data1/sigmoidguo/TOOLS/python27/lib/python2.7/site-packages/numpy/add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "/data1/sigmoidguo/TOOLS/python27/lib/python2.7/site-packages/numpy/lib/__init__.py", line 8, in <module>
from .type_check import *
File "/data1/sigmoidguo/TOOLS/python27/lib/python2.7/site-packages/numpy/lib/type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "/data1/sigmoidguo/TOOLS/python27/lib/python2.7/site-packages/numpy/core/__init__.py", line 14, in <module>
from . import multiarray
ImportError: libblas.so.3: cannot open shared object file: No such file or directory
How can I fix it without root permission?
PS: I don't have root permission, so I can't install site-packages into the system Python...
You can install Anaconda. It is a multi-platform Python distribution that can be installed in your home folder (with user rights only). It comes with the pip and conda commands to install any package you need, and it already includes all the packages you mention (numpy, pandas, scipy, and sklearn), so it sounds like a good fit for your needs.
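As a sketch, a typical no-root install with the Miniconda installer looks like this (the installer filename and URL are assumptions; check the Anaconda site for the current ones):
wget https://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh
bash Miniconda2-latest-Linux-x86_64.sh -b -p $HOME/miniconda
export PATH=$HOME/miniconda/bin:$PATH
conda install numpy pandas scipy scikit-learn
Everything lands under $HOME/miniconda, so no root permission is needed.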
Would http://python-xy.github.io/ do the trick?
(If it were Windows, I would suggest the portable http://winpython.sourceforge.net/.)

What should I modify to solve the "No module named _sqlite3" error message?

I installed ATpy-0.9.7 on my PC successfully, and I have Python version 2.7.5.
But when I import atpy I get the following error message:
>>> import atpy
ERROR: ImportError: No module named _sqlite3 [unknown]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "atpy/__init__.py", line 1, in <module>
from .basetable import Table, TableSet, VectorException
File "atpy/basetable.py", line 15, in <module>
from . import registry
File "atpy/registry.py", line 186, in <module>
from . import sqltable
File "atpy/sqltable.py", line 10, in <module>
from . import sqlhelper as sql
File "atpy/sqlhelper.py", line 11, in <module>
import sqlite3
File "/export/aibn84_2/zahra/lib/Python-2.7.5/lib/python2.7/sqlite3/__init__.py", line 24, in <module>
from dbapi2 import *
File "/export/aibn84_2/zahra/lib/Python-2.7.5/lib/python2.7/sqlite3/dbapi2.py", line 27, in <module>
from _sqlite3 import *
ImportError: No module named _sqlite3
I also installed db_sqlite3.egg-info. I don't know why this error message occurs!
I installed Python 2.7.5 again with the following command:
./configure --prefix=$PYTHONPATH
but I also get this error after executing make:
Python build finished, but the necessary bits to build these modules were not found:
_bsddb _sqlite3 bsddb185
dbm dl gdbm
imageop sunaudiodev
To find the necessary bits, look in setup.py in detect_modules() for the module's name.
How should I run configure so that it picks up the required C libraries?
If you are using a self-built version of Python, you need to ensure that both the base and the development sqlite3 packages are installed on your system before building Python.
If they are not and, as you said, you do not have superuser privileges, you can download and build sqlite locally, and get your Python build to use that version. This blog post describes how.
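The usual shape of that procedure is roughly the following (a sketch, assuming the standard sqlite autoconf tarball; version numbers and prefixes are examples):
# build sqlite into your home directory
cd sqlite-autoconf-3080500
./configure --prefix=$HOME/opt/sqlite
make && make install
# rebuild Python so it picks up the local sqlite headers and libraries
cd ../Python-2.7.5
CPPFLAGS="-I$HOME/opt/sqlite/include" LDFLAGS="-L$HOME/opt/sqlite/lib" ./configure --prefix=$PYTHONPATH
make && make install
After the rebuild, _sqlite3 should no longer appear in the list of modules that could not be built.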
According to this question:
How can I install sqlite3 to Python?
...you shouldn't have to install anything to get sqlite3 for Python. Before I could import atpy I did have to install astropy (which was quite involved). After I did that, everything worked.

Pydoop: No module named _hdfs_*

I was able to build and install Pydoop without errors, so, for example, I can do the following:
>>> import pydoop
>>> pydoop.__version__
'0.10.0'
However, when I try to import the main Pydoop modules, such as pipes or hdfs, I get an ImportError:
>>> import pydoop.hdfs
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pydoop/hdfs/__init__.py", line 79, in <module>
from fs import hdfs, default_is_local
File "pydoop/hdfs/fs.py", line 28, in <module>
hdfs_ext = pydoop.import_version_specific_module("_hdfs")
File "pydoop/__init__.py", line 111, in import_version_specific_module
return import_module(complete_mod_name(name))
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named _hdfs_2_0_0_cdh_4_3_0
In addition, when I try to use the pydoop script, I get this hint:
...
ImportError: /usr/local/lib/python2.7/dist-packages/pydoop/_pipes_2_0_0_cdh_4_3_0.so: undefined symbol: BIO_s_mem
BIO_s_mem is a symbol from libssl (OpenSSL), so I guess Pydoop can't find this shared library. I made sure it is available, ends with .so (as opposed to, say, .so.1), and is in LD_LIBRARY_PATH.
So what might be the reason for this error, and how can I fix it (build options? environment variables?)?
Any help is appreciated.
What OS version are you using? Try setting LD_PRELOAD to the path of your libssl, e.g.:
export LD_PRELOAD=/lib/x86_64-linux-gnu/libssl.so.1.0.0
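To see which libssl (if any) the extension resolves, you can inspect it with ldd (the path is taken from the error message):
ldd /usr/local/lib/python2.7/dist-packages/pydoop/_pipes_2_0_0_cdh_4_3_0.so | grep -i ssl
If libssl is missing from the output entirely, the extension was linked without it, which is exactly the case LD_PRELOAD works around.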
Not sure about the pipes error, but I ran into your issue with _hdfs_2_0_0_cdh_4_3_0 (mine was a different version of Hadoop, but I believe the problem is similar).
The setup.py script seems to want to make an egg file in /usr/local/lib/python2.7/dist-packages for pydoop, but the setup requires it to be a plain folder (which contains that _hdfs_2_0_0_cdh_4_3_0.so file).
The solution is pretty simple: just delete /usr/local/lib/python2.7/dist-packages/pydoop-0.11.1.egg-info, or the equivalent for your version.
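For example (adjust the version number to whatever is actually in your dist-packages):
sudo rm -rf /usr/local/lib/python2.7/dist-packages/pydoop-0.11.1.egg-info
After deleting it, import pydoop.hdfs again to confirm that the folder-based package is picked up.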