I want to use Pyomo 5.5 to call the Gurobi 7.5 or Gurobi 8.0 solver, but it returns an error when I call the solver directly.
If I use gurobi_ampl 7.5 or 8.0, it works.
So, does that mean Pyomo 5.5 can't call Gurobi directly?
For Pyomo to call Gurobi directly (through the Python interface), the gurobipy package needs to be importable from the Python environment in which Pyomo is running.
Try following the Python interface installation instructions from the Gurobi website: http://www.gurobi.com/documentation/8.0/quickstart_mac/the_gurobi_python_interfac.html
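As a minimal sketch (assuming gurobipy is installed in the same environment; the tiny model below is purely illustrative), calling Gurobi through its Python interface from Pyomo looks like this:

import gurobipy  # raises ImportError if the Gurobi Python bindings are not importable
import pyomo.environ as pyo

# toy model, just for illustration
model = pyo.ConcreteModel()
model.x = pyo.Var(bounds=(0, 10))
model.obj = pyo.Objective(expr=model.x, sense=pyo.maximize)

# solver_io='python' uses gurobipy directly instead of the gurobi_ampl executable
opt = pyo.SolverFactory('gurobi', solver_io='python')
results = opt.solve(model)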
I am trying to use matplotlib_cpp on Windows 11 with NumPy 1.24.2 and Python 3.11, but I keep running into the following error.
Original error was: No module named 'numpy.core._multiarray_umath'
I know this has been posted in a million different places on the internet and I have tried following all the guides that say to reinstall numpy and so on, but it does not work for me. What I can see in my path ..\Python3.11\Lib\site-packages\numpy\core is that I have a file named _multiarray_umath.cp311-win_amd64.pyd but no file named _multiarray_umath. I also tried to use a virtual environment from Anaconda, but I am not sure how to build matplotlib_cpp against such a virtual environment.
I got it working by using the release binaries instead of debug binaries.
I am new to the Modelica world and installed JModelica 2.14 on Windows 10 via the binary file provided on the official webpage. From the console I call setenv.bat, start the 64-bit Python environment, and import from '.\install\Python_64'. However, running the example files already throws an error. The minimal code example throwing the error is provided below. I assume that the binaries do not have a bug that no one has mentioned. It would be great if someone could give a hint about what I am missing. Thanks a lot!
import modelicacasadi_wrapper
modelicacasadi_wrapper.OptimicaOptionsWrapper()
RuntimeError Traceback (most recent call last)
<ipython-input-11-ce2bcdfa3f06> in <module>()
----> 1 modelicacasadi_wrapper.OptimicaOptionsWrapper()
C:\JModelica.org-2.14\install\Python_64\modelicacasadi_wrapper\modelicacasadi_wrapper.pyc in __init__(self, *args)
3472 __init__(ModelicaCasADi::OptimicaOptionsWrapper self, OptimicaOptionsWrapper other) -> OptimicaOptionsWrapper
3473 """
-> 3474 this = _modelicacasadi_wrapper.new_OptimicaOptionsWrapper(*args)
3475 try:
3476 self.this.append(this)
RuntimeError: java.lang.NoClassDefFoundError org/jmodelica/optimica/compiler/ModelicaCompiler
Caused by: java.lang.ClassNotFoundException: org.jmodelica.optimica.compiler.ModelicaCompiler
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
This function is only given in binary format compiled from C++ code; therefore, I cannot change the function without recompiling the library (I already tried). To me it seems like org.jmodelica.optimica.compiler.ModelicaCompiler should have been org.jmodelica.optimica.compiler.OptimicaCompiler. This would mean that I have to install the package from source, and I haven't been successful with that yet.
I still use JModelica 2.14 in Python 2 and have installed a virtual environment with Conda to create a Python 3 environment, where I then run the FMUs with the latest PyFMI package in Python 3.10 and Jupyter Notebook. It all works very well, but as Imke Kreuger indicated, you have MSL 3.2.2 build 3, and there has been development in the Modelica Standard Library since then.
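For reference, a minimal sketch of running an FMU with PyFMI in such a Python 3 environment (the FMU file name is just a placeholder for one you compiled earlier):

from pyfmi import load_fmu

model = load_fmu('MyModel.fmu')          # placeholder FMU name
res = model.simulate(final_time=10.0)    # result object indexable by variable name
print(res['time'][-1])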
During installation you are asked whether you want the "Graybox OPC Automation wrapper", and I usually say "NO" there. You may have said "YES" though, right? See Chapter 2.2.1 in the User's Guide.
The JModelica installation actually provides you with two different compilers.
One is for standard Modelica and produces an FMU of CS or ME type as output. The other compiler is for Modelica extended with Optimica; it does not produce an FMU, and you are bound to work in Python 2.
I tried to reproduce your error (with my installation, without the "Graybox OPC..."). If I (in the Python 2 environment) literally run the two commands, I get "Press any key to continue...." and when I press a key, the IPython window collapses.
However, if you skip the two brackets at the end of the second command, then it is accepted!
If you write a question mark at the end you get information about what arguments you should have.
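In other words, something like this at the IPython prompt (a sketch of what I mean; the trailing ? is IPython's help syntax):

import modelicacasadi_wrapper
modelicacasadi_wrapper.OptimicaOptionsWrapper    # without (): only references the class, so it is accepted
modelicacasadi_wrapper.OptimicaOptionsWrapper?   # shows what arguments the constructor expects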
If you describe in more detail what you want to do, we can likely help you better.
Note, it seems you want to use Optimica, which is an extension of Modelica that is only partially supported by OpenModelica, as far as I understand. The Optimica extension is well integrated in JModelica and originated in this context. For "ordinary" Modelica use, I do not think you need this wrapper.
I have a TensorFlow model which I want to convert into a tflite model, which is going to be deployed on an ARM64 platform.
It happens that two operations in my model (RandomStandardNormal, Softplus) seem to require custom implementations. Since execution time is not that important, I decided to go with a hybrid model that uses the extended runtime. I converted it via:
graph_def_file = './model/frozen_model.pb'
inputs = ['eval_inputs']
outputs = ['model/y']
converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file, inputs, outputs)
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_file_name = 'vae_' + str(tf.__version__) + '.tflite'
tflite_model = converter.convert()
open(tflite_file_name, 'wb').write(tflite_model)
This worked and I ended up with a seemingly valid tflite model file. Whenever I try to load this model with an interpreter, I get an error (it does not matter whether I use the Python or the C++ API):
ERROR: Regular TensorFlow ops are not supported by this interpreter. Make sure you invoke the Flex delegate before inference.
ERROR: Node number 4 (FlexSoftplus) failed to prepare.
I have a hard time finding documentation on the TensorFlow website on how to invoke the Flex delegate for either API. I have stumbled across a header file ("tensorflow/lite/delegates/flex/delegate_data.h") which seems to be related to this issue, but including it in my C++ project yields another error:
In file included from /tensorflow/tensorflow/core/common_runtime/eager/context.h:28:0,
from /tensorflow/tensorflow/lite/delegates/flex/delegate_data.h:18,
from /tensorflow/tensorflow/lite/delegates/flex/delegate.h:19,
from demo.cpp:7:
/tensorflow/tensorflow/core/lib/core/status.h:23:10: fatal error: tensorflow/core/lib/core/error_codes.pb.h: No such file or directory
#include "tensorflow/core/lib/core/error_codes.pb.h"
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
By any chance, has anybody encountered and resolved this before? If you have an example snippet, please share the link!
When building TensorFlow Lite libraries using the bazel pipeline, the additional TensorFlow ops library can be included and enabled as follows:
Enable monolithic builds if necessary by adding the --config=monolithic build flag.
Add the TensorFlow ops delegate library dependency to the build dependencies: tensorflow/lite/delegates/flex:delegate.
Note that the necessary TfLiteDelegate will be installed automatically when creating the interpreter at runtime as long as the delegate is linked into the client library. It is not necessary to explicitly install the delegate instance as is typically required with other delegate types.
Python pip package
Python support is actively under development.
source: https://www.tensorflow.org/lite/guide/ops_select
According to https://www.tensorflow.org/lite/guide/ops_select#android_aar (as of 2019/9/25), Python support for 'select operators' is actively under development.
You can test the model in Android by using FlexDelegate.
I ran my model successfully in the same way.
e.g. https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/java/src/test/java/org/tensorflow/lite/InterpreterFlexTest.java
I would like to have a script invoke NumPy from a C++-embedded Python runtime by setting the runtime path so that it knows about the numpy module located within site-packages.
However I get the error:
cannot import name 'multiarray'
from \Lib\site-packages\numpy\core\__init__.py on the line
from . import multiarray
I have tried to set the os.path to be xxx\numpy\core but it still cannot seem to find the multiarray.pyd file during the import statement
I have read through similar questions posed but none of the answers seem relevant to my case.
I am using Python 3.4.4 (32 bit) and have installed Numpy 1.11.1 using the wheel
numpy-1.11.1-cp34-none-win32.whl
python -m pip install numpy-1.11.1-cp34-none-win32.whl
Completed without any errors.
Seems like the failure message may be more general than just an incomplete PYTHONPATH?
I also think it might be broader than NumPy, in that ANY .pyd-based package imported from the embedded environment will have this problem?
Any help appreciated.
Did you ensure all your NumPy includes (\numpy\core\include\numpy\) were present during the build? That's the only time I get those types of errors: when the build couldn't find all the NumPy includes. Although, during embedding, I found that the entire numpy directory (already built on your build machine) has to be inside a directory passed to Py_SetPath(python35.lib;importlibs), assuming importlibs is a directory with NumPy inside and anything else you want to bundle.
Seems like the answer was to install Python 3.4.1 so that it matched the version of python34.dll, which was 3.4.1.
In the IPython Notebook I am trying to use the cell magic %%cython_pyximport to write a Cython function that I can call later on in my notebook.
I want to use this command as opposed to %%cython because %%cython seems to add quite a bit of overhead. For example, when I profile my code I get this:
168495 function calls in 4.606 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
1 3.234 3.234 4.605 4.605 {_cython_magic_0ef63e1ad591c89b73223c7a86d78802.knn_alg}
11397 0.326 0.000 0.326 0.000 {method 'reduce' of 'numpy.ufunc' objects}
987 0.152 0.000 0.266 0.000 decomp.py:92(eig)
987 0.118 0.000 0.138 0.000 function_base.py:3112(delete)
I'm hoping that using %%cython_pyximport will cut down the time spent calling this function. If there is a better way, please let me know.
So getting to my actual question - When I use %%cython_pyximport I get this error:
ImportError: Building module function failed: ['DistutilsPlatformError: Unable to find vcvarsall.bat\n']
Maybe it's related to something not being on my PATH, but I'm not sure. What do I have to do to fix this?
I'm using Windows 7, Python 2.7.6 (installed with Anaconda), Cython 0.20.1, and IPython Notebook 2.1.0.
EDIT:
So after following IanH's suggestion, I now have this error:
fatal error: numpy/arrayobject.h: No such file or directory
It seems like additional header files need to be included for NumPy to work with pyximport. On this page, https://github.com/cython/cython/wiki/InstallingOnWindows, there is a mention of this error and how to solve it, but I am lost as to how to apply that fix so that the %%cython_pyximport command will work in my notebook.
There are two different issues here.
I'll first address the one you seem to care about.
Using pyximport instead of the cython magic function should not increase speed at all.
Given your profiling results, it appears that the real problem here is that you are calling a NumPy function on the inside of a loop.
In Cython you have to keep track of which function calls are done in C, and which are done in Python.
Numpy universal functions are Python functions and they require the cost of calling a Python function.
How you would want to fix this depends entirely on what you are doing.
If you can cleverly vectorize away the loop using NumPy operations, that is probably the best way, but not all problems can easily be solved that way.
There are ways to call LAPACK routines from Cython, as described in this answer.
If you are doing simpler operations (like summing along axes, etc), you can write a function that uses cython memoryviews to pass slices around internally in your Cython module.
There is some discussion on the proper way to do that in this blog post.
Doing these sorts of operations is usually a little harder in Cython, but it is still a very approachable problem.
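As a toy illustration of the vectorization point above (made-up data, not your knn_alg code): summing rows in a Python-level loop pays the Python call overhead on every iteration, while a single vectorized call does the same work in one C-level pass.

import numpy as np

data = np.random.rand(1000, 50)

# Python-level loop: one NumPy call (with its overhead) per row
row_sums_loop = np.array([data[i].sum() for i in range(data.shape[0])])

# vectorized: a single call, the loop runs in C
row_sums_vec = data.sum(axis=1)

assert np.allclose(row_sums_loop, row_sums_vec)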
Now, though I'm not convinced that pyximport will actually do what you want it to, I will still tell you how to get it working.
The error you are seeing happens when distutils tries to use the Visual Studio compiler even when you haven't gotten everything set up for it.
Anaconda, by default, uses MinGW for Cython extensions, but for some reason it isn't set up to use MinGW with pyximport.
That's easy to fix though.
In your Python installation directory, (probably C:\Anaconda or something along those lines), there should be a file Anaconda\Lib\distutils\distutils.cfg. (Create it if it doesn't exist.)
Modify it so that its contents contain both of the following options:
[build]
compiler=mingw32
[build_ext]
compiler = mingw32
If I remember correctly, the first is already included in Anaconda.
As of this writing, the second is not.
You will need it there to make pyximport work.
I had this same exact problem. Instead of using the distutils.cfg approach in IanH's answer, I just chose to call gcc and cython directly; I figured I'd literally just circumnavigate all the possible errors.
First, you need to run Cython on your .pyx file:
import subprocess

cython_commands = ['cython', '-a', '-l', '-p', '-o', c_file_name, file_path]
cython_feedback = subprocess.call(cython_commands)
Then you need to take that .c file and compile it, telling the compiler where to look for the Python headers and libraries.
gcc_commands = ['gcc', '-shared', '-Wall', '-O3', '-I', py_include_dir, '-L', py_libs_dir, '-o', output_name,
c_file_name, '-l', a_lib]
gcc_error = subprocess.call(gcc_commands)
py_include_dir: The path to the directory in python installation labeled 'include'
py_libs_dir: The path to the directory in python installation labeled 'libs'
c_file_name: The path at which you wish to save the intermediate C file
a_lib: The name of your python installation (ex. 'python34' or 'python35' or 'python27')
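For example (purely hypothetical values; adjust them to your own installation and file names), the variables above could be filled in like this before running the two subprocess calls:

py_include_dir = r'C:\Anaconda\include'   # hypothetical Anaconda install path
py_libs_dir    = r'C:\Anaconda\libs'
file_path      = 'knn_alg.pyx'            # your Cython source
c_file_name    = 'knn_alg.c'              # intermediate C file
output_name    = 'knn_alg.pyd'            # importable extension module on Windows
a_lib          = 'python27'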
Since I have VS2015 installed, I had to add a new environment variable:
SET VS90COMNTOOLS=%VS140COMNTOOLS%
Source https://stackoverflow.com/a/10558328/625189