I'm using ML Engine to serve predictions. I deployed with Python 2.7, framework TensorFlow, framework version 1.8, and runtime version 1.8, but am getting back:
Failed to load model: (Error code: 0)
I googled around, but most reported issues come with a more specific error than mine. Is using the latest framework and runtime versions a bad idea / problematic?
Thanks!
I had the same problem with Python 3.7 and framework = scikit-learn: it also returned error code 0. In my case the cause was an algorithm that did not implement a "predict" method. Error code 0 seems to be a generic error class for internal model-related issues.
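For illustration, here is a minimal, hypothetical sketch of the check that bites here: the serving layer calls the estimator's predict method, so an artifact without one fails with the generic error code 0. The TransformOnly class and is_servable helper below are invented for this example and are not part of ML Engine's API.

```python
class TransformOnly:
    """Stands in for an estimator (e.g. a transformer) that has no predict method."""
    def transform(self, X):
        return X

def is_servable(model):
    # The sklearn serving path calls model.predict(...); an estimator
    # without a callable predict cannot be served and fails to load.
    return callable(getattr(model, "predict", None))
```

Running `is_servable(TransformOnly())` returns False, which is the situation that surfaced as the opaque "Error code: 0" above.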
I have recently started using the C++ implementation of OpenCV and have been running into a spot of trouble. I have been experimenting with trying to estimate 3D human pose with the video from my built in camera.
To start, I looked at a project like this, which accomplishes a similar task by importing an ONNX model and loading it with cv::dnn::readNetFromONNX(modelPath);. However, that model only performs 2D pose estimation. From this I concluded that as long as I could obtain a model from another source in the ONNX format, OpenCV would be able to load it.
I tried going to Google Colab to use OpenVino in a safe environment to grab a copy of the model with their model downloader and model converter. These commands ended up being:
!pip install openvino-dev[onnx]
!omz_downloader --name higher-hrnet-w32-human-pose-estimation
!pip install yacs
!omz_converter --name higher-hrnet-w32-human-pose-estimation
In the output of these commands, we see:
========== Converting higher-hrnet-w32-human-pose-estimation to ONNX
Conversion to ONNX command: /usr/bin/python3 -- /usr/local/lib/python3.7/dist-packages/open_model_zoo/model_tools/internal_scripts/pytorch_to_onnx.py --model-path=/usr/local/lib/python3.7/dist-packages/open_model_zoo/model_tools/models/public/higher-hrnet-w32-human-pose-estimation --model-path=/content/public/higher-hrnet-w32-human-pose-estimation --model-name=get_net --import-module=model '--model-param=file_config=r"/content/public/higher-hrnet-w32-human-pose-estimation/experiments/higher_hrnet.yaml"' '--model-param=weights=r"/content/public/higher-hrnet-w32-human-pose-estimation/ckpt/pose_higher_hrnet_w32_512.pth"' --input-shape=1,3,512,512 --input-names=image --output-names=embeddings,heatmaps --output-file=/content/public/higher-hrnet-w32-human-pose-estimation/higher-hrnet-w32-human-pose-estimation.onnx
ONNX check passed successfully.
========== Converting higher-hrnet-w32-human-pose-estimation to IR (FP16)
Conversion command: /usr/bin/python3 -m mo --framework=onnx --data_type=FP16 --output_dir=/content/public/higher-hrnet-w32-human-pose-estimation/FP16 --model_name=higher-hrnet-w32-human-pose-estimation --reverse_input_channels '--input_shape=[1,3,512,512]' --input=image '--mean_values=image[123.675,116.28,103.53]' '--scale_values=image[58.395,57.12,57.375]' --output=embeddings,heatmaps --input_model=/content/public/higher-hrnet-w32-human-pose-estimation/higher-hrnet-w32-human-pose-estimation.onnx
which indicates the existence of higher-hrnet-w32-human-pose-estimation.onnx in local storage. So I downloaded that file to my local device and ran it in the context described above. When I run, I get a cryptic error:
Unhandled exception at 0x00007FFB62FF4F69 in posedetect.exe: Microsoft C++ exception: cv::Exception at memory location
Is there a way to load a 3D pose estimation model in OpenCV using C++?
Alternate attempt with openpose
I tried following the recommendation to use OpenPose as an alternative to the HigherHRNet model, as described by #B200011011 in the comments. To do this I went to the OpenPose GitHub repository and ran the following:
git clone https://github.com/CMU-Perceptual-Computing-Lab/openpose.git
cd openpose/models
getModels.bat
cp pose/body_25/pose_iter_584000.caffemodel ../source/repos/posedetect/models/
When I try to load this Caffe model with cv::dnn::readNetFromCaffe(modelPath); I get a similarly cryptic error to the one from loading the HigherHRNet model:
Unhandled exception at 0x00007FFB62FF4F69 in posedetect.exe: Microsoft C++ exception: cv::Exception at memory location 0x000000E42CB0D660.
So how do I load a HigherHRNet (or other 3D pose estimation) model in OpenCV C++?
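One way to make this failure less cryptic, sketched here under the assumption that the model file itself may be the problem: verify the downloaded file before handing it to readNetFromONNX / readNetFromCaffe, and (in C++) wrap the load call in try/catch on cv::Exception so that e.what() prints the real message instead of the unhandled-exception dialog. The check_model_file helper below is hypothetical and uses only the standard library:

```python
import os

def check_model_file(path):
    # An incomplete or zero-byte download is a common cause of an opaque
    # cv::Exception at model-load time; verify the file before loading it.
    if not os.path.isfile(path):
        return "missing"
    if os.path.getsize(path) == 0:
        return "empty"
    return "ok"
```

If the file checks out as "ok", the next suspects are an OpenCV build without the needed DNN layer support, or an ONNX opset the installed OpenCV version does not handle; the caught cv::Exception message usually names the offending layer.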
I have a TensorFlow model which I want to convert into a tflite model, which is going to be deployed on an ARM64 platform.
It happens that two operations of my model (RandomStandardNormal, Softplus) seem to require custom implementations. Since execution time is not that important, I decided to go with a hybrid model that uses the extended runtime. I converted it via:
import tensorflow as tf

graph_def_file = './model/frozen_model.pb'
inputs = ['eval_inputs']
outputs = ['model/y']

converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file, inputs, outputs)
# Allow fallback to regular TensorFlow ops for the unsupported operations.
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]

tflite_file_name = 'vae_' + str(tf.__version__) + '.tflite'
tflite_model = converter.convert()
with open(tflite_file_name, 'wb') as f:
    f.write(tflite_model)
This worked and I ended up with a seemingly valid tflite model file. Whenever I try to load this model with an interpreter, I get an error (it does not matter if I use the Python or C++ API):
ERROR: Regular TensorFlow ops are not supported by this interpreter. Make sure you invoke the Flex delegate before inference.
ERROR: Node number 4 (FlexSoftplus) failed to prepare.
I have a hard time finding documentation on the TensorFlow website about how to invoke the Flex delegate for either API. I stumbled across a header file ("tensorflow/lite/delegates/flex/delegate_data.h") which seems to be related to this issue, but including it in my C++ project yields another error:
In file included from /tensorflow/tensorflow/core/common_runtime/eager/context.h:28:0,
from /tensorflow/tensorflow/lite/delegates/flex/delegate_data.h:18,
from /tensorflow/tensorflow/lite/delegates/flex/delegate.h:19,
from demo.cpp:7:
/tensorflow/tensorflow/core/lib/core/status.h:23:10: fatal error: tensorflow/core/lib/core/error_codes.pb.h: No such file or directory
#include "tensorflow/core/lib/core/error_codes.pb.h"
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
By any chance, has anybody encountered and resolved this before? If you have an example snippet, please share the link!
When building TensorFlow Lite libraries using the bazel pipeline, the additional TensorFlow ops library can be included and enabled as follows:
Enable monolithic builds if necessary by adding the --config=monolithic build flag.
Add the TensorFlow ops delegate library dependency to the build dependencies: tensorflow/lite/delegates/flex:delegate.
Note that the necessary TfLiteDelegate will be installed automatically when creating the interpreter at runtime as long as the delegate is linked into the client library. It is not necessary to explicitly install the delegate instance as is typically required with other delegate types.
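As a sketch, a client binary's BUILD file with the Flex delegate linked in might look like the fragment below. The flex delegate dependency is the one named in the steps above; the other target names and the binary name are assumptions about a typical TensorFlow-checkout project layout:

```python
# Hypothetical BUILD rule inside a TensorFlow source checkout.
cc_binary(
    name = "demo",
    srcs = ["demo.cpp"],
    deps = [
        "//tensorflow/lite:framework",
        # Linking this target is what makes the Flex delegate register
        # itself automatically when the interpreter is created:
        "//tensorflow/lite/delegates/flex:delegate",
        "//tensorflow/lite/kernels:builtin_ops",
    ],
)
```

Note that you then do not need to include delegate_data.h or install the delegate by hand in demo.cpp; the missing error_codes.pb.h error above is a symptom of trying to compile against internal headers outside the bazel build.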
Python pip package
Python support is actively under development.
source: https://www.tensorflow.org/lite/guide/ops_select
According to https://www.tensorflow.org/lite/guide/ops_select#android_aar on 2019/9/25
Python support of 'select operators' is actively under development.
You can test the model in Android by using FlexDelegate.
I ran my model successfully in the same way.
e.g. https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/java/src/test/java/org/tensorflow/lite/InterpreterFlexTest.java
I've recently tried to integrate django-bleach into my project, but I'm having an issue with an import. I am currently running Python 3.6.2 and Django 1.11. When I try to define a django-bleach form in my forms.py with the following statement:
from django_bleach.forms import BleachField
I am receiving the following error:
ModuleNotFoundError: No module named 'django.utils.importlib'
I spent the better part of this afternoon researching this error, and I have come to understand that django.utils.importlib was removed in Django 1.9. However, I can't seem to find a workaround for this problem. I did try the suggestion outlined in this issue, but it didn't seem to make a difference; I still receive the error. Cannot import importlib
I'm also wondering if I should be using bleach instead of django-bleach, as django-bleach doesn't seem to have been updated since 2014. Thanks in advance for your suggestions and help.
The package you are trying to use doesn't seem to be maintained.
The error you are facing comes from line 7 of the package's forms.py:
from django.utils.importlib import import_module
If you really want to keep using this wrapper package, you could fork it, fix the import, and install your forked version instead.
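The fix in such a fork would be a one-line change: django.utils.importlib was only a thin wrapper, and the standard library's importlib provides the same import_module function. A minimal sketch:

```python
# django_bleach/forms.py originally does:
#   from django.utils.importlib import import_module
# django.utils.importlib was removed in Django 1.9; the standard
# library offers a drop-in replacement:
from importlib import import_module

# import_module behaves the same way: import a module by its dotted name.
json_mod = import_module("json")
```

Any other occurrences of django.utils.importlib in the package need the same substitution.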
Ultimately I wound up incorporating plain bleach into my Django installation. It appears django-bleach is no longer supported for Python 3. Using bleach and integrating it according to its documentation allowed me to resolve this issue.
this is my first stack overflow question so please pardon any ignorance about the forum on my part.
I am using python 2.7.6 64 bit and pandas 0.13.1-1 from Enthought Canopy 1.3.0.1715 on a win 7 machine. I have numpy 1.8.0-1 and numexpr 2.2.2-2.
I get inconsistent error behaviour running the following on a pandas Series of 10,000 numpy.float64 values loaded from HDF:
import pandas
s = pandas.read_hdf(r'C:\test\test.h5', 'test')
s/2.
This gives me inconsistent behaviour; it sometimes works and sometimes throws:
OMP: Error #134: Cannot set thread affinity mask.
OMP: System error #87: The parameter is incorrect.
I have replicated this error on other machines, and the test case is derived from a unit test failure (with the above error) which was replicated on several machines and on a server. This came up in an upgrade from pandas 0.12 to pandas 0.13.
The following consistently runs with no error:
import pandas
s = pandas.read_hdf(r'C:\test\test.h5', 'test')
s.apply(lambda x: x/2.)
and,
import pandas
s = pandas.read_hdf(r'C:\test\test.h5', 'test')
pandas.computation.expressions.set_use_numexpr(False)
s/2.
Thanks for the help.
This is very similar to the problem described in this issue and the linked issue.
It seems that only Canopy is experiencing these issues. I think it has to do with the Canopy numpy MKL build, but that is a guess, as I have not been able to reproduce this. So here are some workarounds:
try to upgrade numexpr to 2.4 (current version), or downgrade to 2.1
try to use numpy 1.8.1 via canopy
try to install numpy/numexpr from the posted binaries here; I don't know exactly how canopy works so not sure if this is possible
uninstall numexpr
you can also disable numexpr support via pandas.computation.expressions.set_use_numexpr(False). Note that numexpr is REQUIRED in order to read/use HDF5 files via PyTables. However, disabling numexpr this way should only disable it for computations (and not for HDF access).
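As a sketch of that last workaround, with the HDF load replaced by a synthetic Series (note the expressions module moved in later pandas versions, so both locations are tried):

```python
import numpy as np
import pandas as pd

# Stand-in for the Series loaded via pandas.read_hdf in the question.
s = pd.Series(np.arange(10000, dtype=np.float64))

# The expressions module moved in later pandas releases; try both paths.
try:
    from pandas.core.computation import expressions  # modern pandas
except ImportError:
    from pandas.computation import expressions       # pandas 0.13-era

expressions.set_use_numexpr(False)  # arithmetic now goes through plain numpy
half = s / 2.
```

With numexpr bypassed, the division is evaluated by numpy directly, which sidesteps the OMP thread-affinity error.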
Basically, I am getting an unreasonable number of errors using these libraries:
django==1.4.3
pyelasticsearch==0.6
simplejson==3.3.0
django-haystack==2.1.0
The errors I get are:
From python2.7/site-packages/haystack/query.py:
index_queryset() got an unexpected keyword argument 'using'
I just removed this and it works locally.
/srv/www/projects/k-state-union/lib/haystack/backends/elasticsearch_backend.py:
raise MissingDependency("The 'elasticsearch' backend requires the installation of 'pyelasticsearch'. Please refer to the documentation.")
This error occurs when pyelasticsearch fails to be imported. If I let it fail naturally, the traceback points to:
/srv/www/.virtualenvs/k-state-union/lib/python2.6/site-packages/pyelasticsearch/client.py:
from simplejson import JSONDecodeError
which works fine in the Python interpreter.
The errors seem to indicate that I am not using the intended versions of pyelasticsearch and haystack. What do I need to do to get this up and running?
There are two different Python client libraries for Elasticsearch out there. I switched from pyelasticsearch to elasticsearch and it worked.
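A quick way to see which client the active environment can actually import (the importable helper is just for illustration):

```python
def importable(name):
    # True if the named package can be imported in this environment.
    try:
        __import__(name)
        return True
    except ImportError:
        return False

# haystack's elasticsearch backend raises MissingDependency exactly when
# the client it expects fails this kind of import check.
for candidate in ("pyelasticsearch", "elasticsearch"):
    print(candidate, importable(candidate))
```

If the client named in the backend's error message prints False here, the virtualenv the server runs under is not the one the package was installed into, which matches the version-mismatch symptoms above.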