How do I load a HigherHRNet in OpenCV C++?

I have recently started using the C++ implementation of OpenCV and have been running into a spot of trouble. I have been experimenting with estimating 3D human pose from the video of my built-in camera.
To start, I looked at a project like this one, which accomplishes a similar task by importing an ONNX model and loading it with cv::dnn::readNetFromONNX(modelPath);. However, this model only performs 2D pose estimation. From this I concluded that a model gathered from any other source could also be loaded by OpenCV, as long as it was in the ONNX format.
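For reference, the basic usage I am aiming for looks like this (a minimal sketch; the paths are placeholders, and the 512x512 input size is taken from the converter output shown below):
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main() {
    // Load the downloaded ONNX model (path is a placeholder).
    cv::dnn::Net net = cv::dnn::readNetFromONNX("higher-hrnet-w32-human-pose-estimation.onnx");
    // Preprocess a frame to the 1x3x512x512 input the converter reports.
    cv::Mat frame = cv::imread("frame.jpg");
    cv::Mat blob = cv::dnn::blobFromImage(frame, 1.0, cv::Size(512, 512));
    net.setInput(blob);
    // The model has two outputs (embeddings, heatmaps), so fetch both.
    std::vector<cv::Mat> outs;
    net.forward(outs, net.getUnconnectedOutLayersNames());
    return 0;
}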
I tried going to Google Colab to use OpenVINO in a safe environment to grab a copy of the model with their model downloader and model converter. These commands ended up being:
!pip install openvino-dev[onnx]
!omz_downloader --name higher-hrnet-w32-human-pose-estimation
!pip install yacs
!omz_converter --name higher-hrnet-w32-human-pose-estimation
In the output of these commands, we see:
========== Converting higher-hrnet-w32-human-pose-estimation to ONNX
Conversion to ONNX command: /usr/bin/python3 -- /usr/local/lib/python3.7/dist-packages/open_model_zoo/model_tools/internal_scripts/pytorch_to_onnx.py --model-path=/usr/local/lib/python3.7/dist-packages/open_model_zoo/model_tools/models/public/higher-hrnet-w32-human-pose-estimation --model-path=/content/public/higher-hrnet-w32-human-pose-estimation --model-name=get_net --import-module=model '--model-param=file_config=r"/content/public/higher-hrnet-w32-human-pose-estimation/experiments/higher_hrnet.yaml"' '--model-param=weights=r"/content/public/higher-hrnet-w32-human-pose-estimation/ckpt/pose_higher_hrnet_w32_512.pth"' --input-shape=1,3,512,512 --input-names=image --output-names=embeddings,heatmaps --output-file=/content/public/higher-hrnet-w32-human-pose-estimation/higher-hrnet-w32-human-pose-estimation.onnx
ONNX check passed successfully.
========== Converting higher-hrnet-w32-human-pose-estimation to IR (FP16)
Conversion command: /usr/bin/python3 -m mo --framework=onnx --data_type=FP16 --output_dir=/content/public/higher-hrnet-w32-human-pose-estimation/FP16 --model_name=higher-hrnet-w32-human-pose-estimation --reverse_input_channels '--input_shape=[1,3,512,512]' --input=image '--mean_values=image[123.675,116.28,103.53]' '--scale_values=image[58.395,57.12,57.375]' --output=embeddings,heatmaps --input_model=/content/public/higher-hrnet-w32-human-pose-estimation/higher-hrnet-w32-human-pose-estimation.onnx
which indicates that higher-hrnet-w32-human-pose-estimation.onnx now exists in local storage. So I downloaded that file to my local device and loaded it in the context described above. When I run, I get a cryptic error:
Unhandled exception at 0x00007FFB62FF4F69 in posedetect.exe: Microsoft C++ exception: cv::Exception at memory location
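For what it's worth, wrapping the load in a try/catch at least surfaces OpenCV's own error text instead of the unhandled-exception dialog (a minimal sketch; the model path is the ONNX file from above):
#include <iostream>
#include <opencv2/dnn.hpp>

int main() {
    const std::string modelPath = "higher-hrnet-w32-human-pose-estimation.onnx";
    try {
        cv::dnn::Net net = cv::dnn::readNetFromONNX(modelPath);
    } catch (const cv::Exception& e) {
        // e.what() holds the actual importer error behind the cryptic popup.
        std::cerr << e.what() << std::endl;
    }
    return 0;
}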
Is there a way to load a 3D pose estimation model in OpenCV using C++?
Alternate attempt with OpenPose
I tried following the recommendation to use OpenPose as an alternative to the HigherHRNet model, as described by #B200011011 in the comments. To do this I went to the OpenPose GitHub and performed the following:
git clone https://github.com/CMU-Perceptual-Computing-Lab/openpose.git
cd openpose/models
getModels.bat
cp pose/body_25/pose_iter_584000.caffemodel ../source/repos/posedetect/models/
When I try to load this Caffe model with cv::dnn::readNetFromCaffe(modelPath); I get a cryptic error similar to the one from trying to load the HigherHRNet model:
Unhandled exception at 0x00007FFB62FF4F69 in posedetect.exe: Microsoft C++ exception: cv::Exception at memory location 0x000000E42CB0D660.
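One thing worth double-checking (this is an assumption about the cause, not something the error message confirms): cv::dnn::readNetFromCaffe takes the deploy .prototxt as its first argument and the .caffemodel as the second, so passing only the weights file will throw a cv::Exception. For the BODY_25 model that would look roughly like:
// The deploy prototxt ships in the same OpenPose models tree as the weights.
cv::dnn::Net net = cv::dnn::readNetFromCaffe(
    "models/pose_deploy.prototxt",          // network definition
    "models/pose_iter_584000.caffemodel");  // weights fetched by getModels.bat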
So how do I load a HigherHRNet (or other 3D pose estimation) model in OpenCV C++?

Related

java.lang.NoClassDefFoundError with JModelica2.14

I am new to the Modelica world and installed JModelica 2.14 on Win10 via the binary file provided on the official webpage. From the console I call setenv.bat, start the 64-bit Python environment and import '.\install\Python_64'. However, running the example files already throws an error. The minimal code example throwing the error is provided below. I assume that the binaries do not have an unreported bug. It would be great if someone could give a hint about what I am missing. Thanks a lot!
import modelicacasadi_wrapper
modelicacasadi_wrapper.OptimicaOptionsWrapper()
RuntimeError Traceback (most recent call last)
<ipython-input-11-ce2bcdfa3f06> in <module>()
----> 1 modelicacasadi_wrapper.OptimicaOptionsWrapper()
C:\JModelica.org-2.14\install\Python_64\modelicacasadi_wrapper\modelicacasadi_wrapper.pyc in __init__(self, *args)
3472 __init__(ModelicaCasADi::OptimicaOptionsWrapper self, OptimicaOptionsWrapper other) -> OptimicaOptionsWrapper
3473 """
-> 3474 this = _modelicacasadi_wrapper.new_OptimicaOptionsWrapper(*args)
3475 try:
3476 self.this.append(this)
RuntimeError: java.lang.NoClassDefFoundError org/jmodelica/optimica/compiler/ModelicaCompiler
Caused by: java.lang.ClassNotFoundException: org.jmodelica.optimica.compiler.ModelicaCompiler
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
This function is only given in binary format compiled from C++ code; therefore, I cannot change the function without recompiling the library (I already tried). To me it seems like the org.jmodelica.optimica.compiler.ModelicaCompiler should have been an org.jmodelica.optimica.compiler.OptimicaCompiler. This would mean that I have to install the package from source, and I haven't been successful with that yet.
I still use JModelica 2.14 in Python 2, and have installed a virtual environment with Conda to create a Python 3 environment where I then run the FMUs with the latest PyFMI package in Python 3.10 and Jupyter Notebook. It all works fine, but as Imke Kreuger indicated, you have MSL 3.2.2 build 3, and there has been development in the Modelica Standard Library since then.
During installation you are asked whether you want "Graybox OPC Automation wrapper" and I usually say "NO" there. You may have said "YES" though, right? See Chapter 2.2.1 in the User guide.
The JModelica installation actually provides you with two different compilers.
One is for standard Modelica and produces an FMU of CS or ME type as output. The other compiler is for Modelica extended with Optimica; it does not produce an FMU, and you are bound to work in Python 2.
I tried to reproduce your error (with my installation without the "Graybox OPC..."). If I (in the Python 2 environment) literally do the two commands, I get "Press any key to continue...." and when I press a key the IPython window collapses.
However, if you skip the two parentheses at the end of the second command, it is accepted!
If you write a question mark at the end, you get information about what arguments you should provide.
If you describe better what you want to do, we likely can help you better.
Note, it seems you want to use Optimica, which is an extension of Modelica that is only partially supported by OpenModelica, as far as I understand. The Optimica extension is well integrated in JModelica and originated in this context. For "ordinary" Modelica use I do not think you need this wrapper.

How to invoke the Flex delegate for tflite interpreters?

I have a TensorFlow model which I want to convert into a tflite model, which is going to be deployed on an ARM64 platform.
It happens that two operations of my model (RandomStandardNormal, Softplus) seem to require custom implementations. Since execution time is not that important, I decided to go with a hybrid model that uses the extended runtime. I converted it via:
import tensorflow as tf

# Frozen graph and its input/output tensor names
graph_def_file = './model/frozen_model.pb'
inputs = ['eval_inputs']
outputs = ['model/y']
converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file, inputs, outputs)
# Allow select TF ops to fall back to the Flex (extended) runtime
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_file_name = 'vae_' + str(tf.__version__) + '.tflite'
tflite_model = converter.convert()
open(tflite_file_name, 'wb').write(tflite_model)
This worked and I ended up with a seemingly valid tflite model file. Whenever I try to load this model with an interpreter, I get an error (it does not matter if I use the Python or C++ API):
ERROR: Regular TensorFlow ops are not supported by this interpreter. Make sure you invoke the Flex delegate before inference.
ERROR: Node number 4 (FlexSoftplus) failed to prepare.
I have a hard time finding documentation on the TF website on how to invoke the Flex delegate for either API. I stumbled across a header file ("tensorflow/lite/delegates/flex/delegate_data.h") which seems to be related to this issue, but including it in my C++ project yields another error:
In file included from /tensorflow/tensorflow/core/common_runtime/eager/context.h:28:0,
from /tensorflow/tensorflow/lite/delegates/flex/delegate_data.h:18,
from /tensorflow/tensorflow/lite/delegates/flex/delegate.h:19,
from demo.cpp:7:
/tensorflow/tensorflow/core/lib/core/status.h:23:10: fatal error: tensorflow/core/lib/core/error_codes.pb.h: No such file or directory
#include "tensorflow/core/lib/core/error_codes.pb.h"
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
By any chance, has anybody encountered and resolved this before? If you have an example snippet, please share the link!
When building TensorFlow Lite libraries using the bazel pipeline, the additional TensorFlow ops library can be included and enabled as follows:
Enable monolithic builds if necessary by adding the --config=monolithic build flag.
Add the TensorFlow ops delegate library dependency to the build dependencies: tensorflow/lite/delegates/flex:delegate.
Note that the necessary TfLiteDelegate will be installed automatically when creating the interpreter at runtime as long as the delegate is linked into the client library. It is not necessary to explicitly install the delegate instance as is typically required with other delegate types.
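In C++ terms, that means the interpreter is created the usual way and the Flex delegate kicks in on its own, provided the flex library is linked. A minimal sketch of the standard setup (the model path is a placeholder):
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
    // Standard interpreter construction; the Flex delegate is applied
    // automatically when tensorflow/lite/delegates/flex:delegate is
    // linked into the binary.
    auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    interpreter->AllocateTensors();
    return 0;
}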
Python pip package
Python support is actively under development.
source: https://www.tensorflow.org/lite/guide/ops_select
According to https://www.tensorflow.org/lite/guide/ops_select#android_aar on 2019/9/25
Python support of 'select operators' is actively under development.
You can test the model on Android by using FlexDelegate.
I ran my model successfully in the same way.
e.g. https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/java/src/test/java/org/tensorflow/lite/InterpreterFlexTest.java

Running Tensorflow model in Visual Studio just with opencv4

I trained a model in TensorFlow using Python and now I would like to run the model in Visual Studio C++ using just OpenCV 4.
1) Is the frozen_inference_graph.pb that I generated not usable if I want to run it in C++ the way I did in Python? If not, is it possible to somehow save the training? I mean, using the other files (model.ckpt.meta/model.ckpt.index or the checkpoints) to generate the inference graph again and run it in C++?
2) Is it possible with the latest release of OpenCV 4 to run the model without any other programs (C++ API, TensorFlow, Bazel, Eigen3)? I ran example code that I found (https://github.com/pirahansiah/opencv4) and it worked normally, but when I used my own model I got this error:
OpenCV(4.0.0-dev) Error: Unspecified error (Input layer not found: Preprocessor/map/while/NextIteration) in cv::dnn::dnn4_v20181205::`anonymous-namespace'::TFImporter::connect, file C:\opencv\source\opencv-master\modules\dnn\src\tensorflow\tf_importer.cpp, line 497
printed by C:\Users\<username>\Desktop\TEST\Tensorflow\x64\Debug\Tensorflow.exe
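Not a complete answer, but a note on the error itself: OpenCV's dnn importer generally cannot digest the raw Preprocessor subgraph of TensorFlow object-detection models. The usual workaround is to generate a text graph with the tf_text_graph_*.py scripts from OpenCV's samples/dnn and pass it alongside the frozen graph, roughly like this (the config filename here is an assumption):
cv::dnn::Net net = cv::dnn::readNetFromTensorflow(
    "frozen_inference_graph.pb",  // frozen graph from the question
    "graph.pbtxt");               // text graph generated by tf_text_graph_ssd.py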

Pabot - Unable to run parallel robotframework tests

So, I'm working on a robotframework test project, and the goal is to run several test suites in parallel. For this purpose, pabot was chosen as the solution. I am trying to implement it, but with little success.
My issue is: after installing Pabot (which, I might say, I did by cloning the project and running "setup.py install", instead of using pip, since the corporate proxy I'm behind has proven an obstacle I can't overcome), I created a new directory in the project tree, moved some suites there, and ran:
pabot --processes 2 --outputdir pabot_results Login*.robot
Doing so results in the following error message:
2018-10-10 10:27:30.449000 [PID:9676] [0] EXECUTING Suites.LoginAdmin
2018-10-10 10:27:30.449000 PID:400 EXECUTING Suites.LoginUser
2018-10-10 10:27:30.777000 PID:400 FAILED Suites.LoginUser
2018-10-10 10:27:30.777000 [PID:9676] [0] FAILED Suites.LoginAdmin
WARN: No output files in "pabot_results\pabot_results"
Output:
[ ERROR ] Reading XML source '' failed: invalid mode ('rb') or filename
Try --help for usage information.
Elapsed time: 0 minutes 0.578 seconds
Upon inspecting the stderr file that was generated, I have this message:
Traceback (most recent call last):
File "C:\Python27\Lib\site-packages\robotframework-3.1a2.dev1-py2.7.egg\robot\running\runner.py", line 22, in
from .context import EXECUTION_CONTEXTS
ValueError: Attempted relative import in non-package
Apparently, this has to do with something from the runner.py script, which, if I'm not mistaken, came with the installation of robotframework. Since manually modifying that script does not seem to me the optimal solution, my question is, what am I missing here? Did I forget to do anything while setting this up? Or is this an issue of compatibility between versions?
This project is using Maven as the tool to manage dependencies. The version I am running is 3.5.4. I am using a Windows 10, 64bit system; I have Python 2.7.14, and Robot Framework 3.1a2.dev1. The Pabot version is 0.44. Obviously, I added C:\Python27 and C:\Python27\Scripts to the PATH environment variable.
Edit: I am also using robotframework-maven-plugin version 1.4.0.8, if that happens to be relevant.
Edit 2: added the error messages in text format.
I believe I've come across a similar issue when setting up parallel execution on my machine. Firstly, I would confirm that pabot is installed using pip show robotframework-pabot.
Then you should define the directory your results will go to using -d.
I then changed the output name to Output.xml with -o, to make it easy to identify.
This is a copy of the command I use. It runs optimally with 8 processes:
pabot --processes 8 -d results -o Output.xml Tests
It seems that you stumbled on a bug in the prerelease version of Robot Framework (3.1a2.dev1).
Please install a release version of Robot Framework, for example 3.0.4.
Just in case anyone happens to stumble upon this issue in the future:
Since I can't use pip, and I tried a good deal of workarounds that eventually made things more unstable, I ended up saving my project and removing everything Python-related from my system, so as to allow me to install everything from scratch. In a Windows 10, 64bit system, I used:
Python 2.7.14
wxPython 2.8.12.1, win64, unicode, for py27
setuptools 40.2.0 (to allow me to use the easy_install command)
Robot Framework 3.0.4
robotremoteserver 1.1
Selenium2Library 3.0.0
and Pabot version 0.45.
I might add that, when installing the Selenium2Library the way I described above, it eventually tries to download some things from the pip repositories - which, if you have a proxy, will cause you trouble. I solved this problem by browsing https://pypi.org/simple/selenium/, manually downloading the 2.53.6 .tar.gz file, then extracting it and running setup.py install on the command line.
PS: Ideally, though, anyone should be able to use proxy settings from the command line (--proxy http://user:password@server:port) to get pip and then use it; however, for some reason, probably related to network security configurations that I didn't want to lose time with, this didn't work in my case.

Jpegs in Django-wiki

I'm trying to get django-wiki running.
It works well so far, except I can't display .jpeg images.
At first I had trouble simply importing JPEG files in the webapp.
I fixed this by modifying PIL's setup.py as follows:
JPEG_ROOT = libinclude("/usr/lib")
# Line 214
add_directory(library_dirs, "/usr/lib")
add_directory(library_dirs, "/usr/lib/x86_64-linux-gnu")
Jpeg libs I have currently installed:
libjpeg-progs
libjpeg62:amd64
libjpeg62-dev:amd64
libjpeg8:amd64
libopenjpeg2:amd64
After installing PIL with pip install PIL, I get this output, which doesn't look that bad, at least I thought so:
*** TKINTER support not available
--- JPEG support available
--- ZLIB (PNG/ZIP) support available
*** FREETYPE2 support not available
*** LITTLECMS support not available
No error messages (and no "decoder not available"), and I can view the images properly on my server, which means upload works great. But in the wiki only the file names are shown, and when I click on them I get
"This image failed to load."
Could someone please help me? I can't find any error output (debug mode is activated).
Thanks in advance
You are compiling software! You need to install development libraries for these things to compile, e.g. apt-get install libjpeg-dev.
Also, install Pillow instead; it has less chance of failing to compile - pip install pillow.