Running a TensorFlow model in Visual Studio with just OpenCV 4 - C++

I trained a model in TensorFlow using Python, and now I would like to run it in C++ in Visual Studio using only OpenCV 4.
1) Is the frozen_inference_graph.pb that I generated not usable if I want to run it from C++ the way I did in Python? If not, is it possible to save the training somehow? I mean, can I use the other files (model.ckpt.meta / model.ckpt.index, or the checkpoints) to regenerate the inference graph and run it in C++?
2) Is it possible with the latest release of OpenCV 4 to run the model without any other software (the TensorFlow C++ API, TensorFlow itself, Bazel, Eigen3)? I ran an example I found (https://github.com/pirahansiah/opencv4) and it worked normally, but when I used my own model I got this error:
OpenCV(4.0.0-dev) Error: Unspecified error (Input layer not found: Preprocessor/map/while/NextIteration) in cv::dnn::dnn4_v20181205::`anonymous-namespace'::TFImporter::connect, file C:\opencv\source\opencv-master\modules\dnn\src\tensorflow\tf_importer.cpp, line 497
printed by C:\Users\<username>\Desktop\TEST\Tensorflow\x64\Debug\Tensorflow.exe


How do I load a HigherHRNet in OpenCV C++?

I have recently started using the C++ implementation of OpenCV and have run into a spot of trouble. I have been experimenting with estimating 3D human pose from the video of my built-in camera.
To start, I looked at a project like this one, which accomplishes a similar task by importing an ONNX model and loading it with cv::dnn::readNetFromONNX(modelPath);. However, that model only performs 2D pose estimation. From this I concluded that if I could obtain a model from another source then, as long as it was in ONNX format, OpenCV would be able to load it.
I went to Google Colab to use OpenVINO in a safe environment and grab a copy of the model with their model downloader and model converter. The commands ended up being:
!pip install openvino-dev[onnx]
!omz_downloader --name higher-hrnet-w32-human-pose-estimation
!pip install yacs
!omz_converter --name higher-hrnet-w32-human-pose-estimation
Through the course of these commands, we see:
========== Converting higher-hrnet-w32-human-pose-estimation to ONNX
Conversion to ONNX command: /usr/bin/python3 -- /usr/local/lib/python3.7/dist-packages/open_model_zoo/model_tools/internal_scripts/pytorch_to_onnx.py --model-path=/usr/local/lib/python3.7/dist-packages/open_model_zoo/model_tools/models/public/higher-hrnet-w32-human-pose-estimation --model-path=/content/public/higher-hrnet-w32-human-pose-estimation --model-name=get_net --import-module=model '--model-param=file_config=r"/content/public/higher-hrnet-w32-human-pose-estimation/experiments/higher_hrnet.yaml"' '--model-param=weights=r"/content/public/higher-hrnet-w32-human-pose-estimation/ckpt/pose_higher_hrnet_w32_512.pth"' --input-shape=1,3,512,512 --input-names=image --output-names=embeddings,heatmaps --output-file=/content/public/higher-hrnet-w32-human-pose-estimation/higher-hrnet-w32-human-pose-estimation.onnx
ONNX check passed successfully.
========== Converting higher-hrnet-w32-human-pose-estimation to IR (FP16)
Conversion command: /usr/bin/python3 -m mo --framework=onnx --data_type=FP16 --output_dir=/content/public/higher-hrnet-w32-human-pose-estimation/FP16 --model_name=higher-hrnet-w32-human-pose-estimation --reverse_input_channels '--input_shape=[1,3,512,512]' --input=image '--mean_values=image[123.675,116.28,103.53]' '--scale_values=image[58.395,57.12,57.375]' --output=embeddings,heatmaps --input_model=/content/public/higher-hrnet-w32-human-pose-estimation/higher-hrnet-w32-human-pose-estimation.onnx
which indicates the existence of higher-hrnet-w32-human-pose-estimation.onnx in local storage. So I downloaded that file to my local device and ran it in the context described above. When I run it, I get a cryptic error:
Unhandled exception at 0x00007FFB62FF4F69 in posedetect.exe: Microsoft C++ exception: cv::Exception at memory location
Is there a way to load a 3D pose estimation model in OpenCV using C++?
Alternate attempt with openpose
I tried following the recommendation to use OpenPose as an alternative to the HigherHRNet model, as described by #B200011011 in the comments. To do this I went to the OpenPose GitHub repository and performed the following:
git clone https://github.com/CMU-Perceptual-Computing-Lab/openpose.git
cd openpose/models
getModels.bat
cp pose/body_25/pose_iter_584000.caffemodel ../source/repos/posedetect/models/
When I try to load this Caffe model with cv::dnn::readNetFromCaffe(modelPath); I get a cryptic error similar to the one from loading the HigherHRNet model:
Unhandled exception at 0x00007FFB62FF4F69 in posedetect.exe: Microsoft C++ exception: cv::Exception at memory location 0x000000E42CB0D660.
So how do I load a HigherHRNet (or other 3D pose estimation) model in OpenCV C++?

not recognized as a supported file format ECW gdal api

I'm trying to use ECW files in my application. I built the GDAL library with this command:
./configure --with-ecw=/usr/local/hexagon
After the build process completed, when I entered:
gdalinfo --formats | grep ECW
I got:
ECW -raster- (rw+): ERDAS Compressed Wavelets (SDK 5.5)
JP2ECW -raster,vector- (rw+v): ERDAS JPEG2000 (SDK 5.5)
Also, when I use
gdalinfo map.ecw
it returns all the metadata of the ECW file.
But when I run my C++ program, it returns:
Error: GDAL Dataset returned null from read
ERROR 4: `map.ecw' not recognized as a supported file format.
Does anyone know why I can't use ECW files in my C++ program?
By the way, I use CMake, GDAL 3.3.0, and the Hexagon ERDAS ECW SDK 5.5 to build the program.
I found the answer. This problem occurs when the gdal_bin binary package was installed before the GDAL built from source: the program then links against the packaged GDAL, which lacks ECW support. Just make sure gdal_bin is removed before installing the version you built.

How to invoke the Flex delegate for tflite interpreters?

I have a TensorFlow model which I want to convert into a tflite model and deploy on an ARM64 platform.
It turns out that two operations in my model (RandomStandardNormal, Softplus) seem to require custom implementations. Since execution time is not that important, I decided to go with a hybrid model that uses the extended TensorFlow runtime. I converted it via:
graph_def_file = './model/frozen_model.pb'
inputs = ['eval_inputs']
outputs = ['model/y']
converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file, inputs, outputs)
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_file_name = 'vae_' + str(tf.__version__) + '.tflite'
tflite_model = converter.convert()
open(tflite_file_name, 'wb').write(tflite_model)
This worked, and I ended up with a seemingly valid tflite model file. Whenever I try to load this model with an interpreter, I get an error (it does not matter whether I use the Python or the C++ API):
ERROR: Regular TensorFlow ops are not supported by this interpreter. Make sure you invoke the Flex delegate before inference.
ERROR: Node number 4 (FlexSoftplus) failed to prepare.
I am having a hard time finding documentation on the TensorFlow website about how to invoke the Flex delegate with either API. I stumbled across a header file ("tensorflow/lite/delegates/flex/delegate_data.h") which seems to be related to this issue, but including it in my C++ project yields another error:
In file included from /tensorflow/tensorflow/core/common_runtime/eager/context.h:28:0,
from /tensorflow/tensorflow/lite/delegates/flex/delegate_data.h:18,
from /tensorflow/tensorflow/lite/delegates/flex/delegate.h:19,
from demo.cpp:7:
/tensorflow/tensorflow/core/lib/core/status.h:23:10: fatal error: tensorflow/core/lib/core/error_codes.pb.h: No such file or directory
#include "tensorflow/core/lib/core/error_codes.pb.h"
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
By any chance, has anybody encountered and resolved this before? If you have an example snippet, please share the link!
When building TensorFlow Lite libraries using the bazel pipeline, the additional TensorFlow ops library can be included and enabled as follows:
Enable monolithic builds if necessary by adding the --config=monolithic build flag.
Add the TensorFlow ops delegate library dependency to the build dependencies: tensorflow/lite/delegates/flex:delegate.
Note that the necessary TfLiteDelegate will be installed automatically when creating the interpreter at runtime as long as the delegate is linked into the client library. It is not necessary to explicitly install the delegate instance as is typically required with other delegate types.
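As a sketch, the second step amounts to adding the Flex delegate target to the deps of the client's rule in its BUILD file (the target name, source file, and the other deps shown here are hypothetical and depend on your project):

```
cc_binary(
    name = "demo",                # hypothetical target name
    srcs = ["demo.cpp"],
    deps = [
        "//tensorflow/lite:framework",
        "//tensorflow/lite/delegates/flex:delegate",  # pulls in Flex op support
        "//tensorflow/lite/kernels:builtin_ops",
    ],
)
```

With the delegate linked in this way, the interpreter applies it automatically at creation time, as noted above.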
Python pip package
Python support is actively under development.
source: https://www.tensorflow.org/lite/guide/ops_select
According to https://www.tensorflow.org/lite/guide/ops_select#android_aar (as of 2019/9/25),
Python support for 'select operators' is actively under development.
You can test the model on Android by using FlexDelegate. I ran my model successfully this way.
e.g. https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/java/src/test/java/org/tensorflow/lite/InterpreterFlexTest.java

itk library error in python-xy 2.7.10

I recently installed python-xy 2.7.10, and trying to run a simple script using itk fails with the following error:
RuntimeError:
C:\u\itk-git_b\Modules\Remote\SCIFIO\src\itkSCIFIOImageIO.cxx:274:
itk::ERROR: SCIFIOImageIO(0295DBE0): SCIFIO_PATH is not set. This
environment variable must point to the directory containing the SCIFIO
JAR files
The script I'm running is simple enough:
import itk
pixelType = itk.UC
imageType = itk.Image[pixelType, 2]
readerType = itk.ImageFileReader[imageType]
reader = readerType.New()
reader.SetFileName("./Sand_sample.bmp")
reader.Update()
I assume you enabled Module_SCIFIO when configuring ITK with CMake. You can either disable it, reconfigure, and recompile, or set it up properly: maybe you need to run sudo make install (or build the INSTALL target in Visual Studio), or you could do what the error complains about and set the environment variable SCIFIO_PATH (*nix, Win).
You can find more info about SCIFIO in corresponding ITK class documentation.
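For example, on Windows the variable can be set for the current console session before launching the script (the path below is only a placeholder; it must point at the directory that actually contains the SCIFIO JAR files from your ITK build or install):

```
set SCIFIO_PATH=C:\ITK-build\lib\jars
python my_script.py
```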

OpenCV Facedetect sample not working

I'm trying to run the facedetect sample using OpenCV. I compiled the code with the build_all.sh file.
Then I ran the code with
sudo ./facedetect --cascade="../../data/haarcascades/haarcascade_frontalface_alt.xml" --nested-cascade="../../data/haarcascades/haarcascade_eye.xml" --scale=1.3 lena.jpg
A few instructions are displayed
This program demonstrates the cascade recognizer. Now you can use Haar or LBP features.
This classifier can recognize many kinds of rigid objects, once the appropriate classifier is trained.
It's most known use is for faces.
Usage:
./facedetect [--cascade=<cascade_path> this is the primary trained classifier such as frontal face]
[--nested-cascade[=nested_cascade_path this an optional secondary classifier such as eyes]]
[--scale=<image scale greater or equal to 1, try 1.3 for example>]
[--try-flip]
[filename|camera_index]
see facedetect.cmd for one call:
./facedetect --cascade="../../data/haarcascades/haarcascade_frontalface_alt.xml" --nested-cascade="../../data/haarcascades/haarcascade_eye.xml" --scale=1.3
During execution:
Hit any key to quit.
Using OpenCV version 2.4.8
Processing 1 --cascade=../../data/haarcascades/haarcascade_frontalface_alt.xml
from which we have cascadeName= ../../data/haarcascades/haarcascade_frontalface_alt.xml
Processing 2 --nested-cascade=../../data/haarcascades/haarcascade_eye.xml
Processing 3 --scale=1.3
from which we read scale = 1.3
Processing 4 lena.jpg
And in the end I get the following error:
OpenCV Error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script) in cvNamedWindow, file /home/userk/Development/OpenCV/opencv-2.4.8/modules/highgui/src/window.cpp, line 483
terminate called after throwing an instance of 'cv::Exception'
what(): /home/userk/Development/OpenCV/opencv-2.4.8/modules/highgui/src/window.cpp:483: error: (-2) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function cvNamedWindow
The packages libgtk2.0-dev and pkg-config were already installed. Do you guys have any advice?
SOLVED:
As Berak suggested, I recompiled opencv from its main directory with:
cmake -D CMAKE_BUILD_TYPE=RELEASE ..
make
sudo make install