OpenCV faceDetector YAML model loading error - C++

I have an error loading a .yaml model into FacemarkLBF from OpenCV:
cv_landmarks = cv::face::FacemarkLBF::create();
std::cout << "Loading OpenCV model for landmark detection." << std::endl;
cv_landmarks->loadModel("lbfmodel.yaml");
faceDetector.load("haarcascade_frontalface_alt2.xml");
I'm getting this error:
loading data from : lbfmodel.yaml
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(4.3.0) /tmp/opencv-20200408-5080-l00ytm/opencv-4.3.0/opencv_contrib/modules/face/src/facemarkLBF.cpp:487: error: (-5:Bad argument) No valid input file was given, please check the given filename. in function 'loadModel'
This model works fine in Visual Studio, but I need to build the project with Xcode so I can use it later on iOS.
PS: I tried different models and always got the same error.

Provide an absolute path for reading the model and it will then work.
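For illustration, a minimal sketch of the same loading code using absolute paths plus a basic file-existence check; the /Users/me/models/... paths are placeholders for wherever the files actually live on your machine:
#include <opencv2/face.hpp>
#include <opencv2/objdetect.hpp>
#include <fstream>
#include <iostream>

int main() {
    // Placeholder absolute paths -- replace with the real locations of the files.
    const std::string landmarkModelPath = "/Users/me/models/lbfmodel.yaml";
    const std::string cascadePath = "/Users/me/models/haarcascade_frontalface_alt2.xml";

    // FacemarkLBF throws the "No valid input file" error when the path cannot be
    // opened relative to the current working directory, so check it first.
    if (!std::ifstream(landmarkModelPath).good()) {
        std::cerr << "Cannot open " << landmarkModelPath << std::endl;
        return 1;
    }

    cv::Ptr<cv::face::Facemark> cv_landmarks = cv::face::FacemarkLBF::create();
    cv_landmarks->loadModel(landmarkModelPath);

    cv::CascadeClassifier faceDetector;
    if (!faceDetector.load(cascadePath)) {
        std::cerr << "Cannot load " << cascadePath << std::endl;
        return 1;
    }
    return 0;
}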

Failed to load pre-trained ONNX models in OpenCV C++

This is my first time with ONNX models and I'm not sure if this is a newbie problem, so sorry in advance!
I've just tried to load a couple of models and I always hit the same assertion:
[ERROR:0@0.460] global onnx_importer.cpp:1054 cv::dnn::dnn4_v20221220::ONNXImporter::handleNode DNN/ONNX: ERROR during processing node with 3 inputs and 1 outputs: [Concat]:(onnx_node!Concat_2) from domain='ai.onnx'
OpenCV: terminate handler is called! The last OpenCV error is:
OpenCV(4.7.0-dev) Error: Unspecified error (> Node [Concat@ai.onnx]:(onnx_node!Concat_2) parse error: OpenCV(4.7.0-dev) C:\GHA-OCV-2\_work\ci-gha-workflow\ci-gha-workflow\opencv\modules\dnn\src\layers\concat_layer.cpp:105: error: (-215:Assertion failed) curShape.size() == outputs[0].size() in function 'cv::dnn::ConcatLayerImpl::getMemoryShapes'
> ) in cv::dnn::dnn4_v20221220::ONNXImporter::handleNode, file C:\GHA-OCV-2\_work\ci-gha-workflow\ci-gha-workflow\opencv\modules\dnn\src\onnx\onnx_importer.cpp, line 1073
Both models come from https://github.com/PeterL1n/RobustVideoMatting and they are “rvm_resnet50_fp32.onnx” and “rvm_mobilenetv3_fp32.onnx”
Obviously I’m loading them with
robustNN = cv::dnn::readNetFromONNX(robustNNPath);
Thank you in advance for any tip!

Can't load DIGITS-trained Caffe model with OpenCV readNetFromCaffe

I built DIGITS from this tutorial recently, everything is OK, and I finally trained my AlexNet model (I also trained a SqueezeNet so that I can upload the model here)! The problem is that when I download my model from DIGITS, I cannot load it into my program for testing! I have tested my program with GoogleNet downloaded from this link and it works fine!
I'm using OpenCV's readNetFromCaffe in this function to load the Caffe model:
void deepNetwork::loadModel(cv::String model, cv::String weight, string lablesPath, int ps) {
    patchSize = ps;
    labeslPath = lablesPath;
    try
    {
        net = dnn::readNetFromCaffe(weight, model);
        cerr << "loaded succ" << endl;
    }
    catch (cv::Exception& e)
    {
        std::cerr << "Exception: " << e.what() << std::endl;
    }
}
I get the following error loading my model
OpenCV Error: Assertion failed (pbBlob.raw_data_type() == caffe::FLOAT16) in blobFromProto, file /home/nvidia/build-opencv/opencv/modules/dnn/src/caffe/caffe_importer.cpp, line 242
Exception: /home/nvidia/build-opencv/opencv/modules/dnn/src/caffe/caffe_importer.cpp:242: error: (-215) pbBlob.raw_data_type() == caffe::FLOAT16 in function blobFromProto
OpenCV Error: Requested object was not found (Requested blob "data" not found) in setInput, file /home/nvidia/build-opencv/opencv/modules/dnn/src/dnn.cpp, line 1606
terminate called after throwing an instance of 'cv::Exception'
what(): /home/nvidia/build-opencv/opencv/modules/dnn/src/dnn.cpp:1606: error: (-204) Requested blob "data" not found in function setInput
Aborted (core dumped)
Any help would be appreciated <3
OpenCV version 3.3.1; also tested on 3.3.0 and 3.4.1, same error!
I'm testing on a system without CUDA, cuDNN, or Caffe, just pure C++ and OpenCV...
but I trained my model on an AWS EC2 instance (p3.2xlarge) with CUDA, cuDNN, and Caffe!
You can download the trained SqueezeNet model (.prototxt and .caffemodel) here
Finally, I found the problem!
It's a version problem: I had DIGITS 6.1.1 working with NVCaffe 0.17.0 for training, which is not compatible with earlier Caffe and OpenCV libraries! You have to downgrade NVCaffe to version 0.15.14 and it will then open with OpenCV easily!
OpenCV's DNN module expects the caffemodel in BVLC format, but NVCaffe stores the model in a more efficient format that differs from BVLC Caffe.
If you want a model compatible with both BVLC Caffe and NVCaffe, add this flag to solver.prototxt:
store_blobs_in_old_format = true
Please read the DIGITS NVCaffe documentation:
NVCaffe Documentation - store_blobs_in_old_format
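As a quick sanity check once the model has been re-exported in BVLC format, here is a minimal sketch (the file names deploy.prototxt and snapshot.caffemodel are placeholders for your own DIGITS export) that loads the network and lists its layers:
#include <opencv2/dnn.hpp>
#include <iostream>

int main() {
    // Placeholder file names -- substitute your own exported DIGITS files.
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("deploy.prototxt", "snapshot.caffemodel");

    if (net.empty()) {
        std::cerr << "Network failed to load" << std::endl;
        return 1;
    }

    // Listing the layers confirms the blobs were parsed without the FLOAT16 assertion.
    for (const auto& name : net.getLayerNames())
        std::cout << name << std::endl;
    return 0;
}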

TensorFlow C++ API fails to set GPU/CPU number: SetDefaultDevice error: Duplicate registration of device factory for type GPU with the same priority 210

I am loading a trained TensorFlow model and running it. When I try to set the GPU number, an error is raised.
F tensorflow/core/common_runtime/device_factory.cc:77] Duplicate
registration of device factory for type GPU with the same priority 210
The code I use is like this:
tensorflow::GraphDef graph_def;
tensorflow::Status graphLoadedStatus = ReadBinaryProto(tensorflow::Env::Default(), model_path, &graph_def);
if (!graphLoadedStatus.ok()) {
    std::cerr << "Model path : " << graphLoadedStatus.ToString() << std::endl;
    return graphLoadedStatus;
}
// set device to be on gpu
tensorflow::graph::SetDefaultDevice("/gpu:3", &graph_def);
I googled it and found nothing except TensorFlow's source code: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/common_runtime/device_factory.cc
Has anyone run into this? Or can a TensorFlow contributor give me some clues?
It looks like a priority problem, so I tried running as root, but that didn't help.
By the way, I am using C++. If I use Python, I can set the GPU number with no error.
I rebuilt TensorFlow from source and used the new libtensorflow_cc.so and header files, and the problem vanished.
I think it was caused by my libtensorflow_cc.so and header files not being the same version.

Inference error with TensorFlow C++ on iOS: "Invalid argument: Session was not created with a graph before Run()!"

I am trying to run my model on iOS using TensorFlow's C++ API. The model is a SavedModel saved as a .pb file. However, calls to Session::Run() result in the error:
"Invalid argument: Session was not created with a graph before Run()!"
In Python, I can successfully run inference on the model with the following code:
with tf.Session() as sess:
    tf.saved_model.loader.load(sess, ['serve'], '/path/to/model/export')
    result = sess.run(['OutputTensorA:0', 'OutputTensorB:0'], feed_dict={
        'InputTensorA:0': np.array([5000.00] * 1000).reshape(1, 1000),
        'InputTensorB:0': np.array([300.00] * 1000).reshape(1, 1000)
    })
    print(result[0])
    print(result[1])
In C++ on iOS, I try to mimic this working snippet as follows:
tensorflow::Input::Initializer input_a(5000.00, tensorflow::TensorShape({1, 1000}));
tensorflow::Input::Initializer input_b(300.00, tensorflow::TensorShape({1, 1000}));
tensorflow::Session* session_pointer = nullptr;
tensorflow::SessionOptions options;
tensorflow::Status session_status = tensorflow::NewSession(options, &session_pointer);
std::cout << session_status.ToString() << std::endl; // prints OK
std::unique_ptr<tensorflow::Session> session(session_pointer);
tensorflow::GraphDef model_graph;
NSString* model_path = FilePathForResourceName(@"saved_model", @"pb");
PortableReadFileToProto([model_path UTF8String], &model_graph);
tensorflow::Status session_init = session->Create(model_graph);
std::cout << session_init.ToString() << std::endl; // prints OK
std::vector<tensorflow::Tensor> outputs;
tensorflow::Status session_run = session->Run({{"InputTensorA:0", input_a.tensor}, {"InputTensorB:0", input_b.tensor}}, {"OutputTensorA:0", "OutputTensorB:0"}, {}, &outputs);
std::cout << session_run.ToString() << std::endl; // Invalid argument: Session was not created with a graph before Run()!
The methods FilePathForResourceName and PortableReadFileToProto are taken from the TensorFlow iOS sample found here.
What is the problem? I noticed that this happens regardless of how simple the model is (see my issue report on GitHub), which means the problem is not with the specifics of the model.
The primary issue here is that you are exporting your graph to a SavedModel in Python but then reading it in as a GraphDef in C++. While both have a .pb extension and are similar, they are not equivalent.
What is happening is that you are reading in the SavedModel with PortableReadFileToProto() and it is failing, leaving model_graph as an empty (but valid) GraphDef. That is why the error says Session was not created with a graph before Run(): session->Create() succeeds because you successfully created a session with an empty graph.
The way to check if PortableReadFileToProto() fails is to check its return value. It returns a bool, which will be 0 if reading in the graph failed. If you wish to obtain a descriptive error here, use ReadBinaryProto(). Another way you can tell if reading the graph failed is by checking the value of model_graph.node_size(). If this is 0, then you have an empty graph and reading it in has failed.
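For concreteness, here is a small sketch of these checks, reusing the FilePathForResourceName and PortableReadFileToProto helpers from the iOS sample referenced above (so it is a fragment meant to slot into that same setup, not a standalone program):
tensorflow::GraphDef model_graph;
NSString* model_path = FilePathForResourceName(@"saved_model", @"pb");

// PortableReadFileToProto() returns a bool: 0/false means the file could not be
// parsed into the proto, which is exactly what happens with a SavedModel .pb.
bool read_ok = PortableReadFileToProto([model_path UTF8String], &model_graph);
std::cout << "Read graph ok: " << read_ok << std::endl;

// A node count of 0 also means the graph is empty and the read failed.
std::cout << "Nodes in graph: " << model_graph.node_size() << std::endl;

// ReadBinaryProto() reports the failure as a descriptive Status instead.
tensorflow::Status read_status = tensorflow::ReadBinaryProto(
    tensorflow::Env::Default(), [model_path UTF8String], &model_graph);
std::cout << "ReadBinaryProto: " << read_status.ToString() << std::endl;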
While you can use TensorFlow's C API to perform inference on a SavedModel by using TF_LoadSessionFromSavedModel() and TF_SessionRun(), the recommended method is to export your graph to a frozen model using freeze_graph.py or write a GraphDef using tf.train.write_graph(). I will demonstrate successful inference with a model exported using tf.train.write_graph():
In Python:
# Build graph, call it g
g = tf.Graph()
with g.as_default():
    input_tensor_a = tf.placeholder(dtype=tf.int32, name="InputTensorA")
    input_tensor_b = tf.placeholder(dtype=tf.int32, name="InputTensorB")
    output_tensor_a = tf.stack([input_tensor_a], name="OutputTensorA")
    output_tensor_b = tf.stack([input_tensor_b], name="OutputTensorB")

# Save graph g
with tf.Session(graph=g) as sess:
    sess.run(tf.global_variables_initializer())
    tf.train.write_graph(
        graph_or_graph_def=sess.graph_def,
        logdir='/path/to/export',
        name='saved_model.pb',
        as_text=False
    )
In C++ (Xcode):
using namespace tensorflow;
using namespace std;
NSMutableArray* predictions = [NSMutableArray array];
Input::Initializer input_tensor_a(1, TensorShape({1}));
Input::Initializer input_tensor_b(2, TensorShape({1}));
SessionOptions options;
Session* session_pointer = nullptr;
Status session_status = NewSession(options, &session_pointer);
unique_ptr<Session> session(session_pointer);
GraphDef model_graph;
string model_path = string([FilePathForResourceName(@"saved_model", @"pb") UTF8String]);
Status load_graph = ReadBinaryProto(Env::Default(), model_path, &model_graph);
Status session_init = session->Create(model_graph);
cout << "Session creation Status: " << session_init.ToString() << endl;
cout << "Number of nodes in model_graph: " << model_graph.node_size() << endl;
cout << "Load graph Status: " << load_graph.ToString() << endl;
vector<pair<string, Tensor>> feed_dict = {
    {"InputTensorA:0", input_tensor_a.tensor},
    {"InputTensorB:0", input_tensor_b.tensor}
};
vector<Tensor> outputs;
Status session_run = session->Run(feed_dict, {"OutputTensorA:0", "OutputTensorB:0"}, {}, &outputs);
[predictions addObject:@(outputs[0].scalar<int>()())];
[predictions addObject:@(outputs[1].scalar<int>()())];
Status session_close = session->Close();
This general method will work, but you will likely run into issues with required operations missing from the TensorFlow library you built, and therefore inference will still fail. To combat this, first make sure that you have built the latest TensorFlow 1.3 by cloning the repo on your machine and running tensorflow/contrib/makefile/build_all_ios.sh from the root tensorflow-1.3.0 directory. It is unlikely that inference will work for a custom, non-canned model if you use the TensorFlow-experimental Pod as the examples do. Once you have a static library built using build_all_ios.sh, you need to link it up in your .xcconfig by following the instructions here.
Once you successfully link the static library built using the makefile with Xcode, you will likely still get errors that prevent inference. While the actual errors you will get depend on your implementation, you will receive errors that fall into two different forms:
OpKernel ('op: "[operation]" device_type: "CPU"') for unknown op:
[operation]
No OpKernel was registered to support Op '[operation]' with these
attrs. Registered devices: [CPU], Registered kernels: [...]
Error #1 means that the .cc file from tensorflow/core/ops or tensorflow/core/kernels for the corresponding operation (or closely associated operation) is not in the tf_op_files.txt file in tensorflow/contrib/makefile. You will have to find the .cc that contains REGISTER_OP("YourOperation") and add it to tf_op_files.txt. You must rebuild by running tensorflow/contrib/makefile/build_all_ios.sh again.
Error #2 means that the .cc file for the corresponding operation is in your tf_op_files.txt file, but you have supplied the operation with a data type that (a) it doesn't support or (b) has been stripped out to reduce the size of the build.
One "gotcha" is that if you are using tf.float64 in the implementation of your model, this is exported as TF_DOUBLE in your .pb file and this is not supported by most operations. Use tf.float32 in place of tf.float64 and then re-save your model using tf.train.write_graph().
If you are still receiving error #2 after checking you are providing the correct datatype to the operation, you will need to either remove __ANDROID_TYPES_SLIM__ in the makefile located at tensorflow/contrib/makefile or replace it with __ANDROID_TYPES_FULL__ and then rebuild.
After getting past errors #1 and #2, you will likely have successful inference.
One addition to the very comprehensive explanation above:
@jshapy8 is right in saying "You will have to find the .cc that contains REGISTER_OP("YourOperation") and add it to tf_op_files.txt", and there is a process that can simplify that a bit:
## build the print_selective_registration_header tool. Run from the tensorflow root
bazel build tensorflow/python/tools:print_selective_registration_header
bazel-bin/tensorflow/python/tools/print_selective_registration_header \
--graphs=<path to your frozen model file here>/model_frozen.pb > ops_to_register.h
This creates a .h file that lists only the ops needed for your specific model.
Now when compiling your static libraries follow the Build By Hand instructions here
The instructions say to do the following:
make -f tensorflow/contrib/makefile/Makefile \
TARGET=IOS \
IOS_ARCH=ARM64
But you can pass a lot of options to the makefile specific to your needs, and I've found the following to be your best bet:
make -f tensorflow/contrib/makefile/Makefile \
TARGET=IOS IOS_ARCH=ARM64,x86_64 OPTFLAGS="-O3 -DANDROID_TYPES=ANDROID_TYPES_FULL -DSELECTIVE_REGISTRATION -DSUPPORT_SELECTIVE_REGISTRATION"
In particular, you are telling it here to compile for just two of the five architectures to speed up compile time (the full list is i386 x86_64 armv7 armv7s arm64, which obviously takes longer): IOS_ARCH=ARM64,x86_64. You are also telling it not to compile for ANDROID_TYPES_SLIM (which would give you the float/int casting issues referred to above), and finally you are telling it to pull in all the necessary op kernel files and include them in the make process.
Update: Not sure why this wasn't working for me yesterday, but this is probably a cleaner and safer method:
build_all_ios.sh OPTFLAGS="-O3 -DANDROID_TYPES=ANDROID_TYPES_FULL -DSELECTIVE_REGISTRATION -DSUPPORT_SELECTIVE_REGISTRATION"
If you want to speed things up, edit compile_ios_tensorflow.sh in the tensorflow/contrib/makefile directory. Look for the following line:
BUILD_TARGET="i386 x86_64 armv7 armv7s arm64"
and change it to:
BUILD_TARGET="x86_64 arm64"

error retrieving background image from BackgroundSubtractorMOG2

I'm trying to get the background image from BackgroundSubtractorMOG2:
bg->getBackgroundImage(back);
but I get a Thread 1 SIGABRT (which, as a C++ n00b, puzzles me)
and this error:
OpenCV Error: Assertion failed (nchannels == 3) in getBackgroundImage, file /Users/hm/Downloads/OpenCV-2.4.4/modules/video/src/bgfg_gaussmix2.cpp, line 579
libc++abi.dylib: terminate called throwing an exception
(lldb)
I'm not sure what the problem is; I suspect it's something to do with the nmixtures parameter, but I've left that at the default (3). Any hints?
It looks like you need to use 3-channel images rather than grayscale. Make sure the image type you are using is CV_8UC3, or if you are reading from a file, use cv::imread("path/to/file") with no additional arguments.
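A minimal sketch of that, written against the 3.x-style factory API (createBackgroundSubtractorMOG2/apply) rather than the 2.4 class the question uses, and with a hypothetical input file name:
#include <opencv2/video.hpp>
#include <opencv2/videoio.hpp>

int main() {
    // Hypothetical input file; any 3-channel (CV_8UC3) source works.
    cv::VideoCapture cap("input.mp4");

    cv::Ptr<cv::BackgroundSubtractorMOG2> bg = cv::createBackgroundSubtractorMOG2();

    cv::Mat frame, fgMask, back;
    while (cap.read(frame)) {
        // Frames from VideoCapture are already CV_8UC3; do NOT convert them to
        // grayscale, or getBackgroundImage() will trip the (nchannels == 3) assertion.
        bg->apply(frame, fgMask);
    }

    bg->getBackgroundImage(back);  // safe once the model has been fed 3-channel frames
    return 0;
}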