Failed to load pre-trained ONNX models in OpenCV C++

This is my first time with ONNX models and I'm not sure if I'm having a newbie problem, so sorry in advance!
I've just tried to load a couple of models and I always hit the same assert:
[ERROR:0@0.460] global onnx_importer.cpp:1054 cv::dnn::dnn4_v20221220::ONNXImporter::handleNode DNN/ONNX: ERROR during processing node with 3 inputs and 1 outputs: [Concat]:(onnx_node!Concat_2) from domain='ai.onnx'
OpenCV: terminate handler is called! The last OpenCV error is:
OpenCV(4.7.0-dev) Error: Unspecified error (> Node [Concat@ai.onnx]:(onnx_node!Concat_2) parse error: OpenCV(4.7.0-dev) C:\GHA-OCV-2\_work\ci-gha-workflow\ci-gha-workflow\opencv\modules\dnn\src\layers\concat_layer.cpp:105: error: (-215:Assertion failed) curShape.size() == outputs[0].size() in function 'cv::dnn::ConcatLayerImpl::getMemoryShapes'
> ) in cv::dnn::dnn4_v20221220::ONNXImporter::handleNode, file C:\GHA-OCV-2\_work\ci-gha-workflow\ci-gha-workflow\opencv\modules\dnn\src\onnx\onnx_importer.cpp, line 1073
Both models come from https://github.com/PeterL1n/RobustVideoMatting; they are "rvm_resnet50_fp32.onnx" and "rvm_mobilenetv3_fp32.onnx".
Obviously I’m loading them with
robustNN = cv::dnn::readNetFromONNX(robustNNPath);
Thank you in advance for any tip!

Related

OpenCV faceDetector yaml model loading error

I have an error loading a .yaml model into FacemarkLBF from OpenCV.
cv_landmarks = cv::face::FacemarkLBF::create();
std::cout << "Loading OpenCV model for landmark detection." << std::endl;
cv_landmarks->loadModel("lbfmodel.yaml");
faceDetector.load("haarcascade_frontalface_alt2.xml");
I'm getting this error:
loading data from : lbfmodel.yaml
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(4.3.0) /tmp/opencv-20200408-5080-l00ytm/opencv-4.3.0/opencv_contrib/modules/face/src/facemarkLBF.cpp:487: error: (-5:Bad argument) No valid input file was given, please check the given filename. in function 'loadModel'
This model works fine in Visual Studio, but I need to build the project with Xcode to use it later on iOS.
PS: I tried different models and always got the same error.
Provide an absolute path when reading the model and it will then work.

can't load DIGITS-trained Caffe model with OpenCV readNetFromCaffe

I built DIGITS from this tutorial recently; everything went fine and I finally trained my AlexNet model (I also trained a SqueezeNet so that I can upload the model here)! The problem is that when I download my model from DIGITS, I cannot load it into my program for testing! I have tested my program with GoogLeNet downloaded from this link and it works fine!
I'm using OpenCV's readNetFromCaffe in this function to load the Caffe model:
void deepNetwork::loadModel(cv::String model, cv::String weight, string labelsPath, int ps)
{
    patchSize = ps;
    this->labelsPath = labelsPath;
    try
    {
        // Note: readNetFromCaffe expects the .prototxt first and the
        // .caffemodel second -- make sure "weight" and "model" match that order
        net = dnn::readNetFromCaffe(weight, model);
        cerr << "loaded successfully" << endl;
    }
    catch (cv::Exception& e)
    {
        std::cerr << "Exception: " << e.what() << std::endl;
    }
}
I get the following error loading my model
OpenCV Error: Assertion failed (pbBlob.raw_data_type() == caffe::FLOAT16) in blobFromProto, file /home/nvidia/build-opencv/opencv/modules/dnn/src/caffe/caffe_importer.cpp, line 242
Exception: /home/nvidia/build-opencv/opencv/modules/dnn/src/caffe/caffe_importer.cpp:242: error: (-215) pbBlob.raw_data_type() == caffe::FLOAT16 in function blobFromProto
OpenCV Error: Requested object was not found (Requested blob "data" not found) in setInput, file /home/nvidia/build-opencv/opencv/modules/dnn/src/dnn.cpp, line 1606
terminate called after throwing an instance of 'cv::Exception'
  what(): /home/nvidia/build-opencv/opencv/modules/dnn/src/dnn.cpp:1606: error: (-204) Requested blob "data" not found in function setInput
Aborted (core dumped)
any help would be appreciated <3
OpenCV version 3.3.1; also tested on 3.3.0 and 3.4.1, same error!
Testing on a system without CUDA, cuDNN or Caffe, just pure C++ and OpenCV...
but I trained my model on an AWS EC2 instance (p3.2xlarge) with CUDA, cuDNN and Caffe!
you can download the trained squeezNet model (.prototxt and .caffemodel) here
Finally, I found the problem!
It's a version problem: I had DIGITS 6.1.1 working with NVCaffe 0.17.0 for training, which is not compatible with older Caffe and OpenCV libraries! You have to downgrade NVCaffe to version 0.15.14 and it will then open with OpenCV easily.
OpenCV's DNN module expects a caffemodel in BVLC format, but NVCaffe stores the model in a more efficient format that differs from BVLC Caffe.
If you want a model compatible with both BVLC Caffe and NVCaffe, add this flag to solver.prototxt:
store_blobs_in_old_format = true
Please read the DIGITS NVCaffe documentation:
NVCaffe documentation - store_blobs_in_old_format
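As a sketch, the flag sits alongside the usual solver settings; the field name comes from the NVCaffe documentation referenced above, while the other fields here are placeholders (prototxt text format uses `field: value` syntax):

```prototxt
# solver.prototxt -- NVCaffe (placeholder values except the last field)
net: "train_val.prototxt"
max_iter: 10000
# Write blobs in the legacy BVLC layout so BVLC Caffe and OpenCV's dnn
# module can read the resulting .caffemodel
store_blobs_in_old_format: true
```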

deeplearning4j: cannot use an existing Word2Vec dutchembeddings

I tried to use the dutchembeddings in Word2Vec format with dl4j, but an exception is thrown when loadStaticModel is called: "Unable to guess input file format".
WordVectorSerializer.loadStaticModel(new File(WORD_VECTORS_PATH));
https://github.com/clips/dutchembeddings (I downloaded the wikipedia 160 tar.gz)
How can I get the dutchembeddings in Word2Vec format working with dl4j?
Stacktrace
Loading word vectors and creating DataSetIterators
o.d.m.e.l.WordVectorSerializer - Trying DL4j format...
o.d.m.e.l.WordVectorSerializer - Trying CSVReader...
o.d.m.e.l.WordVectorSerializer - Trying BinaryReader...
Exception in thread "main" java.lang.RuntimeException: Unable to guess input file format
at org.deeplearning4j.models.embeddings.loader.WordVectorSerializer.loadStaticModel(WordVectorSerializer.java:2646)
at org.deeplearning4j.examples.convolution.sentenceclassification.CnnDutchSentenceClassification.main(CnnDutchSentenceClassification.java:122)
Process finished with exit code 1

fasttext assertion "counts.size() == osz_" failed

I am trying to use fasttext for text classification and I am training on a corpus of 850MB of texts on Windows, but I keep getting the following error:
assertion "counts.size() == osz_" failed: file "src/model.cc", line 206, function: void fasttext::Model::setTargetCounts(const std::vector<long int>&) Aborted (core dumped)
I checked the values of counts.size() and osz_ and found counts.size() = 2515626 and osz_ = 300. When I call in.good() on the input stream in FastText::loadModel I get 0, in.fail() = 1 and in.eof() = 1.
I am using the following commands to train and test my model:
./fasttext supervised -input fasttextinput -output fasttextmodel -dim 300 -epoch 5 -minCount 5 -wordNgrams 2
./fasttext test fasttextmodel.bin fasttextinput
My input data is properly formatted according to the fastText GitHub page, so I am wondering whether this is a mistake on my part or a bug.
Thanks for any support on this!
To close this thread: as @Sixhobbits pointed out, the error was related to https://github.com/facebookresearch/fastText/issues/73 (running out of disk space when saving the fastText supervised model).

error retrieving background image from BackgroundSubtractorMOG2

I'm trying to get the background image from BackgroundSubtractorMOG2:
bg->getBackgroundImage(back);
but I get a Thread 1 SIGABRT (which, as a C++ n00b, puzzles me)
and this error:
OpenCV Error: Assertion failed (nchannels == 3) in getBackgroundImage, file /Users/hm/Downloads/OpenCV-2.4.4/modules/video/src/bgfg_gaussmix2.cpp, line 579
libc++abi.dylib: terminate called throwing an exception
(lldb)
I'm not sure what the problem is; I suspect it's something to do with the nmixtures parameter, but I've left that at the default (3). Any hints?
It looks like you need to use 3-channel images rather than grayscale. Make sure the image type you are using is CV_8UC3, or, if you are reading from a file, use cv::imread("path/to/file") with no extra arguments, since color is the default.