OpenCV - OutOfMemory with big dataset - C++

I am working with OpenCV 2.4 and SVM classification, and I need to load a big dataset (about 400 MB of data) in C++. I've been able to save this dataset to an XML file, but I am unable to load it afterwards. Indeed, I receive the following message:
OpenCV Error: Insufficient memory (Failed to allocate 408909812 bytes) in OutOfMemoryError, file (my opencv2.4 directory)modules\core\src\alloc.cpp, line 52 - error: (-4)
How can I increase the available memory (I have plenty of free RAM)?
Thanks a lot!
EDIT:
Here is the place where the problem appears. The code works when I load a smaller file:
std::cout << "ok 0" << std::endl;
// open the XML file and read the "Data" node into a cv::Mat
FileStorage XML_Data(Filename, FileStorage::READ);
XML_Data["Data"] >> m_Data_Matrix;
XML_Data.release();
std::cout << "ok 1" << std::endl;
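For reference, here is a minimal sketch of the same load wrapped in a try/catch (same Filename and m_Data_Matrix names as in the snippet above). OpenCV signals allocation failures by throwing cv::Exception, so this at least turns the hard abort into a printable error:

#include <opencv2/core/core.hpp>
#include <iostream>
#include <string>

int main()
{
    std::string Filename = "dataset.xml"; // path to the saved dataset
    cv::Mat m_Data_Matrix;
    try
    {
        cv::FileStorage XML_Data(Filename, cv::FileStorage::READ);
        XML_Data["Data"] >> m_Data_Matrix; // throws cv::Exception when allocation fails
        XML_Data.release();
    }
    catch (const cv::Exception& e)
    {
        std::cerr << "Failed to load dataset: " << e.what() << std::endl;
        return 1;
    }
    std::cout << "Loaded a " << m_Data_Matrix.rows << " x " << m_Data_Matrix.cols << " matrix" << std::endl;
    return 0;
}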
EDIT 2:
Problem solved: the solution was to compile my application and OpenCV 2.4.5 as 64-bit. I installed a 64-bit version of MinGW, built OpenCV with this new compiler (using CMake to configure), and then changed the compiler used by Code::Blocks.
You may find these links useful: http://forums.codeblocks.org/index.php?topic=13016.0 and http://www.drangon.org/mingw.
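As a quick sanity check that the new toolchain really produces 64-bit binaries (a 32-bit process has roughly 2 GB of usable address space, which is why a single 400 MB allocation can already fail there), a tiny sketch like this can help:

#include <iostream>

int main()
{
    // sizeof(void*) is 8 in a 64-bit build and 4 in a 32-bit build
    std::cout << "This is a " << sizeof(void*) * 8 << "-bit build" << std::endl;
    return 0;
}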

Related

"Bad allocation" in OpenCV in VS2017 Debug Mode

I'm currently working with C++/OpenCV in VS2017 and had no problems reading a video stream from a file (using VideoCapture).
However, I get the following error message when building in Debug mode:
warning: Error opening file (/build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:901)
warning: pΦ∩á╬ (/build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:902)
[ERROR:0] VIDEOIO(cvCreateCapture_MSMF(filename)): raised C++ exception:
bad allocation
I am wondering where the error might be coming from, as the program works perfectly fine in Release Mode.
I might add that the video files I am testing with are approximately 2.7 GB to 8.8 GB in size.
Is this an allocator issue inside the VS2017 debugger hitting the INT32_MAX limit of 2³¹ − 1 bytes (even though it is a 64-bit process)?
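For anyone trying to reproduce this, a minimal sketch that isolates the failing open/read calls (standard OpenCV VideoCapture API; the file name is a placeholder for one of the large test videos):

#include <opencv2/videoio.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap("test_video.mp4"); // placeholder path
    if (!cap.isOpened())
    {
        std::cerr << "VideoCapture could not open the file" << std::endl;
        return 1;
    }

    cv::Mat frame;
    long frames = 0;
    while (cap.read(frame)) // read() returns false once no more frames can be decoded
        ++frames;

    std::cout << "Read " << frames << " frames" << std::endl;
    return 0;
}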

Can't read saved TensorFlow model (failed to seek to header entry)

I am trying to read a SavedModel with the TensorFlow C++ API. The model was saved with TF Python code, and my model directory has the following structure:
saved_model.pb
variables
├── variables.data-00000-of-00001
└── variables.index
I managed to read it successfully in Ubuntu with the following line of code:
tensorflow::LoadSavedModel(sessOpt, runOpt, modelDir, {tensorflow::kSavedModelTagServe}, &model);
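For context, a slightly fuller sketch around that call, assuming the standard TensorFlow C++ SavedModel API (the model directory is a placeholder):

#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/cc/saved_model/tag_constants.h"
#include <iostream>
#include <string>

int main()
{
    tensorflow::SessionOptions sessOpt;
    tensorflow::RunOptions runOpt;
    tensorflow::SavedModelBundle model;

    // placeholder: the directory containing saved_model.pb and variables/
    const std::string modelDir = "/path/to/saved_model";
    tensorflow::Status status = tensorflow::LoadSavedModel(
        sessOpt, runOpt, modelDir, {tensorflow::kSavedModelTagServe}, &model);

    if (!status.ok())
    {
        std::cerr << "LoadSavedModel failed: " << status.ToString() << std::endl;
        return 1;
    }
    // model.session can now be used to run the graph
    return 0;
}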
However, when I build the same code for Windows, it fails to read the model. This is what TensorFlow outputs:
2017-07-25 16:16:15.112591: I C:\all\lib\serving\tensorflow\tensorflow\cc\saved_model\loader.cc:155]
Restoring SavedModel bundle.
2017-07-25 16:16:15.126391: W op_kernel.cc:1192]
Data loss: Unable to read file (C:/model/1/variables/variables.index).
Perhaps the file is corrupt or was produced by a newer version of TensorFlow with format changes (failed to seek to header entry): corrupted compressed block contents
2017-07-25 16:16:15.127325: W op_kernel.cc:1192]
Data loss: Unable to read file (C:/model/1/variables/variables.index).
Perhaps the file is corrupt or was produced by a newer version of TensorFlow with format changes (failed to seek to header entry): corrupted compressed block contents
...
Same lines over and over, 40 times in total
...
2017-07-25 16:16:15.162735: I C:\all\lib\serving\tensorflow\tensorflow\cc\saved_model\loader.cc:284] Loading SavedModel: fail. Took 80176 microseconds.
The version of TensorFlow is exactly the same, so that is not the issue. The errors occur in the constructor BundleReader::BundleReader, at the following line:
iter_->Seek(kHeaderEntryKey);
This is all part of the function that restores weights from the filesystem into the current session. TF basically runs the save/restore_all operation to load the weights. Interestingly, it does this on a thread pool, which on my machine has 12 threads. As a result, 12 threads access the variables.index file simultaneously, and I know that Windows does not like things like that.
I tried tuning the session options for the LoadSavedModel function:
sessionOpt.config.set_inter_op_parallelism_threads(1);
sessionOpt.config.set_intra_op_parallelism_threads(1);
sessionOpt.config.set_use_per_session_threads(1);
But unfortunately this does not seem to change anything.
Does anyone have any idea what else I can try? Should I file a bug report, or is there perhaps a problem with my code?
OK, I've found the culprit. It turns out it is not related to multithreading at all.
The CMake build scripts provided in tensorflow/contrib/cmake do not support the SNAPPY compression library, so the resulting application could not decompress my model. After I added the SNAPPY library to CMakeLists.txt, it started to work fine.
I'll most likely contribute the change soon so it can help others hitting the same issue.

OpenCV error when loading Fisher face recognizer in Xcode 8

I've imported the opencv and opencv_contrib frameworks into an Xcode project, and in my Objective-C++ file I load a face classifier and then a Fisher face recognizer:
// set up classifier, recognizer, and webcam
-(void) setupAnalyzer:(NSString *)faceCascadeName :(NSString *)fisherDatasetName
{
    // load face classifier
    cout << "loading face classifier..." << endl;
    String faceCascadeNameString = string([faceCascadeName UTF8String]);
    faceCascade.load(faceCascadeNameString);

    // load face recognizer
    cout << "loading face recognizer..." << endl;
    fishface = createFisherFaceRecognizer();
    String fisherDatasetNameString = string([fisherDatasetName UTF8String]);
    fishface->load(fisherDatasetNameString);
}
When I call this function from Swift, it seems the face classifier loads just fine with an xml file I have in my project. But when I try to load the Fisher face recognizer using another xml file in my project, Xcode shows this error:
OpenCV Error: Unspecified error (File can’t be opened for reading!) in load, file ~/opencv/modules/face/src/facerec.cpp, line 61
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: ~/opencv/modules/face/src/facerec.cpp:61: error: (-2) File can’t be opened for reading! in function load
I've tried rebuilding the OpenCV frameworks in different ways, and I keep getting the same error!
First of all, I'm confused about why the program is looking for source code that isn't contained in the project (it's looking for facerec.cpp in another directory on my computer). Also, why does the cascade classifier load just fine? This makes me think it's an issue with the way I built the opencv_contrib modules, because the face recognizer comes from opencv_contrib. But I tried rebuilding opencv_contrib, and I still get this OpenCV error.
Any help would be greatly appreciated!
[UPDATE]
It is not an issue with building the contrib module. I manually included the module in Xcode, so it's now looking within the project for facerec.cpp, but it still can't open the xml file for reading.
First of all, I'm confused about why the program is looking for source code that isn't contained in the project (it's looking for facerec.cpp in another directory on my computer).
It doesn't search for the file. It shows you where the error happened. Since your library was built on your machine and has debug information, it can point you to the source file and line number where the error occurred.
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: ~/opencv/modules/face/src/facerec.cpp:61: error: (-2) File can’t be opened for reading! in function load
This message means that the exception was thrown at line 61 of facerec.cpp. You need to check whether your data file is actually available for reading.
Figured it out! This answer helped me realize that I needed to get the app bundle's path to the xml files: OpenCV Cascade Classifier load error iOS
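For anyone landing here later, a minimal sketch of that fix: resolve the bundled resource's real on-disk path through CoreFoundation before calling load(). The helper name and the resource name are hypothetical; only CFBundleCopyResourceURL and friends are standard API.

#include <CoreFoundation/CoreFoundation.h>
#include <limits.h>
#include <string>

// Returns the absolute path of a bundled resource, or "" if it is missing.
std::string bundlePath(const char *name, const char *ext)
{
    CFStringRef cfName = CFStringCreateWithCString(NULL, name, kCFStringEncodingUTF8);
    CFStringRef cfExt  = CFStringCreateWithCString(NULL, ext,  kCFStringEncodingUTF8);
    CFURLRef url = CFBundleCopyResourceURL(CFBundleGetMainBundle(), cfName, cfExt, NULL);
    CFRelease(cfName);
    CFRelease(cfExt);

    char path[PATH_MAX] = {0};
    if (url)
    {
        CFURLGetFileSystemRepresentation(url, true, (UInt8 *)path, sizeof(path));
        CFRelease(url);
    }
    return std::string(path);
}

Something like fishface->load(bundlePath("fisher_faces", "xml")) then hands the recognizer a path it can actually open, instead of a bare file name that only resolves on the development machine.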

dlib object detection always return zero results on iOS

I have been using dlib object detection successfully on the Mac. Now I want to use it in an iOS app. However, after spending countless hours on it, the dlib object detector always returns zero rectangles.
// load the serialized detector and run it over the image
ifstream fin(dataDir + "/object_detector.svm", ios::binary);
typedef dlib::scan_fhog_pyramid<dlib::pyramid_down<6> > image_scanner_type;
dlib::object_detector<image_scanner_type> detector;
dlib::deserialize(detector, fin);
vector<dlib::rectangle> dets = detector(dlibImage);
To make sure it's not due to a different image, I am using the exact same image for which the detector returns 1 hit on the Mac. I have also printed uchar values from part of the image on both platforms, and they are the same. So the image data is identical.
Probably the dlib library is not built correctly for iOS. I have tried multiple approaches. From the /example/build dir, the commands below were invoked:
cmake -G Xcode ..
cmake --build . --config Release
This generated a dlib.xcodeproj project in the dlib_build dir. I opened the project in Xcode, changed the architecture to iOS (armv7, arm64), and rebuilt the library. This library was linked to my project. I got zero results with this approach. dlib was built in debug mode, and I did not get any assertion errors.
The second approach I tried was to use dlib/all/source.cpp in my project. I used all the preprocessor flags that are used by CMake or the dlib.xcodeproj project. No errors, but still no matches.
I have compared the build settings of my Xcode project with the examples.xcodeproj generated by CMake, and they are the same. I also checked the Xcode project from https://github.com/zweigraf/face-landmarking-ios, but no help.
The strange thing is that the detector takes a couple of seconds to process and then comes back with zero matches, so it is doing something. I wish there were debug logging I could turn on for the detector.
I am out of ideas and will appreciate any help. dlib is a wonderful library; I just wish it were easier to work with on iOS.
dlib is working fine on iOS too. I'm kicking myself for it, but I mixed up the detector instances. The detector on which I called the line below was not the one used for object detection.
dlib::deserialize(detector, fin);
I was actually running detection with an empty detector instance, and it was returning 0 detections. By empty detector, I mean it was defined but the deserialize method was never invoked on it. It would have been nice if dlib returned an error or warning when a detector that was never loaded from the object_detector.svm file is used for detection.
I have observed the same behavior with the shape predictor too: if sp.dat is not loaded, it silently reports 0 parts detected. Posting this as an answer in case someone else makes the same silly mistake.
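A minimal sketch of the guard that would have caught this, using the same scanner type as the snippet above (the helper name is hypothetical): it refuses to return a detector unless the file opened and deserialized cleanly, so a never-loaded instance cannot silently reach the detection call.

#include <dlib/image_processing.h>
#include <dlib/image_processing/scan_fhog_pyramid.h>
#include <fstream>
#include <stdexcept>
#include <string>

typedef dlib::scan_fhog_pyramid<dlib::pyramid_down<6> > image_scanner_type;

// Load an object detector from disk or fail loudly.
dlib::object_detector<image_scanner_type> loadDetector(const std::string &path)
{
    std::ifstream fin(path.c_str(), std::ios::binary);
    if (!fin)
        throw std::runtime_error("cannot open " + path);

    dlib::object_detector<image_scanner_type> detector;
    dlib::deserialize(detector, fin); // throws dlib::serialization_error on bad data
    return detector;
}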

How to write a large number of multipage TIFF files?

Hi everybody, from a beginner in Python. I am trying to convert a huge file of raw video data into multiple multipage TIFF files using the freeimage.write_multipage() function of the freeimage package from the Mahotas library (Python 2.7). Unfortunately, this "very easy to use" function does not seem to release memory while the script runs. So my script works fine for small input files (less than 1 GB) but crashes with bigger ones (a 3 GB input file crashes on Windows XP Pro 32-bit with 3.2 GB of RAM). My goal is to convert input files of up to 1.5 TB.
While the script runs, the Windows Task Manager shows RAM usage growing, output file after output file, until the crash, which releases all the used RAM. An extract of the reported error is: "... RuntimeError : mahotas.freeimage: FreeImage error: Memory allocation failed..."
On Stack Overflow I have seen various suggestions for building multipage TIFF files with ImageMagick or IrfanView scripts, but I think that is impossible for my needs (I have thousands of pictures to assemble).
Thank you for any help.