I tried to run the Fisherfaces algorithm provided by the OpenCV community:
http://docs.opencv.org/modules/contrib/doc/facerec/facerec_tutorial.html#tp91
But it produces the following error.
OpenCV Error: Image step is wrong (The matrix is not continuous, thus its number
of rows can not be changed) in unknown function, file
......\src\opencv\modules\core\src\matrix.cpp, line 802
I found out that the error is caused by:
Ptr<FaceRecognizer> model = createFisherFaceRecognizer();
model->train(images, labels); // Error occurs when I call this method
The images I am feeding in are all the same size. The following links say it could be due to release libraries:
OpenCV 2.0 C++ API using imshow: returns unhandled exception and "bad-flag"
Getting OpenCV Error "Image step is wrong" in Fisherfaces.train() method
But I removed all the release libraries from the project and am using debug mode in Visual Studio 2010. I am using OpenCV 2.4.5, and I still could not get past the error.
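For reference, a minimal sanity check on the training data may help (a sketch only, assuming images is the std::vector<cv::Mat> and labels the std::vector<int> passed to train()): Fisherfaces reshapes every training image into a single row, which fails for ROIs or otherwise non-continuous matrices, and for images whose size or type differ.

#include <opencv2/core/core.hpp>
#include <vector>

// Sketch: normalize the training set before calling FaceRecognizer::train().
// Cloning forces the pixel data to be continuous (ROIs are not), and the
// asserts catch size/type mismatches that also trigger the reshape error.
static void sanitizeTrainingSet(std::vector<cv::Mat>& images)
{
    for (size_t i = 0; i < images.size(); ++i) {
        if (!images[i].isContinuous())
            images[i] = images[i].clone();                // make data continuous
        CV_Assert(images[i].size() == images[0].size());  // same size as first sample
        CV_Assert(images[i].type() == images[0].type());  // same type, e.g. CV_8UC1
    }
}

Calling sanitizeTrainingSet(images) right before model->train(images, labels) should either fix the reshape failure or make the offending sample obvious.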
Please help.
Thank you.
Once more, I want to ask for your help after long and unfruitful research.
I have developed an app using OpenCV which works very well in Visual Studio in both debug and release mode.
I tried to deploy it, but I get a fatal error when I start the .exe. By using a log file and doing tests, I figured out the problem is in the function cv::resize in the following code:
Mat logo = imread("lena.png");
cv::resize(logo, logo, Size(), 0.55, 0.55, INTER_CUBIC);
I find it very weird that I can call cv::imread (and cv::Mat) without an exception being thrown, but not resize.
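One thing that may be worth ruling out on the deployed machine (a hedged sketch, assuming lena.png is expected to sit next to the .exe): imread does not throw when the file is missing, it just returns an empty Mat, and the first function that actually touches the pixel data, here resize, is where the failure surfaces.

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

int main()
{
    cv::Mat logo = cv::imread("lena.png");
    if (logo.empty()) {                      // imread fails silently, no exception
        std::cerr << "lena.png was not found next to the executable" << std::endl;
        return 1;
    }
    cv::resize(logo, logo, cv::Size(), 0.55, 0.55, cv::INTER_CUBIC);
    std::cout << "resized to " << logo.cols << "x" << logo.rows << std::endl;
    return 0;
}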
If I try to run the .exe created by the release-mode compilation, I get the following error code: 0xc000007b.
I read it could be a compatibility problem between 32-bit and 64-bit DLLs, but I have not been able to solve it. I am trying to build a 64-bit application. The path for the OpenCV lib is defined like this: C:\opencv\build\x64\vc14\lib
Thank you for your help,
Valentin B
I've imported the opencv and opencv_contrib frameworks into an Xcode project, and in my Objective-C++ file, I load a face classifier and then a Fisher face recognizer:
// set up classifier, recognizer, and webcam
-(void) setupAnalyzer:(NSString *)faceCascadeName :(NSString *)fisherDatasetName
{
    // load face classifier
    cout << "loading face classifier..." << endl;
    String faceCascadeNameString = string([faceCascadeName UTF8String]);
    faceCascade.load(faceCascadeNameString);

    // load face recognizer
    cout << "loading face recognizer..." << endl;
    fishface = createFisherFaceRecognizer();
    String fisherDatasetNameString = string([fisherDatasetName UTF8String]);
    fishface->load(fisherDatasetNameString);
}
When I call this function from Swift, it seems the face classifier loads just fine with an xml file I have in my project. But when I try to load the Fisher face recognizer using another xml file in my project, Xcode shows this error:
OpenCV Error: Unspecified error (File can’t be opened for reading!) in load, file ~/opencv/modules/face/src/facerec.cpp, line 61
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: ~/opencv/modules/face/src/facerec.cpp:61: error: (-2) File can’t be opened for reading! in function load
I've tried rebuilding the OpenCV frameworks in different ways and I keep getting the same error!
First of all, I'm confused about why the program is referring to source code that isn't contained in the project (it's looking for facerec.cpp in another directory on my computer). Also, why does the cascade classifier load just fine? This makes me think it's an issue with the way I built the opencv_contrib modules, because the face recognizer comes from opencv_contrib. But I tried rebuilding opencv_contrib, and I still get this OpenCV error.
Any help would be greatly appreciated!
[UPDATE]
It is not an issue with building the contrib module. I manually included the module in Xcode, so it now looks within the project for facerec.cpp, but it still can't open the xml file for reading.
First of all, I'm confused about why the program is referring to source code that isn't contained in the project (it's looking for facerec.cpp in another directory on my computer).
It doesn't search for that file. It shows you where the error happened. Since your library was built on your machine and has debug information, it can point you to the source file and line number where the error occurred.
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: ~/opencv/modules/face/src/facerec.cpp:61: error: (-2) File can’t be opened for reading! in function load
This message means that the exception was generated at line 61 of facerec.cpp. You need to check whether your data file is actually available for reading.
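For example, a quick check from the C++ side (a sketch; the path variable name matches the setupAnalyzer method above) would be:

#include <fstream>
#include <iostream>
#include <string>

// Sketch: verify the path actually resolves at runtime before calling load().
// On iOS a bare file name usually does not; the file has to be located
// inside the app bundle first.
static bool isReadable(const std::string& path)
{
    std::ifstream f(path.c_str());
    if (!f.good())
        std::cerr << "cannot open for reading: " << path << std::endl;
    return f.good();
}

Calling isReadable(fisherDatasetNameString) right before fishface->load(...) tells you whether the problem is the path or the file contents.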
Figured it out! This answer helped me realize that I needed to get the app bundle's path to the xml files: OpenCV Cascade Classifier load error iOS
I have been using dlib object detection successfully on Mac. Now I want to use it in an iOS app. However, after spending countless hours, the dlib object detector always returns zero rectangles.
ifstream fin(dataDir + "/object_detector.svm", ios::binary);
typedef dlib::scan_fhog_pyramid<dlib::pyramid_down<6> > image_scanner_type;
dlib::object_detector<image_scanner_type> detector;
dlib::deserialize(detector, fin);
vector<dlib::rectangle> dets = detector(dlibImage);
To make sure it's not due to a different image, I am using the exact same image for which the detector returns 1 hit on Mac. I have also printed uchar values from part of the image on both Mac and iOS, and they are the same, so the image data is exactly the same.
Probably the dlib library is not built correctly for iOS. I have tried multiple approaches. From the /example/build dir, the commands below were invoked.
cmake -G Xcode ..
cmake --build . --config Release
It generated a dlib.xcodeproj project in the dlib_build dir. I opened the project in Xcode, changed the architecture to iOS (armv7, arm64) and rebuilt the library. This library was linked into my project. I got zero results with this approach. dlib was built in debug mode; I did not get any assertion errors.
The second approach I tried was to use dlib/all/source.cpp in my project. I used all the preprocessor flags that are used by cmake or the dlib.xcodeproj project. No errors, but still no matches.
I have compared the build settings of my Xcode project with the examples.xcodeproj generated by cmake and they are the same. I also checked the Xcode project from https://github.com/zweigraf/face-landmarking-ios, but no help.
The strange thing is that the detector takes a couple of seconds to process and comes back with zero matches, so it is doing something. I wish there were debug logging that I could turn on for the detector.
I am out of ideas. I would appreciate it if anyone can help. dlib is a wonderful library; I just wish it were easier to work with on iOS.
dlib is working fine on iOS too. I'm kicking myself for it, but I mixed up the detector instances. The detector on which I called the line below was not the one used for object detection.
dlib::deserialize(detector, fin);
I was just using an empty detector instance, and it was returning 0 detections. By an empty detector, I mean it was defined, but the deserialize method was never invoked on it. It would have been nice if dlib returned an error or warning when a detector that has not been loaded with the object_detector.svm file is used for detection.
I have observed the same behavior with the shape predictor too: if sp.dat is not loaded, it silently reports 0 parts detected. I am posting this as an answer in case someone else makes the same silly mistake.
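For anyone hitting the same silent failure, here is a minimal sketch of the intended flow (file and image names are placeholders): deserialize into the same detector instance that is later called, and fail loudly if the model file cannot be opened, since a default-constructed detector simply returns zero rectangles.

#include <fstream>
#include <iostream>
#include <vector>
#include <dlib/image_io.h>
#include <dlib/image_processing.h>
#include <dlib/image_processing/scan_fhog_pyramid.h>

typedef dlib::scan_fhog_pyramid<dlib::pyramid_down<6> > image_scanner_type;

int main()
{
    std::ifstream fin("object_detector.svm", std::ios::binary);
    if (!fin) {                                   // otherwise the detector stays empty
        std::cerr << "object_detector.svm could not be opened" << std::endl;
        return 1;
    }

    dlib::object_detector<image_scanner_type> detector;
    dlib::deserialize(detector, fin);             // load weights into THIS instance

    dlib::array2d<unsigned char> img;
    dlib::load_image(img, "test.jpg");            // placeholder test image

    std::vector<dlib::rectangle> dets = detector(img);
    std::cout << "detections: " << dets.size() << std::endl;
    return 0;
}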
I am trying to run an OpenCV with DirectX example, d3dsample.cpp, from [here]. But it crashes on
initializeContextFromD3D11Device
The error is "Access violation executing ..." (it looks like it is not possible to initialize the context from D3D to OpenCL). I have no idea how to solve this problem.
I found a possible duplicate on the OpenCV question board [here], but there has been no progress on it over the last year.
If someone has succeeded in building this sample code, please give a suggestion.
PS: I am using OpenCV 3.1 with Visual Studio 2013 (vc120) and an Nvidia GTX 980 graphics card.
Update:
I am trying to debug with both d3d10_interop and d3d11_interop.
The d3d10_interop gave me an exception:
C:\OpenCV\3.1\sources\modules\core\src\directx.cpp:449: error: (-222) OpenCL: Can't create context for DirectX interop in function cv::directx::ocl::initializeContextFromD3D10Device
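In case it helps with debugging, here is a small diagnostic (a sketch, independent of the D3D side) that prints which OpenCL device and extensions OpenCV actually sees; the interop initializers require an OpenCL platform that exposes the D3D10/D3D11 sharing extensions.

#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>
#include <iostream>

int main()
{
    if (!cv::ocl::haveOpenCL()) {
        std::cout << "This OpenCV build has no OpenCL support" << std::endl;
        return 1;
    }
    cv::ocl::Device dev = cv::ocl::Device::getDefault();
    std::cout << "OpenCL device: " << dev.name() << std::endl;
    // look for cl_khr_d3d10_sharing / cl_khr_d3d11_sharing (or the vendor variants)
    std::cout << "Extensions:    " << dev.extensions() << std::endl;
    return 0;
}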
I'm learning OpenCV and I have tried running this sample, which comes from the official OpenCV samples. The sample uses SURF to find a known object. I created a VS2010 project and added the following libraries to the project:
opencv_core231.lib opencv_highgui231.lib opencv_features2d231.lib
opencv_video231.lib
I can compile the project successfully. However, when I run it I receive the following error:
Expression: vector subscript out of range
I debugged the program and found out the error occurs on line 60: double dist = matches[i].distance. I don't understand why I am getting the error. Can anyone help me correct this?
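In that sample, the loop that reads matches[i].distance iterates over the number of descriptors rather than over matches.size(), so an empty or shorter match vector (typically because an input image failed to load, leaving its descriptor matrix empty) trips the debug-runtime subscript check. Below is a hedged, trimmed-down version of that part (image names are placeholders, and the SURF headers/libraries vary slightly between 2.3.1 and 2.4.x) that guards the indexing.

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat img_object = cv::imread("object.png", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat img_scene  = cv::imread("scene.png",  CV_LOAD_IMAGE_GRAYSCALE);
    if (img_object.empty() || img_scene.empty()) {
        std::cerr << "could not load the input images" << std::endl;  // common root cause
        return 1;
    }

    cv::SurfFeatureDetector detector(400);
    std::vector<cv::KeyPoint> kp_object, kp_scene;
    detector.detect(img_object, kp_object);
    detector.detect(img_scene, kp_scene);

    cv::SurfDescriptorExtractor extractor;
    cv::Mat desc_object, desc_scene;
    extractor.compute(img_object, kp_object, desc_object);
    extractor.compute(img_scene, kp_scene, desc_scene);

    std::vector<cv::DMatch> matches;
    cv::FlannBasedMatcher matcher;
    if (!desc_object.empty() && !desc_scene.empty())
        matcher.match(desc_object, desc_scene, matches);

    if (matches.empty()) {
        std::cerr << "no matches; nothing to index" << std::endl;
        return 1;
    }

    double min_dist = 100.0, max_dist = 0.0;
    for (size_t i = 0; i < matches.size(); ++i) {   // index stays inside the vector
        double dist = matches[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }
    std::cout << "min dist: " << min_dist << ", max dist: " << max_dist << std::endl;
    return 0;
}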