I'm working on an application that requires OpenCV object detection using a Haar cascade classifier.
I'm using OpenCV 2.3.1 with VS2010 on a 64-bit Windows machine.
I compiled and built OpenCV myself and didn't use any pre-compiled binaries.
First, I wanted to start meddling with the example facedetect.cpp that's included in OpenCV.
I built it with no errors, but when I run it, it won't open the cascade classifier XML file (the CascadeClassifier::load() function returns false). I didn't change anything in the sample source code.
I'm using the xml file that is distributed with OpenCV so the problem isn't with the xml file.
I also made sure that the application can access and read the file using a simple fopen.
I believe (but am not sure) that the problem is that the cascade classifier is of an "old" type.
But the OpenCV documentation specifically says that the new CascadeClassifier object can open both "old" and "new" cascade classifiers.
Here's a link: http://opencv.itseez.com/modules/objdetect/doc/cascade_classification.html#cascadeclassifier-load
I even tried the pre-compiled OpenCV 2.2 binary, and it works perfectly with that XML file. Then I tried to compile the 2.2 sample source code, and again it couldn't load the XML.
I'm aware that I can try using the old object CvHaarClassifierCascade, but I prefer to use the latest version of OpenCV and its objects.
Does anyone have a clue what I am doing wrong?
Give the complete path of the XML file:
String face = "c:/data/xml/haarcascade_frontalface_alt.xml";
It should work!
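As a minimal sketch (assuming an OpenCV 2.x build is linked, and with the path above as an example location you would adjust to your machine):

```cpp
// Sketch only: assumes OpenCV 2.x headers/libs are set up, and that the
// cascade XML actually lives at this example path.
#include <opencv2/objdetect/objdetect.hpp>
#include <iostream>
#include <string>

int main() {
    cv::CascadeClassifier cascade;
    const std::string face = "c:/data/xml/haarcascade_frontalface_alt.xml";
    if (!cascade.load(face)) {
        std::cerr << "Could not load cascade: " << face << std::endl;
        return 1;
    }
    std::cout << "Cascade loaded OK" << std::endl;
    return 0;
}
```

Checking the return value of load() this way at least tells you whether the problem is the path or the file contents.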
I had the same situation. I solved it when I realized that I was linking release libs in the Debug configuration. Changing opencv_*231.lib to opencv_*231d.lib solved the problem.
CascadeClassifier::load is not the only function causing such troubles, see this thread for details: OpenCV imread(filename) fails in debug mode when using release libraries.
I got this working using Notepad++. I converted all the relevant XML files to ANSI, deleted the first line (<?xml version="1.0"?>), and then retyped it by hand.
If you are using Windows, check the path. The concerns are the escape sequences in the path, and forward versus backward slashes depending on the operating system.
It should look like C:\\Ceemple\\data\\haarcascades\\haarcascade_frontalface_alt.xml.
(by the way I am using Ceemple IDE)
I know there are ways of using TensorFlow in C++, and there is even documentation for it, but I can't seem to get the library itself. I've checked the build-from-source instructions, but they seem to build a pip package rather than a library I can link into my project. I also found a tutorial, but when I tried it I ran out of memory and my computer crashed. My question is: how can I actually get the C++ library to work in my project? My requirements are that I have to work on Windows with Visual Studio in C++. What I would love is a pre-compiled DLL that I could just link, but I haven't found such a thing, and I'm open to other alternatives.
I can't comment so I am writing this as an answer.
If you don't mind using Keras, you could use the package frugally-deep. I haven't seen a library myself either, but I came across frugally-deep and it seemed easy to implement. I am currently trying to use it, so I cannot guarantee it will work.
You could check out neural2D from here:
https://github.com/davidrmiller/neural2d
It is a neural network implementation without any dependent libraries (all written from scratch).
I would say that the best option is to use cppflow, a simple wrapper that I created to use TensorFlow from C++ easily.
You won't need to install anything; just download the TF C API and place it somewhere on your computer. You can take a look at the docs to see how to set it up and how to use the library.
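For a rough idea of what the cppflow (v2) calling code looks like (the model path and tensor shape here are placeholders; check the cppflow docs for the exact API):

```cpp
// Sketch only: assumes the TF C API and cppflow are set up, and that
// "model" is a SavedModel directory; the shape and value are made up.
#include "cppflow/cppflow.h"
#include <iostream>

int main() {
    cppflow::model model("model");                          // load a SavedModel
    auto input = cppflow::fill({1, 224, 224, 3}, 1.0f);     // dummy input tensor
    auto output = model(input);                             // run inference
    std::cout << output << std::endl;                       // print result tensor
    return 0;
}
```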
The answer seems to be that it is hard :-(
Try this to start. You can follow the latest instructions for building from source on Windows up to the point of building the pip package. But don't build the pip package; run these instead:
bazel build --config=opt //tensorflow:tensorflow.dll
bazel build --config=opt //tensorflow:tensorflow.lib
bazel build --config=opt //tensorflow:install_headers
That much seems to work fine. The problems really start when you try to use any of the header files; you will probably get compilation errors, at least with TF version >= 2.0. I have tried:
Build the label_image example (instructions in the readme.md file)
It builds and runs fine on Windows, meaning all the headers and source are there somewhere
Try incorporating that source into a Windows console executable: runs into compiler errors due to conflicts with std::min and std::max, probably caused by the Windows SDK.
Include c_api.h in a Windows console application: won't compile.
Include TF-Lite header files: won't compile.
There is little point investing the lengthy compile time in the first two bazel commands if you can't get the headers to compile :-(
You may have time to invest in resolving these errors; I don't. At this stage Tensorflow lacks sufficient support for Windows C++ to rely on it, particularly in a commercial setting. I suggest exploring these options instead:
If TF-Lite is an option, watch this
Windows ML/Direct ML (requires conversion of TF models to ONNX format)
CPPFlow
Frugally Deep
Keras2CPP
UPDATE: having explored the list above, I eventually found the following worked best in my context (real-time continuous item recognition):
convert models to ONNX format (use tf2onnx or keras2onnx)
use Microsoft's ONNX runtime
Even though Microsoft recommends using DirectML where milliseconds matter, the performance of ONNX runtime using DirectML as an execution provider means we can run a 224x224 RGB image through our Intel GPU in around 20 ms, which is quick enough for us. But it was still hard finding our way to this answer.
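The conversion step can be sketched like this (the SavedModel directory and output filename are placeholders):

```shell
# Sketch: convert a TensorFlow SavedModel to ONNX; paths are placeholders.
pip install tf2onnx onnxruntime
python -m tf2onnx.convert --saved-model ./saved_model_dir --output model.onnx
```

The resulting model.onnx is what you then load with the ONNX runtime from C++.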
I'm new to VTK, and I've successfully built VTK 8.1.1 from source, using CMake and Visual Studio 2017, with the default options and examples.
I've already solved an issue with the Infovis folder examples.
Now, I'm trying to run the examples from the Modelling folder:
The problem is that when I try to run these examples, it opens a window that closes so fast I can't even see what it says, so I have no clue about the error.
The Delaunay3D.cxx file begins with these comments:
// Delaunay3D
// Usage: Delaunay3D InputFile(.vtp) OutputFile(.vtu)
// where
//   InputFile is an XML PolyData file with extension .vtp
//   OutputFile is an XML Unstructured Grid file with extension .vtu
So it looks like I need external data files, and the same is true for the other examples. But, where do I get these files, and where do I place them?
Some of the examples in the source files are not complete: as you found out, some of them require external input files which may be missing, or have mistakes in CMakeLists.txt, etc. In the parent folder of the folder in your screenshot (the Modelling directory) there is also a folder of Python examples. In that folder there is a Delaunay3D.py file which creates random points as input instead of reading them from a file, so you can do the same. The names and signatures of the functions are the same in Python and C++, so you can port the approach by modifying the Delaunay3D.cxx code or adding some code to TestDelaunay3D.cxx. But there is no such file for the finance example, unfortunately.
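Porting the Python example's idea to C++ looks roughly like this (a sketch assuming VTK 8.x; the point count is arbitrary):

```cpp
// Sketch, following Delaunay3D.py: generate random points instead of
// reading a .vtp file, then triangulate them. Assumes VTK 8.x is linked.
#include <vtkSmartPointer.h>
#include <vtkPoints.h>
#include <vtkPolyData.h>
#include <vtkDelaunay3D.h>
#include <vtkMath.h>

int main() {
    vtkSmartPointer<vtkPoints> points = vtkSmartPointer<vtkPoints>::New();
    for (int i = 0; i < 25; ++i) {
        points->InsertNextPoint(vtkMath::Random(0.0, 1.0),
                                vtkMath::Random(0.0, 1.0),
                                vtkMath::Random(0.0, 1.0));
    }

    vtkSmartPointer<vtkPolyData> polyData = vtkSmartPointer<vtkPolyData>::New();
    polyData->SetPoints(points);

    vtkSmartPointer<vtkDelaunay3D> delaunay = vtkSmartPointer<vtkDelaunay3D>::New();
    delaunay->SetInputData(polyData);
    delaunay->Update();  // delaunay->GetOutput() is a vtkUnstructuredGrid
    return 0;
}
```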
I find it useful to use VTK code along with Paraview. Paraview is built on top of VTK. It has most of the VTK filters available through the GUI. In Paraview you can also create some data and save it to file using File->Save Data. You can then use that as input for the examples. Once you become familiar with VTK file types and VTK sources, generating data does not require a lot of code. So you can do it yourself by modifying any of the example code (like it is done in the Delaunay3D.py).
About where to place the input files: in this particular case you can place them anywhere, but when you run the executable you must enter the path of the input file correctly on the command line.
Updates based on comments:
The Python wrappers provide almost all of the features available in the C++ version; the exceptions are noted here. If you decide to use VTK Python, then a good resource to read is the VTK NumPy interface.
Paraview implements the majority of VTK filters and sources, so it can do a lot of creation and modification of geometries. In addition, you can use programmable filters and sources for things that are not available through the GUI. In a programmable filter you can write any Python script that imports vtk and uses all its functionality.
But if for your use case you only need a subset of the functionality Paraview provides then you may want to write your own GUI.
OpenCV 2.4.3 / Xcode 4.5.2 / Mac OS X 10.8.2
I am trying to get openCV working with iOS. I am attempting to use the prebuilt 2.4.3 framework from openCV.org. However I am getting the following Xcode project build errors, which suggest the compiler doesn't know it is dealing with C++, e.g.
#include <list> !'list' file not found
namespace cv !unknown type name 'namespace'
This only seems to concern the following header files:
"opencv2/nonfree/features2d.hpp"
"opencv2/nonfree/nonfree.hpp"
"opencv2/video/video.hpp"
If I don't include these three files in opencv.hpp (or anywhere else) I seem to be able to compile and use openCV OK. The trouble is, I do need the nonfree files, as I am experimenting with SURF, which has recently been moved to nonfree.
This is really a twofold question (sorry ;-)
how do I convince the compiler that these are c++ headers?
which headers exactly do I need to use SURF?
update
I have cloned the openCV git repository and built a new framework from it. This approach had not worked previously, but today I realised that I was not using the current version of CMake. I had been using CMake 2.8.2, which fails to build openCV for iOS. The current version, CMake 2.8.10, builds it without any issues (that's an object lesson in obeying the docs, which do say CMake v2.8.8 minimum is required).
Now when I add this current build of the opencv framework in an Xcode project I can include features2d and nonfree and build smoothly. The only problem remains with one header: video/background_segm.hpp, which still yields:
#include <list> !'list' file not found
If I comment that line out I get an error on the next line:
namespace cv !unknown type name 'namespace'
It seems clear that the compiler doesn't recognise this as a C++ header, even though it is suffixed .hpp.
In opencv2/video/video.hpp if I remove
#include "opencv2/video/background_segm.hpp"
I can build with video.hpp also (although I guess it would be unusable in practice).
Unfortunately I still can't get SURF to work. When I run the project it crashes with this error:
OpenCV Error: The function/feature is not implemented (OpenCV was built without SURF support)
This is triggered in legacy/features2d.cpp:
Ptr<Feature2D> surf = Algorithm::create<Feature2D>("Feature2D.SURF");
if( surf.empty() )
CV_Error(CV_StsNotImplemented, "OpenCV was built without SURF support");
The questions remain...
how do I convince the compiler that background_segm.hpp is a legit c++ header?
how do I enable SURF support?
I have everything working now. After having no joy with the pre-built iOS library available from openCV.org this is what I did...
Compile openCV for iOS from a clone of the GitHub repository: run build_framework.py (in the ios folder of the distribution), pointing to an output directory of your choosing. Be sure to have an up-to-date copy of CMake or you will trip up like I did.
Your output folder will end up with two subfolders, build and opencv2.framework. Drag the latter into your Xcode project
Add the following line in the project-Prefix.pch file
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif
(should go above the #ifdef __OBJC__ line)
That is sufficient to get most of openCV working. However it is a very good idea to avoid "objective-C++" (mixing your c++ code in the same files as your objective-C). To manage this you create a thin "wrapper" object (which will be obj-C++) to mediate between your obj-C classes and c++ code. The wrapper essentially has only two roles: to translate data formats (eg UIImage <-> cv::Mat), and to translate between obj-C methods and C++ function calls. See my answer to this question for details (and a github-hosted example project)
To get SURF (and SIFT) working requires a couple of additional steps, as SURF is somewhat deprecated due to licensing issues (it's been moved into nonfree which does not load automatically).
These includes need to be added in files where you are using SURF
#include <opencv2/nonfree/nonfree.hpp>
#include <opencv2/legacy/compat.hpp>
The code I am working with uses the C interfaces for SURF (eg cvExtractSURF), so we also need to add this line before calling these functions:
cv::initModule_nonfree();
The other part of my question, how to force Xcode to compile as C++, was a bit of a red herring (there must have been some compatibility issue with the openCV build I was using), and is no longer required for this solution. However, the answer is first to rename your .m files to .mm (for Objective-C++) or .cpp (for pure C++); if that doesn't work, you can force the issue in the file inspector by changing 'file type'.
update
You also need to take care that the C++ standard library is set correctly in any project that uses the openCV framework. Older versions of openCV (up to 2.4.2) want libstdc++; newer ones (2.4.3+) expect libc++. Details here:
https://stackoverflow.com/a/14186883/1375695
update 2
openCV can now be installed with CocoaPods. To quote SebastienThiebaud:
OpenCV is available on Cocoapods. Add one line in your Podfile: pod 'OpenCV'. Pretty easy.
"Pretty easy" ... given all our previous hassles, could be the understatement of [last] year...
For your openCV implementation class, just rename the .m implementation file to .mm.
#import <UIKit/UIKit.h>
#import <opencv2/opencv.hpp>
@interface ViewController : UIViewController
@end
A .mm file is compiled as Objective-C++ in iOS, so it will not show these errors.
I am new to the Xcode environment, so apologies if this sounds trivial.
I am trying to use a third-party library for OpenCV that helps me do some blob analysis, CCL, etc., hosted here.
I have searched a lot, but there doesn't seem to be much documentation on how to go about adding and using these new libraries. I am using Xcode 4.5.2 and OpenCV 2.4.2.
When I simply add all the header files to the project folder and #include them in the source code, it fails to compile. I have also tried adding the "Header Paths" but it doesn't help. What am I missing?
I have tried to follow the instructions (compiling it using the terminal), but it doesn't compile either. I am also not clear on how or when exactly to use CMake.
Any help will be appreciated. Thank you.
I would suggest using the cvBlob hosted on Google Code, which is different from the one on willowgarage; I got confused by this recently, so take a look at this question for alternative blob analysis libraries.
Moreover, cvBlob also has a good support community here. (Search on "[cvblobslib]" or on "[blob] [opencv]".)
Try this: cvBlob: OSX installation
Once you get it compiled, you need to include the library under Link Binary with Libraries in Build Phases. (This screenshot shows the core, imgproc, and highgui libraries. Your cvBlob library would go in the same place.)
I'm trying to get my webcam to capture video in OpenCV, version 2.2, in Windows 7 64-bit. However, I'm having some difficulties. None of the sample binaries that come with OpenCV can detect my webcam. Recently I came across this posting, which suggested that the answer lies in recompiling opencv_highgui with HAVE_VIDEOINPUT and HAVE_DSHOW defined in the project's property pages.
Can't access webcam with OpenCV
However, I'm unsure how to do this procedurally. Can someone explain how to go about it? Thanks.
Roughly, these are the important steps:
Download the OpenCV 2.2 source code,
set up a project to compile it, according to the InstallGuide,
make any changes you need to make in the code,
build the opencv_highgui library (dll and lib files, probably), and
replace these in your original project.
If you can configure the project to generate the highgui files only (and not every library in OpenCV), do so, since the change you need to do shouldn't affect other modules. This saves some time.
The detailed instructions to build OpenCV are in: http://opencv.willowgarage.com/wiki/InstallGuide. You should follow this guide.
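As a sketch of the configure step (the exact option names may differ in OpenCV 2.2, so verify them in cmake-gui or the generated CMake cache before relying on this):

```shell
# Sketch: configure OpenCV with DirectShow/videoInput support enabled,
# then build only the highgui project in Visual Studio.
# WITH_VIDEOINPUT is the 2.2-era option name as far as I recall; verify it.
cmake -G "Visual Studio 10" -D WITH_VIDEOINPUT=ON path\to\OpenCV-2.2.0
```

After building, copy the resulting opencv_highgui DLL/lib over the ones your project links against, as described in the steps above.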