I am working on a hand gesture recognition project using OpenCV and C++. After feature extraction I need to do training and testing, and for that I downloaded CRF++ (Yet Another CRF Toolkit). I am currently using version 0.54 and ran the command crf_learn -a MIRA Templatefile Trainfile Model_crf.
I have prepared both the template and the train file in .csv format.
But it shows the following error:
MIRA doesn't support multi-thrading. use thread_num=1.
Waiting for a solution.
You could try a newer version of CRF++, since version 0.55 includes a fix for multithreading support. The latest version available on the website is 0.58, by the way.
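If upgrading is not an option, it should also be enough to force single-threaded training explicitly, for example crf_learn -a MIRA -p 1 Templatefile Trainfile Model_crf, assuming your build exposes the thread count through the -p option as the documented releases do.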
I have recently found YOLO implementations in PyTorch (e.g. https://github.com/ultralytics/yolov3). What I would like to know is whether this is really the same (in terms of model accuracy, speed and so on) as the one with the Darknet backbone.
I am asking because it is way easier with PyTorch (as I am struggling with installing Darknet on Windows).
Kind regards,
Can
Follow these steps to install the Darknet framework on Windows 10.
I recommend cloning Darknet from the AlexeyAB repository (https://github.com/AlexeyAB/darknet), since it works great on Windows 10 and has a lot of community support.
It now also has a Python wrapper, so you can use it from Python.
Clone the darknet repository.
Install vcpkg (https://github.com/microsoft/vcpkg).
Install Visual Studio 2017.
Install CUDA and cuDNN.
Add CUDNN to the system environment variables: variable name = 'CUDNN', variable value = 'installed path'.
Add CUDA_TOOLKIT_ROOT_DIR to the system environment variables: variable name = 'CUDA_TOOLKIT_ROOT_DIR', variable value = 'installed path\NVIDIA GPU Computing Toolkit\CUDA\v10.2'.
Build with the PowerShell command '.\build.ps1' in the darknet directory.
Hope you find this helpful :).
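Once it builds, you can also call it from C++. The sketch below is only a rough illustration based on the yolo_v2_class.hpp wrapper that ships with the AlexeyAB repo; the exact class and method signatures can differ between revisions, so check the header in your checkout, and the cfg/weights/image file names are just placeholders.

// Rough sketch using the C++ wrapper (yolo_v2_class.hpp) from AlexeyAB/darknet.
// File names below are placeholders.
#include <iostream>
#include <vector>
#include "yolo_v2_class.hpp"

int main() {
    // Load the network once; gpu_id 0 selects the first CUDA device.
    Detector detector("yolov3.cfg", "yolov3.weights", 0);

    // Detect objects in an image file with a 0.25 confidence threshold.
    std::vector<bbox_t> boxes = detector.detect("dog.jpg", 0.25f);

    for (const bbox_t &b : boxes) {
        std::cout << "class " << b.obj_id << " prob " << b.prob
                  << " box [" << b.x << ", " << b.y << ", "
                  << b.w << ", " << b.h << "]" << std::endl;
    }
    return 0;
}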
YOLO (You Only Look Once) is a single-shot detection method for detecting objects in an image. It can work with Darknet, PyTorch, TensorFlow, Keras, and other frameworks. YOLO and Darknet complement each other pretty well, as Darknet has robust support for CUDA and cuDNN. Use whichever framework you want!
What I want to do
First of all, my goal is to use the TensorFlow C++ API as a library on Windows as part of my project, instead of building my project inside TensorFlow.
Background
I had achieved this by building TensorFlow with CMake. However, as of TensorFlow 1.10, building with CMake is deprecated and Bazel is recommended instead. But the official way to use the C++ API is to build your project inside TensorFlow with Bazel, so that approach does not work for me.
What I have done
To use a newer version of TensorFlow, I have been trying to build TensorFlow with Bazel as a standalone library.
A maintainer noted that this is possible by substituting //tensorflow:libtensorflow_cc.so for //tensorflow/tools/pip_package:build_pip_package in the official tutorial. In fact I encountered some problems, but I solved them by reading this tutorial. Now I have successfully built libtensorflow_cc.so.
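In practice that substitution amounts to running something like bazel build --config=opt //tensorflow:libtensorflow_cc.so instead of the pip package target (the exact flags depend on how you ran configure).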
What the problem is
However, I have no idea what to do next to use the built result, and that is exactly my problem. There is no documentation, of course. I have only found some incomplete ideas on it, and I will list all of them to give you more information:
Somebody has already successfully linked the built *.so and solved the problems he encountered.
There is a repo doing what I want to do on Ubuntu and Arch Linux. I have contacted the maintainer, and he told me that they have no plans to support Windows for now.
A related issue: Building a .dll on Windows.
A related issue: Packaged TensorFlow C++ library for bazel-independent use.
A related issue: Feature request: provide a means to configure, build, and install that includes cc.
A related question: How to build and use Google TensorFlow C++ api. The scope of that question is a little larger, without the 'using Bazel' and 'on Windows' restrictions.
A related pull request: C++ API
There must be others struggling with similar problems. I hope this question can become a reservoir of ways to solve the problem.
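For concreteness, the kind of minimal program I would like to compile and link against the built libtensorflow_cc.so looks roughly like the following (adapted from the standard C++ API example; getting the headers and link settings in place on Windows outside of Bazel is exactly the open question):

#include <iostream>
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
    using namespace tensorflow;
    using namespace tensorflow::ops;

    // Build a tiny graph: C = A * B.
    Scope root = Scope::NewRootScope();
    auto A = Const(root, {{3.f, 2.f}, {-1.f, 0.f}});
    auto B = Const(root, {{-1.f}, {2.f}});
    auto C = MatMul(root.WithOpName("C"), A, B);

    // Run the graph in a session and print the 2x1 result.
    ClientSession session(root);
    std::vector<Tensor> outputs;
    TF_CHECK_OK(session.Run({C}, &outputs));
    std::cout << outputs[0].matrix<float>() << std::endl;
    return 0;
}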
It's been over 2 years since this question was asked, and the news is not good: it seems there are not enough people with Windows skills in a position to provide the support needed to integrate TensorFlow into Windows applications using the familiar headers + library model. And TensorFlow advances week by week, meaning that Windows support falls further behind.
In my assessment, the path to building on Windows is currently blocked due to inadequate documentation. It's not so much that "there is no documentation of course", as the OP asserts; it's that the sparse documentation is distributed across dozens of separate posts, each of which dates rapidly as TensorFlow continues to develop along paths other than Windows C++.
I originally gave this answer to a similar question, but updated it with advice along the following lines yesterday:
Windows is a Microsoft product, so watch what Microsoft is doing
Hint: Microsoft is investing in the ONNX format
You can convert TensorFlow models to ONNX, or Keras models to ONNX
You can implement your (ONNX) model on Windows in C++ in at least 3 ways:
Windows ML (uses ONNX Runtime)
ONNX Runtime (supports DirectML as an execution provider)
DirectML (how Microsoft uses graphics cards to boost performance)
We don't have the latest or best hardware (e.g. we have Intel graphics), but we have been able to get a solution based on ONNX Runtime that classifies 224 x 224 RGB images in about 20 milliseconds for us. We found the Windows ML path much more difficult to use with legacy code, and also slower to run.
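In case it is useful, the core of that ONNX Runtime path boils down to something like the sketch below. Treat it as a rough sketch rather than our exact code: the model path, the input/output node names ("input" / "output") and the 1x3x224x224 NCHW layout are assumptions you would replace with whatever your exported model actually uses.

#include <onnxruntime_cxx_api.h>

#include <array>
#include <iostream>
#include <vector>

int main() {
    // One environment and one session per process/model.
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "classifier");
    Ort::SessionOptions opts;

    // On Windows the model path is a wide string.
    Ort::Session session(env, L"model.onnx", opts);

    // Assumed input: a single 224x224 RGB image as floats in NCHW layout.
    std::array<int64_t, 4> shape{1, 3, 224, 224};
    std::vector<float> pixels(1 * 3 * 224 * 224, 0.0f);  // fill with preprocessed image data

    Ort::MemoryInfo mem_info =
        Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value input_tensor = Ort::Value::CreateTensor<float>(
        mem_info, pixels.data(), pixels.size(), shape.data(), shape.size());

    // Assumed node names; query the session for the real ones in your model.
    const char* input_names[] = {"input"};
    const char* output_names[] = {"output"};

    std::vector<Ort::Value> outputs = session.Run(
        Ort::RunOptions{nullptr},
        input_names, &input_tensor, 1,
        output_names, 1);

    float* scores = outputs[0].GetTensorMutableData<float>();
    std::cout << "first class score: " << scores[0] << std::endl;
    return 0;
}

Switching from the default CPU execution provider to DirectML is then a change to the session options rather than a rewrite of the inference code.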
I have read that MetaCost is different from CostClassifiers. I have seen that MetaCost is available in the 3.6 GUI, but not in 3.8's.
It probably is an extension now.
You need to install it.
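For what it's worth, in 3.8 a number of classifiers were moved out of the core distribution into optional packages, so the usual route (if I remember correctly) is Tools → Package Manager in the GUI Chooser: search for the package that provides MetaCost, install it, and it should reappear in the classifier tree.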
I have a C++ logo detection project that uses OpenCV 2.3.1, and I need to upgrade it to OpenCV 3.0. For example, instead of using IplImage (I actually mean replacing it), I would like to use cv::Mat. I know that not everything will be upgraded automatically without some manual coding.
Question: I would like to know if there is any way to do at least some of the work automatically, using a tool or third-party library.
I recently had to upgrade an old OpenCV project to make use of some extra features offered in 2.4.* versions (coming from version 2.2). There is no tool or library that will help you detect what you need to change. I had to upgrade and then fix certain parts of my code that used functions that had changed slightly.
A really neat resource you can use is this: API changes/compatibility report for the OpenCV library
It lets you check the backward-compatibility percentage between versions and see the main changes introduced in each library version, so you can use it to fix every conflict you find once you update the library to the version you want.
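To give an idea of the kind of manual change involved, here is a sketch of a typical IplImage-to-cv::Mat migration (file names are hypothetical, and note that in 3.0 the old CV_* colour-conversion constants become cv::COLOR_*):

// Typical migration from the C API (2.3-era) to the C++ API (3.0).
#include <opencv2/opencv.hpp>

int main() {
    // Old C API:
    //   IplImage* img  = cvLoadImage("logo.png");
    //   IplImage* gray = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
    //   cvCvtColor(img, gray, CV_BGR2GRAY);
    //   ... use gray ...
    //   cvReleaseImage(&gray);
    //   cvReleaseImage(&img);

    // OpenCV 3.0 C++ API: cv::Mat manages its own memory.
    cv::Mat img = cv::imread("logo.png");
    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::imwrite("logo_gray.png", gray);
    return 0;
}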
I am running Visual Studio 2010 and have built OpenCV 2.4 with CMake 2.8. During the configuration I set:
WITH_CUDA flag on
CUDA_SDK_ROOT_DIR: C:/ProgramData/NVIDIA Corporation/NVIDIA GPU Computing SDK 4.2
CUDA_TOOLKIT_ROOT_DIR: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v4.2
and then built the whole project in Visual Studio successfully.
I am using an NVIDIA Quadro 5000 and have tested the examples in "OpenCV-2.4.0-GPU-demos-pack-win32", all of which work without any error.
The core and highgui library functions work fine too, but I can't run anything related to GPU functions in OpenCV.
This code returns 0, which according to the documentation means no device has been found:
int deviceCount = cv::gpu::getCudaEnabledDeviceCount();
std::cout << "index " << deviceCount << "\n";
This is the same device count that the GPU demos pack examples report, but any other gpu function shows me the following error:
OpenCV Error: No GPU support in unknown function file c:\slave\wininstallerMegaPack\src\opencv\modules\core\src\gpumat.cpp,line193
Does anybody have any idea? Please let me know. Thanks.
OpenCV 2.4 is still in beta and is not ready to be used for serious projects. It has several build problems on Windows and Mac OS X as far as I could test.
I suggest you stick with 2.3.1, which is the latest stable release. Don't use 2.4 unless there's a feature in there that you really, really need.
EDIT:
By the way, OpenCV 2.3.1 only supports CUDA 4.0.
Run deviceQuery.exe from the CUDA SDK (CUDA SDK 4.1\C\bin\win32\Release) and check the compute capability value of your card.
Then, in CMake for OpenCV, check that CUDA_ARCH_BIN includes this value.
Earlier cards only did 1.1 and don't support ARCH_PTX (the newer CUDA binary format); it's possible to make OpenCV build only for the newer format, which doesn't need as much runtime compilation.
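For reference, a Quadro 5000 is a Fermi card with compute capability 2.0, so 2.0 has to appear in CUDA_ARCH_BIN (e.g. pass -D CUDA_ARCH_BIN="2.0" when configuring). Once you have binaries that were actually built with CUDA, a quick check along these lines (a sketch using the 2.4 gpu module) will tell you whether the build matches your card:

#include <iostream>
#include <opencv2/gpu/gpu.hpp>

int main() {
    // Number of CUDA-capable devices visible to this OpenCV build.
    int count = cv::gpu::getCudaEnabledDeviceCount();
    std::cout << "CUDA devices: " << count << std::endl;

    for (int i = 0; i < count; ++i) {
        cv::gpu::DeviceInfo info(i);
        std::cout << info.name() << " compute capability "
                  << info.majorVersion() << "." << info.minorVersion()
                  << (info.isCompatible() ? " (compatible with this OpenCV build)"
                                          : " (NOT compatible with this OpenCV build)")
                  << std::endl;
    }
    return 0;
}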
You say that you built OpenCV yourself, but the file path in the error message (c:\slave\wininstallerMegaPack\...) clearly indicates that you are using the prebuilt OpenCV from SourceForge. If you have really built OpenCV yourself, then you have to troubleshoot your environment and find out why the wrong binaries are being used. (The simplest thing you can do is remove any OpenCV binaries from your PC and make a clean full build of both OpenCV and your app.)
The OpenCV 2.4 betas have a packaging bug that makes the GPU-enabled binaries useless, so you have to rebuild the library from source or use OpenCV 2.3.1 (which indeed requires CUDA 4.0).
The GPU demos pack is tricky: it has its own copy of all the binaries it might need. However, it cannot be used for development.
The final OpenCV 2.4 release is expected in a few days. The Windows package will include working CUDA binaries.
EDIT:
OpenCV 2.4.0 is out!