Yolo: PyTorch vs. Darknet

I have recently found YOLO implementations in PyTorch (e.g. https://github.com/ultralytics/yolov3). What I would like to know is whether this is really the same (in terms of model accuracy, speed, and so on) as the one with the Darknet backbone?
I am asking because it is waaaaaay easier with PyTorch (as I am struggling with installing Darknet on Windows).
Kind regards,
Can

Follow these steps to install the Darknet framework on Windows 10.
I recommend cloning Darknet from the AlexeyAB repository (https://github.com/AlexeyAB/darknet), since it works well on Windows 10 and has a lot of community support.
It also now has a Python wrapper, so you can use it from Python.
1. Clone the darknet repository.
2. Install vcpkg (https://github.com/microsoft/vcpkg).
3. Install Visual Studio 2017.
4. Install CUDA and cuDNN.
5. Add CUDNN to the system environment variables: variable name = 'CUDNN', variable value = 'installed path'.
6. Add CUDA_TOOLKIT_ROOT_DIR to the system environment variables: variable name = 'CUDA_TOOLKIT_ROOT_DIR', variable value = 'installed path\NVIDIA GPU Computing Toolkit\CUDA\v10.2'.
7. Build with the PowerShell command '.\build.ps1' in the darknet directory.
Hope you find this helpful :).
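Once it builds, you can also call the network from C++. A minimal sketch, assuming the Detector API in the repo's yolo_v2_class.hpp wrapper (the cfg/weights/image file names below are placeholders for your own files):

```cpp
// Minimal sketch of running detection through AlexeyAB/darknet's C++ wrapper.
// Assumes the build produced the yolo_v2_class.hpp header and its library;
// the cfg/weights/image file names are placeholders.
#include "yolo_v2_class.hpp"

#include <iostream>
#include <vector>

int main() {
    // Load the network definition and trained weights (gpu_id defaults to 0).
    Detector detector("yolov3.cfg", "yolov3.weights");

    // Run detection on an image file with a 0.25 confidence threshold.
    std::vector<bbox_t> boxes = detector.detect("dog.jpg", 0.25f);

    for (const auto& b : boxes) {
        std::cout << "class " << b.obj_id << " prob " << b.prob
                  << " box (" << b.x << ", " << b.y << ") "
                  << b.w << "x" << b.h << "\n";
    }
    return 0;
}
```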

YOLO (You Only Look Once) is a single-shot detection method for detecting objects in an image. It can be used with Darknet, PyTorch, TensorFlow, Keras, and other frameworks. YOLO and Darknet complement each other pretty well, as Darknet has robust support for CUDA and cuDNN. Use whichever framework you want!

Related

Yolov4 onnxruntime C++

I need to deploy a yolov4 inference model, and I want to use onnxruntime with the TensorRT backend. I don't know how to post-process the yolov4 detection results in C++. I have a sample written in Python, but I cannot find a C++ sample.
https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/yolov4
Is there a sample showing how to process the yolov4 onnx result?
Thanks
We have built something similar. For now we only have yolov3 with onnxruntime in C++, but we are testing yolov4 and it will be available in our next release. If you want, have a look here: https://github.com/ai4prod/ai4prod.
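In the meantime, here is a rough sketch of the C++ side with onnxruntime. It assumes a yolov4 export with a single post-processed output of shape [1, N, 85] (cx, cy, w, h, objectness, 80 class scores); the stock onnx/models yolov4 instead emits three raw feature maps that still need anchor decoding as in the linked Python sample, so treat this only as a starting point:

```cpp
// Rough sketch: run a yolov4 ONNX model with onnxruntime in C++ and filter
// the raw detections. The input/output names, 416x416 input size, and
// [1, N, 85] output layout are assumptions about the exported model.
#include <onnxruntime_cxx_api.h>

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "yolov4");
    Ort::SessionOptions opts;  // append the TensorRT execution provider here if built with it
    Ort::Session session(env, ORT_TSTR("yolov4.onnx"), opts);

    // Dummy NCHW float input; real code fills this with the resized image.
    std::vector<float> input(1 * 3 * 416 * 416, 0.0f);
    std::vector<int64_t> shape{1, 3, 416, 416};
    Ort::MemoryInfo mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value tensor = Ort::Value::CreateTensor<float>(
        mem, input.data(), input.size(), shape.data(), shape.size());

    const char* input_names[] = {"input"};    // assumed names; check your model
    const char* output_names[] = {"output"};
    auto outputs = session.Run(Ort::RunOptions{nullptr},
                               input_names, &tensor, 1, output_names, 1);

    // Post-process: keep rows whose objectness * best class score passes a threshold.
    float* data = outputs[0].GetTensorMutableData<float>();
    int64_t n = outputs[0].GetTensorTypeAndShapeInfo().GetShape()[1];
    for (int64_t i = 0; i < n; ++i) {
        const float* row = data + i * 85;
        const float* cls = row + 5;
        int best = static_cast<int>(std::max_element(cls, cls + 80) - cls);
        float score = row[4] * cls[best];
        if (score > 0.4f)  // per-class NMS on the surviving boxes would follow here
            std::cout << "class " << best << " score " << score << "\n";
    }
    return 0;
}
```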

Can OpenCV be compiled/used with WASI (WebAssembly System Interface)?

WASI (WebAssembly System Interface) is intended to bring WebAssembly outside the browser.
I built a simple face recognition application with the eigenfaces example of OpenCV 4.3.0 (see: https://docs.opencv.org/2.4/modules/contrib/doc/facerec/facerec_tutorial.html#eigenfaces-in-opencv) and got it working.
Recently I have wanted to build a WebAssembly (WASM)-based face recognition application with OpenCV. I searched for WASM + face recognition and found lots of git repositories and examples that build opencv_js.wasm and use it with a JavaScript binding.
My goal is to build a standalone *.wasm module rather than an html+js+wasm project, which is how I ran into WASI (WebAssembly System Interface). Several runtimes such as wasmtime and wasmer can run a standalone *.wasm compiled from C/C++ with a WASI toolchain (wasicc, wasic++, etc., e.g. the wasienv project).
Does anyone have ideas or experience building a standalone face recognition/detection or similar project with WASI? I would really appreciate a reply!

How to build and use Google tensorflow C++ API on ARM processor

This is a follow-on to "how-to-build-and-use-google-tensorflow-c-api": can anyone explain how to build a TensorFlow C++ program on an ARM processor? I'm thinking specifically of Nvidia's Jetson family of GPU devices. Nvidia has lots and lots of documentation for these, but it all seems to be for Python (like this), for toy examples, with nothing for anyone who wants to write a C++ program using the full TensorFlow API (if one even exists) for their own machine learning models. I'd like to be able to build programs like this one, which does deep learning inference and is exactly what the Jetson is supposedly made for.
I've found websites that offer links to installers too, but they all seem to be for the x86 architecture instead of ARM.
I have the same question about Bazel. I gather from all the unsatisfactory documentation I've been looking at that Bazel is mandatory for anyone who wants to build TensorFlow programs using a GPU, but all of the installation instructions I can find are either incomplete or for a different architecture such as x86 (for example https://www.osetc.com/en/how-to-install-bazel-on-ubuntu-14-04-16-04-18-04-linux.html).
I'll add that any link or GitHub repository that dumps a load of code in my lap without making clear the prerequisites (since my little Jetson may not have the stuff installed that you assume) or the commands needed to actually build it (especially if it includes a project file for a compiler I've never heard of) isn't very much help.

How to program with C++ API library on Windows using Bazel?

What I want to do
First of all, my goal is to use the TensorFlow C++ API as a library on Windows, as part of my project, instead of building my project inside TensorFlow.
Background
I had achieved this by building TensorFlow with CMake. However, as of TensorFlow 1.10, building with CMake is deprecated and Bazel is recommended instead. But the official way to use the C++ API is to build your project inside TensorFlow with Bazel, so this approach is not good for me.
What I have done
To use a newer version of Tensorflow, I have been trying to build Tensorflow with Bazel as a standalone library.
A maintainer noted that this is possible by substituting //tensorflow:libtensorflow_cc.so for //tensorflow/tools/pip_package:build_pip_package in the official tutorial. I did encounter some problems, but solved them by reading this tutorial. Now I have successfully built libtensorflow_cc.so.
What the problem is
However, I have no idea what should be done next to use the built result, and that is exactly my problem. There is no documentation, of course. I have found only some incomplete ideas on it, and I will show all of them here to give you more information:
Somebody has already successfully linked the built *.so and solved the problems he encountered.
There is a repo doing what I want to do on Ubuntu and Arch Linux. I contacted the maintainer, and he told me that they have no plans to support Windows for now.
A related issue: Building a .dll on Windows.
A related issue: Packaged TensorFlow C++ library for bazel-independent use.
A related issue: Feature request: provide a means to configure, build, and install that includes cc.
A related question: How to build and use Google TensorFlow C++ api. The scope of this question is a little larger without 'using bazel' and 'on Windows' restrictions.
A related pull request: C++ API
There must be others struggling with similar problems. I hope this question can become a reservoir of ways to solve them.
It's over 2 years since this question was asked, and the news is not good: it seems there are insufficient people with Windows skills in a position to provide the support needed to integrate TensorFlow into Windows applications using the familiar headers + library model. And TensorFlow advances week by week, meaning that the Windows support falls further behind.
In my assessment, the path to building on Windows is currently blocked due to inadequate documentation. It's not so much that "There is no documentation of course" as the OP asserts; it's that the sparse documentation is distributed throughout dozens of separate posts, each of which dates rapidly as TensorFlow continues to develop along paths other than Windows C++.
I originally gave this answer to a similar question, but updated it yesterday with advice along the following lines:
Windows is a Microsoft product, so watch what Microsoft is doing
Hint: Microsoft is investing in the ONNX format
you can convert Tensorflow to ONNX, or Keras to ONNX
You can implement your (ONNX) model on Windows in C++ in at least 3 ways:
Windows ML (uses Onnx runtime)
Onnx runtime (supports DirectML as an execution provider)
DirectML (how Microsoft uses graphics cards to boost performance)
We don't have the latest or best hardware (e.g. we have Intel graphics cards), but we have been able to get a solution based on Onnx runtime that classifies 224 x 224 RGB images in about 20 milliseconds. We found the Windows ML path much more difficult to integrate with legacy code, and also slower to run.
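For concreteness, session setup for that kind of classifier looks roughly like the sketch below. The DirectML line is the only Windows-specific part; the model path and input/output names are placeholders for whatever your exported ONNX model actually uses:

```cpp
// Rough sketch: onnxruntime in C++ with the DirectML execution provider,
// classifying one 224x224 RGB image. Model path and input/output names
// are placeholders.
#include <onnxruntime_cxx_api.h>
#include <dml_provider_factory.h>

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "classifier");
    Ort::SessionOptions opts;
    opts.DisableMemPattern();               // the DML provider requires this
    opts.SetExecutionMode(ORT_SEQUENTIAL);
    Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_DML(opts, 0));

    Ort::Session session(env, ORT_TSTR("classifier.onnx"), opts);

    // Dummy NCHW input; real code copies the preprocessed pixels in here.
    std::vector<float> image(1 * 3 * 224 * 224, 0.0f);
    std::vector<int64_t> shape{1, 3, 224, 224};
    Ort::MemoryInfo mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value input = Ort::Value::CreateTensor<float>(
        mem, image.data(), image.size(), shape.data(), shape.size());

    const char* in_names[] = {"input"};
    const char* out_names[] = {"logits"};
    auto out = session.Run(Ort::RunOptions{nullptr}, in_names, &input, 1, out_names, 1);

    // Argmax over the class logits gives the predicted label index.
    float* logits = out[0].GetTensorMutableData<float>();
    int64_t classes = out[0].GetTensorTypeAndShapeInfo().GetShape()[1];
    int64_t best = std::max_element(logits, logits + classes) - logits;
    std::cout << "predicted class " << best << "\n";
    return 0;
}
```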

How to deploy a Tensorflow trained model for inference for a Windows standalone application

I would like to use a model trained with TensorFlow in a Windows standalone desktop application. I only need to perform predictions; I can train the model with the TensorFlow Python API. What is the recommended approach?
I know there is a C++ API, but it is really hard to compile, especially on Windows. Can I find any prebuilt C++ TensorFlow binaries for Windows?
Is there an easy way to distribute Python with TensorFlow as a Windows installer prerequisite?
Can I import the TensorFlow model into another technology and use it for inference? The OpenCV DNN module has a function that imports models from TensorFlow, but I understand it has many limitations, and I was not able to import and use a model with OpenCV.
Thanks for help!
I was facing the same issues as you.
You should at least try to compile it (try CMake; it might be easier).
If you are still having trouble:
Compiler is out of Heap Space
Standalone Windows Lib
Basic Tensorflow Handling with C++
I asked a similar question and eventually found my own way to the answer. In the end, I found the Tensorflow instructions were actually pretty good (it was my reading them that was bad!). I have not tried using Bazel for Windows, but building Tensorflow using CMake ended up working fine.
The main issue was the compiler heap space issue. This always seems to occur in some random place if you are using the MS Visual Studio 32-bit compiler (the default). The key is to make sure you run vcvarsall.bat or vcvars64.bat or whatever it takes to invoke the 64-bit compiler (in Task Manager, it should show up as cl.exe, not cl.exe *32). I found it hard (read: impossible) to get Visual Studio to use the 64-bit compiler, but using the MSBuild tool to compile on the command line worked fine.
Once you can build the example program, you have an example of an application that links against a static TensorFlow library to do its stuff. You can just make your own application link against this library for what you want.
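In outline, such an application loads the trained graph and runs a session. A minimal sketch assuming a TF 1.x-era frozen graph, which is what the CMake build era used; the file name and tensor names are placeholders for your own model:

```cpp
// Rough sketch of a standalone program linked against the TensorFlow C++
// library (TF 1.x style): load a frozen GraphDef and run inference.
// "frozen_model.pb", "input", and "output" are placeholders.
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"

#include <iostream>
#include <memory>
#include <vector>

int main() {
    // Read the frozen graph produced by the Python training side.
    tensorflow::GraphDef graph_def;
    TF_CHECK_OK(tensorflow::ReadBinaryProto(
        tensorflow::Env::Default(), "frozen_model.pb", &graph_def));

    std::unique_ptr<tensorflow::Session> session(
        tensorflow::NewSession(tensorflow::SessionOptions()));
    TF_CHECK_OK(session->Create(graph_def));

    // Feed one dummy 224x224x3 image; real code copies pixels into the tensor.
    tensorflow::Tensor input(tensorflow::DT_FLOAT,
                             tensorflow::TensorShape({1, 224, 224, 3}));
    input.flat<float>().setZero();

    std::vector<tensorflow::Tensor> outputs;
    TF_CHECK_OK(session->Run({{"input", input}}, {"output"}, {}, &outputs));

    std::cout << "output tensor: " << outputs[0].DebugString() << "\n";
    return 0;
}
```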