I'm looking at TorchScript found here and LibTorch found here. TorchScript allows us to convert a Python model into a module that can be loaded into a C++ application, while LibTorch allows us to train and test a model in C++.
How are they different in terms of speed? Why would I use libtorch over torchscript or vice versa?
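For context, the usual TorchScript workflow is to script (or trace) the model in Python, save it, and then load the resulting file in C++ with `torch::jit::load`. A minimal sketch of the Python side, where `TinyNet` is a made-up toy module standing in for a real model:

```python
import os
import tempfile

import torch
import torch.nn as nn


class TinyNet(nn.Module):
    """Hypothetical toy model standing in for a real one."""

    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))


model = TinyNet().eval()
scripted = torch.jit.script(model)  # compile the model to TorchScript

# The saved file is what torch::jit::load() reads on the C++ side.
path = os.path.join(tempfile.gettempdir(), "tiny_net.pt")
scripted.save(path)

x = torch.randn(1, 4)
with torch.no_grad():
    # The scripted module produces the same outputs as the eager model.
    same = torch.allclose(model(x), scripted(x))
```

On the C++ side, the saved file is loaded with `torch::jit::load(path)` and run via `module.forward(...)`. In both cases the same underlying kernels execute, so raw inference speed is typically comparable; the choice is mostly about where you want to author and train the model.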
Related
I need to deploy a YOLOv4 inference model and I want to use onnxruntime with the TensorRT backend. I don't know how to post-process the YOLOv4 detection result in C++. I have a sample written in Python but I cannot find a C++ sample.
https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/yolov4
Is there a sample showing how to process the YOLOv4 ONNX result?
Thanks
We have built something similar. For now we only have YOLOv3 with onnxruntime in C++, but we are testing YOLOv4 and it will be available in our next release. If you want, have a look here: https://github.com/ai4prod/ai4prod.
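For the post-processing itself, YOLO-style outputs are usually handled by filtering out low-confidence boxes and then applying non-maximum suppression (NMS). A minimal, framework-free sketch in Python (the thresholds are typical defaults, not values taken from the YOLOv4 sample):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def nms(boxes, scores, score_thresh=0.25, iou_thresh=0.45):
    """Greedy NMS: keep the highest-scoring box, drop boxes that overlap it."""
    order = [i for i in sorted(range(len(scores)), key=lambda i: -scores[i])
             if scores[i] >= score_thresh]
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep


boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)  # the second box overlaps the first and is suppressed
```

The same logic ports directly to C++ with `std::vector` and a comparator; in practice NMS is usually run per class id.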
I have recently found YOLO implementations in PyTorch (e.g. https://github.com/ultralytics/yolov3). What I would like to know is whether this is really the same (in terms of model accuracy, speed, and so on) as the one with the Darknet backbone?
I am asking because it is way easier with PyTorch (as I am struggling with installing Darknet on Windows).
Kind regards,
Can
Follow these steps to install the Darknet framework on Windows 10.
I recommend cloning darknet from the AlexeyAB repository, since it works great on Windows 10 and has a lot of community support (https://github.com/AlexeyAB/darknet).
It now also has a Python wrapper, so you can use it from Python.
Clone the darknet repository.
Install vcpkg (https://github.com/microsoft/vcpkg).
Install Visual Studio 2017.
Install CUDA and cuDNN.
Add cuDNN to the system environment. Variable name = 'CUDNN', variable value = 'installed path'.
Add 'CUDA_TOOLKIT_ROOT_DIR' to the system environment. Variable name = 'CUDA_TOOLKIT_ROOT_DIR', variable value = 'installed path\NVIDIA GPU Computing Toolkit\CUDA\v10.2'.
Build with the PowerShell command '.\build.ps1' in the darknet directory.
Hope you find this helpful :).
YOLO (You Only Look Once) is a one-shot detection method for detecting objects in an image. It can work with frameworks such as Darknet, PyTorch, TensorFlow, Keras, etc. YOLO and Darknet complement each other pretty well, as Darknet has robust support for CUDA and cuDNN. Use whichever framework you want!
I need to load and run an ONNX model in a C++ environment using LibTorch on Windows 10 (Visual Studio 2015, v140). Searching the web, there seem to be almost exclusively instructions for how to do it in Python. Is there a well-documented way, or does anyone know how to do this in C++?
Here are two C++-based resources that might be relevant:
The ONNX Runtime C++ API enables loading and running ONNX models from C++.
The Windows ML C++ APIs can be leveraged to load ONNX models in C++ Windows desktop applications.
I wonder if there is a way of building a convolutional neural network with OpenCV. I have already trained the CifarNet CNN using the Python API of TensorFlow, but now I want to run the inference without TensorFlow, using C++. The only open-source library I can use is OpenCV. Do you know if I can do that with OpenCV instead of creating the network manually?
Take a look at the deep learning module from opencv_contrib. There is a sample that evaluates a model trained in TensorFlow in dnn/samples/tf_inception.cpp. There are also some hints about making a snapshot in this issue: https://github.com/opencv/opencv_contrib/issues/1029#issuecomment-290070240.
What technique/library is used for Python binding in OpenCV2.0?
I am aware that there are a number of libraries for C++/Python binding and that previous versions of OpenCV were using the SWIG library.
I am testing Python in Python Tools for Visual Studio, which has code completion (IntelliSense) built in. However, for the current OpenCV Python bindings it displays only function names in the interactive window. In the editor, it does not even display the function names.
Is it possible to have IntelliSense working at the parameter level for C++ Python bindings?
Vadim Pisarevsky, one of the core developers of OpenCV, has given a brief answer to this question here: How Python API is generated?. He says:
We do not use SWIG or any other standard wrapper generation tool. We did not find such tools that would produce satisfying results. Instead, we use our own purely Python-based solution for parsing OpenCV headers.
The parser is at opencv/modules/python/src2/hdr_parser.py.
After all the API is extracted, we use some more Python code (opencv/modules/python/src2/gen2.py) to produce the Python wrappers.