I am working on hand gesture recognition using computer vision for motion simulation. My knowledge of Python is not as good as my knowledge of C++, so I have written the OpenCV code in C++. Now I want this code to work in Blender.
Please tell me how I can integrate this code into Blender.
Without altering Blender's source code and compiling your own custom version, you will need to use an addon to run your code within Blender. Blender uses Python for its addon system; each addon is a Python module. You can use Python's ctypes module to call compiled code from a Python script.
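For context, here is a minimal sketch of what the C++ side of that could look like; the function name and signature are made up for illustration. ctypes can only resolve C-linkage symbols, so the C++ API has to be wrapped in extern "C":

// gesture.cpp: build this as a shared library (gesture.dll / libgesture.so).
#ifdef _WIN32
#define EXPORT __declspec(dllexport)
#else
#define EXPORT
#endif

extern "C" {
// A hypothetical entry point; adapt it to your OpenCV pipeline.
EXPORT int recognize_gesture(const unsigned char* bgr_pixels,
                             int width, int height) {
    // Run your OpenCV hand-gesture recognition on the frame here and
    // return an integer id for the recognized gesture.
    return 0;
}
}

A Blender addon could then load the library with ctypes.CDLL and call recognize_gesture on each captured frame, passing the result on to Blender's Python API.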
While an addon is normally written in Python, it is possible to integrate a compiled C/C++ Python module for use in Blender. I'm not 100% sure whether you can compile the module and add it to Blender's addon folder, or whether you need a folder containing the library plus a small Python script that loads it.
You may also want to look at Cython: it takes Python code and turns it into C/C++ code that can be compiled, which may give you a starting point for linking with your code. Have a look at CubeSurfer for an example of a Blender addon that uses Cython.
For Blender-specific help, you will find blender.stackexchange.com better.
I know there are ways of using TensorFlow in C++, and they even have documentation for it, but I can't seem to get the library. I've checked the build-from-source instructions, but they seem to build a pip package rather than a library I can link to my project. I also found a tutorial, but when I tried it out I ran out of memory and my computer crashed. My question is: how can I actually get the C++ library to work in my project? I have these requirements: I have to work on Windows with Visual Studio in C++. What I would love is a pre-compiled DLL that I could just link, but I haven't found such a thing and I'm open to other alternatives.
I can't comment so I am writing this as an answer.
If you don't mind using Keras, you could use the frugally-deep package. I haven't seen an official library myself either, but I came across frugally-deep and it seemed easy to integrate. I am currently trying to use it, so I cannot guarantee it will work.
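For a sense of what that looks like, here is a rough sketch following frugally-deep's documented workflow (you first convert the Keras model to frugally-deep's JSON format with the converter script that ships with the library; the file name below is a placeholder):

#include <fdeep/fdeep.hpp>
#include <iostream>
#include <vector>

int main() {
    // Load a Keras model previously converted to frugally-deep's format.
    const auto model = fdeep::load_model("fdeep_model.json");
    // Run a forward pass on a single 4-element input tensor.
    const auto result = model.predict(
        {fdeep::tensor(fdeep::tensor_shape(4),
                       std::vector<float>{1.0f, 2.0f, 3.0f, 4.0f})});
    std::cout << fdeep::show_tensors(result) << std::endl;
}

It is header-only, which is part of why it is easy to drop into a Visual Studio project.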
You could check out neural2d from here:
https://github.com/davidrmiller/neural2d
It is a neural network implementation without any external library dependencies (all written from scratch).
I would say that the best option is to use cppflow, a lightweight wrapper I created for using TensorFlow from C++.
You won't need to install anything: just download the TF C API and place it somewhere on your computer. You can take a look at the docs to see how to do that and how to use the library.
The answer seems to be that it is hard :-(
Try this to start. You can follow the latest instructions for building from source on Windows up to the point of building the pip package. But don't build the pip package; run these instead:
bazel build --config=opt //tensorflow:tensorflow.dll
bazel build --config=opt //tensorflow:tensorflow.lib
bazel build --config=opt //tensorflow:install_headers
That much seems to work fine. The problems really start when you try to use any of the header files: you will probably get compilation errors, at least with TF version >= 2.0. I have tried:
Building the label_image example (instructions in the readme.md file). It builds and runs fine on Windows, meaning all the headers and source are there somewhere.
Incorporating that source into a Windows console executable: it runs into compiler errors due to conflicts with std::min & std::max, probably because the Windows SDK headers define min/max macros (defining NOMINMAX is the usual workaround).
Including c_api.h in a Windows console application: won't compile.
Including the TF-Lite header files: won't compile.
There is little point investing the lengthy compile time in the first two bazel commands if you can't get the headers to compile :-(
You may have time to invest in resolving these errors; I don't. At this stage, TensorFlow lacks sufficient support for Windows C++ to rely on it, particularly in a commercial setting. I suggest exploring these options instead:
If TF-Lite is an option, watch this
Windows ML / DirectML (requires converting TF models to ONNX format)
cppflow
frugally-deep
keras2cpp
UPDATE: having explored the list above, I eventually found that the following worked best in my context (real-time continuous item recognition):
convert models to ONNX format (use tf2onnx or keras2onnx)
use Microsoft's ONNX Runtime
Even though Microsoft recommends using DirectML directly where milliseconds matter, the performance of ONNX Runtime with DirectML as its execution provider means we can run a 224x224 RGB image through our Intel GPU in around 20 ms, which is quick enough for us. But it was still hard finding our way to this answer.
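For anyone following the same route, this is roughly what the ONNX Runtime C++ API looks like. A sketch only: the model path and tensor names are placeholders, and the DirectML execution provider is enabled through its provider factory header.

#include <onnxruntime_cxx_api.h>
#include <array>
#include <vector>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "demo");
    Ort::SessionOptions opts;
    // For DirectML, include dml_provider_factory.h and call
    // OrtSessionOptionsAppendExecutionProvider_DML(opts, 0) here.
    Ort::Session session(env, L"model.onnx", opts);

    // One 224x224 RGB image in NCHW layout (placeholder data).
    std::array<int64_t, 4> shape{1, 3, 224, 224};
    std::vector<float> image(3 * 224 * 224, 0.0f);
    auto memory = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value input = Ort::Value::CreateTensor<float>(
        memory, image.data(), image.size(), shape.data(), shape.size());

    // Query the real input/output names from the session, or inspect the
    // model in a viewer such as Netron; these names are placeholders.
    const char* input_names[] = {"input"};
    const char* output_names[] = {"output"};
    auto outputs = session.Run(Ort::RunOptions{nullptr},
                               input_names, &input, 1, output_names, 1);
}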
We have trained our models and tested them successfully using the provided Python scripts. However, we now want to deploy them on our website and run a web service for a second round of tests.
Is there a C++ wrapper that we can use to run our models the same way we do with the Python scripts?
I think the easiest way is to use cppflow. It is a C++ wrapper for the TensorFlow C API. It is simple and really easy to use, and you do not need to install it or compile it with Bazel. You just have to download the C API and use it like this:
#include "Model.h"  // cppflow (v1) headers
#include "Tensor.h"
// Load the frozen graph and restore the trained weights.
Model model("graph.pb");
model.restore("path/to/checkpoint");
// Bind tensors to the graph's input and output operations by name.
auto input = new Tensor(model, "input");
auto output = new Tensor(model, "output");
// Fill *input with your data (set_data), then run the graph.
model.run(input, output);
You'll find code to run object detection in C++ here. You'll need an exported graph (.pb format), which you can get using the TF object detection API.
The compilation used to be tricky (unless you put your project in the tensorflow directory and compiled everything with bazel, but you might not want to do that). I think it's supposed to be easier now, but I don't know how; alternatively, you can follow these instructions to compile tensorflow on its own and use it in a cmake project. You have another example of running a graph in C++ here.
I have TensorFlow with the Python API and these checkpoint model files:
model.ckpt-17763.data-00000-of-00001
model.ckpt-17763.index
model.ckpt-17763.meta
But I want a C/C++ shared library (.so file) for integration into production. So I need to load these model files, run inference with C++ code, and compile it all into a shared library. Is there some tutorial or sample for doing this?
You can write C++ code to load and use your graph with the instructions given here.
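In outline, the TF1-era C++ session API from those instructions looks like this (a sketch, assuming the graph has been frozen into a .pb file, which bakes the checkpoint weights into it; the tensor names are placeholders):

#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"

int main() {
    // Read the serialized GraphDef (a frozen .pb with the weights baked in).
    tensorflow::GraphDef graph_def;
    TF_CHECK_OK(tensorflow::ReadBinaryProto(
        tensorflow::Env::Default(), "frozen_graph.pb", &graph_def));

    // Create a session and load the graph into it.
    std::unique_ptr<tensorflow::Session> session(
        tensorflow::NewSession(tensorflow::SessionOptions()));
    TF_CHECK_OK(session->Create(graph_def));

    // Feed a tensor to the "input" op and fetch the "output" op.
    tensorflow::Tensor input(tensorflow::DT_FLOAT,
                             tensorflow::TensorShape({1, 224, 224, 3}));
    std::vector<tensorflow::Tensor> outputs;
    TF_CHECK_OK(session->Run({{"input", input}}, {"output"}, {}, &outputs));
}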
You can use the files here to make a CMake project with tensorflow outside the TF repository, and compile your library.
However, if you load checkpoints directly, you'll still need the .ckpt files next to your .so; I don't know how to integrate them inside it.
There are a lot of questions about that on S.O., and a few tutorials (see the two cited in this answer), but since tensorflow is evolving fast, they quickly become outdated, and it's always a bit of a struggle to get things working properly (totally feasible, and getting easier, though).
I have a problem. I wrote a Python script to make my work faster, and now I want to share it with my team.
I don't want them to mess with imports that are missing from the basic Python installation. I know there is a way to compile Python to an exe, but I wonder if I can compile the code and its imports without messing with py2exe.
Does Python have a built-in solution for that?
I saw that Python has a pyc compile option. Does it compile the imported modules as well?
No, I don't believe Python has a built-in standalone compilation mode. A .pyc file is compiled code, but not the kind you usually distribute as an executable program (meaning you would still need the Python interpreter).
If you don't want to use py2exe or similar packages, I advise you to use a portable version of Python with which you can distribute your software (see for example WinPython). The easiest way to accomplish this is to ship the portable distribution together with your code, perhaps with a batch file (or similar, if you want .exe-like behavior).
NOTE: You can provide the compiled .pyc files of the libraries you are using and put them in the root of your software (or just state where those imports should come from), but I predict this will give you problems in the future due to dependencies between different libraries. So it's possible, although I would hardly consider it a good solution for what it seems you are trying to achieve.
I have a C++ program and I want to add scripting to it. The desired scenario is: I have an executable of the C++ code; at specific times it calls a Python script through the embedded interpreter so it knows what to do, and the script then uses some form of API from the C++ program. This is where I ran into a problem: to expose C++ code to Python, you need to compile a DLL of the wrappers you want and load it as a module inside Python, and that breaks my intention of having Python access the executable's functions.
Is there any way to resolve this problem without putting so many pieces of C++ into a shared library?
What you want to do is embed Python in your application. There is an article on python.org on how to do that using the raw CPython API, but it's not that exhaustive when it comes to C++. A better bet might be to use Boost.Python or SWIG.
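The embedding material also covers the part that matters for your scenario: you can register your application's functions as a built-in module from inside the executable, before initializing the interpreter, so no separate DLL is required. A minimal sketch (the module name "app" and its one function are made up for illustration):

#include <Python.h>

// A host-application function exposed to the embedded interpreter.
static PyObject* app_get_value(PyObject* self, PyObject* args) {
    return PyLong_FromLong(42);
}

static PyMethodDef AppMethods[] = {
    {"get_value", app_get_value, METH_NOARGS, "Return a value from the host app."},
    {NULL, NULL, 0, NULL}
};

static PyModuleDef AppModule = {
    PyModuleDef_HEAD_INIT, "app", NULL, -1, AppMethods,
    NULL, NULL, NULL, NULL
};

static PyObject* PyInit_app(void) {
    return PyModule_Create(&AppModule);
}

int main() {
    // Register the module before Py_Initialize so scripts can "import app".
    PyImport_AppendInittab("app", PyInit_app);
    Py_Initialize();
    PyRun_SimpleString("import app\nprint(app.get_value())");
    Py_Finalize();
    return 0;
}

Because the wrapper code lives in the executable itself, the script can call back into the program's functions without any shared library.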