I'm trying out the TensorFlow models located in the TensorFlow model zoo. Since I'm not familiar with the Bazel build procedure, I'm somewhat confused about how these models should be compiled and used. Does anyone know how it's done?
First use bazel build to build a target, then run the resulting binary under the bazel-bin directory. E.g. for the Inception model:
# Build the model. Note that we need to make sure TensorFlow is ready to
# use before this, as this command will not build TensorFlow.
bazel build inception/imagenet_train
# run it
bazel-bin/inception/imagenet_train --num_gpus=1 --batch_size=32 --train_dir=/tmp/imagenet_train --data_dir=/tmp/imagenet_data
I am trying to use an Edge TPU USB accelerator with an Intel Atom single-board computer and the C++ API for real-time inference.
The C++ API for the Edge TPU is based on the TensorFlow Lite C++ API. I need to include header files from the tensorflow/lite directory (e.g. tensorflow/lite/interpreter.h).
My question is: can I build only TensorFlow Lite (without the other operations used for training)? If yes, how can I do it?
Building everything would take a long time.
Assuming that you are using a Linux-based system, the following instructions should work:
Clone the repository, then check out the stable release branch (currently r1.14):
git clone https://github.com/tensorflow/tensorflow
cd tensorflow
git checkout r1.14
Download dependencies:
./tensorflow/lite/tools/make/download_dependencies.sh
Build it (by default it builds a Linux library, there are other options as well for other platforms):
make -f ./tensorflow/lite/tools/make/Makefile
Now you'll need to link the built library into your project; add this to your makefile (these are linker flags, so they belong in LDFLAGS):
TENSORFLOW_PATH = path/to/tensorflow/
TFLITE_MAKE_PATH = $(TENSORFLOW_PATH)/tensorflow/lite/tools/make
LDFLAGS += \
-L$(TFLITE_MAKE_PATH)/gen/linux_x86_64/obj \
-L$(TFLITE_MAKE_PATH)/gen/linux_x86_64/lib/ \
-ltensorflow-lite -ldl
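Once it links, using the TF Lite C++ API from those headers looks roughly like the following. This is a minimal sketch, not tested against your setup; "model.tflite" is a placeholder path, and the headers correspond to the r1.14 tree:
#include <cstdio>
#include <memory>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the flatbuffer model from disk ("model.tflite" is a placeholder).
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  if (!model) return 1;
  // Build an interpreter with the built-in op resolver.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter || interpreter->AllocateTensors() != kTfLiteOk) return 1;
  // Fill the first input tensor (assumes a float model) and run inference.
  float* input = interpreter->typed_input_tensor<float>(0);
  input[0] = 0.0f;  // ...populate with real data...
  if (interpreter->Invoke() != kTfLiteOk) return 1;
  float* output = interpreter->typed_output_tensor<float>(0);
  std::printf("first output value: %f\n", output[0]);
  return 0;
}
For the Edge TPU itself you would additionally register the Edge TPU custom op from the edgetpu library; see its documentation for the exact calls. The skeleton above is just the plain TF Lite part.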
What you need is a standalone build outside of the TensorFlow repo. I have a TensorFlow Lite project that may help you; you would need to cross-compile it for your target platform.
I'm trying to get Chrome's V8 embedded in my C++ project, but all I can manage is what I would call my project being embedded in V8. My concern is that my program is cross-platform and I would like the build commands to be the same everywhere. I started developing it on Windows, but I'm using a Mac now to get V8 running.
I can get V8 built and their samples running using this setup:
Get depot_tools: https://commondatastorage.googleapis.com/chrome-infra-docs/flat/depot_tools/docs/html/depot_tools_tutorial.html#_setting_up
get source: https://v8.dev/docs/source-code
build: https://v8.dev/docs/build
My current solution has a few commands: install, build, run. The build command is the most complicated, as it attempts to automatically edit the BUILD.gn file in V8 to insert my project in place of V8, adding all files in my source directory to the sources list.
This approach feels very wrong for a few reasons. The first is that there is almost certainly a better way to configure my project than editing a build script with a Python script. Secondly, I would like V8 to be embedded in my project, not the other way around. I only have SDL2 as a dependency, but I have cross-platform CMake builds set up, which would have to be abandoned in favor of however V8 builds its source files. I feel this could get hard to manage if I add more dependencies.
I'm currently working with a small test project with one source file.
EDIT: I can't find anything on embedding V8 that falls between running a sample and the API usage docs.
The usual approach is to have a step in your build system that builds the V8 library as a dependency (as well as any other dependencies you might have). For that, it should use the official V8 build instructions. If you have a split between steps that get sources/dependencies and steps that compile them, then getting depot_tools and calling fetch v8/gclient sync belongs in the former. Note that you probably want to pin a version (the latest stable branch) rather than using tip-of-tree. So, in pseudocode, you'd have something like:
step get_dependencies:
download/update depot_tools
download/update V8 # pinned_revision (using depot_tools)
step compile (depends on "get_dependencies"):
cd v8; gn args out/...; ninja -C out/...;
cd sdl; build sdl
build your own code, linking against V8/sdl/other deps.
Many build systems already have convenient ways to do these things. I don't know CMake very well though, so I can't suggest anything specific there.
I agree that using scripts to automatically modify BUILD.gn feels wrong. It'll probably also turn out to be brittle and high-maintenance over time.
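For completeness, the embedding side itself follows V8's official hello-world sample once linking is sorted out. A condensed sketch along those lines (the exact API differs between V8 versions, so treat it as approximate rather than definitive):
#include <cstdio>
#include <memory>
#include "include/libplatform/libplatform.h"
#include "include/v8.h"

int main(int argc, char* argv[]) {
  // One-time process setup.
  v8::V8::InitializeICUDefaultLocation(argv[0]);
  v8::V8::InitializeExternalStartupData(argv[0]);
  std::unique_ptr<v8::Platform> platform = v8::platform::NewDefaultPlatform();
  v8::V8::InitializePlatform(platform.get());
  v8::V8::Initialize();

  // Each isolate is an independent JS VM instance.
  v8::Isolate::CreateParams create_params;
  create_params.array_buffer_allocator =
      v8::ArrayBuffer::Allocator::NewDefaultAllocator();
  v8::Isolate* isolate = v8::Isolate::New(create_params);
  {
    v8::Isolate::Scope isolate_scope(isolate);
    v8::HandleScope handle_scope(isolate);
    v8::Local<v8::Context> context = v8::Context::New(isolate);
    v8::Context::Scope context_scope(context);

    // Compile and run a trivial script.
    v8::Local<v8::String> source =
        v8::String::NewFromUtf8(isolate, "'Hello' + ', World!'",
                                v8::NewStringType::kNormal).ToLocalChecked();
    v8::Local<v8::Script> script =
        v8::Script::Compile(context, source).ToLocalChecked();
    v8::Local<v8::Value> result = script->Run(context).ToLocalChecked();
    v8::String::Utf8Value utf8(isolate, result);
    std::printf("%s\n", *utf8);
  }
  // Teardown.
  isolate->Dispose();
  v8::V8::Dispose();
  v8::V8::ShutdownPlatform();
  delete create_params.array_buffer_allocator;
  return 0;
}
The platform/isolate setup happens once per process; individual scripts then compile and run inside a context, which is the part your project would call repeatedly.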
I got V8 building with CMake very easily using brew:
brew install v8
then add the following lines to CMakeLists.txt
file(GLOB_RECURSE V8_LIB # just GLOB is probably fine
"/usr/local/opt/v8/lib/*.dylib"
)
include_directories(
YOUR_INCLUDES
/usr/local/opt/v8
/usr/local/opt/v8/include
)
target_link_libraries(YOUR_PROJECT LINK_PUBLIC YOUR_LIBS ${V8_LIB})
This worked on macOS Mojave 10.14.1.
All,
I'm working on a new C++ project for an embedded system. Part of the system is some legacy Python code that we'll need to interface to. I've already prototyped a C++-to-Python interface using the various PyImport_ImportModule functions etc. provided by Python, and tested this on my host system (Ubuntu 64-bit 17.04).
However, the build system in the new project also builds all of its dependencies, so it builds Python 2.7.13 from source. The problem I am seeing is that the interface code that worked with the host system's Python does not work with the newly built-from-source Python. The error is "time.so: undefined symbol: PyExc_ValueError", and the .py file I'm trying to call from C++ does import time in one of its first few lines. I checked that time.so is present in the custom-built Python, and I did update LD_LIBRARY_PATH to include it, but this didn't help. At the end of the Python build I do see these warnings, so perhaps one of them is relevant?
Python build finished, but the necessary bits to build these modules were not found:
_bsddb _sqlite3 _ssl
_tkinter bsddb185 bz2
dbm dl gdbm
imageop readline sunaudiodev
zlib
Can anyone suggest what to try next? We are not enabling any special options or using any non-standard flags in the Python we're building from source (perhaps some extra settings are required?).
This usually happens because either:
a clean build is required, or
the wrong libpython is being linked. I would suggest starting with a clean build, then double-checking your linker flags (make sure you build for Python 2.7 and link against Python 2.7, not, say, Python 3.*).
Also, please see this discussion; it looks like a very similar issue: https://www.panda3d.org/forums/viewtopic.php?t=13222
Edit: this also might be relevant: undefined symbol: PyExc_ImportError when embedding Python in C
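To isolate the problem from the rest of your build, a minimal embedding program that only imports time should reproduce the error. A sketch, assuming Python 2.7 headers and the python2.7-config that your custom build installs:
#include <Python.h>

int main() {
  Py_Initialize();
  // Importing time fails with the same undefined-symbol error when the
  // dynamically loaded extension module can't resolve symbols from libpython.
  PyObject* mod = PyImport_ImportModule("time");
  if (mod == NULL) {
    PyErr_Print();
    Py_Finalize();
    return 1;
  }
  Py_DECREF(mod);
  Py_Finalize();
  return 0;
}
// Build against the Python you intend to embed, not the system one, e.g.:
// g++ repro.cc $(path/to/custom/python2.7-config --cflags --ldflags) -o repro
Note that python2.7-config --ldflags includes -Xlinker -export-dynamic; when libpython is linked statically into the executable, that flag is what makes symbols like PyExc_ValueError visible to dynamically loaded extension modules such as time.so.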
I have a big C++ program built with Automake, and it would be a very big hassle (practically impossible given my time constraints) to convert it to the Bazel build system. Is there any way I can use a TensorFlow-trained model (a deep convolutional net) within my program to make predictions (I don't need to do learning within the program right now, but it would be really cool if that could also be done)? Something like using TensorFlow as a library?
Thanks!
TensorFlow has a fairly narrow C API exported in the c_api.h header file. The library file can be produced by building the libtensorflow.so target. The C API has everything you need to load a pre-trained model and run inference on it to make predictions. You don't need to convert your build system to Bazel; all you need is to use Bazel once to build the //tensorflow:libtensorflow.so target, then copy libtensorflow.so and c_api.h to wherever you see fit.
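In outline, loading a frozen graph through the C API looks like the sketch below. The file name model.pb and the operation names "input"/"output" are hypothetical and depend on how your model was exported; error handling is mostly elided:
#include <cstdio>
#include <cstdlib>
#include "c_api.h"  // from the TensorFlow tree (tensorflow/c/c_api.h)

// Read a whole file into a TF_Buffer (the serialized GraphDef).
static TF_Buffer* ReadFileToBuffer(const char* path) {
  FILE* f = std::fopen(path, "rb");
  if (!f) return nullptr;
  std::fseek(f, 0, SEEK_END);
  long size = std::ftell(f);
  std::fseek(f, 0, SEEK_SET);
  void* data = std::malloc(size);
  std::fread(data, 1, size, f);
  std::fclose(f);
  TF_Buffer* buf = TF_NewBuffer();
  buf->data = data;
  buf->length = static_cast<size_t>(size);
  buf->data_deallocator = [](void* p, size_t) { std::free(p); };
  return buf;
}

int main() {
  // Import the frozen GraphDef ("model.pb" is a placeholder path).
  TF_Buffer* graph_def = ReadFileToBuffer("model.pb");
  if (!graph_def) return 1;
  TF_Status* status = TF_NewStatus();
  TF_Graph* graph = TF_NewGraph();
  TF_ImportGraphDefOptions* opts = TF_NewImportGraphDefOptions();
  TF_GraphImportGraphDef(graph, graph_def, opts, status);
  TF_DeleteImportGraphDefOptions(opts);
  TF_DeleteBuffer(graph_def);
  if (TF_GetCode(status) != TF_OK) {
    std::fprintf(stderr, "import failed: %s\n", TF_Message(status));
    return 1;
  }
  // Create a session; inference then goes through TF_SessionRun with
  // TF_Output handles looked up by operation name, e.g.:
  //   TF_Output in  = {TF_GraphOperationByName(graph, "input"), 0};
  //   TF_Output out = {TF_GraphOperationByName(graph, "output"), 0};
  //   TF_SessionRun(session, nullptr, &in, &in_tensor, 1,
  //                 &out, &out_tensor, 1, nullptr, 0, nullptr, status);
  TF_SessionOptions* sess_opts = TF_NewSessionOptions();
  TF_Session* session = TF_NewSession(graph, sess_opts, status);
  TF_DeleteSessionOptions(sess_opts);
  // ... run inference, then clean up:
  TF_CloseSession(session, status);
  TF_DeleteSession(session, status);
  TF_DeleteGraph(graph);
  TF_DeleteStatus(status);
  return 0;
}
Compile with -I pointing at the directory holding c_api.h and link with -ltensorflow; no Bazel involvement is needed in your own Automake build.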
The running environment is Ubuntu 12.04. Most of the time my Python scripts have to import some external libraries or modules before they run. When I distribute a script to other Linux machines, I have to install the necessary modules and libraries again.
Is there some way to package all necessary modules into one single Python file so it runs without installing any modules? Thanks
Just combining your files into one file is possible, but it's a bad approach. Pick one of these better solutions instead:
create a deb package with all dependencies declared. From then on, the system will automatically install all the libraries, check that they are in a correct state, and remove your files cleanly.
use rsync
fetch the current version from your version control system.
I have written a script that generates a deb package after each commit to our version control system.