Info
I built a TensorFlow (TF) model in Keras and converted it to TensorFlow Lite (TFL)
I built an Android app in Android Studio and used the Java API to run the TFL model
In the Java app, I used the TFL Support Library and the TensorFlow Lite AAR from JCenter, by adding implementation 'org.tensorflow:tensorflow-lite:+' to my build.gradle dependencies
Inference times are not great, so now I want to use TFL from Android's NDK.
So I built an exact copy of the Java app as a native (NDK) project in Android Studio, and now I'm trying to include the TFL libraries in the project. I followed TensorFlow Lite's Android guide, built the TFL library locally (producing an AAR file), and included the library in my NDK project in Android Studio.
Now I'm trying to use the TFL library in my C++ file by trying to #include it, but I get an error message: cannot find tensorflow (or whatever other name I try, depending on the name I give it in my CMakeLists.txt file).
Files
App build.gradle:
apply plugin: 'com.android.application'

android {
    compileSdkVersion 29
    buildToolsVersion "29.0.3"

    defaultConfig {
        applicationId "com.ndk.tflite"
        minSdkVersion 28
        targetSdkVersion 29
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
        externalNativeBuild {
            cmake {
                cppFlags ""
            }
        }
        ndk {
            abiFilters 'arm64-v8a'
        }
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }
    // tf lite
    aaptOptions {
        noCompress "tflite"
    }
    externalNativeBuild {
        cmake {
            path "src/main/cpp/CMakeLists.txt"
            version "3.10.2"
        }
    }
}

dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    implementation 'androidx.appcompat:appcompat:1.1.0'
    implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'androidx.test.ext:junit:1.1.1'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'
    // tflite build
    compile(name:'tensorflow-lite', ext:'aar')
}
Project build.gradle:
buildscript {
    repositories {
        google()
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:3.6.2'
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        // native tflite
        flatDir {
            dirs 'libs'
        }
    }
}

task clean(type: Delete) {
    delete rootProject.buildDir
}
CMakeLists.txt:
cmake_minimum_required(VERSION 3.4.1)

add_library( # Sets the name of the library.
             native-lib
             # Sets the library as a shared library.
             SHARED
             # Provides a relative path to your source file(s).
             native-lib.cpp )

add_library( # Sets the name of the library.
             tensorflow-lite
             # Sets the library as a shared library.
             SHARED
             # Provides a relative path to your source file(s).
             native-lib.cpp )

find_library( # Sets the name of the path variable.
              log-lib
              # Specifies the name of the NDK library that
              # you want CMake to locate.
              log )

target_link_libraries( # Specifies the target library.
                       native-lib tensorflow-lite
                       # Links the target library to the log library
                       # included in the NDK.
                       ${log-lib} )
native-lib.cpp:
#include <jni.h>
#include <string>
#include "tensorflow"

extern "C" JNIEXPORT jstring JNICALL
Java_com_xvu_f32c_1jni_MainActivity_stringFromJNI(
        JNIEnv* env,
        jobject /* this */) {
    std::string hello = "Hello from C++";
    return env->NewStringUTF(hello.c_str());
}

class FlatBufferModel {
    // Build a model based on a file. Return a nullptr in case of failure.
    static std::unique_ptr<FlatBufferModel> BuildFromFile(
            const char* filename,
            ErrorReporter* error_reporter);

    // Build a model based on a pre-loaded flatbuffer. The caller retains
    // ownership of the buffer and should keep it alive until the returned object
    // is destroyed. Return a nullptr in case of failure.
    static std::unique_ptr<FlatBufferModel> BuildFromBuffer(
            const char* buffer,
            size_t buffer_size,
            ErrorReporter* error_reporter);
};
Progress
I also tried to follow these:
Problems with using tensorflow lite C++ API in Android Studio Project
Android C++ NDK : some shared libraries refuses to link in runtime
How to build TensorFlow Lite as a static library and link to it from a separate (CMake) project?
how to set input of Tensorflow Lite C++
How can I build only TensorFlow lite and not all TensorFlow from source?
but in my case I used Bazel to build the TFL libs.
Trying to build the classification demo (label_image), I managed to build it and adb push it to my device, but when I tried to run it I got the following error:
ERROR: Could not open './mobilenet_quant_v1_224.tflite'.
Failed to mmap model ./mobilenet_quant_v1_224.tflite
I followed zimenglyu's post: trying to set android_sdk_repository / android_ndk_repository in WORKSPACE got me an error: WORKSPACE:149:1: Cannot redefine repository after any load statement in the WORKSPACE file (for repository 'androidsdk'), and placing these statements in different locations resulted in the same error.
I deleted these changes from WORKSPACE and continued with zimenglyu's post: I compiled libtensorflowLite.so and edited CMakeLists.txt so that the libtensorflowLite.so file was referenced, but left the FlatBuffer part out. The Android project compiled successfully, but there was no evident change; I still can't include any TFLite libraries.
Trying to compile TFL, I added a cc_binary to tensorflow/tensorflow/lite/BUILD (following the label_image example):
cc_binary(
    name = "native-lib",
    srcs = [
        "native-lib.cpp",
    ],
    linkopts = tflite_experimental_runtime_linkopts() + select({
        "//tensorflow:android": [
            "-pie",
            "-lm",
        ],
        "//conditions:default": [],
    }),
    deps = [
        "//tensorflow/lite/c:common",
        "//tensorflow/lite:framework",
        "//tensorflow/lite:string_util",
        "//tensorflow/lite/delegates/nnapi:nnapi_delegate",
        "//tensorflow/lite/kernels:builtin_ops",
        "//tensorflow/lite/profiling:profiler",
        "//tensorflow/lite/tools/evaluation:utils",
    ] + select({
        "//tensorflow:android": [
            "//tensorflow/lite/delegates/gpu:delegate",
        ],
        "//tensorflow:android_arm64": [
            "//tensorflow/lite/delegates/gpu:delegate",
        ],
        "//conditions:default": [],
    }),
)
When I try to build it for x86_64 and arm64-v8a, I get an error: cc_toolchain_suite rule @local_config_cc//:toolchain: cc_toolchain_suite '@local_config_cc//:toolchain' does not contain a toolchain for cpu 'x86_64'.
Checking external/local_config_cc/BUILD (which provided the error) in line 47:
cc_toolchain_suite(
    name = "toolchain",
    toolchains = {
        "k8|compiler": ":cc-compiler-k8",
        "k8": ":cc-compiler-k8",
        "armeabi-v7a|compiler": ":cc-compiler-armeabi-v7a",
        "armeabi-v7a": ":cc-compiler-armeabi-v7a",
    },
)
and these are the only two cc_toolchains found. Searching the repository for "cc-compiler-" I only found "aarch64", which I assume is for 64-bit ARM, but nothing with "x86_64". There is "x64_windows", though, and I'm on Linux.
Trying to build with aarch64 like so:
bazel build -c opt --fat_apk_cpu=aarch64 --cpu=aarch64 --host_crosstool_top=@bazel_tools//tools/cpp:toolchain //tensorflow/lite/java:tensorflow-lite
results in an error:
ERROR: /.../external/local_config_cc/BUILD:47:1: in cc_toolchain_suite rule @local_config_cc//:toolchain: cc_toolchain_suite '@local_config_cc//:toolchain' does not contain a toolchain for cpu 'aarch64'
Using the libraries in Android Studio:
I was able to build the library for the x86_64 architecture by changing the soname in the build config and using full paths in CMakeLists.txt. This resulted in a .so shared library. I was also able to build the library for arm64-v8a using the TFLite Docker container, by adjusting the aarch64_makefile.inc file; I did not change any build options, and let build_aarch64_lib.sh build whatever it builds. This resulted in a .a static library.
So now I have two TFLite libs, but I'm still unable to use them (I can't #include "..." anything for example).
When trying to build the project, using only x86_64 works fine, but trying to include the arm64-v8a library results in a ninja error: '.../libtensorflow-lite.a', needed by '.../app/build/intermediates/cmake/debug/obj/armeabi-v7a/libnative-lib.so', missing and no known rule to make it.
Different approach - build/compile source files with Gradle:
I created a Native C++ project in Android Studio
I took the basic C/C++ source files and headers from Tensorflow's lite directory, and created a similar structure in app/src/main/cpp, in which I include the (A) tensorflow, (B) absl and (C) flatbuffers files
I changed the #include "tensorflow/... lines in all of tensorflow's header files to relative paths so the compiler can find them.
In the app's build.gradle I added a no-compression line for the .tflite file: aaptOptions { noCompress "tflite" }
I added an assets directory to the app
In native-lib.cpp I added some example code from the TFLite website
Tried to build the project with the source files included (build target is arm64-v8a).
I get an error:
/path/to/Android/Sdk/ndk/20.0.5594570/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/c++/v1/memory:2339: error: undefined reference to 'tflite::impl::Interpreter::~Interpreter()'
in <memory>, line 2339 is the "delete __ptr;" line:
_LIBCPP_INLINE_VISIBILITY void operator()(_Tp* __ptr) const _NOEXCEPT {
    static_assert(sizeof(_Tp) > 0,
                  "default_delete can not delete incomplete type");
    static_assert(!is_void<_Tp>::value,
                  "default_delete can not delete incomplete type");
    delete __ptr;
}
Question
How can I include the TFLite libraries in Android Studio, so I can run a TFL inference from the NDK?
Alternatively - how can I use gradle (currently with cmake) to build and compile the source files?
I use native TFL with the C API in the following way:
SETUP:
Download the latest version of the TensorFlow Lite AAR file
Rename the downloaded .aar file to .zip and unzip it to get the shared library (.so file)
Download all header files from the c directory in the TFL repository
Create an Android C++ app in Android Studio
Create a jni directory (New -> Folder -> JNI Folder) in app/src/main and also create architecture sub-directories in it (arm64-v8a or x86_64 for example)
Put all header files in the jni directory (next to the architecture directories), and put the shared library inside the architecture directory/ies
Open the CMakeLists.txt file and include an add_library stanza for the TFL library, the path to the shared library in a set_target_properties stanza and the headers in include_directories stanza (see below, in NOTES section)
Sync Gradle
USAGE:
In native-lib.cpp include the headers, for example:
#include "../jni/c_api.h"
#include "../jni/common.h"
#include "../jni/builtin_ops.h"
TFL functions can be called directly, for example:
TfLiteModel* model = TfLiteModelCreateFromFile(full_path);
// note: TfLiteInterpreterCreate takes an options argument, which may be null
TfLiteInterpreter* interpreter = TfLiteInterpreterCreate(model, /*optional_options=*/nullptr);
TfLiteInterpreterAllocateTensors(interpreter);

TfLiteTensor* input_tensor =
        TfLiteInterpreterGetInputTensor(interpreter, 0);
const TfLiteTensor* output_tensor =
        TfLiteInterpreterGetOutputTensor(interpreter, 0);

TfLiteStatus from_status = TfLiteTensorCopyFromBuffer(
        input_tensor,
        input_data,
        TfLiteTensorByteSize(input_tensor));

TfLiteStatus interpreter_invoke_status = TfLiteInterpreterInvoke(interpreter);

TfLiteStatus to_status = TfLiteTensorCopyToBuffer(
        output_tensor,
        output_data,
        TfLiteTensorByteSize(output_tensor));
NOTES:
In this setup SDK version 29 was used
cmake environment also included cppFlags "-frtti -fexceptions"
CMakeLists.txt example:
set(JNI_DIR ${CMAKE_CURRENT_SOURCE_DIR}/../jni)

add_library(tflite-lib SHARED IMPORTED)
set_target_properties(tflite-lib
        PROPERTIES IMPORTED_LOCATION
        ${JNI_DIR}/${ANDROID_ABI}/libtfl.so)

include_directories(${JNI_DIR})

target_link_libraries(
        native-lib
        tflite-lib
        ...)
I have also struggled with building TF Lite C++ APIs for Android. Fortunately, I managed to make it work.
The problem is that we need to configure the Bazel build process before running the bazel build ... commands. The TF Lite Android Quick Start guide doesn't mention it.
Step-by-step guide (https://github.com/cuongvng/TF-Lite-Cpp-API-for-Android):
Step 1: Install Bazel
Step 2: Clone the TensorFlow repo
git clone https://github.com/tensorflow/tensorflow
cd ./tensorflow/
Step 3: Configure Android build
Before running the bazel build ... command, you need to configure the build process. Do so by executing
./configure
The configure script is at the root of the tensorflow directory, which you cd'd into in Step 2.
Now you have to input some configurations on the command line:
$ ./configure
You have bazel 3.7.2-homebrew installed.
Please specify the location of python. [Default is /Library/Developer/CommandLineTools/usr/bin/python3]: /Users/cuongvng/opt/miniconda3/envs/style-transfer-tf-lite/bin/python
First is the location of python, because ./configure executes the configure.py file.
Choose the location that has Numpy installed, otherwise the later build will fail.
Here I point it to the python executable of a conda environment.
Next,
Found possible Python library paths:
/Users/cuongvng/opt/miniconda3/envs/style-transfer-tf-lite/lib/python3.7/site-packages
Please input the desired Python library path to use. Default is [/Users/cuongvng/opt/miniconda3/envs/style-transfer-tf-lite/lib/python3.7/site-packages]
I press Enter to use the default site-packages, which contains necessary libraries to build TF.
Next,
Do you wish to build TensorFlow with ROCm support? [y/N]: N
No ROCm support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: N
No CUDA support will be enabled for TensorFlow.
Do you wish to download a fresh release of clang? (Experimental) [y/N]: N
Clang will not be downloaded.
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -Wno-sign-compare]:
Key in as shown above; on the last line, press Enter.
Then it asks you whether to configure ./WORKSPACE for Android builds, type y to add configurations.
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: y
Searching for NDK and SDK installations.
Please specify the home path of the Android NDK to use. [Default is /Users/cuongvng/library/Android/Sdk/ndk-bundle]: /Users/cuongvng/Library/Android/sdk/ndk/21.1.6352462
That is the home path of the Android NDK (version 21.1.6352462) on my local machine.
Note that when you ls the path, it must include platforms, e.g.:
$ ls /Users/cuongvng/Library/Android/sdk/ndk/21.1.6352462
CHANGELOG.md build ndk-stack prebuilt source.properties wrap.sh
NOTICE meta ndk-which python-packages sources
NOTICE.toolchain ndk-build package.xml shader-tools sysroot
README.md ndk-gdb platforms simpleperf toolchains
For now I ignore the resulting WARNING, then choose the min NDK API level
WARNING: The NDK version in /Users/cuongvng/Library/Android/sdk/ndk/21.1.6352462 is 21, which is not supported by Bazel (officially supported versions: [10, 11, 12, 13, 14, 15, 16, 17, 18]). Please use another version. Compiling Android targets may result in confusing errors.
Please specify the (min) Android NDK API level to use. [Available levels: ['16', '17', '18', '19', '21', '22', '23', '24', '26', '27', '28', '29']] [Default is 21]: 29
Next
Please specify the home path of the Android SDK to use. [Default is /Users/cuongvng/library/Android/Sdk]: /Users/cuongvng/Library/Android/sdk
Please specify the Android SDK API level to use. [Available levels: ['28', '29', '30']] [Default is 30]: 30
Please specify an Android build tools version to use. [Available versions: ['29.0.2', '29.0.3', '30.0.3', '31.0.0-rc1']] [Default is 31.0.0-rc1]: 30.0.3
That is all for the Android build configs. Choose N for all of the questions that appear later.
Step 4: Build the shared library (.so)
Now you can run the bazel build command to generate libraries for your target architecture:
bazel build -c opt --config=android_arm //tensorflow/lite:libtensorflowlite.so
# or
bazel build -c opt --config=android_arm64 //tensorflow/lite:libtensorflowlite.so
It should work without errors.
The generated library will be saved at ./bazel-bin/tensorflow/lite/libtensorflowlite.so.
Last week I received my brand new Colibri VF61 with the Aster carrier board from Toradex.
I followed Toradex's guide on how to prepare the board for cross-compiling with Qt.
Everything in the tutorial went perfectly; however, when I deploy my app, everything goes fine until I open the executable on my target device, where I get the following message:
error while loading shared libraries: libQt5PrintSupport.so.5: cannot open shared object file: No such file or directory
I checked whether I had any Qt files on my target device at all, and there weren't any, so I went to the sysroot folder on my host machine and copied all the Qt files to my target device (Qt5PrintSupport was among them). But even after I copied all the files to the exact same locations as in my sysroot, the same error kept appearing.
The files I copied were:
LibIcal Qt5Core Qt5OpenGLExtensions Qt5Svg
PulseAudio Qt5DBus Qt5Positioning Qt5SystemInfo
Qt5 Qt5Declarative Qt5PrintSupport Qt5Test
Qt53DCore Qt5Designer Qt5PublishSubscribe Qt5UiPlugin
Qt53DExtras Qt5Enginio Qt5Qml Qt5UiTools
Qt53DInput Qt5Gui Qt5Quick Qt5WebChannel
Qt53DLogic Qt5Help Qt5QuickTest Qt5WebKit
Qt53DQuick Qt5LinguistTools Qt5QuickWidgets Qt5WebKitWidgets
Qt53DQuickExtras Qt5Location Qt5Script Qt5WebSockets
Qt53DQuickInput Qt5Multimedia Qt5ScriptTools Qt5Widgets
Qt53DQuickRender Qt5MultimediaWidgets Qt5Sensors Qt5X11Extras
Qt53DRender Qt5Network Qt5SerialPort Qt5Xml
Qt5Bluetooth Qt5Nfc Qt5ServiceFramework Qt5XmlPatterns
Qt5Concurrent Qt5OpenGL Qt5Sql libxml2
Inside /usr/lib/cmake
and:
imports libexec mkspecs plugins qml
folders to /usr/lib/qt5
I have noticed that the problem may be that I don't have the "lib" folder inside /usr/lib/qt5; however, I don't know how to create it, since it wasn't in my sysroot.
Summing up: I want to run my cross-compiled app, but the lib folder is missing and I don't know how to create it or link to it.
Having a library in the same path as the app using it doesn't necessarily mean that your app can find it. Follow one of the methods below:
Install the required Qt libraries into the system's standard lib directory (e.g. /usr/lib/), or
Set the environment variable LD_LIBRARY_PATH to where your Qt libraries are (the convention is generally a bash script that sets it and then launches your app), or
At compile time, set the rpath to the location of the Qt libraries folder (-Wl,-rpath,<dir> with gcc).
I'm trying to build a qbs project using the Leap Motion library, but on running the project I am given the following error:
dyld: Library not loaded: @loader_path/libLeap.dylib
Referenced from: /Users/pball/Work/Code/Qt/build-LeapTest-Desktop-Debug/qtc_Desktop_95cbad6a-debug/install-root/LeapTest
Reason: image not found
My qbs file:
import qbs

CppApplication {
    consoleApplication: true
    files: "main.cpp"

    Group { // Properties for the produced executable
        fileTagsFilter: product.type
        qbs.install: true
    }

    cpp.includePaths: [".", "/Users/pball/LeapSDK/include"]
    cpp.libraryPaths: ["/Users/pball/LeapSDK/lib"]
    cpp.dynamicLibraries: "Leap"
}
libLeap.dylib is in that location.
Using Qt 5.6.0
New to using qbs so any help / pointers greatly appreciated.
This is not a qbs-specific issue, but rather requires understanding of how dynamic libraries are loaded on macOS. Please check the documentation on dyld and Run-Path Dependent Libraries.
That said, based on the install name of your dependent shared library libLeap.dylib, if you copy it to the same directory as your LeapTest application binary, it should be loaded successfully.
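The copy step can be scripted; the paths below are taken from the question and are assumptions about that particular machine:

```shell
# Copy libLeap.dylib next to the LeapTest binary so dyld can resolve the
# @loader_path-relative install name recorded in the library.
SDK_LIB="/Users/pball/LeapSDK/lib"   # Leap SDK location (assumed)
APP_DIR="/Users/pball/Work/Code/Qt/build-LeapTest-Desktop-Debug/qtc_Desktop_95cbad6a-debug/install-root"
cp "$SDK_LIB/libLeap.dylib" "$APP_DIR/"
```

Note that this must be repeated whenever the install root is cleaned, which is why embedding a proper install name or run path in the SDK library is the longer-term fix.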
I am a mac user running 10.10.5 Yosemite and Xcode 6.3.2. In Documents there is a folder (generated by a command line tool project written in C++) called 'Game' with this structure:
Game:
Game.xcodeproj (my project)
Game:
main.cpp (my c++ source)
Build:
Products:
Debug:
Game (an executable)
SDL2.framework (a development library)
I want to link to the SDL2.framework in Build/Products/Debug in such a way that it will run on any Mac, from this specific folder.
I have performed these steps:
I dragged SDL2.framework from Build/Products/Debug into my Project Navigator. I unchecked 'copy if needed.'
This automatically added the 'Link Binary with Libraries' build phase in which SDL2.framework is included.
SDL2.framework is referenced relative to group, in 'Build/Products/Debug/SDL2.framework', as desired.
In my main.cpp I write:
#include "SDL2/SDL.h"
The generated executable runs on my machine no matter where the 'Game' folder highest in the hierarchy is.
My project is not referencing the SDL2.framework in /Library/Frameworks. I can change the name of the SDL2.framework there and the program still runs. This suggests it is taking it from /Build/Products/Debug as I want it to.
However, when I take this whole folder to a Mac identical to mine and run the executable, it fails with this error:
Last login: Sat Mar 26 11:32:08 on ttys002
[~/ 1> /Volumes/NO\ NAME/Game/Build/Products/Debug/Game ; exit;
dyld: Library not loaded: @rpath/SDL2.framework/Versions/A/SDL2
Referenced from: /Volumes/NO NAME/Game/Build/Products/Debug/Game
Reason: image not found
Trace/BPT trap
logout
[Process completed]
Having spent hours searching for a solution, I am dumbfounded. What am I doing wrong?
I am getting the following issue:
/Users/luke/Desktop/trainHOG/trainhog ; exit;
dyld: Library not loaded: lib/libopencv_core.3.0.dylib
Referenced from: /Users/luke/Desktop/trainHOG/trainhog
Reason: image not found
Trace/BPT trap: 5
logout
I am using a Mac running OS X 10.9.5 with OpenCV 3.0 alpha.
The library in question is definitely in the folder. I have tried deleting it and pasting it back into the folder, I have completely deleted and reinstalled OpenCV and MacPorts, and I have tried export DYLD_LIBRARY_PATH = "path to dynamic libs here..", but nothing has worked. I have even rebooted my computer on several occasions!
Does anyone have any further suggestions? I am out of ideas.
OpenCV 3.3
OSX 10.13
First, run a test: compile with clang++ -o a -I ./include -L ./lib -lopencv_core.your.version to generate an executable file a, then run it and see whether you get the same error message.
If you do, the reason for the error is that the library cannot be found.
If you want to solve the error in the terminal, use export DYLD_LIBRARY_PATH=your/lib:$DYLD_LIBRARY_PATH.
If you want to solve the error in Xcode: in the Build Settings page, go to "Runpath Search Paths" and add your lib path.
If you use
export DYLD_LIBRARY_PATH = "path to dynamic libs here.."
is it applied to the environment of your program?
You can check the environment variables of a running process with
ps -p <pid> -wwwE
Does this help?
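One detail worth double-checking in the command quoted above: shell assignments must not have spaces around '='. A quick sanity check that an exported variable actually reaches a child process (the path is a placeholder):

```shell
# 'export VAR = value' (with spaces) is a shell syntax error; the correct
# form has no spaces around '=':
export DYLD_LIBRARY_PATH="/path/to/dynamic/libs"
# A child process sees the variable only if it was exported correctly:
sh -c 'printenv DYLD_LIBRARY_PATH'   # prints /path/to/dynamic/libs
```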
If you are having this problem:
dyld: Library not loaded: *.dylib ... Reason: no suitable image found.
it means your *.dylib files are not signed with your Apple ID developer account, and there are two ways to solve that:
1) The right way: code-sign all the offending files with this command:
codesign -f -s "Mac Developer: YOURDEVELOPEREMAIL" /usr/local/opt/*/lib/*.dylib
2) The temporary way (until you deploy to the App Store): inside Xcode, go to [YourProjectFile] --> [YourTargetFile] --> "Signing & Capabilities" --> and enable "Disable Library Validation".
Done :D
This seems to be a bug in some versions of OpenCV's CMake configuration files, which incorrectly record relative paths in the installed dylibs; it is reasonably easy to fix.
This answer on answers.opencv.org addresses the issue: in OpenCVModule.cmake and every instance of CMakeLists.txt, replace INSTALL_NAME_DIR lib with INSTALL_NAME_DIR ${CMAKE_INSTALL_PREFIX}/lib.
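As a sketch, the replacement looks like this in CMake; the ${the_module} variable name follows OpenCV's own build scripts and should be treated as an assumption against your exact OpenCV version:

```cmake
# Before: a relative install name ("lib/...") is baked into each dylib, so
# dyld resolves it relative to the current working directory and fails:
#   set_target_properties(${the_module} PROPERTIES INSTALL_NAME_DIR lib)

# After: record an absolute path so the installed dylib is found anywhere:
set_target_properties(${the_module} PROPERTIES
    INSTALL_NAME_DIR "${CMAKE_INSTALL_PREFIX}/lib")
```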