I am currently trying to use TensorFlow's shared library in a non-Bazel project, so I created a .so file from TensorFlow using Bazel.
But when I launch a C++ program that uses both OpenCV and TensorFlow, I get the following error:
[libprotobuf FATAL external/protobuf/src/google/protobuf/stubs/common.cc:78] This program was compiled against version 2.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.1.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/build/mir-pkdHET/mir-0.21.0+16.04.20160330/obj-x86_64-linux-gnu/src/protobuf/mir_protobuf.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): This program was compiled against version 2.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.1.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/build/mir-pkdHET/mir-0.21.0+16.04.20160330/obj-x86_64-linux-gnu/src/protobuf/mir_protobuf.pb.cc".)
Abandon (core dumped)
Can you help me?
Thank you
You should rebuild TensorFlow with a linker script to avoid making third-party symbols global in the shared library that Bazel creates. This is how the Android Java/JNI library for TensorFlow is able to coexist with the pre-installed protobuf library on the device (look at the build rules in tensorflow/contrib/android for a working example).
Here's a BUILD file that I adapted from the Android library to do this:
package(default_visibility = ["//visibility:public"])
licenses(["notice"]) # Apache 2.0
exports_files(["LICENSE"])
load(
    "//tensorflow:tensorflow.bzl",
    "tf_copts",
    "if_android",
)

exports_files([
    "version_script.lds",
])
# Build the native .so.
# bazel build //tensorflow/contrib/android_ndk:libtensorflow_cc_inference.so \
#   --crosstool_top=//external:android/crosstool \
#   --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
#   --cpu=armeabi-v7a
LINKER_SCRIPT = "//tensorflow/contrib/android:version_script.lds"
cc_binary(
    name = "libtensorflow_cc_inference.so",
    srcs = [],
    copts = tf_copts() + [
        "-ffunction-sections",
        "-fdata-sections",
    ],
    linkopts = if_android([
        "-landroid",
        "-latomic",
        "-ldl",
        "-llog",
        "-lm",
        "-z defs",
        "-s",
        "-Wl,--gc-sections",
        "-Wl,--version-script",  # This line must be directly followed by LINKER_SCRIPT.
        LINKER_SCRIPT,
    ]),
    linkshared = 1,
    linkstatic = 1,
    tags = [
        "manual",
        "notap",
    ],
    deps = [
        "//tensorflow/core:android_tensorflow_lib",
        LINKER_SCRIPT,
    ],
)
And the contents of version_script.lds:
{
  global:
    extern "C++" {
      tensorflow::*;
    };
  local:
    *;
};
This will make everything in the tensorflow namespace global and available through the library, while hiding the rest and preventing it from conflicting with protobuf.
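To confirm the script did its job, you can inspect the dynamic symbol table of the resulting library (a quick sanity check, not part of the original recipe; the output path is an assumption based on the build command above):

# Only tensorflow:: symbols should be exported; ideally this prints 0:
nm -D -C bazel-bin/tensorflow/contrib/android_ndk/libtensorflow_cc_inference.so | grep -c 'google::protobuf'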
(wasted a ton of time on this so I hope it helps!)
The error indicates that the program was compiled using headers (.h files) from protobuf 2.6.1. These headers are typically found in /usr/include/google/protobuf or /usr/local/include/google/protobuf, though they could be in other places depending on your OS and how the program is being built. You need to update these headers to version 3.1.0 and recompile the program.
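To see which versions are actually involved, you can compare the headers against the runtime (a quick check, not from the original answer; the header path is the typical Ubuntu location):

# Version of the protobuf compiler/runtime on the PATH:
protoc --version
# The headers encode their version as an integer, e.g. 3001000 for 3.1.0:
grep GOOGLE_PROTOBUF_VERSION /usr/include/google/protobuf/stubs/common.h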
This is indeed a pretty serious problem! I get an error similar to yours:
$./ceres_single_test
[libprotobuf FATAL google/protobuf/stubs/common.cc:78] This program was compiled against version 2.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.1.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/build/mir-pkdHET/mir-0.21.0+16.04.20160330/obj-x86_64-linux-gnu/src/protobuf/mir_protobuf.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
Aborted
My workaround:
cd /usr/lib/x86_64-linux-gnu
sudo mkdir BACKUP
sudo mv libmirprotobuf.so* ./BACKUP/
Now the executable under test works, cool. What is not cool, however, is that things like gedit no longer work unless they are run from a shell that has the BACKUP path added to LD_LIBRARY_PATH :-(
Hopefully there's a better fix out there?
The error complains that the program's Protocol Buffer runtime library is not compatible with the installed version. This error comes from the GTK3 library: GTK3 uses Protocol Buffers 2.6.1, so if you build OpenCV with GTK3 support, you get this error. The easiest way to fix this is to use Qt instead of GTK3.
If you use the CMake GUI to configure OpenCV, just select Qt support instead of GTK3. You can install Qt using the following command:
sudo apt install qtbase5-dev
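If you configure OpenCV from the command line instead, the equivalent switches would look roughly like this (a sketch, assuming an OpenCV source checkout; WITH_QT and WITH_GTK are standard OpenCV CMake options):

# Reconfigure OpenCV to use Qt for its GUI backend instead of GTK
cd opencv && mkdir -p build && cd build
cmake -D WITH_QT=ON -D WITH_GTK=OFF ..
make -j"$(nproc)"
sudo make install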
Rebuild libprotobuf with -Dprotobuf_BUILD_SHARED_LIBS=ON,
then run make install to overwrite the older version.
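A minimal sketch of that rebuild, assuming a protobuf source checkout whose CMake project lives in the cmake/ subdirectory (the paths and install prefix are assumptions):

cd protobuf/cmake && mkdir -p build && cd build
cmake -Dprotobuf_BUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/usr/local ..
make -j"$(nproc)"
sudo make install
sudo ldconfig   # refresh the dynamic linker cache so the new .so is found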
I have already posted a question about an access violation in the TensorFlow Lite C++ API. No one has answered it so far, and I believe the error I made is in selecting the wrong header and library files from the Bazel build.
The steps that I followed to get the TensorFlow Lite headers and libraries are from a YouTube tutorial and from the TensorFlow documentation:
Get the required Python (for me, Python 3.9.5)
Install the required packages locally
Install Bazel (for me, 3.7.2) and MSYS2 (after installation, run pacman -S git patch unzip) and add them to the Path
Check VS Build Tools 2019 for C++ (I have VS 2019 Community with MSVC v142 & Windows 10 SDK)
Download and unzip the TensorFlow sources from GitHub (release 2.5.3)
Inside the TensorFlow sources, use python .\configure.py to configure the Bazel build (I only answered Yes to overriding eigen strong inline; the rest is kept at the default values)
Then I opened a Git Bash prompt inside the TensorFlow source and ran bazel build -c opt //tensorflow/lite:tensorflowlite
After a successful build I get the "bazel-bin", "bazel-out", "bazel-tensorflow-2.5.3" and "bazel-testlogs" folders.
I created the folders tensorflow/include/tensorflow/lite & core and tensorflow/include/flatbuffers for the headers, and finally tensorflow/lib for the libraries.
I copied tensorflowlite.dll & tensorflowlite.dll.if.lib from the build directory (tensorflow-2.5.3\bazel-bin\tensorflow\lite) into the tensorflow/lib directory, together with flatbuffers.lib (from tensorflow-2.5.3\bazel-bin\external\flatbuffers\src).
I copied the tensorflow-2.5.3\bazel-bin\external\flatbuffers\src_virtual_includes\flatbuffers\flatbuffers headers into the tensorflow/include/flatbuffers directory.
I copied tensorflow-2.5.3\tensorflow\lite and tensorflow-2.5.3\tensorflow\core from the original sources into the tensorflow/include/tensorflow/lite & core directories.
After those steps, I could create a new VS project and add the linker and include information, and I created the following short example to read the input layer.
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/optional_debug_tools.h"

#define TFLITE_MINIMAL_CHECK(x)                                  \
    if (!(x))                                                    \
    {                                                            \
        fprintf(stderr, "Error at %s:%d\n", __FILE__, __LINE__); \
        exit(1);                                                 \
    }

int main()
{
    std::string filename = "C:/project/tflitetesting/models/classification/mobilenet_v1_1.0_224_quant.tflite";

    std::unique_ptr<tflite::FlatBufferModel> model =
        tflite::FlatBufferModel::BuildFromFile(filename.c_str());
    tflite::ops::builtin::BuiltinOpResolver resolver;
    tflite::InterpreterBuilder builder(*model, resolver);
    std::unique_ptr<tflite::Interpreter> interpreter;
    builder(&interpreter);

    TFLITE_MINIMAL_CHECK(interpreter->AllocateTensors() == kTfLiteOk);
    printf("=== Pre-invoke Interpreter State ===\n");
    tflite::PrintInterpreterState(interpreter.get());

    interpreter->SetAllowFp16PrecisionForFp32(true);
    interpreter->SetNumThreads(1);

    // Get Input Tensor Dimensions
    unsigned char* input = interpreter->typed_input_tensor<unsigned char>(0);
}
But I am still receiving the access violation exception inside interpreter.h at:
const Subgraph& primary_subgraph() const {
    return *subgraphs_.front();  // Safe as subgraphs_ always has 1 entry.
}
What am I doing wrong? I don't want to build the shared library, since the target (Coral Edge) has direct access to those functions (e.g. interpreter->typed_input_tensor<unsigned char>(0)) too.
The thing is, you cannot debug a Release (optimized) version.
With the command bazel build -c opt //tensorflow/lite:tensorflowlite you create a "Release" version of the DLLs and LIBs.
Therefore just run bazel build -c dbg //tensorflow/lite:tensorflowlite to get the debug TFLite C++ version.
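Independent of the build flavor, that particular crash site is also what you get when the model or interpreter pointer ends up null, so it may be worth mirroring the guards from TFLite's minimal example (a sketch adapted from the question's code, not part of the original answer):

// BuildFromFile returns nullptr if the path is wrong or the file is invalid
std::unique_ptr<tflite::FlatBufferModel> model =
    tflite::FlatBufferModel::BuildFromFile(filename.c_str());
TFLITE_MINIMAL_CHECK(model != nullptr);

tflite::ops::builtin::BuiltinOpResolver resolver;
tflite::InterpreterBuilder builder(*model, resolver);
std::unique_ptr<tflite::Interpreter> interpreter;
builder(&interpreter);
// InterpreterBuilder can fail and leave the interpreter null
TFLITE_MINIMAL_CHECK(interpreter != nullptr);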
I'm trying to cross-compile Google Breakpad. I'm executing the following commands:
$ ./configure --prefix=/opt/breakpad CFLAGS="-Os" CC=PATH_ARM_COMPILER/arm-linux-gcc CXX=PATH_ARM_COMPILER/arm-linux-g++ --host=arm
$ make
$ make install
It generates and installs some files in the prefix path. In the include path it has:
|-common
|-google_breakpad
|-processor
but it should have:
|-client
|-common
|-google_breakpad
|-processor
|-third_party
It seems to be a problem related to Breakpad client. What should be the right way to cross-compile Breakpad?
My host is a Ubuntu 18.04 x86-64, target ARM-32.
I have reproduced your problem on my side. In fact, the issue is related to the --host configure flag.
Breakpad documentation shows that:
when building on Linux it will also build the client libraries.
So, in order to get the client binaries and headers, you should use the correct compiler prefix.
For example if you are using the GNU cross compiler arm-linux-gnueabihf-gcc, the --host flag value should be arm-linux-gnueabihf.
In your case (arm-linux-gcc) try to change your configure command as following:
./configure --prefix=/opt/breakpad CFLAGS="-Os" CC=PATH_ARM_COMPILER/arm-linux-gcc CXX=PATH_ARM_COMPILER/arm-linux-g++ --host=arm-linux
Yet another Linux build newb here, struggling to build mariadb-client for Android using the NDK.
I have already successfully built openssl and libiconv, which are prerequisites.
Here is what I am doing:
export ANDROID_NDK_ROOT="/home/dev/android-ndk-r12b"
SR="$ANDROID_NDK_ROOT/platforms/android-16/arch-arm"
BR="$ANDROID_NDK_ROOT/toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64/bin/arm-linux-androideabi-"
mkdir build && cd build
PKG_CONFIG_PATH=$SR/usr/lib/pkgconfig cmake -DCMAKE_AR="$BR"ar -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER="$BR"gcc -DCMAKE_C_FLAGS=--sysroot=$SR -DCMAKE_INSTALL_PREFIX=$SR/usr -DCMAKE_LINKER="$BR"ld -DCMAKE_NM="$BR"nm -DCMAKE_OBJCOPY="$BR"objcopy -DCMAKE_OBJDUMP="$BR"objdump -DCMAKE_RANLIB="$BR"ranlib -DCMAKE_STRIP="$BR"strip -DWITH_EXTERNAL_ZLIB=ON -DICONV_INCLUDE_DIR=$SR/usr/include -DICONV_LIBRARIES=$SR/usr/lib/libiconv.a -DZLIB_INCLUDE_DIR=$SR/usr/include -DZLIB_LIBRARY=$SR/usr/lib/libz.so ../
make install
To break down that last part so it is more readable:
PKG_CONFIG_PATH=$SR/usr/lib/pkgconfig
cmake
-DCMAKE_AR="$BR"ar
-DCMAKE_BUILD_TYPE=Release
-DCMAKE_C_COMPILER="$BR"gcc
-DCMAKE_C_FLAGS=--sysroot=$SR
-DCMAKE_INSTALL_PREFIX=$SR/usr
-DCMAKE_LINKER="$BR"ld
-DCMAKE_NM="$BR"nm
-DCMAKE_OBJCOPY="$BR"objcopy
-DCMAKE_OBJDUMP="$BR"objdump
-DCMAKE_RANLIB="$BR"ranlib
-DCMAKE_STRIP="$BR"strip
-DWITH_EXTERNAL_ZLIB=ON
-DICONV_INCLUDE_DIR=$SR/usr/include
-DICONV_LIBRARIES=$SR/usr/lib/libiconv.a
-DZLIB_INCLUDE_DIR=$SR/usr/include
-DZLIB_LIBRARY=$SR/usr/lib/libz.so
The first error I got was that program_invocation_short_name was undefined in this bit of code:
#elif defined(_GNU_SOURCE)
const char * appname = program_invocation_short_name;
#elif defined(WIN32)
I couldn't find out why this is or how to fix it, so I decided to cheat my way through by assigning an empty string to it. Possibly with negative repercussions, but I noticed the source doing the same thing a few lines down, so I decided to give it a go nonetheless.
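For clarity, the stopgap edit looked roughly like this (the empty-string value is my illustration of the cheat described above, not the project's code):

#elif defined(_GNU_SOURCE)
/* program_invocation_short_name is a glibc extension that Android's Bionic
   libc does not provide; stub it out as a stopgap. */
const char * appname = "";
#elif defined(WIN32)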
Another build attempt, and now I am getting undefined references for iconv functions:
CMakeFiles/mariadb_obj.dir/ma_charset.c.o:ma_charset.c:function mariadb_convert_string: error: undefined reference to 'iconv_open'
CMakeFiles/mariadb_obj.dir/ma_charset.c.o:ma_charset.c:function mariadb_convert_string: error: undefined reference to 'iconv'
CMakeFiles/mariadb_obj.dir/ma_charset.c.o:ma_charset.c:function mariadb_convert_string: error: undefined reference to 'iconv_close'
CMakeFiles/mariadb_obj.dir/ma_context.c.o:ma_context.c:function my_context_spawn_internal: error: undefined reference to 'setcontext'
CMakeFiles/mariadb_obj.dir/ma_context.c.o:ma_context.c:function my_context_continue: error: undefined reference to 'swapcontext'
The libraries are definitely there, as defined in the configuration above. Maybe that's a side effect of the cheat above?
Or maybe something else entirely is going wrong?
Once again, I'm a complete newb in this regard, but I have a newb hunch that it may have something to do with CMake. Is it possibly using the host machine's CMake when it should be using some "Android toolchain" CMake instead? I couldn't find much info on that either, but it could explain why it isn't picking up the program_invocation_short_name thingie and the libs.
So, any ideas what is going wrong and how to fix it?
The build environment should be clear from the first few lines of code, but just in case: it's Ubuntu 16.04 x64, using NDK r12b and the GCC 4.9 toolchain. I am using the following library versions: libiconv 1.15, openssl 1.1.0f and mariadb_connector_c 3.0.3.
Currently MariaDB Connector/C doesn't support the Android NDK; this is planned for the upcoming 3.0.3 release.
To build MariaDB Connector/C with the Android NDK you need to check out the 3.0-portable branch of MariaDB Connector/C.
Iconv support currently doesn't work; the same is true for the Kerberos/GSSAPI authentication plugin.
To build MariaDB Connector/C with the Android NDK you additionally have to specify the following CMake parameters:
-DWITH_ICONV=OFF -DWITH_DYNCOL=OFF -DAUTH_GSSAPI_TYPE=OFF
If you don't need SSL/TLS support, you can disable it by specifying
-DWITH_SSL=OFF
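Putting this together with the configuration from the question, the configure step would look roughly like this (a sketch that assumes the 3.0-portable branch is checked out and reuses the $SR and $BR variables defined above; not an official invocation):

PKG_CONFIG_PATH=$SR/usr/lib/pkgconfig cmake \
    -DCMAKE_C_COMPILER="$BR"gcc \
    -DCMAKE_C_FLAGS=--sysroot=$SR \
    -DCMAKE_INSTALL_PREFIX=$SR/usr \
    -DWITH_EXTERNAL_ZLIB=ON \
    -DZLIB_INCLUDE_DIR=$SR/usr/include \
    -DZLIB_LIBRARY=$SR/usr/lib/libz.so \
    -DWITH_ICONV=OFF \
    -DWITH_DYNCOL=OFF \
    -DAUTH_GSSAPI_TYPE=OFF \
    -DWITH_SSL=OFF \
    ../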
I am trying to cross-compile NPM sqlite3 with sqlcipher support. I am using Ubuntu 16.04 to cross-compile for a Linux ARMv7-based SOC (system on chip).
So I started by cross-compiling OpenSSL in order to build sqlcipher for ARM. I successfully cross-compiled sqlcipher to produce a static library (libsqlcipher.a).
Now I am working on the NodeJS side of the project. I need sqlite with sqlcipher support, compiled for ARM. I have been using the SOC's SDK for the build so far.
I am using node v4.6.1 and npm v2.15.9 to cross-compile. I made sure I have the same versions installed on Ubuntu as on the SOC.
The command I use to cross compile is as follows :
npm install sqlite3 --target_arch=arm --enable-static=yes --build-from-source --sqlite_libname=sqlcipher -fPIC --sqlite=home/onkar/Library/sqlcipher-master/.libs --verbose
I exported the location of libsqlcipher.a to LDFLAGS. I get the following error when I try to cross-compile. Can someone help me with this error?
/home/linuximage/sdk/sysroots/x86_64-angstromsdk-linux/usr/libexec/arm-angstrom-linux-gnueabi/gcc/arm-angstrom-linux-gnueabi/5.2.1/real-ld: error: /home/Library/sqlcipher-master/.libs/libsqlcipher.a(sqlite3.o): requires unsupported dynamic reloc R_ARM_THM_MOVW_ABS_NC; recompile with -fPIC
collect2: error: ld returned 1 exit status
node_sqlite3.target.mk:129: recipe for target 'Release/obj.target/node_sqlite3.node' failed
make: *** [Release/obj.target/node_sqlite3.node] Error 1
Please let me know if you require any additional information, I would be more than happy to provide you with the same.
Thanks,
Onkar
In the first instance, you should check whether the -fPIC (position-independent code) flag was correctly applied when the libsqlcipher.a file was originally created.
In your output above, it looks like the linker is using the file at:
/home/Library/sqlcipher-master/.libs/libsqlcipher.a
Run the command
objdump -r /home/Library/sqlcipher-master/.libs/libsqlcipher.a | more
... and inspect the relocation types listed under the
RELOCATION RECORDS FOR
headings. Relocation records themselves are normal in a static archive, but absolute relocations such as the R_ARM_THM_MOVW_ABS_NC named in your error message indicate objects that were not compiled as position-independent code.
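If that is the case, rebuild sqlcipher with -fPIC added to the compiler flags. A sketch, assuming a standard autotools sqlcipher tree and a cross toolchain like the one in the question (the --host triplet is an assumption; match it to your toolchain):

cd sqlcipher-master
make clean
# -DSQLITE_HAS_CODEC and a crypto library are sqlcipher's standard build requirements
./configure --host=arm-linux-gnueabi \
    CFLAGS="-fPIC -Os -DSQLITE_HAS_CODEC" \
    LDFLAGS="-lcrypto"
make
# The absolute relocation from the error should no longer appear:
objdump -r .libs/libsqlcipher.a | grep R_ARM_THM_MOVW_ABS_NC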
Currently I can build my program using Boost.Build on different platforms by setting the toolset and parameters on the command line. For example:
Linux
b2
MacOS
b2 toolset=clang cxxflags="-stdlib=libc++" linkflags="-stdlib=libc++"
Is there a way to create a rule in the Jamroot file to decide which compiler to use based on the operating system? I am looking for something along these lines:
import os ;
if [ os.on-macos ]
{
    using clang : : : <cxxflags>"-stdlib=libc++" <linkflags>"-stdlib=libc++" ;
}
On Linux it automatically decides to use gcc, but on the Mac, if I don't specify the clang toolset, it will try (without success) to compile with gcc.
Just for reference, here is my current Jamroot (any suggestions also appreciated):
# Project requirements (note, if running on a Mac you have to build foghorn with clang with libc++)
project myproject
    : requirements <cxxflags>-std=c++11 <linkflags>-std=c++11
    ;

# Build binaries in src
lib boost_program_options ;

exe app
    : src/main.cpp src/utils src/tools boost_program_options
    ;
How about using a Jamroot? I have the following in mine. It selects between two GCC versions on Linux, depending on what's in an environment variable, and chooses vacpp on AIX.
import os ;
import modules ;

if [ os.name ] = LINUX
{
    switch [ modules.peek : ODSHOME ]
    {
        case *gcc-4* : using gcc : 4.4 : g++-4.4 ;
        case *gcc-3.3* : using gcc : 3.3 : g++-3.3 ;
        case * : error Only gcc v4 and gcc v3.3 supported. ;
    }
}
else if [ os.name ] = AIX
{
    using vacpp ;
}
else
{
    error Only Linux and AIX supported at present. ;
}
After a long time I have found out that there is really no way (apart from very hacky ones) to do this. The philosophy of Boost.Build is to leave the toolset choice to the user.
The user has several ways to specify the toolset:
in the command line with --toolset=gcc for example
in the user configuration by setting it in the user-config.jam for all projects compiled by the user
in the site configuration by setting it in the site-config.jam for all users
the user-config.jam can be in the user's $HOME or in the boost build path.
the site-config.jam should be in the /etc directory, but could also be in the two locations above.
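For example, a minimal user-config.jam that pins Clang with libc++ for every build might look like this (a sketch; the compiler command is an assumption, adjust it to your installation):

# ~/user-config.jam
using clang : : clang++ : <cxxflags>"-stdlib=libc++" <linkflags>"-stdlib=libc++" ;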
In summary, set up your site-config or user-config for a pleasant experience, and write a nice README file for users trying to compile your program.
Hope this helps someone else.