TensorFlow r1.2 C++ API error when using libtensorflow.so

I have code that was working perfectly fine with r1.1.
I am trying to run the code on a different computer, so I installed TensorFlow r1.2 from source. I first built libtensorflow.so and linked it to my executable. Now I am getting thousands of errors like:
undefined reference to 'tensorflow::internal::LogMessageFatal::
It seems like libtensorflow.so is not built correctly at all. I used the command
sudo bazel build -c opt --config=cuda --copt="-mtune=native" --copt="-O3" tensorflow:libtensorflow.so --genrule_strategy=standalone --spawn_strategy=standalone
to build libtensorflow.so. I also tried r1.1 on the same computer and it works fine. What changed in r1.2?
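One likely explanation (an assumption based on the symbol names, not something the question confirms): //tensorflow:libtensorflow.so is the C-API shared library, and around r1.2 a linker version script began hiding most C++ symbols from it, so C++ programs linking against it get undefined references to internals such as tensorflow::internal::LogMessageFatal. For C++ clients the usual target is libtensorflow_cc.so instead. A sketch, reusing the question's own flags:

```shell
# Build the C++ shared library instead of the C-API one
# (same flags as the question's command, only the Bazel target differs):
sudo bazel build -c opt --config=cuda --copt="-mtune=native" --copt="-O3" \
    tensorflow:libtensorflow_cc.so \
    --genrule_strategy=standalone --spawn_strategy=standalone
```

The executable would then link against -ltensorflow_cc rather than -ltensorflow. This is a build-command fragment, not something verifiable here.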

Related

VS2017 Linux C++ development with Protobuf environment

I've been building my own C++ Linux application in VS2017, using the built-in tools that connect to my virtual Ubuntu VM. I have gotten to the point where I want to integrate protobuf into my project, but I am running into issues as to how to do so.
I have installed the protobuf libraries and compiler on my VM using Google's UNIX C++ installation guide, and I am able to compile without issue. After I compiled one message, I brought the generated files back over to my Windows side so that I could add them to my VS project. However, that causes all sorts of errors, mainly:
undefined reference to 'google::protobuf::internal'
So I went through a previous Stack Overflow post where the user ran the following command on the VM:
pkg-config --cflags --libs protobuf
and then took the text from that output and put it into Linker > All Options > Library Dependencies in his VS project on the Windows side. For me, that was:
-pthread -lprotobuf -pthread -lpthread
However, when I compile, I get "cannot find" errors. I've been trying to find out what the best practices are for doing Linux development through VS2017 with protobuf. Any help on how to get that entire chain working would be great.
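One possible cause (an assumption, hedged): the VS2017 Linux project's Library Dependencies field expects bare library names and prepends -l itself, so pasting the raw pkg-config output with its -pthread/-l prefixes can yield exactly these "cannot find" errors. In that field the entry would just be protobuf;pthread, and the remote link line VS runs on the VM should end up roughly equivalent to:

```shell
# Illustrative remote link line only; main.cpp and message.pb.cc (the protoc
# output for the compiled message) are assumed names from the question's setup:
g++ main.cpp message.pb.cc -o app $(pkg-config --cflags --libs protobuf)
```

This is a command-line sketch of what the IDE should generate, not a verified VS configuration.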

Tensorflow and Bazel c++

I'm trying to build the TensorFlow C++ library from source, but with no success. I followed different tutorials, but each time there is a different error.
What I want to do is to create a library so I can use it with Qt. I followed this tutorial because it was exactly what I wanted:
https://tuatini.me/building-tensorflow-as-a-standalone-project/
(build on Ubuntu, not on raspberry)
It works fine until I have to use Bazel.
The tutorial says I have to run this command:
bazel build -c opt --verbose_failures //tensorflow:libtensorflow_cc.so
but it always fails with the error:
ERROR: /home/default/.cache/bazel/_bazel_default/045e1c5e9b482c7b029d706e128fc7e7/external/io_bazel_rules_closure/closure/stylesheets/closure_css_library.bzl:27:13: name 'set' is not defined
I have no idea where I'm supposed to define 'set' (I already removed the .cache/bazel folder).
Other tutorials I followed gave me errors such as "bazel needs to be > 0.4.3, found 0.13.1", as if the versions were compared as strings instead of numbers...
Any idea on how to make it work?
Do you need to build TensorFlow 1.3.0? That is an old version of TF which, according to the tutorial, can only be built with Bazel 0.5.1. You have Bazel 0.13.1, which no longer supports the set keyword in build scripts. The latest version of TF is buildable with Bazel 0.13.1.
If you need to build 1.3.0, install an older version of Bazel (e.g. 0.5.4) from https://github.com/bazelbuild/bazel/tags?before=0.4.3.
To be exact, this error comes from one of TF's dependencies, not from TF itself.
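A quick way to check which Bazel the build will use and, if needed, pin an older release (the installer filename follows Bazel's release naming convention; treat the exact URL as an assumption):

```shell
bazel version   # shows which Bazel the build will pick up
# Install Bazel 0.5.4 user-locally, then build TF 1.3.0 with it:
wget https://github.com/bazelbuild/bazel/releases/download/0.5.4/bazel-0.5.4-installer-linux-x86_64.sh
chmod +x bazel-0.5.4-installer-linux-x86_64.sh
./bazel-0.5.4-installer-linux-x86_64.sh --user   # installs to ~/bin/bazel
```

After installing, make sure ~/bin precedes the system Bazel on PATH, or invoke ~/bin/bazel explicitly. These are network-dependent install commands, not verifiable here.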

f90wrap on Windows (Python wrapper for Fortran 90)

I've got a Python program that calls Fortran routines. These Fortran routines are wrapped with f90wrap (https://github.com/jameskermode/f90wrap), and I've verified that the setup works correctly on Linux and Mac OSX. I'm now trying to get the setup to work equally well on Windows (because I collaborate with people who sometimes cannot switch to Linux).
I've got gfortran working through a MinGW installation and verified that Fortran programs compile and run without errors. I've also verified that a Python 2.7 installation works without issues, and was able to use pip to add the numpy, matplotlib and scipy modules without issues. Both MinGW and Python are 64-bit, running on Windows 10. I've also got CMake to create Makefiles that compile standalone Fortran programs using mingw32-make, so the only part left (to get things working on Windows) is to make sure the Python wrapper for Fortran 90 works. That's where I ran into some issues.
I'm running mingw32-make in PowerShell (which executes in cmd.exe, I believe).
Q1: The pip installation for f90wrap failed with an absolute path/relative path error (https://github.com/jameskermode/f90wrap/issues/73)
A1: I downloaded the source and ran "python setup.py install", and that got stuck as well: I ran into a "multiple definition" error with Windows 10, Python 2.7 and mingw-w64.
F:/Programs/mingw/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/7.2.0/../../../../x86_64-w64-mingw32/lib/../lib/libmingw32.a(lib64_libmingw32_a-atonexit.o):atonexit.c:(.text+0xc0): multiple definition of `atexit'
F:\Programs\Python\libs/libmsvcr90.a(deoks01081.o):(.text+0x0): first defined here
F:/Programs/mingw/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/7.2.0/../../../../x86_64-w64-mingw32/lib/../lib/libmingw32.a(lib64_libmingw32_a-mingw_helpers.o):mingw_helpers.c:(.text+0x0): multiple definition of `_decode_pointer'
F:\Programs\Python\libs/libmsvcr90.a(deoks00231.o):(.text+0x0): first defined here
F:/Programs/mingw/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/7.2.0/../../../../x86_64-w64-mingw32/lib/../lib/libmingw32.a(lib64_libmingw32_a-mingw_helpers.o):mingw_helpers.c:(.text+0x10): multiple definition of `_encode_pointer'
F:\Programs\Python\libs/libmsvcr90.a(deoks00241.o):(.text+0x0): first defined here
collect2.exe: error: ld returned 1 exit status
error: Command "gcc -g -shared build\temp.win-amd64-2.7\Release\f90wrap\arraydatamodule.o build\temp.win-amd64-2.7\Release\programs\python\lib\site-packages\numpy\f2py\src\fortranobject.o -LF:\Programs\Python\libs -LF:\Programs\Python\PCbuild\amd64 -lpython27 -lmsvcr90 -o build\lib.win-amd64-2.7\f90wrap\arraydata.pyd" failed with exit status 1
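A common workaround for this clash (an assumption inferred from the symbols involved, not something the question confirms): mingw-w64 already provides atexit, _decode_pointer and _encode_pointer, so additionally linking -lmsvcr90 produces the duplicates. Re-running the failing link by hand without -lmsvcr90 shows whether that library is the culprit:

```shell
# The exact command distutils ran (copied from the error above), minus -lmsvcr90:
gcc -g -shared build\temp.win-amd64-2.7\Release\f90wrap\arraydatamodule.o build\temp.win-amd64-2.7\Release\programs\python\lib\site-packages\numpy\f2py\src\fortranobject.o -LF:\Programs\Python\libs -LF:\Programs\Python\PCbuild\amd64 -lpython27 -o build\lib.win-amd64-2.7\f90wrap\arraydata.pyd
```

If that links cleanly, a more permanent fix is stopping distutils from adding msvcr90 for this toolchain (its cygwinccompiler module decides that). This is a diagnostic command fragment for the asker's machine, not verifiable here.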

Cross compile google v8 library for raspberry pi

I am having a problem cross-compiling the Google V8 libraries for Raspberry Pi: I constantly get an "Illegal instruction" error when running the official sample from the site. These are the steps I followed:
Downloaded the cross-compile tools from https://github.com/raspberrypi/tools/
Cloned the V8 git repository: https://chromium.googlesource.com/v8/v8.git
Exported CXX and LINK to point to arm-linux-gnueabihf-g++ from the cross-compile tools.
Ran make arm.release armv7=false hardfp=on snapshot=off armfpu=vfp armfloatabi=hard -j5
Copied the generated executables shell and d8 from the out/arm.release directory to the Pi (Raspbian, kernel version 3.6.11), and they WORK.
These steps prove that the cross-compilation toolchain is functional.
The problem occurs when trying to run other cross-compiled software that is linked against the V8 libraries, for example the sample code from https://developers.google.com/v8/get_started#intro.
The code is cross-compiled with this command (same as the example, just with the compiler changed):
arm-linux-gnueabihf-g++ -I. hello_world.cc -o hello_world -Wl,--start-group out/x64.release/obj.target/{tools/gyp/libv8_{base,libbase,snapshot,libplatform},third_party/icu/libicu{uc,i18n,data}}.a -Wl,--end-group -lrt -pthread
When I copy that binary to the Pi and run it, I get SIGILL (Illegal instruction).
Note: cross-compiled software that doesn't use the V8 libraries works fine. The x64 V8 libraries on the host computer also work fine.
On newer kernel versions shell and d8 were also throwing SIGILL, but then I switched to the older version 3.6.11 (problems with newer kernels: https://groups.google.com/forum/#!topic/v8-users/IPT9EeYK9bg) and they started working; the compiled sample code, however, still showed the same issues.
Did anyone have similar experience? Any suggestion on how to overcome this problem?
I found a solution thanks to a post on the v8-users Google group: https://groups.google.com/forum/#!topic/v8-users/LTppUbqNrzI
The problem was in the make arguments; it should be:
make arm arm_version=6 armfpu=vfp armfloatabi=hard
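To verify the fix (or diagnose a future SIGILL), the ARM build attributes baked into the binary can be compared against what the Pi actually supports; the tag names below are standard ELF ARM attributes printed by readelf:

```shell
# On the host: which CPU/FPU/ABI was the cross-compiled binary built for?
arm-linux-gnueabihf-readelf -A hello_world | grep -E 'Tag_CPU_arch|Tag_FP_arch|Tag_ABI_VFP_args'
# On the Pi: which features does the hardware report?
grep -E 'model name|Features' /proc/cpuinfo
```

A Tag_CPU_arch newer than the Pi's core (e.g. v7 on an ARMv6 Pi) is exactly the kind of mismatch that produces SIGILL. These commands require the cross toolchain and the Pi, so they are illustrative here.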

OpenCV can not find library on launch. Backwards compatibility?

I am trying to build and use a piece of C++ code that uses OpenCV. I am working on Linux, in Code::Blocks (the code was originally also developed on a Linux platform using C::B).
I followed this tutorial to install OpenCV (Ubuntu 12.04 & OpenCV 2.4.3). The project compiles fine, but when I try to execute it, it crashes on launch with the following message about how it cannot find a library:
(file_address): error while loading shared libraries: libopencv_core.so.2.3:
cannot open shared object file: No such file or directory
Process returned 127 (0x7F)   execution time : 0.017 s
Press ENTER to continue.
I set all the parameters for the linker according to several Code::Blocks install tutorials.
I also checked /usr/local/lib/ for my libraries (it is the folder I gave to Code::Blocks' compiler); and while I do have a libopencv_core.so, a libopencv_core.so.2.4 and a libopencv_core.so.2.4.3, I do not have a libopencv_core.so.2.3.
So I'm wondering what the issue is. Is it about backwards compatibility, i.e. do I have to install the exact same version of OpenCV used to develop the original code? (This would be a bit concerning, since I am trying to make a widely-usable library).
Could I force it to use libopencv_core.so.2.4 instead?
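One workaround is a compatibility symlink, so the dynamic loader resolves the soname the binary requests (libopencv_core.so.2.3) to the installed 2.4 library. The soname differs precisely because the 2.3 and 2.4 ABIs are not guaranteed compatible, so this can still crash at runtime; rebuilding against the installed OpenCV (or reinstalling via apt-get, as the edit below found) is safer. A sketch using a scratch directory and an empty stand-in file rather than the real /usr/local/lib:

```shell
# Demonstrate the symlink trick with a stand-in file in a scratch directory:
cd "$(mktemp -d)"
touch libopencv_core.so.2.4                         # stands in for the installed library
ln -s libopencv_core.so.2.4 libopencv_core.so.2.3   # soname the binary asks for
readlink libopencv_core.so.2.3
```

On the real system the link would go in /usr/local/lib, followed by sudo ldconfig so the loader cache picks it up.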
EDIT: I managed to make it work by removing everything and reinstalling with a simple apt-get. Sometimes the simplest method works best! From now on I'll try apt-get before following installation tutorials. ;)
Have a nice day!