Can't build a project package using boost/iostream from bazel - c++

I am using https://github.com/nelhage/rules_boost in a Bazel project. Everything works fine except when I try to use boost/iostreams.
The problem occurs on Windows 10, but not on Linux. boost/iostreams depends on zlib, and the file that is downloaded is https://zlib.net/zlib-1.2.11.tar.gz
The error I get is:
ERROR: .../external/net_zlib_zlib/BUILD.bazel:6:1: in cc_library rule @net_zlib_zlib//:zlib: Expected action_config for 'preprocess-assemble' to be configured
ERROR: Analysis of target '.../storage:storage' failed; build aborted: Analysis of target '@net_zlib_zlib//:zlib' failed; build aborted
This is the BUILD file:
cc_library(
    name = "storage",
    srcs = [
        "blobstore.cc",
        "blobstore.h",
    ],
    hdrs = [
        "blobstore.h",
    ],
    deps = [
        "@boost//:iostreams",
    ],
    defines = ["BOOST_ALL_NO_LIB"],
)
Does anyone have an idea what the problem might be?

This is unfortunately a bug in our MSVC crosstool. What needs to be done is to add the missing action_config and make sure the other compilation flags are compatible. Would you mind creating a GitHub issue?

Related

Cross-compiling Bazel Docker Rust image on MacOS to Linux: C++ toolchain not found

I am trying to cross-compile my Bazel Docker Rust image on MacOS to Linux. Unfortunately, I keep getting an error that the C++ toolchain cannot be found.
While resolving toolchains for target //service1:service1: No matching toolchains found for types @bazel_tools//tools/cpp:toolchain_type. Maybe --incompatible_use_cc_configure_from_rules_cc has been flipped and there is no default C++ toolchain added in the WORKSPACE file? See https://github.com/bazelbuild/bazel/issues/10134 for details and migration instructions.
Unfortunately the ticket mentioned doesn't provide much useful information.
I think I am missing a default C++ toolchain but I can't find an easy way to add this.
Here are the relevant snippets in my project
WORKSPACE.bazel
rust_repository_set(
    name = "rust_darwin_linux_cross",
    exec_triple = "x86_64-apple-darwin",
    extra_target_triples = ["x86_64-unknown-linux-gnu-musleabihf"],
    iso_date = "2021-06-09",
    version = "nightly",
)
service1/BUILD.bazel
platform(
    name = "linux-x86_64",
    constraint_values = [
        "@platforms//os:linux",
        "@platforms//cpu:x86_64",
    ],
)

rust_image(
    name = "image",
    srcs = ["src/main.rs"],
)
The command that I run: bazel build --platforms //service1:linux-x86_64 //service1:image
When I run this command with --toolchain_resolution_debug=@bazel_tools//tools/cpp:toolchain_type I see that
Type @bazel_tools//tools/cpp:toolchain_type: target platform //:linux-x86_64: No toolchains found.
I really hope someone can point me in the right direction as this is a quite confusing topic and there are almost no clear guides or examples on this 🙏

Node-gyp Library not loaded: /usr/local/lib/libmtp.9.dylib

I have been attempting to make a Node.js native addon which uses libmtp to carry out certain functions. I have been successful in building the app, but it throws a Library not loaded: /usr/local/lib/libmtp.9.dylib. Referenced from: /path/build/Debug/nbind.node. Reason: image not found error when I try to run it on another MacBook where libmtp isn't installed.
This is my binding.gyp file:
{
  "targets": [
    {
      "includes": [
        "auto.gypi"
      ],
      "sources": [
        "src/native/mtp.cc"
      ],
      "link_settings": {
        "libraries": [
          "-lmtp"
        ],
      },
    }
  ],
  "includes": [
    "auto-top.gypi"
  ],
}
I even attempted to include the dylib file in the libraries option:
"link_settings": {
  "libraries": [
    "<(module_root_dir)/src/native/lib/libmtp.9.dylib"
  ]
}
but the app fails to start with the Library not loaded: /usr/local/lib/libmtp.9.dylib. Referenced from: /path/build/Debug/nbind.node. Reason: image not found error.
Any help will be appreciated.
The error indicates that the library libmtp.9.dylib cannot be found in the standard library search path /usr/local/lib.
Try setting an environment variable so the dynamic loader can find libmtp.9.dylib before running node (note that on macOS the loader reads DYLD_LIBRARY_PATH, not LD_LIBRARY_PATH as on Linux).
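A minimal sketch of that approach, assuming the dylib was bundled under src/native/lib as in the question (the path and the app entry point are assumptions):

```shell
# On macOS the dynamic loader reads DYLD_LIBRARY_PATH (not LD_LIBRARY_PATH,
# which is the Linux name). The path below is the one from the question.
export DYLD_LIBRARY_PATH="$PWD/src/native/lib:$DYLD_LIBRARY_PATH"
echo "$DYLD_LIBRARY_PATH"
# then launch the app as usual, e.g.: node index.js
```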
One solution is to manually create a symlink to your built library in a default search location like /usr/local/lib. Not ideal, but it may provide a workaround for at least having successful builds in development.
ln -s <absolute_path>/src/native/lib/libmtp.9.dylib /usr/local/lib/libmtp.9.dylib
This lets the runtime loader find the library without needing to configure an rpath in whatever process is throwing the error. In my opinion this is easier than tracking down the binding.gyp trace.
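An alternative to the symlink, assuming the macOS developer tools are available, is to rewrite the install name recorded in the addon so the loader searches next to the binary instead of /usr/local/lib. A sketch using the paths from the question (guarded so it only runs where the tool and the built addon exist):

```shell
ADDON=build/Debug/nbind.node   # path from the question; adjust to your build
if command -v install_name_tool >/dev/null && [ -f "$ADDON" ]; then
  # Replace the absolute /usr/local/lib reference baked into the addon with a
  # loader-relative one, so the dylib can ship next to the addon itself.
  install_name_tool -change /usr/local/lib/libmtp.9.dylib \
    "@loader_path/libmtp.9.dylib" "$ADDON"
  otool -L "$ADDON"   # verify the rewritten reference
fi
```

With @loader_path, libmtp.9.dylib can then be distributed alongside nbind.node instead of having to be installed system-wide on the target machine.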

C++ Tensorflow API with TensorRT

My goal is to run a TensorRT-optimized TensorFlow graph in a C++ application. I am using TensorFlow 1.8 with TensorRT 4. Using the Python API I am able to optimize the graph and see a nice performance increase.
Trying to run the graph in C++ fails with the following error:
Not found: Op type not registered 'TRTEngineOp' in binary running on e15ff5301262. Make sure the Op and Kernel are registered in the binary running in this process.
Other, non-TensorRT graphs work. I had a similar error with the Python API, but solved it by importing tensorflow.contrib.tensorrt. From the error I am fairly certain the kernel and op are not registered, but I am unaware of how to do so in the application after TensorFlow has been built. On a side note, I cannot use Bazel but am required to use CMake. So far I link against libtensorflow_cc.so and libtensorflow_framework.so.
Can anyone help me here? thanks!
Update:
Using the C or C++ API to load _trt_engine_op.so does not throw an error while loading, but running fails with:
Invalid argument: No OpKernel was registered to support Op 'TRTEngineOp' with these attrs. Registered devices: [CPU,GPU], Registered kernels:
<no registered kernels>
[[Node: my_trt_op3 = TRTEngineOp[InT=[DT_FLOAT, DT_FLOAT], OutT=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], input_nodes=["tower_0/down_0/conv_0/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer", "tower_0/down_0/conv_skip/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer"], output_nodes=["tower_0/down_0/conv_skip/Relu", "tower_0/down_1/conv_skip/Relu", "tower_0/down_2/conv_skip/Relu", "tower_0/down_3/conv_skip/Relu"], serialized_engine="\220{I\000...00\000\000"](tower_0/down_0/conv_0/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer, tower_0/down_0/conv_skip/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer)]]
Another way to solve the problem with the error "Not found: Op type not registered 'TRTEngineOp'" on TensorFlow 1.8:
1) In the file tensorflow/contrib/tensorrt/BUILD, add a new section with the following content:
cc_library(
    name = "trt_engine_op_kernel_cc",
    srcs = [
        "kernels/trt_calib_op.cc",
        "kernels/trt_engine_op.cc",
        "ops/trt_calib_op.cc",
        "ops/trt_engine_op.cc",
        "shape_fn/trt_shfn.cc",
    ],
    hdrs = [
        "kernels/trt_calib_op.h",
        "kernels/trt_engine_op.h",
        "shape_fn/trt_shfn.h",
    ],
    copts = tf_copts(),
    visibility = ["//visibility:public"],
    deps = [
        ":trt_logging",
        ":trt_plugins",
        ":trt_resources",
        "//tensorflow/core:gpu_headers_lib",
        "//tensorflow/core:lib_proto_parsing",
        "//tensorflow/core:stream_executor_headers_lib",
    ] + if_tensorrt([
        "@local_config_tensorrt//:nv_infer",
    ]) + tf_custom_op_library_additional_deps(),
    alwayslink = 1,  # buildozer: disable=alwayslink-with-hdrs
)
2) Add //tensorflow/contrib/tensorrt:trt_engine_op_kernel_cc as a dependency to the corresponding Bazel target you want to build.
PS: there is no need to load the library _trt_engine_op.so with TF_LoadLibrary.
Here are my findings (and some kind of solution) for this problem (TensorFlow 1.8.0, TensorRT 3.0.4):
I wanted to include the TensorRT support in a library which loads a graph from a given *.pb file.
Just adding //tensorflow/contrib/tensorrt:trt_engine_op_kernel to my Bazel BUILD file didn't do the trick for me. I still got a message indicating that the Ops were not registered:
2018-05-21 12:22:07.286665: E tensorflow/core/framework/op_kernel.cc:1242] OpKernel ('op: "TRTCalibOp" device_type: "GPU"') for unknown op: TRTCalibOp
2018-05-21 12:22:07.286856: E tensorflow/core/framework/op_kernel.cc:1242] OpKernel ('op: "TRTEngineOp" device_type: "GPU"') for unknown op: TRTEngineOp
2018-05-21 12:22:07.296024: E tensorflow/examples/tf_inference_lib/cTfInference.cpp:56] Not found: Op type not registered 'TRTEngineOp' in binary running on ***.
Make sure the Op and Kernel are registered in the binary running in this process.
The solution was that I had to load the ops library (tf_custom_op_library) within my C++ code using the C API:
#include "tensorflow/c/c_api.h"
...
TF_Status* status = TF_NewStatus();
TF_LoadLibrary("_trt_engine_op.so", status);
The shared object _trt_engine_op.so is created for the bazel target //tensorflow/contrib/tensorrt:python/ops/_trt_engine_op.so:
bazel build --config=opt --config=cuda --config=monolithic \
//tensorflow/contrib/tensorrt:python/ops/_trt_engine_op.so
Now I only have to make sure that _trt_engine_op.so is available whenever it is needed, e.g. via LD_LIBRARY_PATH.
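That step can look like the following sketch (the bazel-bin path is an assumption derived from the build target above):

```shell
# Make the freshly built shared object findable by the dynamic loader.
# bazel-bin is Bazel's default output symlink; adjust if you changed it.
export LD_LIBRARY_PATH="$PWD/bazel-bin/tensorflow/contrib/tensorrt/python/ops:$LD_LIBRARY_PATH"
echo "$LD_LIBRARY_PATH"
```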
If anybody has an idea how to do this in a more elegant way (why do we have two artifacts which have to be built? Can't we just have one?), I'm happy for every suggestion.
tl;dr:
add //tensorflow/contrib/tensorrt:trt_engine_op_kernel as a dependency to the corresponding Bazel target you want to build
load the ops library _trt_engine_op.so in your code using the C API
For TensorFlow r1.8, the additions shown below in two BUILD files, together with building libtensorflow_cc.so with the monolithic option, worked for me.
diff --git a/tensorflow/BUILD b/tensorflow/BUILD
index cfafffd..fb8eb31 100644
--- a/tensorflow/BUILD
+++ b/tensorflow/BUILD
@@ -525,6 +525,8 @@ tf_cc_shared_object(
         "//tensorflow/cc:scope",
         "//tensorflow/cc/profiler",
         "//tensorflow/core:tensorflow",
+        "//tensorflow/contrib/tensorrt:trt_conversion",
+        "//tensorflow/contrib/tensorrt:trt_engine_op_kernel",
     ],
 )
diff --git a/tensorflow/contrib/tensorrt/BUILD b/tensorflow/contrib/tensorrt/BUILD
index fd3582e..a6566b9 100644
--- a/tensorflow/contrib/tensorrt/BUILD
+++ b/tensorflow/contrib/tensorrt/BUILD
@@ -76,6 +76,8 @@ cc_library(
     srcs = [
         "kernels/trt_calib_op.cc",
         "kernels/trt_engine_op.cc",
+        "ops/trt_calib_op.cc",
+        "ops/trt_engine_op.cc",
     ],
     hdrs = [
         "kernels/trt_calib_op.h",
@@ -86,6 +88,7 @@ cc_library(
     deps = [
         ":trt_logging",
         ":trt_resources",
+        ":trt_shape_function",
         "//tensorflow/core:gpu_headers_lib",
         "//tensorflow/core:lib_proto_parsing",
         "//tensorflow/core:stream_executor_headers_lib",
As you mentioned, it should work when you add //tensorflow/contrib/tensorrt:trt_engine_op_kernel to the dependency list. Currently the Tensorflow-TensorRT integration is still in progress and may work well only for the python API; for C++ you'll need to call ConvertGraphDefToTensorRT() from tensorflow/contrib/tensorrt/convert/convert_graph.h for the conversion.
Let me know if you have any questions.
Solution: add the import
from tensorflow.python.compiler.tensorrt import trt_convert as trt
Discussion link: https://github.com/tensorflow/tensorflow/issues/26525
Here is my solution, for TensorFlow 1.14.
In your BUILD file, e.g. tensorflow/examples/your_workspace/BUILD, in tf_cc_binary:
srcs = [..., "//tensorflow/compiler/tf2tensorrt:ops/trt_engine_op.cc"]
deps = [..., "//tensorflow/compiler/tf2tensorrt:trt_op_kernels"]

Conflict Protobuf version when using Opencv and Tensorflow c++

I am currently trying to use TensorFlow's shared library in a non-Bazel project, so I created a .so file from TensorFlow using Bazel.
But when I launch a C++ program that uses both OpenCV and TensorFlow, I get the following error:
[libprotobuf FATAL external/protobuf/src/google/protobuf/stubs/common.cc:78] This program was compiled against version 2.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.1.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/build/mir-pkdHET/mir-0.21.0+16.04.20160330/obj-x86_64-linux-gnu/src/protobuf/mir_protobuf.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): This program was compiled against version 2.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.1.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/build/mir-pkdHET/mir-0.21.0+16.04.20160330/obj-x86_64-linux-gnu/src/protobuf/mir_protobuf.pb.cc".)
Abandon (core dumped)
Can you help me?
Thank you
You should rebuild TensorFlow with a linker script to avoid making third party symbols global in the shared library that Bazel creates. This is how the Android Java/JNI library for TensorFlow is able to coexist with the pre-installed protobuf library on the device (look at the build rules in tensorflow/contrib/android for a working example)
Here's a BUILD file that I adapted from the Android library to do this:
package(default_visibility = ["//visibility:public"])

licenses(["notice"])  # Apache 2.0

exports_files(["LICENSE"])

load(
    "//tensorflow:tensorflow.bzl",
    "tf_copts",
    "if_android",
)

exports_files([
    "version_script.lds",
])

# Build the native .so.
# bazel build //tensorflow/contrib/android_ndk:libtensorflow_cc_inference.so \
#   --crosstool_top=//external:android/crosstool \
#   --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
#   --cpu=armeabi-v7a

LINKER_SCRIPT = "//tensorflow/contrib/android:version_script.lds"

cc_binary(
    name = "libtensorflow_cc_inference.so",
    srcs = [],
    copts = tf_copts() + [
        "-ffunction-sections",
        "-fdata-sections",
    ],
    linkopts = if_android([
        "-landroid",
        "-latomic",
        "-ldl",
        "-llog",
        "-lm",
        "-z defs",
        "-s",
        "-Wl,--gc-sections",
        "-Wl,--version-script",  # This line must be directly followed by LINKER_SCRIPT.
        LINKER_SCRIPT,
    ]),
    linkshared = 1,
    linkstatic = 1,
    tags = [
        "manual",
        "notap",
    ],
    deps = [
        "//tensorflow/core:android_tensorflow_lib",
        LINKER_SCRIPT,
    ],
)
And the contents of version_script.lds:
{
  global:
    extern "C++" {
      tensorflow::*;
    };
  local:
    *;
};
This will make everything in the tensorflow namespace global and available through the library, while hiding the rest and preventing it from conflicting with protobuf.
(wasted a ton of time on this so I hope it helps!)
The error indicates that the program was compiled using headers (.h files) from protobuf 2.6.1. These headers are typically found in /usr/include/google/protobuf or /usr/local/include/google/protobuf, though they could be in other places depending on your OS and how the program is being built. You need to update these headers to version 3.1.0 and recompile the program.
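A quick way to check which protobuf versions are actually visible on the system — a sketch only, since header locations vary by OS and install method:

```shell
# Report the installed protoc version, if any (e.g. "libprotoc 3.1.0").
command -v protoc >/dev/null && protoc --version
# The headers record their version as GOOGLE_PROTOBUF_VERSION; check the copies
# the compiler would see (the two most common prefixes are tried here).
for prefix in /usr/include /usr/local/include; do
  hdr="$prefix/google/protobuf/stubs/common.h"
  [ -f "$hdr" ] && grep -m1 "GOOGLE_PROTOBUF_VERSION" "$hdr"
done
true  # informational only; don't fail the shell if nothing was found
```

If the version grepped from the headers differs from what protoc reports, that mismatch is exactly what the runtime check is complaining about.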
This is indeed a pretty serious problem! I get the below error similar to you:
$./ceres_single_test
[libprotobuf FATAL google/protobuf/stubs/common.cc:78] This program was compiled against version 2.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.1.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/build/mir-pkdHET/mir-0.21.0+16.04.20160330/obj-x86_64-linux-gnu/src/protobuf/mir_protobuf.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
Aborted
My workaround:
cd /usr/lib/x86_64-linux-gnu
sudo mkdir BACKUP
sudo mv libmirprotobuf.so* ./BACKUP/
Now, the executable under test works, cool. What is not cool, however, is that things like gedit no longer work without running from a shell that has the BACKUP path added to LD_LIBRARY_PATH :-(
Hopefully there's a better fix out there?
The error complains that the Protocol Buffer runtime library is not compatible with the installed version. The error is coming from the GTK3 library: GTK3 uses Protocol Buffers 2.6.1, so if you built OpenCV with GTK3 support, you get this error. The easiest way to fix this is to use Qt instead of GTK3.
If you use the CMake GUI to install OpenCV, just select Qt support instead of GTK3. You can install Qt using the following command:
sudo apt install qtbase5-dev
Rebuild libprotobuf with -Dprotobuf_BUILD_SHARED_LIBS=ON,
then run make install to overwrite the older version.
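A sketch of that rebuild, assuming a protobuf source checkout in a local ./protobuf directory (the checkout path is an assumption, and the CMake layout differs between protobuf releases — older ones keep their CMake files in a cmake/ subdirectory):

```shell
# Build protobuf as shared libraries and install it over the old version.
# Guarded so the commands only run where a source checkout actually exists.
SRC=protobuf   # hypothetical checkout directory
if [ -d "$SRC" ]; then
  cmake -S "$SRC" -B "$SRC/build" -Dprotobuf_BUILD_SHARED_LIBS=ON
  cmake --build "$SRC/build" --parallel
  sudo cmake --install "$SRC/build"   # replaces the older runtime libraries
fi
```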

Detect MSVC version in GYP

I'm trying to detect the MSVC version during node-gyp configure in my binding.gyp file.
Basically, I want to be able to link against a particular third-party library based on the Visual C++ version:
['OS=="win"' and 'toolset="vc12"', {
    'libraries': [
        "opencv/lib/vc12/opencv_world300.lib"
    ],
}],
['OS=="win"' and 'toolset="vc11"', {
    'libraries': [
        "opencv/lib/vc11/opencv_world300.lib"
    ],
}],
['OS=="win"' and 'toolset="vc10"', {
    'libraries': [
        "opencv/lib/vc10/opencv_world300.lib"
    ],
}]
Unfortunately, neither toolset, nor _toolset or even $(TOOLSET) variables are defined in GYP.
I wasn't able to find such variable in GYP documentation. Is it possible at all?
I couldn't figure out from the docs how to check the toolset version; I only found the top-level settings: https://chromium.googlesource.com/external/gyp/+/master/docs/UserDocumentation.md#Skeleton-of-a-typical-executable-target-in-a-gyp-file.
However, @saper on GitHub figured it out using MSVS_VERSION instead:
['OS=="win"' and 'MSVS_VERSION=="2013"', {
    'libraries': [
        "opencv/lib/vc12/opencv_world300.lib"
    ],
}],
['OS=="win"' and 'MSVS_VERSION=="2012"', {
    'libraries': [
        "opencv/lib/vc11/opencv_world300.lib"
    ],
}],
['OS=="win"' and 'MSVS_VERSION=="2010"', {
    'libraries': [
        "opencv/lib/vc10/opencv_world300.lib"
    ],
}]
(nit: in your example, although the toolset token is not recognized by gyp, = should be replaced with ==)
Example: https://github.com/saper/node-sass/blob/c7e9cf0f0e0098e8316bd41722fc2edf4a835d9f/src/libsass.gyp#L91-L94.
Limitation 1:
Unfortunately, these conditions are not emitted into the generated .targets or .vcxproj files (such as this one); gyp post-processes the conditions for the given version of MSVS and only then emits the .vcxproj, which renders the generated .vcxproj incompatible with newer/older versions of Visual C++.
However, the MSVS version can be overridden for gyp in multiple ways, for instance using an environment variable:
In CMD:
SET GYP_MSVS_VERSION=2012
Or in PowerShell:
$env:GYP_MSVS_VERSION=2015
It can also be passed as a command line argument:
node_modules/.bin/node-gyp build --msvs_version=2012
If both the environment variable and the command-line argument are present, the CLI argument takes precedence.
This CLI argument can be supplied to an npm task, for example, to force all Windows consumers of your package to build with a specific version of MSVC, and to error out otherwise.
Limitation 2:
Via the CLI argument there is no way to specify a minimum MSVS version; there is no such flag as --min-msvs-version.
Limitation 3:
In case multiple versions of MSBuild are installed, node-gyp's MSBuild discovery (at present) will ignore the toolset version preferred/required by the .vcxproj and instead give precedence to the one in PATH. In this case you may get errors, for instance if you are using C99/C++1[1/4/7] features only offered by VS2015. To remedy this situation:
either reset PATH to the bin directory of the desired MSBuild version,
or, instead of node-gyp build or rebuild, use node-gyp configure followed by "%ProgramFiles(x86)%\MSBuild\14.0\Bin\MSBuild" build/binding.sln /p:Configuration=Release (from PowerShell: &"${env:ProgramFiles(x86)}\MSBuild\14.0\Bin\MSBuild" build\binding.sln /p:Configuration=Release),
or send a pull request for node-gyp and pangyp to fix the toolset-version-aware MSBuild discovery, if your Windows Registry skills are not as rusty as mine. :)