I'm building a simple C++ application on macOS with Bazel, and I want to use OpenCV in it. I installed OpenCV with brew install opencv and followed "Building OpenCV code using Bazel" to create my setup.
In my WORKSPACE file, I have:
new_local_repository(
    name = "opencv",
    path = "/usr/local/opt/opencv",
    build_file = "opencv.BUILD",
)
In my opencv.BUILD file, I have:
cc_library(
    name = "opencv",
    srcs = glob(["lib/*.dylib"]),
    hdrs = glob(["include/opencv4/opencv2/**/*.h*"]),
    includes = ["include/opencv4"],
    visibility = ["//visibility:public"],
    linkstatic = 1,
)
In my BUILD file, I have:
cc_library(
    name = "lib",
    srcs = ["hello.cpp"],
    deps = [
        "@opencv//:opencv",
    ],
)
This works as long as the opencv target takes the OpenCV .dylib files (srcs = glob(["lib/*.dylib"])).
But now I want to build the OpenCV libraries statically into my binary. When I change the opencv target to take OpenCV's .a files (srcs = glob(["lib/*.a"])), I get a bunch of "Undefined symbols" errors about AVFoundation classes. My code uses the OpenCV capture API (which I assume uses AVFoundation under the hood), so this sort of makes sense, but I'm not sure how to fix it.
How should I configure my Bazel targets to build OpenCV statically into my binary?
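The direction I've been experimenting with is adding the Apple frameworks explicitly to the opencv target, since static archives (unlike the .dylibs) don't record the frameworks they depend on. This is an untested sketch, and the framework list is my guess based on the linker errors; extend it with whatever the linker still reports as missing:

cc_library(
    name = "opencv",
    srcs = glob(["lib/*.a"]),
    hdrs = glob(["include/opencv4/opencv2/**/*.h*"]),
    includes = ["include/opencv4"],
    # Static archives don't carry their framework dependencies,
    # so the final link has to name them explicitly.
    linkopts = [
        "-framework AVFoundation",
        "-framework CoreMedia",
        "-framework CoreVideo",
        "-framework Cocoa",
        "-framework Accelerate",
    ],
    visibility = ["//visibility:public"],
    linkstatic = 1,
)

It may also be necessary to glob OpenCV's bundled third-party archives into srcs (e.g. under lib/opencv4/3rdparty, if that directory exists in your installation; the path is a guess on my part).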
Related
I am working on a C++ project that uses the Bazel build system, in the VS Code IDE. To illustrate, one could take one of the large open-source projects such as TensorFlow.
While the IntelliSense functionality works very well for source/header dependencies within the project folder itself, VS Code seems unable to recognize headers included from third_party/external dependencies, such as the Protobuf headers in the TensorFlow project. So is there a way for VS Code to recognize such headers, with the help of the C++/clang and Bazel plugins?
To provide more details:
The Protobuf header in question is included by the following Bazel target in tensorflow/lite/toco/BUILD:
cc_library(
    name = "toco_port",
    srcs = [
        "toco_port.cc",
    ],
    hdrs = [
        "format_port.h",
        "toco_port.h",
        "toco_types.h",
    ],
    deps = [
        "//tensorflow/core:framework_lite",
        "//tensorflow/core:lib",
        "//tensorflow/core:lib_internal",
        "@com_google_absl//absl/status",
        "@com_google_protobuf//:protobuf_headers",
    ],
)
The com_google_protobuf repository is in turn defined by the following workspace rule in tensorflow/workspace2.bzl:
tf_http_archive(
    name = "com_google_protobuf",
    patch_file = ["//third_party/protobuf:protobuf.patch"],
    sha256 = "cfcba2df10feec52a84208693937c17a4b5df7775e1635c1e3baffc487b24c9b",
    strip_prefix = "protobuf-3.9.2",
    system_build_file = "//third_party/systemlibs:protobuf.BUILD",
    system_link_files = {
        "//third_party/systemlibs:protobuf.bzl": "protobuf.bzl",
        "//third_party/systemlibs:protobuf_deps.bzl": "protobuf_deps.bzl",
    },
    urls = tf_mirror_urls("https://github.com/protocolbuffers/protobuf/archive/v3.9.2.zip"),
)
Downloaded external repositories are usually stored under ~/.cache/bazel, the location set by Bazel's --output_user_root flag.
I would just like to know if someone has tried doing this?
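The closest thing I have found so far is generating a compile_commands.json from the Bazel build graph and letting clangd or the C++ extension read it, for example with the third-party hedronvision/bazel-compile-commands-extractor project. The following is a sketch based on its README as I remember it, untested here; pin a real commit in place of <commit>, and the target listed is just a hypothetical example:

# WORKSPACE
http_archive(
    name = "hedron_compile_commands",
    url = "https://github.com/hedronvision/bazel-compile-commands-extractor/archive/<commit>.tar.gz",
    strip_prefix = "bazel-compile-commands-extractor-<commit>",
)

load("@hedron_compile_commands//:workspace_setup.bzl", "hedron_compile_commands_setup")

hedron_compile_commands_setup()

# BUILD (workspace root)
load("@hedron_compile_commands//:refresh_compile_commands.bzl", "refresh_compile_commands")

refresh_compile_commands(
    name = "refresh_compile_commands",
    targets = ["//tensorflow/lite/toco:toco_port"],  # whatever you actually build
)

Running bazel run :refresh_compile_commands should then write a compile_commands.json at the workspace root that includes the external/ include paths IntelliSense is missing.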
I am currently using nelhage/rules_boost for my Boost dependencies (just to make some things compile for the meantime), but since the code I'm working with is only 100% compatible with Boost 1.55, I cannot use his rules for long.
I could also try adapting his code to work with Boost 1.55, but I think it would be a lot easier to just make Bazel depend on an existing installation of Boost, since I am also working with containers.
I usually use Boost as a pre-built external dependency with Bazel: I reference the local installation in my WORKSPACE file and then create a BUILD file for it, e.g.:
# WORKSPACE file
new_local_repository(
    name = "boost",
    path = "/your/path/to/boost",
    build_file = "third_party/boost.BUILD",
)
In the BUILD file you can choose to split headers and libs into separate rules or combine them into one. In the following example I keep all the headers in one rule and give each library its own rule:
# third_party/boost.BUILD
cc_library(
    name = "boost-headers",
    hdrs = glob(["include/boost/**"]),
    visibility = ["//visibility:public"],
    includes = ["include"],
)

cc_library(
    name = "boost-atomic",
    srcs = ["lib/libboost_atomic.a"],
    visibility = ["//visibility:public"],
)

cc_library(
    name = "boost-chrono",
    srcs = ["lib/libboost_chrono.a"],
    visibility = ["//visibility:public"],
)
...
Then in my binary/library I pick up the dependencies:
cc_binary(
    name = "main",
    srcs = ["main.cc"],
    deps = [
        "@boost//:boost-headers",
        "@boost//:boost-regex",
    ],
)
This should also work if you have Boost installed into /usr/include and /usr/lib, but I haven't tried that, to be honest.
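If you do try the system installation, the WORKSPACE entry would presumably reduce to something like this (untested; the globs in boost.BUILD are relative to path, so they should pick up /usr/include/boost and /usr/lib unchanged):

new_local_repository(
    name = "boost",
    path = "/usr",
    build_file = "third_party/boost.BUILD",
)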
Hope this helps.
I would like to create a shared C++ library with Bazel using a soname.
With CMake I could set properties like:
set_target_properties(my-library
    PROPERTIES
    SOVERSION 3
    VERSION 3.2.0
)
which would then generate
libmy-library.so -> libmy-library.so.3
libmy-library.so.3 -> libmy-library.so.3.2.0
libmy-library.so.3.2.0
However, in the Bazel documentation I cannot find anything that would let me do this easily. I know that I could define the soname and version directly and pass some linkopts in the BUILD file:
cc_binary(
    name = "libmy-library.so.3.2.0",
    srcs = ["my-library.cpp", "my-library.h"],
    linkshared = 1,
    linkopts = ["-Wl,-soname,libmy-library.so.3"],
)
which does produce libmy-library.so.3.2.0 with the correct soname, but none of the symlinked names, so it would require a whole lot of hacks to:
create libmy-library.so.3 symlink
create libmy-library.so symlink
create some import rules such that I can build binaries that link with this library.
This does not feel like the right way. What would be the right way to solve this problem?
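The closest I have come is automating those hacks: a genrule that copies the versioned library and creates the symlinks next to it, plus a cc_import so that other targets can link against the result. This is an untested sketch (the solib subdirectory and the target names are placeholders of mine):

genrule(
    name = "my-library-links",
    srcs = [":libmy-library.so.3.2.0"],
    outs = [
        "solib/libmy-library.so.3.2.0",
        "solib/libmy-library.so.3",
        "solib/libmy-library.so",
    ],
    cmd = " && ".join([
        "mkdir -p $(RULEDIR)/solib",
        "cp $(location :libmy-library.so.3.2.0) $(RULEDIR)/solib/",
        "cd $(RULEDIR)/solib",
        "ln -s libmy-library.so.3.2.0 libmy-library.so.3",
        "ln -s libmy-library.so.3 libmy-library.so",
    ]),
)

cc_import(
    name = "my-library-import",
    hdrs = ["my-library.h"],
    shared_library = "solib/libmy-library.so",
)

Binaries can then depend on :my-library-import, but the versioned files still have to be shipped alongside the binary for the dynamic loader to find libmy-library.so.3 at runtime, which is why this still feels like working around Bazel rather than with it.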
I'm building a program with the gRPC library using Bazel. My WORKSPACE file:
http_archive(
    name = "com_github_grpc_grpc",
    urls = ["https://github.com/grpc/grpc/archive/v1.8.3.zip"],
    sha256 = "57a2c67abe789ce9e80d49f473515c7479ae494e87dba84463b10bbd0990ad62",
    strip_prefix = "grpc-1.8.3",
)

load("@com_github_grpc_grpc//bazel:grpc_deps.bzl", "grpc_deps")

grpc_deps()
BUILD file:
proto_library(
    name = "test_proto",
    srcs = ["test.proto"],
)

cc_proto_library(
    name = "test_cc_proto",
    deps = [":test_proto"],
)

cc_binary(
    name = "hello",
    srcs = ["hello.cc"],
    deps = [
        ":test_cc_proto",
        "@com_github_grpc_grpc//:grpc++",
    ],
)
Compiling this throws the following error:
every rule of type proto_library implicitly depends upon the target '@com_google_protobuf_cc//:cc_toolchain', but this target could not be found because of: no such package '@com_google_protobuf_cc//': The repository could not be resolved.
If I include the com_google_protobuf_cc repository manually, the versions don't match and I get an error saying test.pb.h was generated using a newer version of protoc.
How do I make gRPC load the right version of com_google_protobuf_cc?
How are you including Protobuf manually? What did you put in your BUILD and/or WORKSPACE file(s) to achieve this? It's hard to comment on what could be wrong without knowing exactly what you have tried.
As far as I know you can include it by downloading the version you require and then adding something like the following to your WORKSPACE file:
local_repository(
    name = "com_google_protobuf",
    path = "../protobuf-3.4.1",
)

local_repository(
    name = "com_google_protobuf_cc",
    path = "../protobuf-3.4.1",
)
Of course, change the paths to match the version and location of your downloaded copy of Protobuf. Alternatively, you can probably use http_archive to point directly at where it should be downloaded from, in the same way as you have done for gRPC.
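If you go the http_archive route, note that as far as I can tell grpc_deps() skips repositories that are already defined, so your pinned Protobuf has to appear before the grpc_deps() call in the WORKSPACE. Something like the following (untested; 3.4.1 is just an example version, check which protoc your gRPC release actually expects, and consider adding a sha256 to each):

http_archive(
    name = "com_google_protobuf",
    urls = ["https://github.com/protocolbuffers/protobuf/archive/v3.4.1.zip"],
    strip_prefix = "protobuf-3.4.1",
)

http_archive(
    name = "com_google_protobuf_cc",
    urls = ["https://github.com/protocolbuffers/protobuf/archive/v3.4.1.zip"],
    strip_prefix = "protobuf-3.4.1",
)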
Environment: Mac machine, running my code inside a virtual machine with guest OS Ubuntu 14.04 LTS.
I am compiling OpenCV within the TensorFlow workspace under examples. My WORKSPACE and opencv.BUILD files look similar to the ones mentioned here.
My BUILD file for the OpenCV + TensorFlow project looks like the following:
package(default_visibility = ["//tensorflow:internal"])

licenses(["notice"])  # Apache 2.0

exports_files(["LICENSE"])

cc_binary(
    name = "label_image",
    srcs = [
        "main.cc",
    ],
    linkopts = ["-lm"],
    copts = ["-DWITH_FFMPEG=OFF"],
    deps = [
        "//tensorflow/cc:cc_ops",
        "//tensorflow/core:framework_internal",
        "//tensorflow/core:tensorflow",
        "@opencv//:opencv",
    ],
)

filegroup(
    name = "all_files",
    srcs = glob(
        ["**/*"],
        exclude = [
            "**/METADATA",
            "**/OWNERS",
            "bin/**",
            "gen/**",
        ],
    ),
    visibility = ["//tensorflow:__subpackages__"],
)
If I disable the TensorFlow dependencies (and also comment out the TensorFlow-related code), I can see that the webcam is captured properly, like this:
deps = [
    # "//tensorflow/cc:cc_ops",
    # "//tensorflow/core:framework_internal",
    # "//tensorflow/core:tensorflow",
    "@opencv//:opencv",
],
But if I keep the TensorFlow dependencies, my webcam hangs at VideoCapture::read(), regardless of whether the TensorFlow-related code is commented out or not.
By default, OpenCV uses the FFmpeg codecs, and I have tried both enabling and disabling FFmpeg. Can someone please help me understand why compiling the TensorFlow library into the project makes my OpenCV read() hang?
The OpenCV Bazel build configuration you linked above appears to just glob all the .so files that CMake built. Maybe you need to pass -DWITH_FFMPEG=OFF to CMake when building OpenCV itself? If you pass it to Bazel as you did above, it only applies to the compilation of main.cc.