I'm writing a simple Bazel BUILD file, but I have to include the MKL library.
My main.c includes these headers:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include "omp.h"
#include "mkl.h"
#include "mkl_types.h"
#include "mkl_dfti.h"
The last three headers are located in $MKLROOT, which is set by an environment module.
My Bazel BUILD file is:
load("#rules_cc//cc:defs.bzl", "cc_binary", "cc_library")
cc_library(
name = "mkl_headers",
srcs = glob(["include/*(.cc|.cpp|.cxx|.c++|.C|.c|.h|.hh|.hpp|.ipp|.hxx|.inc|.S|.s|.asm|.a|.lib|.pic.a|.lo|.lo.lib|.pic.lo|.so|.dylib|.dll|.o|.obj|.pic.o)"]),
includes = ["include"],
visibility = ["//visibility:public"],
)
cc_library(
name = "mkl_libs_linux",
srcs = [
"lib/libiomp5.so",
"lib/libmklml_intel.so",
],
visibility = ["//visibility:public"],
)
cc_binary(
name = "mklfft",
srcs = ["main.c"],
deps = [
":mkl_libs_linux"
],
)
I tried to adapt the TensorFlow MKL BUILD file as an example, but it's very complicated.
The bazel build command returns:
INFO: Analyzed target //mklfft:mklfft (2 packages loaded, 8 targets configured).
INFO: Found 1 target...
ERROR: missing input file 'mklfft/mkl.h', owner: '//mklfft:mkl.h'
ERROR: missing input file 'mklfft/mkl_dfti.h', owner: '//mklfft:mkl_dfti.h'
ERROR: missing input file 'mklfft/mkl_types.h', owner: '//mklfft:mkl_types.h'
ERROR: /C/mklfft/BUILD:6:1: //mklfft:mkl: missing input file '//mklfft:mkl.h'
ERROR: /C/mklfft/BUILD:6:1: //mklfft:mkl: missing input file '//mklfft:mkl_dfti.h'
ERROR: /C/mklfft/BUILD:6:1: //mklfft:mkl: missing input file '//mklfft:mkl_types.h'
ERROR: missing input file 'mklfft/readFile.c', owner: '//mklfft:readFile.c'
Target //mklfft:mklfft failed to build
Use --verbose_failures to see the command lines of failed build steps.
ERROR: /C/mklfft/BUILD:6:1 3 input file(s) do not exist
INFO: Elapsed time: 0.342s, Critical Path: 0.03s
INFO: 0 processes.
Can you clarify the method for linking external shared libraries with Bazel?
Linking with lib/libiomp5.so and lib/libmklml_intel.so is not enough. You also need to add libmkl_intel_thread.so and libmkl_core.so.
Please check the Intel MKL Link Line Advisor to see what MKL suggests to use:
https://software.intel.com/content/www/us/en/develop/articles/intel-mkl-link-line-advisor.html
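As a rough sketch, the targets could then look like this, assuming the MKL shared objects are available under the package's lib/ directory (the exact library set depends on your MKL version and threading model, so verify it against the advisor), and noting that the binary should also depend on :mkl_headers so that mkl.h, mkl_types.h, and mkl_dfti.h resolve:

cc_library(
    name = "mkl_libs_linux",
    srcs = [
        # Library set for a typical LP64 + Intel OpenMP dynamic link;
        # check the Link Line Advisor for your MKL version.
        "lib/libiomp5.so",
        "lib/libmkl_core.so",
        "lib/libmkl_intel_lp64.so",
        "lib/libmkl_intel_thread.so",
    ],
    visibility = ["//visibility:public"],
)

cc_binary(
    name = "mklfft",
    srcs = ["main.c"],
    deps = [
        ":mkl_headers",  # so the mkl*.h headers resolve
        ":mkl_libs_linux",
    ],
)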
By default, Bazel supports #include'ing files relative to the WORKSPACE directory. However, as soon as a relative #include path from the workspace root contains a symlink, this seems to no longer hold. For example, suppose we have some directory structure as follows:
WORKSPACE
libs/foo/src/foo.cpp
libs/foo/itf/foo.h
libs/foo/BUILD
foo -> libs/foo
Where foo in the root directory is a symlink to libs/foo. Suppose we have:
foo.h:
#pragma once
void foo();
foo.cpp:
#include "foo/itf/foo.h"
void foo() {}
BUILD:
load("#rules_cc//cc:defs.bzl", "cc_binary", "cc_library")
cc_library(
name = "foo",
srcs = ["src/foo.cpp"],
hdrs = ["itf/foo.h"],
)
In this case, bazel build //... gives the following error message:
Starting local Bazel server and connecting to it...
INFO: Analyzed 2 targets (38 packages loaded, 166 targets configured).
INFO: Found 2 targets...
ERROR: [...]/foo/libs/foo/BUILD:3:11: Compiling libs/foo/src/foo.cpp failed: (Exit 1): gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer '-std=c++0x' -MD -MF ... (remaining 16 arguments skipped)
Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging
libs/foo/src/foo.cpp:1:10: fatal error: foo/itf/foo.h: No such file or directory
1 | #include "foo/itf/foo.h"
| ^~~~~~~~~~~~~~~
Is this a known issue or known behavior?
Bazel is not aware of the symlink, so this does not work; you have to change your include path to libs/foo/itf/foo.h. Alternatively, you can add includes = ["libs"] to the cc_library to get around this.
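For example, rewriting foo.cpp to use the workspace-relative path (the BUILD file stays unchanged):

// foo.cpp -- include via the real path, not via the symlink
#include "libs/foo/itf/foo.h"

void foo() {}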
I am evaluating a switch to Meson. I set up a small project and created this meson.build file:
project('utils', 'cpp')
json_dep = dependency('jsoncpp')
boost_dep = dependency('boost', modules : [ 'filesystem' ])
occ_dep = dependency('OpenCASCADE', method: 'cmake')
utils_deps = [ occ_dep, json_dep, boost_dep ]
utils_lib = library('utils', dependencies: utils_deps)
If I use utils_deps = [ json_dep, boost_dep ], the compilation works. However, adding occ_dep to the list produces the following error:
FAILED: src/libs/utils/libutils.dylib
c++ -o src/libs/utils/libutils.dylib -Wl,-dead_strip_dylibs -Wl,
-headerpad_max_install_names -Wl,-undefined,error -shared -install_name
@rpath/libutils.dylib -Wl,-rpath,/opt/homebrew/Cellar/jsoncpp/1.9.5/lib
/opt/homebrew/lib/libTKernel.7.6.2.dylib /opt/homebrew/opt/tbb/lib/libtbb.dylib
/opt/homebrew/opt/tbb/lib/libtbbmalloc.dylib /opt/homebrew/lib/libTKMath.7.6.2.dylib
/opt/homebrew/lib/libTKG2d.7.6.2.dylib /opt/homebrew/lib/libTKG3d.7.6.2.dylib
/Library/Developer/CommandLineTools/SDKs/
MacOSX12.sdk/System/Library/Frameworks/AppKit.framework
/Library/Developer/CommandLineTools/SDKs/
MacOSX12.sdk/System/Library/Frameworks/IOKit.framework
/opt/homebrew/lib/libTKService.7.6.2.dylib
/opt/homebrew/opt/freeimage/lib/libfreeimage.dylib
/opt/homebrew/opt/freetype/lib/libfreetype.dylib
/Library/Developer/CommandLineTools/SDKs/
MacOSX12.sdk/System/Library/Frameworks/AppKit.framework
/opt/homebrew/lib/libTKQADraw.7.6.2.dylib
/opt/homebrew/Cellar/jsoncpp/1.9.5/lib/libjsoncpp.dylib
/opt/homebrew/Cellar/boost/1.78.0_1/lib/libboost_filesystem-mt.dylib
ld: can't map file, errno=22 file '/Library/Developer/CommandLineTools/SDKs/MacOSX12.sdk/System/Library/Frameworks/AppKit.framework' for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
ninja: build stopped: subcommand failed.
I am working on a MacBook Pro with an M1 chip.
Thanks
It seems that the link command treats the framework directory as if it were a plain file:
/Library/Developer/CommandLineTools/SDKs/MacOSX12.sdk/System/Library/Frameworks/AppKit.framework
Your OpenCASCADE dependency uses the cmake method; try also specifying the libraries you need with the modules keyword:
occ_dep = dependency('OpenCASCADE', method: 'cmake', modules : ['openCascade lib ......'])
See also https://mesonbuild.com/Dependencies.html
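As a sketch, using toolkit names taken from the libraries that appear in your link line (the exact module names depend on your OpenCASCADE installation's CMake package config):

occ_dep = dependency(
  'OpenCASCADE',
  method : 'cmake',
  modules : ['TKernel', 'TKMath', 'TKG2d', 'TKG3d', 'TKService'],
)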
I was able to install glog with:
brew install glog
Then I can successfully compile and use it using g++:
g++ src/main/main_logger.cc -std=c++17 -lglog
How can I do this with Bazel? I get this error:
fatal error: 'glog/logging.h' file not found
#include <glog/logging.h>
^~~~~~~~~~~~~~~~
1 error generated.
UPDATE:
Instead of installing and building glog locally, I ended up referencing it as a git repo in the WORKSPACE file:
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

git_repository(
    name = "glog",
    remote = "https://github.com/google/glog.git",
    tag = "v0.5.0",
)
Now I can depend on it in my cc_binary rules like this:
cc_binary(
    name = "main_logger",
    srcs = ["main_logger.cc"],
    deps = [
        "//src/lib:CPPLib",
        "@com_github_gflags_gflags//:gflags",
        "@glog",
    ],
)
There is already a doc about using glog within a project that uses the Bazel build tool (link).
Then you can create a BUILD file and use bazel build //src/main:main_logger to build it.
cc_binary(
    name = "main_logger",
    srcs = ["main_logger.cc"],
    deps = ["@com_github_google_glog//:glog"],
)
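For reference, a minimal WORKSPACE along the lines of that doc might look like the sketch below (versions are illustrative, and in practice you should pin a sha256 for each archive; glog's Bazel build also needs gflags):

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "com_github_gflags_gflags",
    strip_prefix = "gflags-2.2.2",
    urls = ["https://github.com/gflags/gflags/archive/v2.2.2.tar.gz"],
)

http_archive(
    name = "com_github_google_glog",
    strip_prefix = "glog-0.5.0",
    urls = ["https://github.com/google/glog/archive/v0.5.0.tar.gz"],
)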
System: LMDE4, 64bit, gcc-8.3.0, VS Code
Target File: https://github.com/opencv/opencv/blob/master/samples/cpp/videocapture_camera.cpp
Now, as the title says, this is starting to piss me off. Nothing I try fixes such a simple issue. And NO, I don't want to always use "-I" to tell the compiler pretty obvious things. Here is what I've done so far.
in c_cpp_properties.json of VS Code:
{
"configurations": [
{
"name": "Linux",
"includePath": [
"${workspaceFolder}/**",
"/usr/include/**"
],
"defines": [],
"compilerPath": "/usr/bin/gcc",
"cStandard": "c11",
"cppStandard": "gnu++14",
"intelliSenseMode": "clang-x64",
"browse": {
"path": [
"/usr/include/"
]
}
}
],
"version": 4
}
in .bashrc:
#C Include
export C_INCLUDE_PATH="/usr/include"
export C_INCLUDE_PATH=$C_INCLUDE_PATH:"/usr/include/opencv2"
#C++ Include
export CPLUS_INCLUDE_PATH="/usr/include"
export CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:"/usr/include/c++/8/"
export CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:"/usr/include/opencv2"
#C/C++ Include
export CPATH="/usr/include"
I am pretty sure that all the .bashrc exports are already a dirty workaround and still I get the following message on compile:
In file included from /usr/include/c++/8/bits/stl_algo.h:59,
from /usr/include/c++/8/algorithm:62,
from /usr/include/opencv2/core/base.hpp:55,
from /usr/include/opencv2/core.hpp:54,
from ~/LearnDummy/helloworld.cpp:1:
/usr/include/c++/8/cstdlib:75:15: fatal error: stdlib.h: No such file or directory
#include_next <stdlib.h>
^~~~~~~~~~
compilation terminated.
Well fine... stdlib.h is unknown (Jesus!)... find /usr -name stdlib.h gives me
/usr/include/stdlib.h
/usr/include/c++/8/stdlib.h
/usr/include/c++/8/tr1/stdlib.h
/usr/include/x86_64-linux-gnu/bits/stdlib.h
/usr/include/i386-linux-gnu/bits/stdlib.h
In addition, VS Code already knows(!) where the file is once I click on "Go to Definition", and still gcc is blind. How do I reliably get rid of this?
Here is a minimal repro of your problem on Ubuntu 20.04.
$ g++ --version
g++ (Ubuntu 9.3.0-10ubuntu2) 9.3.0
...
$ cat main.cpp
#include <cstdlib>
int main ()
{
return EXIT_SUCCESS;
}
$ export CPLUS_INCLUDE_PATH="/usr/include"; g++ -c main.cpp
In file included from main.cpp:1:
/usr/include/c++/9/cstdlib:75:15: fatal error: stdlib.h: No such file or directory
75 | #include_next <stdlib.h>
| ^~~~~~~~~~
compilation terminated.
Note that export CPLUS_INCLUDE_PATH="/usr/include" here has the same effect as
the identical setting in your .bashrc.
The error does not occur if we remove that environment setting:
$ export CPLUS_INCLUDE_PATH=; g++ -c main.cpp; echo Done
Done
The effect of that environment setting, as per the GCC Manual: 3.21 Environment Variables Affecting GCC
is the same as:
$ g++ -isystem /usr/include -c main.cpp
In file included from main.cpp:1:
/usr/include/c++/9/cstdlib:75:15: fatal error: stdlib.h: No such file or directory
75 | #include_next <stdlib.h>
| ^~~~~~~~~~
compilation terminated.
which accordingly reproduces the error.
The -isystem option is documented in the GCC Manual: 3.16 Options for Directory Search
The general solution to your problem is: Don't run a g++ compilation in any way
that has the effect of g++ ... -isystem /usr/include ...
You can avoid running a g++ command in such a way because the option
-isystem /usr/include is unnecessary. /usr/include is a default search directory
for the preprocessor. You don't need to tell it to look for system header files there -
either via environment settings, or via a VS Code configuration, or any other way.
See the preprocessor's default search order for C++:
$ echo | g++ -x c++ -E -Wp,-v -
ignoring duplicate directory "/usr/include/x86_64-linux-gnu/c++/9"
ignoring nonexistent directory "/usr/local/include/x86_64-linux-gnu"
ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/9/include-fixed"
ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/9/../../../../x86_64-linux-gnu/include"
#include "..." search starts here:
#include <...> search starts here:
/usr/include/c++/9
/usr/include/x86_64-linux-gnu/c++/9
/usr/include/c++/9/backward
/usr/lib/gcc/x86_64-linux-gnu/9/include
/usr/local/include
/usr/include/x86_64-linux-gnu
/usr/include ### <- There it is ###
End of search list.
...
So your comment:
I am pretty sure that all the .bashrc exports are already a dirty workaround
is on the money.[1] But what's worse, the .bashrc setting:
export CPLUS_INCLUDE_PATH="/usr/include"
turns the problem into a persistent feature of your bash profile.
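The immediate fix on that front is simply to delete those exports from your .bashrc and start a fresh shell (a sketch; the "..." stand for the settings posted above):

# In .bashrc, delete (or comment out) all of these:
# export C_INCLUDE_PATH=...
# export CPLUS_INCLUDE_PATH=...
# export CPATH=...

$ exec bash   # restart the shell so the stale environment is gone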
How does the error happen?
The difference that is made to the preprocessor's search order by -isystem /usr/include
can be seen here:
$ echo | g++ -x c++ -isystem /usr/include -E -Wp,-v -
ignoring duplicate directory "/usr/include/x86_64-linux-gnu/c++/9"
ignoring nonexistent directory "/usr/local/include/x86_64-linux-gnu"
ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/9/include-fixed"
ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/9/../../../../x86_64-linux-gnu/include"
ignoring duplicate directory "/usr/include"
#include "..." search starts here:
#include <...> search starts here:
/usr/include ### <- Was previously last, now is first ###
/usr/include/c++/9
/usr/include/x86_64-linux-gnu/c++/9
/usr/include/c++/9/backward
/usr/lib/gcc/x86_64-linux-gnu/9/include
/usr/local/include
/usr/include/x86_64-linux-gnu
End of search list.
...
As you see, /usr/include is detected now as a duplicated directory in the <...> search
order; the second occurrence - which was last, previously - is deleted and the first occurrence is
retained, coming first in the search order.
Now recall the diagnostic:
/usr/include/c++/9/cstdlib:75:15: fatal error: stdlib.h: No such file or directory
75 | #include_next <stdlib.h>
| ^~~~~~~~~~
The preprocessor directive #include_next is not a standard directive, it is
a GCC extension, documented in the GCC manual: 2.7 Wrapper Headers
Whereas #include <stdlib.h> means:
Include the first file called stdlib.h discovered in the <...> search order, starting from the start
#include_next <stdlib.h> means:
Include the next file called stdlib.h discovered in the <...> search order, starting from the
directory right after that of the file being processed now.
The only directory in the <...> search order that contains stdlib.h is /usr/include. So,
if #include_next <stdlib.h> is encountered by the preprocessor in any file in any directory dir in the <...>
search order, while /usr/include is first in the <...> search order, there can be no directory
later than dir in the <...> search order where <stdlib.h> will be found. And so the error.
#include_next <foobar.h> can only work if the <...> search order places the directory containing
<foobar.h> after the one that contains the file that contains the directive. As a rule of thumb,
just don't mess with the <...> search order.
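To illustrate the intended use (a hypothetical wrapper, not something from the question): a wrapper header placed in a directory that comes before /usr/include in the <...> search order can forward to the real header with #include_next:

/* wrapped/stdlib.h -- hypothetical wrapper header */
#pragma once
/* Do wrapper-specific work here, then resume the <...> search in the
   directory after this one, eventually finding /usr/include/stdlib.h. */
#include_next <stdlib.h>

Compiled with, say, g++ -I wrapped main.cpp, the -I directory is searched before the default system directories, so /usr/include still lies after it in the <...> order and the #include_next succeeds.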
The problem just discussed was the subject of a regression bug-report raised against GCC 6.0.
As you can see there, the resolution was WONTFIX.
[1] All of your .bashrc exports as posted are, as you suspect, poor practice.
It isn't necessary to tell the preprocessor about any search directories in
its default search order. You can only make things wrong.
Directories that will not be found by default should be specified by
-I dir options specified on the commandline (typically injected via parameters
of the build configuration), so that these non-default options are visible in build logs
for trouble shooting. "Invisible hands" are to be avoided in build systems to the
utmost practical extent.
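Concretely, for the OpenCV sample in the question, you can let pkg-config inject the non-default include and library directories on the command line (assuming your distribution ships the pkg-config file as opencv4; older OpenCV packages name it opencv):

$ g++ videocapture_camera.cpp -o videocapture_camera $(pkg-config --cflags --libs opencv4)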
I have a C++ project where I want to use the -Wsign-conversion compile option. Bazel 0.28.1 is used as the build tool. My self-written code does not generate warnings of that type. The project uses Googletest. Unfortunately, Googletest generates this kind of warning, which breaks my build. Here are my files:
.bazelrc
build --cxxopt=-Werror # Every warning is treated as an error.
build --cxxopt=-std=c++14
build --cxxopt=-Wsign-conversion # Warn for implicit conversions that may change the sign of an integer value
gtest.BUILD
cc_library(
    name = "main",
    srcs = glob(
        ["src/*.cc"],
        exclude = ["src/gtest-all.cc"],
    ),
    hdrs = glob([
        "include/**/*.h",
        "src/*.h",
    ]),
    copts = ["-Iexternal/gtest/include"],
    linkopts = ["-pthread"],
    visibility = ["//visibility:public"],
)
WORKSPACE
workspace(name = "GTestDemo")
load("#bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
git_repository(
name = "googletest",
remote = "https://github.com/google/googletest",
#tag = "release-1.8.1",
commit = "2fe3bd994b3189899d93f1d5a881e725e046fdc2",
shallow_since = "1535728917 -0400",
)
BUILD
cc_test(
    name = "tests",
    srcs = ["test.cpp"],
    copts = ["-Iexternal/gtest/include"],
    deps = [
        "@googletest//:gtest_main",
    ],
)
test.cpp
#include <iostream>
#include "gtest/gtest.h"
TEST(sample_test_case, sample_test)
{
    EXPECT_EQ(1, 1);
}
When I try to build the code the following error is shown:
Starting local Bazel server and connecting to it...
INFO: Analyzed target //:tests (21 packages loaded, 540 targets configured).
INFO: Found 1 target...
INFO: Deleting stale sandbox base /mnt/ramdisk/bazel-sandbox.be60b2910864108c1e29c6fce8ad6ea4
ERROR: /home/admin/.cache/bazel/_bazel_admin/cc9b56275ffa85d1a0fca263d1d708e4/external/googletest/BUILD.bazel:55:1: C++ compilation of rule '#googletest//:gtest' failed (Exit 1) gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer '-std=c++0x' -MD -MF ... (remaining 36 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox
external/googletest/googletest/src/gtest-filepath.cc: In member function 'testing::internal::FilePath testing::internal::FilePath::RemoveFileName() const':
external/googletest/googletest/src/gtest-filepath.cc:168:45: error: conversion to 'std::__cxx11::basic_string<char>::size_type {aka long unsigned int}' from 'long int' may change the sign of the result [-Werror=sign-conversion]
dir = std::string(c_str(), last_sep + 1 - c_str());
~~~~~~~~~~~~~^~~~~~~~~
cc1plus: all warnings being treated as errors
Target //:tests failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 7.225s, Critical Path: 0.66s
INFO: 0 processes.
FAILED: Build did NOT complete successfully
Is there a possibility in Bazel to ignore warnings in 3rd-party-library such as googletest?
This is more of a GCC question.
"System headers" are immune from this kind of thing:
-Wsystem-headers: Print warning messages for constructs found in system header files. Warnings from system headers are normally suppressed, on the assumption that they usually do not indicate real problems and would only make the compiler output harder to read.
So, you can just pretend that the 3rd-party lib is a system header, using GCC's -isystem flag (instead of -I):
copts = ["-isystem external/gtest/include"],
Bazel's --per_file_copt allows setting flags for all files except ones matching some regular expression. Something more like this in your .bazelrc should do what you're looking for:
# Warn for implicit conversions that may change the sign of an integer value,
# for C++ files not in googletest
build --per_file_copt=.*\.(cc|cpp),-googletest/.*@-Wsign-conversion
You'll need to update that to match any extensions you use for C++ files other than .cc and .cpp. The documentation for cc_library.srcs lists all the extensions Bazel uses, for reference.
I couldn't figure out how to match @googletest// in the flag, because I don't see a way to escape an @ there... However, I'm pretty sure it's matching against @googletest and not external/googletest or something, because /googletest doesn't match anything. Probably won't matter, but it is something to keep in mind if you have any other filenames with googletest in them.