I have the following BUILD file:
load("#com_google_protobuf//:protobuf.bzl", "cc_proto_library")
### Protos ###
cc_proto_library(
name = "homework_cc_proto",
protoc = "#com_google_protobuf//:protoc",
default_runtime = "#com_google_protobuf//:protobuf",
)
proto_library(
name = "homework_proto",
srcs = [
"protos/complexity.proto",
"protos/example.proto",
"protos/problem.proto",
"protos/solution.proto",
],
)
### End Protos ###
### Binaries ###
cc_binary(
name = "main",
srcs = ["main.cc"],
deps = [":homework_cc_proto"],
)
and main.cc:
#include <iostream>

#include "example.pb.h"

int main() {
    std::cout << "Hello!" << std::endl;
}
If I invoke bazel build :homework_cc_proto, the build is successful. However, when I run bazel build :main I get an error saying that example.pb.h cannot be found. How can I import my built protobufs?
Your cc_proto_library needs to depend on homework_proto.
cc_proto_library(
    name = "homework_cc_proto",
    protoc = "@com_google_protobuf//:protoc",
    default_runtime = "@com_google_protobuf//:protobuf",
    deps = [":homework_proto"],
)
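Note that the generated headers are emitted under the package path of the .proto files, so with the sources in protos/ the include in main.cc will most likely need that path too (assuming the default generation layout):

#include "protos/example.pb.h"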
I'm having trouble configuring dap in NeoVim. After executing :lua require'dap'.continue() I get this error:
Path to executable: /home/user/Projects/C++/app/E5108: Error executing lua ...l/share/nvim/site/pack/packer/start/nvim-dap/lua/dap.lua:385: ...nvim/site/pack/packer/start/nvim-dap/lua/dap/session.lua:1295: Error running /home/user/Devtools/vscode-lldb: EACCES: permission denied
stack traceback:
[C]: in function 'trigger_run'
...l/share/nvim/site/pack/packer/start/nvim-dap/lua/dap.lua:385: in function 'run'
...l/share/nvim/site/pack/packer/start/nvim-dap/lua/dap.lua:319: in function 'cb'
...hare/nvim/site/pack/packer/start/nvim-dap/lua/dap/ui.lua:34: in function 'pick_if_many'
...l/share/nvim/site/pack/packer/start/nvim-dap/lua/dap.lua:313: in function 'select_config_and_run'
...l/share/nvim/site/pack/packer/start/nvim-dap/lua/dap.lua:688: in function 'continue'
[string ":lua"]:1: in main chunk
The directory with vscode-lldb: the user has full access to it.
drwxrwxr-x 17 user user 4,0K dec 30 07:12pm vscode-lldb
debugging.lua
require("dap").adapters.lldb = {
type = "executable",
command = "/home/user/Devtools/vscode-lldb",
name = "lldb",
}
local lldb = {
name = "Launch lldb",
type = "lldb", -- matches the adapter
request = "launch", -- could also attach to a currently running process
program = function()
return vim.fn.input(
"Path to executable: ",
vim.fn.getcwd() .. "/",
"file"
)
end,
cwd = "${workspaceFolder}",
stopOnEntry = false,
args = {},
runInTerminal = false,
}
local dap = require('dap')
dap.configurations.cpp = {
{
name = 'Launch',
type = 'lldb',
request = 'launch',
program = function()
return vim.fn.input('Path to executable: ', vim.fn.getcwd() .. '/', 'file')
end,
cwd = '${workspaceFolder}',
stopOnEntry = false,
args = {},
},
}
dap.configurations.c = dap.configurations.cpp
I can't find a similar problem, so I'm doing something wrong, but what? Does anyone have any suggestions?
I have already tried changing the folder access permissions for the application and for vscode-lldb, and updating the plugins.
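One detail worth checking, given the drwxrwxr-x listing above: /home/user/Devtools/vscode-lldb is a directory, and executing a directory fails with EACCES regardless of its permission bits. The adapter command has to point at an actual executable inside the extension. A minimal sketch, assuming a typical vscode-lldb layout (the adapter/codelldb path is an assumption; verify it locally):

require("dap").adapters.lldb = {
    type = "executable",
    -- assumed path: the codelldb binary inside the extension,
    -- not the vscode-lldb directory itself
    command = "/home/user/Devtools/vscode-lldb/adapter/codelldb",
    name = "lldb",
}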
I am using Visual Studio to build and debug my project, but while debugging I get an error that the breakpoint will not currently be hit because no symbols have been loaded.
I have the following things in my project.
Main.cpp:

#include <cstdio>   // for getchar
#include <iostream>
#include <string>

int GetNumber()
{
    int a = 0, b = 1;
    return a + b;
}

int main()
{
    std::string str = "Hello";
    std::cout << str;
    std::cout << GetNumber();
    getchar();
}
Also, I have been using the toolchain from the gn website: https://gn.googlesource.com/gn/+/HEAD/examples/simple_build
build/BUILD.gn:
# Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
config("compiler_defaults") {
if (current_os == "linux") {
cflags = [
"-fPIC",
"-pthread",
]
}
}
config("executable_ldconfig") {
if (!is_mac) {
ldflags = [
"-Wl,-rpath=\$ORIGIN/",
"-Wl,-rpath-link=",
]
}
}
BUILDCONFIG.gn
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

if (target_os == "") {
  target_os = host_os
}
if (target_cpu == "") {
  target_cpu = host_cpu
}
if (current_cpu == "") {
  current_cpu = target_cpu
}
if (current_os == "") {
  current_os = target_os
}

is_linux = host_os == "linux" && current_os == "linux" && target_os == "linux"
is_mac = host_os == "mac" && current_os == "mac" && target_os == "mac"

# All binary targets will get this list of configs by default.
_shared_binary_target_configs = [ "//build:compiler_defaults" ]

# Apply that default list to the binary target types.
set_defaults("executable") {
  configs = _shared_binary_target_configs

  # Executables get this additional configuration.
  configs += [ "//build:executable_ldconfig" ]
}
set_defaults("static_library") {
  configs = _shared_binary_target_configs
}
set_defaults("shared_library") {
  configs = _shared_binary_target_configs
}
set_defaults("source_set") {
  configs = _shared_binary_target_configs
}

set_default_toolchain("//build/toolchain:gcc")
build/toolchain/BUILD.gn
# Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
toolchain("gcc") {
tool("cc") {
depfile = "{{output}}.d"
command = "gcc -MMD -MF $depfile {{defines}} {{include_dirs}} {{cflags}} {{cflags_c}} -c {{source}} -o {{output}}"
depsformat = "gcc"
description = "CC {{output}}"
outputs =
[ "{{source_out_dir}}/{{target_output_name}}.{{source_name_part}}.o" ]
}
tool("cxx") {
depfile = "{{output}}.d"
command = "g++ -MMD -MF $depfile {{defines}} {{include_dirs}} {{cflags}} {{cflags_cc}} -c {{source}} -o {{output}}"
depsformat = "gcc"
description = "CXX {{output}}"
outputs =
[ "{{source_out_dir}}/{{target_output_name}}.{{source_name_part}}.o" ]
}
tool("alink") {
command = "rm -f {{output}} && ar rcs {{output}} {{inputs}}"
description = "AR {{target_output_name}}{{output_extension}}"
outputs =
[ "{{target_out_dir}}/{{target_output_name}}{{output_extension}}" ]
default_output_extension = ".a"
output_prefix = "lib"
}
tool("solink") {
soname = "{{target_output_name}}{{output_extension}}" # e.g. "libfoo.so".
sofile = "{{output_dir}}/$soname"
rspfile = soname + ".rsp"
if (is_mac) {
os_specific_option = "-install_name #executable_path/$sofile"
rspfile_content = "{{inputs}} {{solibs}} {{libs}}"
} else {
os_specific_option = "-Wl,-soname=$soname"
rspfile_content = "-Wl,--whole-archive {{inputs}} {{solibs}} -Wl,--no-whole-archive {{libs}}"
}
command = "g++ -shared {{ldflags}} -o $sofile $os_specific_option #$rspfile"
description = "SOLINK $soname"
# Use this for {{output_extension}} expansions unless a target manually
# overrides it (in which case {{output_extension}} will be what the target
# specifies).
default_output_extension = ".so"
# Use this for {{output_dir}} expansions unless a target manually overrides
# it (in which case {{output_dir}} will be what the target specifies).
default_output_dir = "{{root_out_dir}}"
outputs = [ sofile ]
link_output = sofile
depend_output = sofile
output_prefix = "lib"
}
tool("link") {
outfile = "{{target_output_name}}{{output_extension}}"
rspfile = "$outfile.rsp"
if (is_mac) {
command = "g++ {{ldflags}} -o $outfile #$rspfile {{solibs}} {{libs}}"
} else {
command = "g++ {{ldflags}} -o $outfile -Wl,--start-group #$rspfile {{solibs}} -Wl,--end-group {{libs}}"
}
description = "LINK $outfile"
default_output_dir = "{{root_out_dir}}"
rspfile_content = "{{inputs}}"
outputs = [ outfile ]
}
tool("stamp") {
command = "touch {{output}}"
description = "STAMP {{output}}"
}
tool("copy") {
command = "cp -af {{source}} {{output}}"
description = "COPY {{source}} {{output}}"
}
}
Now, to create the solution file, I am using the following commands:
gn gen --ide=vs out/Default
ninja -C out/Default
The solution file is generated properly and the project builds properly, but breakpoints are not hit while debugging.
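Worth noting about the symbols: neither tool("cc") nor tool("cxx") in this toolchain passes any debug-information flag, so the binaries are built without symbols to load. A hedged sketch of adding -g in build/BUILD.gn (with a caveat: even with -g, g++ emits DWARF debug info, which the Visual Studio debugger generally cannot read; the generated .sln drives ninja for building, but stepping through code typically requires an MSVC-built binary with a PDB):

config("compiler_defaults") {
  cflags = [ "-g" ]  # assumption: always emit debug info for this sketch
  if (current_os == "linux") {
    cflags += [
      "-fPIC",
      "-pthread",
    ]
  }
}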
I've been trying to write a Bazel rule to wrap compiling RISC-V source files (plus some other stuff), but I've been having trouble getting a CcToolchainInfo provider.
I have a rule that works that looks like:
rv_cc_toolchain_config = rule(
    implementation = _impl,
    attrs = {},
    provides = [CcToolchainConfigInfo],
)
in order to provide config info. I have the following in toolchains/BUILD:
load(":cc_toolchain_config.bzl", "rv_cc_toolchain_config")
package(default_visibility = ['//visibility:public'])
rv_cc_toolchain_config(name="rv_toolchain_cfg")
cc_toolchain(
name='rv_toolchain',
toolchain_identifier='rv-toolchain',
toolchain_config=':rv_toolchain_cfg',
all_files=':nofile',
strip_files=':nofile',
objcopy_files=':nofile',
dwp_files=':nofile',
compiler_files=':nofile',
linker_files=':nofile',
)
This seems to all work fine; I then have my custom rule to compile with riscv:
load("@bazel_tools//tools/cpp:toolchain_utils.bzl", "find_cpp_toolchain")

def _compile_impl(ctx):
    deps = []
    cc_toolchain = find_cpp_toolchain(ctx)
    print(ctx.attr._cc_toolchain)
    compilation_contexts = [dep[CcInfo].compilation_context for dep in deps]
    print(type(cc_toolchain))
    feature_configuration = cc_common.configure_features(  # fails here
        ctx = ctx,
        cc_toolchain = cc_toolchain,
        requested_features = ctx.features,  # currently does nothing
        unsupported_features = ctx.disabled_features,
    )
rv_compile = rule(
    _compile_impl,
    output_to_genfiles = True,
    attrs = {
        "srcs": attr.label_list(
            doc = "List of source files",
            mandatory = False,
            allow_files = [".cc", ".cpp", ".h", ".c"],
        ),
        "hdrs": attr.label_list(
            doc = "List of header files",
            allow_files = [".h"],
        ),
        "_cc_toolchain": attr.label(
            #default = Label("@bazel_tools//tools/cpp:current_cc_toolchain"),
            default = Label("//toolchains:rv_toolchain"),
        ),
    },
    provides = [
        DefaultInfo,
        CcInfo,
    ],
    toolchains = [
        "@bazel_tools//tools/cpp:toolchain_type",
    ],
    fragments = ["cpp"],
)
This fails when configuring features, because cc_toolchain is of type ToolchainInfo and not the required CcToolchainInfo. Does anyone have any insight on how to obtain CcToolchainInfo within a rule? Or is there a better way of doing this? The documentation seems to go dark on this.
Oops -- I figured this out after trawling through GitHub. It turns out that directly referencing cc_toolchain is incorrect; CcToolchainInfo is provided via cc_toolchain_suite.
Updating toolchains/BUILD to look something like:
load(":cc_toolchain_config.bzl", "rv_cc_toolchain_config")
package(default_visibility = ['//visibility:public'])
rv_cc_toolchain_config(name="rv_toolchain_cfg")
filegroup(name = 'empty')
cc_toolchain(
name='rv_toolchain',
toolchain_identifier='sanity-toolchain',
toolchain_config=':rv_toolchain_cfg',
all_files=':empty',
strip_files=':empty',
objcopy_files=':empty',
dwp_files=':empty',
compiler_files=':empty',
linker_files=':empty',
)
cc_toolchain_suite(
name='rv',
toolchains={
'darwin': ':rv_toolchain', #use whatever OS you need here...
}
)
and the rv compile rule to something like:
rv_compile = rule(
    _compile_impl,
    output_to_genfiles = True,
    attrs = {
        "srcs": attr.label_list(
            doc = "List of source files",
            mandatory = False,
            allow_files = [".cc", ".cpp", ".h", ".c"],
        ),
        "hdrs": attr.label_list(
            doc = "List of header files",
            allow_files = [".h"],
        ),
        "_cc_toolchain": attr.label(
            #default = Label("@bazel_tools//tools/cpp:current_cc_toolchain"),
            default = Label("//toolchains:rv"),
        ),
    },
    provides = [
        DefaultInfo,
        CcInfo,
    ],
    toolchains = [
        "@bazel_tools//tools/cpp:toolchain_type",
    ],
    fragments = ["cpp"],
)
Works like a charm :) Anyone reading this should also enable the experimental Skylark C++ APIs. If anyone knows how to make cc_toolchain_suite CPU-agnostic, I'd love to hear it. Cheers.
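(A hedged note for later readers on the CPU-agnostic question: registering the cc_toolchain through a toolchain() target and enabling C++ toolchain resolution sidesteps the per-CPU keys of cc_toolchain_suite entirely. A sketch, assuming the toolchains/BUILD above; constraint values matching the target platform could be added:)

toolchain(
    name = "rv_cc_toolchain",
    toolchain = ":rv_toolchain",
    toolchain_type = "@bazel_tools//tools/cpp:toolchain_type",
)

With register_toolchains("//toolchains:rv_cc_toolchain") in the WORKSPACE and builds run with --incompatible_enable_cc_toolchain_resolution, the _cc_toolchain attribute can then default to @bazel_tools//tools/cpp:current_cc_toolchain again.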
I'm trying to use Bazel to build a C++ project that uses FlatBuffers.
But my map_schema_generated.h, generated with flatc, is not found.
My tree:
|
|_ data
| |_ maps
| |_ BUILD
| |_ map_schema.fbs
|
|_ src
| |_ map
| |_ BUILD
| |_ map.hpp
| |_ map.cpp
|
|_ tools
| |_ BUILD
| |_ generate_fbs.bzl
|
|_ WORKSPACE
tools/generate_fbs.bzl:
def _impl(ctx):
    output = ctx.outputs.out
    input = ctx.files.srcs
    print("generating", output.basename)
    ctx.action(
        use_default_shell_env = True,
        outputs = [output],
        inputs = input,
        progress_message = "Generating %s with %s" % (output.path, input[0].path),
        command = "flatc -o %s --cpp %s" % (output.dirname, input[0].path),
    )

generate_fbs = rule(
    implementation = _impl,
    output_to_genfiles = True,
    attrs = {
        "srcs": attr.label_list(allow_files = True, allow_empty = False),
        "out": attr.output(),
    },
)
data/maps/BUILD:
load("//tools:generate_fbs.bzl", "generate_fbs")
generate_fbs(
name = "schema",
srcs = ["map_schema.fbs"],
out = "map_schema_generated.h",
visibility = ["//visibility:public"]
)
src/map/BUILD:
cc_library(
    name = "map",
    srcs = [
        "//data/maps:map_schema_generated.h",
        "map.hpp",
        "map.cpp",
    ],
)
src/map/map.cpp has #include "map_schema_generated.h".
The command line I use to build: bazel build //src/map.
If I run find over bazel-*, I get:
bazel-genfiles/data/maps/map_schema_generated.h
bazel-out/k8-fastbuild/genfiles/data/maps/map_schema_generated.h
bazel-my-workspace-name/bazel-out/k8-fastbuild/genfiles/data/maps/map_schema_generated.h
And if I cat these files, I can see that they are generated correctly.
All the information I have found is about TensorFlow, and it is not really helpful.
The problem is that your cc_library doesn't actually recognize your generated header as requiring any special action (like adding an -I flag for the location it's in). It gets generated and lives in the build tree, but not anywhere the compiler (preprocessor) would look for it while working on map.cpp. (Run the build with -s for a bit more insight into what happened and how.)
Now, about how to address this: there might be a better way, but the following appears to work. I guess this functionality could also be rolled into the generate_fbs rule.
In data/maps/BUILD I've added a "header only" library as follows:
cc_library(
    name = "map_schema_hdr",
    hdrs = [":map_schema_generated.h"],
    include_prefix = ".",  # to manipulate -I of dependencies
    visibility = ["//visibility:public"],
)
In src/map/BUILD I would then use this header-only library as a dependency of map:

cc_library(
    name = "map",
    srcs = [
        "map.cpp",
        "map.hpp",
    ],
    deps = [
        "//data/maps:map_schema_hdr",
    ],
)
To play a bit more with the idea of having a single rule (macro) for convenience, I've made the following changes:
tools/generate_fbs.bzl now reads:
def _impl(ctx):
    output = ctx.outputs.out
    input = ctx.files.srcs
    print("generating", output.basename)
    ctx.action(
        use_default_shell_env = True,
        outputs = [output],
        inputs = input,
        progress_message = "Generating %s with %s" % (output.path, input[0].path),
        # /bin/cp stands in here for the flatc invocation shown earlier
        command = "/bin/cp %s %s" % (input[0].path, output.path),
    )

_generate_fbs = rule(
    implementation = _impl,
    output_to_genfiles = True,
    attrs = {
        "srcs": attr.label_list(allow_files = True, allow_empty = False),
        "out": attr.output(),
    },
)
def generate_fbs(name, srcs, out):
    _generate_fbs(
        name = "_%s" % name,
        srcs = srcs,
        out = out,
    )
    native.cc_library(
        name = name,
        hdrs = [out],
        include_prefix = ".",
        visibility = ["//visibility:public"],
    )
With that, I could have data/maps/BUILD:
load("//tools:generate_fbs.bzl", "generate_fbs")
generate_fbs(
name = "schema",
srcs = ["map_schema.fbs"],
out = "map_schema_generated.h",
)
And src/map/BUILD contains:
cc_library(
    name = "map",
    srcs = [
        "map.cpp",
        "map.hpp",
    ],
    deps = [
        "//data/maps:schema",
    ],
)
And bazel build //src/map builds bazel-bin/src/map/libmap.a and bazel-bin/src/map/libmap.so.
Instead of #include "map_schema_generated.h" in src/map/map.cpp, I could have written #include "data/maps/map_schema_generated.h".
I think that is the cleanest way to make it work.
In Buck, one might write:
exported_headers = subdir_glob(
    [
        ("lib/source", "video/**/*.h"),
        ("lib/source", "audio/**/*.h"),
    ],
    excludes = [
        "lib/source/video/codecs/*.h",
    ],
    prefix = "MediaLib/",
)
This line would make those headers available under MediaLib/. What would be the equivalent in Bazel?
I ended up writing a rule to do this. It provides something similar to the output of a filegroup, and could be combined with cc_library in a macro.
def _impl_flat_hdr_dir(ctx):
    path = ctx.attr.include_path
    d = ctx.actions.declare_directory(path)
    dests = [ctx.actions.declare_file(path + "/" + h.basename)
             for h in ctx.files.hdrs]
    cmd = """
    mkdir -p {path};
    cp {hdrs} {path}/.
    """.format(path = d.path, hdrs = " ".join([h.path for h in ctx.files.hdrs]))
    ctx.actions.run_shell(
        command = cmd,
        inputs = ctx.files.hdrs,
        outputs = dests + [d],
        progress_message = "doing stuff!!!",
    )
    return struct(
        files = depset(dests),
    )

flat_hdr_dir = rule(
    _impl_flat_hdr_dir,
    attrs = {
        "hdrs": attr.label_list(allow_files = True),
        "include_path": attr.string(mandatory = True),
    },
    output_to_genfiles = True,
)
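For illustration, the rule might then be invoked like this (the target name and glob patterns are hypothetical):

flat_hdr_dir(
    name = "media_hdrs",
    hdrs = glob(["lib/source/video/**/*.h", "lib/source/audio/**/*.h"]),
    include_path = "MediaLib",
)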
I did not test it, but going by the documentation it should be similar to:
cc_library(
    name = "foo",
    hdrs = glob(
        [
            "lib/source/video/**/*.h",
            "lib/source/audio/**/*.h",
        ],
        exclude = ["lib/source/video/codecs/*.h"],
    ),
    strip_include_prefix = "lib/source",
    include_prefix = "MediaLib",
)
https://docs.bazel.build/versions/master/be/c-cpp.html#cc_library.include_prefix
https://docs.bazel.build/versions/master/be/functions.html#glob
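For completeness, a dependent target would then see the headers under the MediaLib/ prefix (a hypothetical consumer; the file names are made up):

cc_library(
    name = "player",
    srcs = ["player.cpp"],  # can now #include "MediaLib/video/decoder.h"
    deps = [":foo"],
)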