Build multiple executables from same code base - fortran

I'm trying Meson/Ninja for what I otherwise do with make. I have my source files listed in the src variable; one of them, prog.f90, contains the statement call ROUTINE, and the preprocessor substitutes names like sub1, sub2, ... to build different test executables 1.x, 2.x, .... Like this:
project('proj','fortran', version : '0')
flags = ['-cpp','-fmax-errors=3','-Wall','-fcheck=all','-fbacktrace','-Og', ...]
src = ['src/f1.f90', 'src/f2.f90', 'prog.f90', 'tools.f90', ...]
progs = [
['1.x', '-DROUTINE=sub1' ],
['2.x', '-DROUTINE=sub2' ],
['3.x', '-DROUTINE=sub3' ],
...
]
foreach p : progs
  executable(p[0], src, fortran_args : flags + [p[1]])
endforeach
It is faster than using make, but Meson/Ninja keeps all cores busy for about 1 second when I change a specific file, while make takes about 2 seconds but mostly runs on a single core.
It seems that each executable gets its own build directory, like build/1.x@exe, with .mod and .o files matching all of src. And running ninja -v shows that the changed file is compiled as many times as there are executables. Meanwhile, make compiles it only once (but then recompiles other files, because dependencies are stated between objects rather than modules).
So what is the smarter way to do this?
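For context, prog.f90 itself is not shown in the question; a minimal sketch of what it presumably looks like (hypothetical, based on the description above):

```fortran
! Hypothetical sketch of prog.f90: ROUTINE is a preprocessor macro,
! so compiling with -cpp -DROUTINE=sub1 turns this into "call sub1"
program prog
  call ROUTINE
end program prog
```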

The easiest way would be to first build a static_library() from all the other source files (i.e. everything that is not prog.f90), and to pass it in the link_with kwarg of your executables.
This would end up something like this:
# Note that 'prog.f90' is no longer in here
src = files('src/f1.f90', 'src/f2.f90', 'tools.f90', ...)
# This is the static library which will only get compiled once
lib = static_library('common', src, fortran_args : flags)
progs = [
['1.x', '-DROUTINE=sub1' ],
['2.x', '-DROUTINE=sub2' ],
['3.x', '-DROUTINE=sub3' ],
...
]
foreach p : progs
  executable(p[0], 'prog.f90', fortran_args : flags + [p[1]], link_with : lib)
endforeach

Related

Bazel project how to copy candidate directories during build stage

I have a cpp project with the later file struct
MyProject
├── WORKSPACE
├── directory_a
│   ├── mylib.cpp
│   ├── interface.h
│   └── BUILD
├── directory_b
│   ├── mylib.cpp
│   ├── interface.h
│   └── BUILD
└── directoryC
    ├── main.cpp
    └── BUILD
In main.cpp I have a simple include:
#include "directory_real/interface.h"
// do something
int main() {
  // say hello world
}
During the build, I want to use genrule (or something like copy_directory) to copy either directory_a or directory_b into directory_real, selected by some flag (in this case, copy directory_b into directory_real). The BUILD file I wrote in the root is:
load("#bazel_skylib//rules:copy_directory.bzl", "copy_directory")
copy_directory("copy_directory_into_real","directory_b","directory_real")
cc_binary(
name = "test",
srcs = [
"main.cpp",
],
visibility = ["//visibility:public"],
deps = [
"//directory_real:mylib",
],
)
However, when I run bazel build //directoryC:test, this error occurs:
ERROR: no such package 'directory_real': BUILD file not found in any of the following directories.
What is wrong? How can I write a correct Bazel BUILD file to do this?

Releasing a meson package, how to specify libraries the depending code should link with?

I have a project A that uses meson as the build system and another project B that depends on A.
A compiles, runs, and passes all its tests. I installed A with meson install, which put all the headers and shared library objects where I needed them to be.
After installing A I wanted to compile B, so I added:
A = dependency('A', include_type : 'system')
exe = executable(
  'B',
  src,
  dependencies : [A],
  cpp_args : '-DSHADER_PATH="' + meson.current_source_dir() + '/"',
)
To the meson.build of B. Meson does find A as a package and starts compiling B, but it fails to link. A defines a plethora of small utilities, each one as its own independent .so, all of which need to be linked against. Looking at the commands executed when compiling B, the directory containing A's .so libraries is added with -L, but none of the libraries in that directory are listed for linking. So linking fails because the symbols in those binaries are not found (they are simply not linked).
What do I need to specify in A to let it know a given library needs to be linked by default when the project is used as a dependency?
For example this is what one of the utilities of A looks like:
renderer_so_relative_path = \
'' + renderer_lib.full_path().replace(meson.build_root() + '/', '')
peripheral_so_relative_path = \
'' + peripheral_lib.full_path().replace(meson.build_root() + '/', '')
loader_sources = [
'ModuleStorage.cpp',
'CLI.cpp'
]
install_subdir('.', install_dir : 'include/ModuleStorage/')
loader_lib = library(
'ne_loader',
sources : loader_sources,
cpp_args : [
'-DNE_RENDERER_PATH="' + renderer_so_relative_path + '"',
'-DNE_PERIPHERAL_PATH="' + peripheral_so_relative_path + '"'
],
link_with : [],
include_directories : [],
dependencies : [core_dep, image_dep, argparse_dep, glfw],
install: true)
module_storage_dep = declare_dependency(link_with:loader_lib, include_directories: ['..'])
subdir('Imgui')
Maybe this helps:
Add to your A project:
# C compiler
ccompiler = meson.get_compiler('c')
# Where is the lib:
a_lib_dir = '/install/dir'
# Find the lib:
a_lib = ccompiler.find_library('a_lib_name', dirs: a_lib_dir)
# Create dependency
a_lib_dep = declare_dependency(
dependencies: a_lib,
include_directories: include_directories(inc_dir)
)
And link the dependency in B:
# Dependencies
b_deps = []
b_deps += dependency('alib', fallback:['alib', 'a_lib_dep'])
# Your B object
b_lib = static_library( 'blib', src,
dependencies: b_deps,
etc...)
# Your B object as someone's dependency (if needed):
b_as_dep = declare_dependency(
link_with: b_lib,
include_directories: inc_dirs,
dependencies: b_deps)
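An alternative worth considering (a sketch, assuming A is built with Meson and that loader_lib from the question is the installed library): have A generate a pkg-config file with Meson's pkgconfig module, so that dependency('A') in B resolves both headers and link flags automatically.

```meson
# In project A's meson.build (hypothetical sketch): generate A.pc alongside
# the installed library, so consumers calling dependency('A') also get the
# -L/-l flags for it.
pkg = import('pkgconfig')
pkg.generate(loader_lib,
  name : 'A',
  description : 'Project A utilities')
```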

Bazel - Including all headers from a directory when importing a static library

I am a total newbie to Bazel, trying to add a static library to my build.
Let's say, as a simple example, I have the following:
cc_import(
    name = "my_test_lib",
    static_library = "lib\my_test_lib\test.lib",
    hdrs = ["lib\my_test_lib\include\headerA.h",
            "lib\my_test_lib\include\headerB.h"],
    visibility = ["//visibility:public"],
)
Now this works fine.
However, what if I have a huge number of includes, and the include directory contains a number of subdirectories? Do I have to enter each one my main project depends on individually, or can I do something like the following to make all headers in this directory and its subdirectories available?
hdrs = [ "lib\my_test_lib\include\*"]
[This is a supplement to Sebastian's answer.]
Here's a trick I just learned (from a colleague) to use with cc_import:
Suppose you don't want your headers exposed "naked", but want them all in a subdir prefixed by your library name, so that you refer to them like this:
#include <openjpeg/openjpeg.h>
The first step is to have a directory structure that looks like this:
. <library root>
└── include
    └── openjpeg
        ├── openjpeg.h
        └── <other header files>
But now if you expose these header files via a glob, e.g., glob(["mylib/include/openjpeg/*.h"]) or some variant like glob(["mylib/include/**/*.h"]) (or even by naming them explicitly!) they're not actually exposed as #include <openjpeg/openjpeg.h> but instead as #include "openjpeg.h" or #include <include/openjpeg/openjpeg.h> or something like that.
The problem is that cc_import unaccountably does not support the includes attribute that cc_library does, so you can't just name an include directory.
So, use the standard computer science workaround of adding another level of indirection and use this:
cc_library(name = "openjpeg",
includes = ["include"],
deps = ["openjpeg-internal"],
visibility = ["//visibility:public"],
)
cc_import(name = "openjpeg-internal",
hdrs = glob(["include/**/*.h"]),
static_library = ...,
visibility = ["//visibility:private"],
)
What you need is the glob function.
To use it in your above example, you would do something like this
cc_import(
    name = "my_test_lib",
    static_library = "lib/my_test_lib/test.lib",
    hdrs = glob(["lib/my_test_lib/include/*.h"]),
    visibility = ["//visibility:public"],
)
which would find all files ending in .h under lib/my_test_lib/include and put them in the hdrs attribute of your cc_import.
There's more information about glob in the Bazel documentation: https://docs.bazel.build/versions/master/be/functions.html#glob
Note: Always use forward slashes on all platforms in Bazel BUILD files (even on Windows).
Multiple glob patterns
It's sometimes useful to put in more than one pattern in the glob, for example like this
cc_import(
...
hdrs = glob([
"lib/my_test_lib/include/*.h",
"lib/my_test_lib/include/*.hpp",
"lib/my_test_lib/public/*.h",
]),
...
)
Combining a glob with a hard coded list of files
Another useful thing is combining globs with hard coded paths. You might have a few files you want in there and then a directory you also want to include. You can do this by using the + operator to concatenate the hard coded list of paths with the glob results like this
cc_import(
...
hdrs = [
"lib/my_test_lib/some_header.h",
] + glob([
"lib/my_test_lib/include/*.h",
]),
...
)
Globbing a directory hierarchy (beware of massive inclusions)
The glob function also supports traversing directories and their subdirectories when finding files, using the ** glob pattern. For example, to grab all .h files anywhere under the my_test_lib directory, use this glob:
cc_import(
...
hdrs = glob([
"lib/my_test_lib/**/*.h",
]),
...
)
Beware: this will include all files below the specified directory. That can get out of hand, since it's not explicit which files get included, so it might be better to stay away from **.

How can I add default copts like '-fopenmp' to a cc_library?

https://docs.bazel.build/versions/master/be/c-cpp.html
About the copts option:
Each string in this attribute is added in the given order to COPTS before compiling the binary target. The flags take effect only for compiling this target, not its dependencies, so be careful about header files included elsewhere. All paths should be relative to the workspace, not to the current package.
cc_library(
name = 'lib1',
srcs = glob([
'src/*.cpp',
]),
hdrs = glob([
'include/*.h',
'include/**/*.h',
]),
copts = [
'-std=c++11',
'-fopenmp',
'-march=native',
],
)
cc_binary(
name = "test1",
srcs = ["tests/test1.cpp"],
deps = [
":lib1",
],
copts = [
'-std=c++11',
'-fopenmp',
'-march=native',
],
)
If I remove the copts in the test1 rule, the compilation fails. How can I modify the lib1 rule so that all rules depending on it also compile?
You also need to add linkopts = ["-lgomp"] to your cc_binary rule.
If every target needs -std=c++11 -fopenmp -march=native when compiling, it is better to specify the copts on the build command line (bazel build --copt="-std=c++11" --copt="-fopenmp" --copt="-march=native" //src:hello). Then you can remove the copts from the cc_* rules.
Also note that if you want to use -march=native in one file, it is best to ensure all other files are compiled with this flag too, to avoid bugs caused by mixing compiler optimization settings.
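A detail that makes the first suggestion work with a single edit: unlike copts, linkopts propagate from a cc_library to every binary that links it. A sketch of the modified lib1 rule (names taken from the question):

```starlark
cc_library(
    name = "lib1",
    srcs = glob(["src/*.cpp"]),
    hdrs = glob(["include/*.h", "include/**/*.h"]),
    copts = ["-std=c++11", "-fopenmp", "-march=native"],
    # linkopts, unlike copts, flow to dependent binaries,
    # so test1 gets -lgomp without repeating it
    linkopts = ["-lgomp"],
)
```

The compile flags (copts) still have to be repeated per target or passed via --copt/--cxxopt, as the answer above suggests.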
Solution for me was to add --linkopt='-lgomp' to my .bazelrc file.
For completeness, the .bazelrc is exactly: build --cxxopt='-std=c++17' --cxxopt='-Ofast' --cxxopt='-fopenmp' --linkopt='-lgomp'
Bazel version is bazel 3.7.0.
Since I cannot comment on @Cheng Ji's answer, I'll just emphasize that it helped me a lot, but it didn't work at first because I added linkops = ["-lgomp"] to cc_binary (it complained there is no such attribute 'linkops' in the 'cc_binary' rule).
Then I realized linkops was a typo on my side; with linkopts = ["-lgomp"] it worked. Thanks a lot for that!

llvm: dyld: Symbol not found: __ZN4llvm11RuntimeDyld13MemoryManager6anchorEv

I am playing with LLVM and I hit an issue with JIT. I was able to build a compiler; it compiles, links, and runs correctly (it compiles my toy programs). However, when I try to build a JIT, it fails:
dyld: Symbol not found: __ZN4llvm11RuntimeDyld13MemoryManager6anchorEv
Referenced from: /Users/gruszczy/Projects/shwifty/./bazel-bin/_solib_darwin//liblibjit.so
Expected in: flat namespace
in /Users/gruszczy/Projects/shwifty/./bazel-bin/_solib_darwin//liblibjit.so
Abort trap: 6
I use Bazel to build everything, these are my build rules:
new_local_repository(
    name = "llvm",
    path = "/opt/local/libexec/llvm-4.0",
    build_file = "llvm.BUILD",
)
cc_library(
    name = "main",
    srcs = glob(["lib/*.a"]),
    hdrs = glob(["include/**/*.*"]),
    visibility = ["//visibility:public"],
    copts = ["-Iexternal/llvm/include"],
)
I use the JIT in tests (I generate IR in the test, JIT it, then run the resulting function to see if it worked).
cc_library(
    name = "jit",
    srcs = ["jit.cc"],
    hdrs = ["jit.h"],
    deps = [
        ":ast",
        ":common",
        "@llvm//:main",
    ],
    copts = GENERAL_COPTS,
)
cc_test(
    name = "codegen_test",
    srcs = ["codegen_test.cc"],
    deps = [
        ":ast",
        ":jit",
        ":lexer",
        ":parser",
        ":codegen",
        "@gtest//:main",
        "@llvm//:main",
    ],
    copts = TEST_COPTS,
    data = [":examples"],
    size = "small",
)
Any suggestions what I might be missing?
The source of confusion is that Bazel by default links binaries statically but tests dynamically. This makes the test-code-refactor loop faster, because changes to the test code only trigger a rebuild of the test, not of the whole application. It can be disabled by setting linkstatic = 1 on the codegen_test target.
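A sketch of that change, using the cc_test from the question (deps elided):

```starlark
cc_test(
    name = "codegen_test",
    srcs = ["codegen_test.cc"],
    deps = [ ... ],  # unchanged from the question
    # link the test statically, like ordinary cc_binary targets,
    # instead of Bazel's default dynamic linking for tests
    linkstatic = 1,
)
```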
As to why the symbols are not present in codegen_test when built as a shared library, that's a much harder question and would need more project-specific information. But a possible solution might be to mark the targets producing VMRuntimeDyld.a and VMMCJit.a as alwayslink = 1.
For completeness, here's the link to the issue you reported on Bazel.