Is it possible to glob source code files in a meson build?
Globbing source files is discouraged and is bad practice, not only in Meson. It causes weird errors, makes it hard to keep development-only files in the tree that you don't want to build or ship, and can cause problems with incremental builds.
Explicit is better than implicit.
2021-03-02 EDIT:
Read also Why can't I specify target files with a wildcard? in the Meson FAQ.
Meson does not support this syntax and the reason for this is simple. This can not be made both reliable and fast.
If after all the warnings, you still want to do it at your own risk, the FAQ tells you how in But I really want to use wildcards!. You just use an external script to do the globbing and return the list of files (that script is called grabber.sh in that example).
c = run_command('grabber.sh')
sources = c.stdout().strip().split('\n')
e = executable('prog', sources)
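The FAQ leaves the contents of grabber.sh entirely up to you; it just has to print one file name per line. A minimal, hypothetical sketch (the src/ layout is assumed here) could be:
#!/bin/sh
# Hypothetical grabber.sh: list the source files, one per line, for Meson to read.
ls src/*.c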
I found an example in the meson unit tests showing how to glob source, but in the comments it says this is not recommended.
if build_machine.system() == 'windows'
    c = run_command('grabber.bat')
    grabber = find_program('grabber2.bat')
else
    c = run_command('grabber.sh')
    grabber = find_program('grabber.sh')
endif
# First test running command explicitly.
if c.returncode() != 0
    error('Executing script failed.')
endif
newline = '''
'''
sources = c.stdout().strip().split(newline)
e = executable('prog', sources)
The reason this is not recommended: attempting to add files by globbing a directory will NOT make those files automatically appear in the build. You have to manually re-invoke Meson for the files to be added to the build. Re-invoking ninja or other back-ends is not sufficient; you must re-invoke Meson itself.
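In practice, that means re-running Meson yourself whenever a globbed file is added or removed, for example:
# Re-running only the backend (ninja -C builddir) will NOT pick up new files;
# Meson itself has to be re-invoked:
meson setup --reconfigure builddir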
meson.build
glob = run_command('python', 'glob')
sources = glob.stdout().strip().split('\n')
glob:
import glob

# recursive=True is needed for the '**' pattern to actually descend into subdirectories
sources = glob.glob('./src/*.cpp') + glob.glob('./src/**/*.cpp', recursive=True)
for i in sources:
    print(i)
No, it's not possible. Every source file has to be explicitly listed to build a target.
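With explicit listing, a minimal target (file names made up for illustration) looks like:
sources = files('src/main.cpp', 'src/foo.cpp')
executable('prog', sources)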
Related
I have the following directory structure:
my_dir
|
--> src
|    |
|    --> foo.cc
|    --> BUILD
|
--> WORKSPACE
|
--> bazel-out/ (symlink)
|
|   ...
src/BUILD contains the following code:
cc_binary(
    name = "foo",
    srcs = ["foo.cc"]
)
The file foo.cc creates a file named bar.txt in the usual way, using <fstream> utilities.
However, when I invoke Bazel with bazel run //src:foo the file bar.txt is created and placed in bazel-out/darwin-fastbuild/bin/src/foo.runfiles/foo/bar.txt instead of my_dir/src/bar.txt, where the original source is.
I tried adding an outs field to the foo rule, but Bazel complained that outs is not a recognized attribute for cc_binary.
I also thought of creating a filegroup rule, but there is no deps field where I can declare foo as a dependency for those files.
How can I make sure that the files generated by running the cc_binary rule are placed in my_dir/src/bar.txt instead of bazel-out/...?
Bazel doesn't allow you to modify the state of your workspace, by design.
The short answer is that you don't want the results of the past builds to modify the state of your workspace, hence potentially modifying the results of the future builds. It'll violate reproducibility if running Bazel multiple times on the same workspace results in different outputs.
Given your example: imagine calling bazel run //src:foo which inserts
#define true false
#define false true
at the top of the src/foo.cc. What happens if you call bazel run //src:foo again?
The long answer: https://docs.bazel.build/versions/master/rule-challenges.html#assumption-aim-for-correctness-throughput-ease-of-use-latency
Here's more information on the output directory: https://docs.bazel.build/versions/master/output_directories.html#documentation-of-the-current-bazel-output-directory-layout
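If you just need to locate the build outputs, the convenience symlinks (or bazel info) point into that output tree, for example:
# Print the bin directory of the output tree and inspect what was produced there:
bazel info bazel-bin
ls bazel-bin/src/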
There is a workaround using genrule. Below is an example where I use a genrule to copy a file to the .git folder.
genrule(
    name = "precommit",
    srcs = glob(["git/**"]),
    outs = ["precommit.txt"],
    # The folder containing this BUILD.bazel file is "tools", which will be
    # symlinked; we use `cd -P` to get to the physical path.
    cmd = "echo 'setup pre-commit.sh' > $(OUTS) && cd -P tools && ./path/to/your-script.sh",
    local = 1,  # required
)
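Building the genrule target then runs the command (the package path here is hypothetical and depends on where your BUILD.bazel lives):
bazel build //tools:precommit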
If you're passing the name of the output file in when running, you can simply use absolute paths. To make this easier, you can use the realpath utility if you're on Linux. If you're on a Mac, it is included in brew install coreutils. Running it then looks something like:
bazel run my_app_dir:binary_target -- --output_file=`realpath relative/path/to.output`
This has been discussed and explained in a Bazel issue. Recommendation is to use a tool external to Bazel:
As I understand the use-case, this is out-of-scope for building and in the scope of, perhaps, workspace configuration. What I'm sure of is that an external tool would be both easier and safer to write for this purpose, than to introduce such a deep design change to Bazel.
The tool would copy the files from the output tree into the source tree, and update a manifest file (also in the source tree) that lists the path-digest pairs. The sources and the manifest file would all be versioned. A genrule or a sh_test would depend on the file-generating genrules, as well as on this manifest file, and compare the file-generating genrules' outputs' digests (in the output tree) to those in the manifest file, and would fail if there's a mismatch. In that case the user would need to run the external tool, thus update the source tree and the manifest, then rerun the build, which is the same workflow as you described, except you'd run this tool instead of bazel regenerate-autogenerated-sources.
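A very rough sketch of such an external tool, with all paths and file names invented purely for illustration, might be:
#!/bin/sh
# Hypothetical helper: copy a generated file from the output tree back into the
# source tree and record its digest in a versioned manifest.
set -e
cp -f bazel-bin/src/generated.h src/generated.h
sha256sum src/generated.h > src/generated.manifest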
My autotools project has a couple of unit tests.
One of these tests (filereader) needs to read a file (data/test1.bin).
Here's my filesystem layout:
- libfoo/tests/filereader.c
- libfoo/tests/data/test1.bin
and my libfoo/tests/Makefile.am:
AUTOMAKE_OPTIONS = foreign
AM_CPPFLAGS = -I$(top_srcdir)/foo
LDADD = $(top_builddir)/src/libfoo.la
EXTRA_DIST = data/file1.bin
TESTS = filereader
check_PROGRAMS= filereader
filereader_SOURCES = filereader.c
This works great, as long as I do in-tree builds.
However, when running the test-suite out-of-tree (e.g. make distcheck), the filereader test cannot find the input file anymore.
This is obviously because only the source tree contains the input file, but not the build tree.
I wonder what the canonical way to fix this problem is. The options I can think of:
1. Compile the directory of the test file into the unit test (AM_CPPFLAGS += -DSRCDIR=$(srcdir)).
2. Pass the qualified input file as a command-line argument to the test (e.g. $(builddir)/filereader $(srcdir)/data/file1.bin).
3. Copy the input file from the source tree to the build tree (cp $(srcdir)/data/file1.bin $(builddir)/data/file1.bin? What would a proper make rule look like?)
Canonically, the solution would be to compile the path to your file into the unit test, i.e. the first option you laid out. The second one is also possible, but it requires using an in-between driver script.
I would suggest avoiding the third one, but if you do want to go down that route, use $(LN_S) rather than cp; this way you reduce the I/O load of the test.
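For the first option, a minimal sketch of the Makefile.am side (the exact flag spelling is one of several possibilities) would be:
# Bake the location of the source tree into the test binary so it can find its data.
AM_CPPFLAGS = -I$(top_srcdir)/foo -DSRCDIR=\"$(srcdir)\"
filereader.c can then open SRCDIR "/data/test1.bin" and work regardless of where the build tree lives.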
There is a way to do this with autoconf. From the netcdf-c configure.ac:
##
# Some files need to exist in build directories
# that do not correspond to their source directory, or
# the test program makes an assumption about where files
# live. AC_CONFIG_LINKS provides a mechanism to link/copy files
# if an out-of-source build is happening.
##
AC_CONFIG_LINKS([nc_test4/ref_hdf5_compat1.nc:nc_test4/ref_hdf5_compat1.nc])
AC_CONFIG_LINKS([nc_test4/ref_hdf5_compat2.nc:nc_test4/ref_hdf5_compat2.nc])
AC_CONFIG_LINKS([nc_test4/ref_hdf5_compat3.nc:nc_test4/ref_hdf5_compat3.nc])
AC_CONFIG_LINKS([nc_test4/ref_chunked.hdf4:nc_test4/ref_chunked.hdf4])
AC_CONFIG_LINKS([nc_test4/ref_contiguous.hdf4:nc_test4/ref_contiguous.hdf4])
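Applied to the layout in the question, this would presumably become a single line in configure.ac that links (or copies) the test data into the build tree:
AC_CONFIG_LINKS([tests/data/test1.bin:tests/data/test1.bin])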
I'm working on a C++ project that has some hand coded source files, as well as some source and header files that are generated by a command line tool.
The actual source and header files generated are determined by the contents of a JSON file that the tool reads, and so cannot be hardcoded into the scons script.
I would like to set up scons so that if I clean the project, then make it, it will know to run the command line tool to generate the generated source and header files as the first step, and then after that compile both my hand coded files and the generated source files and link them to make my binary.
Is this possible? I'm at a loss as to how to achieve this, so any help would be much appreciated.
Yes, this is possible. Depending on which tool you're using to create the header/source files, you want to check our ToolIndex at https://bitbucket.org/scons/scons/wiki/ToolsIndex , or read our guide https://bitbucket.org/scons/scons/wiki/ToolsForFools for writing your own Builder.
Based on your description you'll probably have to write your own Emitter, which parses the JSON input file and returns the filenames that will finally result from the call. Then, all you need to do is:
# creates foo.h/cpp and bar.h/cpp
env.YourBuilder('input.json')
env.Program(Glob('*.cpp'))
The Glob will find the created files, even if they don't physically exist on the hard drive yet, and add them to the overall dependencies.
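A sketch of what such an emitter might look like (the JSON layout, with a top-level "files" list, is invented here for illustration):
import json

def json_emitter(target, source, env):
    # Hypothetical emitter: read the JSON description and declare the files
    # the code generator will produce as additional targets.
    with open(str(source[0])) as f:
        desc = json.load(f)
    for name in desc.get('files', []):  # assumed JSON key
        target.append(name + '.h')
        target.append(name + '.cpp')
    return target, source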
If you have further questions or problems arise, please consider subscribing to our user mailing list at scons-users@scons.org (see also http://scons.org/lists.html).
Thanks to Dirk Baechle I got this working - for anyone else interested here is the code I used.
import subprocess
env = Environment( MSVC_USE_SCRIPT = "c:\\Program Files (x86)\\Microsoft Visual Studio 11.0\\VC\\bin\\vcvars32.bat")
def modify_targets(target, source, env):
    # Call the code generator to generate the list of file names that will be generated.
    subprocess.call(["d:/nk/temp/sconstest/codegenerator/CodeGenerator.exe", "-filelist"])
    # Read the file name list and add a target for each file.
    with open("GeneratedFileList.txt") as f:
        content = f.readlines()
    content = [x.strip('\n') for x in content]
    for newTarget in content:
        target.append(newTarget)
    return target, source
bld = Builder(action = 'd:/nk/temp/sconstest/codegenerator/CodeGenerator.exe', emitter = modify_targets)
env.Append(BUILDERS = {'GenerateCode' : bld})
env.GenerateCode('input.txt')
# Main.exe depends on all the CPP files in the folder. Note that this
# will include the generated files, even though they may not currently
# exist in the folder.
env.Program('main.exe', Glob('*.cpp'))
There's an example at:
https://github.com/SCons/scons/wiki/UsingCodeGenerators
I'll also echo Dirk's suggestion to join the users mailing list.
I got sick of looking up the magic symbols in make and decided to try waf.
I'm trying to use calibre to make ebooks and I'd like to create a wscript that takes in a file, runs a program with some arguments that include that file, and produces an output. Waf should only build if the input file is newer than the output.
In make, I'd write a makefile like this:
%.epub: %.recipe
	ebook-convert $< .epub --test -vv --debug-pipeline debug
Where % is a magic symbol for the basename of the file and $< is a symbol for the prerequisite (here soverflow.recipe); passing just .epub as the output makes ebook-convert derive the output filename (basename.epub).
I could call make soverflow.epub and it would run ebook-convert on soverflow.recipe. If the .recipe hadn't changed since the last build, it wouldn't do anything.
How can I do something similar in waf?
(Why waf? Because it uses a real language that I already know. If this is really easy to use in scons, that's a good answer too.)
I figured out how to make a basic wscript file, but I don't know how to build targets specified on the command-line.
The Waf Book has a section on Task generators. The Name and extension-based file processing section gives an example for lua that I adapted:
from waflib import TaskGen
TaskGen.declare_chain(
    rule = 'ebook-convert ${SRC} .epub --test -vv --debug-pipeline debug',
    ext_in = '.recipe',
    ext_out = '.epub'
)
top = '.'
out = 'build'
def configure(conf):
    pass

def build(bld):
    bld(source='soverflow.recipe')
It even automatically provides a clean step that removes the epub.
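Building is then the usual two-step waf invocation (assuming the waf script is checked into the project root):
./waf configure
./waf build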
I'm writing a Rakefile for a C++ project. I want it to identify #includes automatically, forcing the rebuilding of object files that depend on changed source files. I have a working solution, but I think it can be better. I'm looking for:
Suggestions for improving my function
Libraries, gems, or tools that do the work for me
Links to cool C++ Rakefiles that do similar things and are worth checking out
Here's what I have so far. It's a function that returns the list of dependencies given a source file. I feed in the source file for a given object file, and I want a list of files that will force me to rebuild my object file.
def find_deps( file )
  deps = Array.new
  # Find all include statements
  cmd = "grep -r -h -E \"#include\" #{file}"
  includes = `#{cmd}`
  includes.each_line do |line|  # each_line: String#each was removed in Ruby 1.9+
    dep = line[ /\.\/(\w+\/)*\w+\.(cpp|hpp|h)/ ]  # hpp before h so .hpp isn't truncated
    unless dep.nil?
      deps << dep  # Add the dependency to the list
      deps += find_deps( dep )
    end
  end
  return deps
end
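For context, the function plugs into a file task roughly like this (a simplified sketch with made-up file names, not my real Rakefile):
# Rebuild foo.o whenever foo.cpp or anything it transitively includes changes.
file 'foo.o' => ['foo.cpp', *find_deps('foo.cpp')] do
  sh 'g++ -c foo.cpp -o foo.o'
end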
I should note that all of my includes look like this right now:
#include "./Path/From/Top/Level/To/My/File.h" // For top-level files like main.cpp
#include "../../../Path/From/Top/To/My/File.h" // Otherwise
Note that I'm using double quotes for includes within my project and angle brackets for external library includes. I'm open to suggestions on alternative ways to do my include pathing that make my life easier.
Use the gcc command to generate a Make dependency list instead, and parse that:
g++ -M -MM -MF - inputfile.cpp
See man gcc or info gcc for details.
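Parsing that output back into a Ruby array takes only a few lines; a rough sketch (assuming the compile step succeeds):
# Ask the compiler for the dependency list, drop the "target:" prefix,
# join the continuation lines, and split on whitespace.
def find_deps(file)
  output = `g++ -MM #{file}`
  output.sub(/^.*?:/, '').gsub(/\\\n/, ' ').split
end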
I'm sure there are different schools of thought with respect to what to put in #include directives. I advise against putting the whole path in your #includes. Instead, set up the proper include paths in your compile command (with -I). This makes it easier to relocate files in the future and more readable (in my opinion). It may sound minor, but the ability to reorganize as a project evolves is definitely valuable.
Using the preprocessor (see @greyfade's answer) to generate the dependency list has the advantage that it will expand the header paths for you based on your include dirs.
Update: see also the Importing Dependencies section of the Rakefile doc for a library that reads the makefile dependency format.