Makefile conditional statements - C++

Scenario:
Consider a source directory with multiple ".cpp" files that are built into a static library named out.a. Two of the files, XYZ.cpp and ABC.cpp, are used conditionally (as described below); the others (PQR.cpp, JKL.cpp, etc.) are always included. There is an environment variable p: if its value matches q, then out.a should be built using XYZ.cpp, otherwise using ABC.cpp.
For example, something like this:
ifeq($p, q)
SRC = XYZ.cpp
else
SRC = ABC.cpp
endif
SRC += PQR.cpp \
JKL.cpp \
MNO.cpp
How could I do this optimally in a Makefile?
Thanks in advance for any help.

That's almost exactly it. You just need a space after ifeq (and some parentheses around p, in case you want to use a variable name longer than one letter):
ifeq ($(p), q)
SRC = XYZ.cpp
else
SRC = ABC.cpp
endif
SRC += PQR.cpp \
JKL.cpp \
MNO.cpp

Although Beta's answer is quite correct, you might alternatively consider constructed variable names. In my opinion, they lead to cleaner and more readable makefiles. For example:
# if p is not set, default to "default"
p ?= default
q_SRC = XYZ.cpp
default_SRC = ABC.cpp
SRC = $($(p)_SRC) PQR.cpp JKL.cpp MNO.cpp
and so on. Especially if you have lots of alternatives, this can be much more understandable (again, IMO).
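As an illustration only (Python rather than make, reusing the hypothetical file names from above), the constructed-variable lookup amounts to name-based dispatch on the value of p:

```python
import os

# "q" / "default" play the role of the q_SRC / default_SRC make
# variables above; the file names are the hypothetical ones from the
# question.
src_by_variant = {
    "q": ["XYZ.cpp"],
    "default": ["ABC.cpp"],
}

def build_src_list(p=None):
    # Mirrors `p ?= default`: fall back when the variable is unset.
    variant = p if p is not None else os.environ.get("p", "default")
    return src_by_variant[variant] + ["PQR.cpp", "JKL.cpp", "MNO.cpp"]

print(build_src_list("q"))        # ['XYZ.cpp', 'PQR.cpp', 'JKL.cpp', 'MNO.cpp']
print(build_src_list("default"))  # ['ABC.cpp', 'PQR.cpp', 'JKL.cpp', 'MNO.cpp']
```

Adding a new variant is then just adding a new entry, with no new conditional branch.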

Related

Bazel - Including all headers from a directory when importing a static library

I am a total newbie to Bazel and trying to add a static library to my build.
Let's say, as a simple example, I have the following.
cc_import(
    name = "my_test_lib",
    static_library = "lib\my_test_lib\test.lib",
    hdrs = ["lib\my_test_lib\include\headerA.h",
            "lib\my_test_lib\include\headerB.h"],
    visibility = ["//visibility:public"],
)
Now this works fine.
However, what if I have a huge number of includes and within the include directory there is a number of subdirectories. Do I have to individually enter each one that my main project depends on, or can I do something like the following to essentially make all headers in this directory / subdirectories available?
hdrs = [ "lib\my_test_lib\include\*"]
[This is a supplement to Sebastian's answer.]
Here's a trick I just learned (from a colleague) to use with cc_import:
Suppose you don't want your headers exposed "naked", but want them all in a subdir prefixed by your library name, so that you refer to them like this:
#include <openjpeg/openjpeg.h>
The first step is to have a directory structure that looks like this:
. <library root>
└── include
    └── openjpeg
        ├── openjpeg.h
        └── <other header files>
But now if you expose these header files via a glob, e.g. glob(["mylib/include/openjpeg/*.h"]) or some variant like glob(["mylib/include/**/*.h"]) (or even by naming them explicitly!), they're not actually exposed as #include <openjpeg/openjpeg.h> but instead as #include "openjpeg.h" or #include <include/openjpeg/openjpeg.h> or something like that.
The problem is that cc_import unaccountably does not support the includes attribute that cc_library does so you can't just name an include directory.
So, use the standard computer science workaround of adding another level of indirection and use this:
cc_library(
    name = "openjpeg",
    includes = ["include"],
    deps = ["openjpeg-internal"],
    visibility = ["//visibility:public"],
)

cc_import(
    name = "openjpeg-internal",
    hdrs = glob(["include/**/*.h"]),
    static_library = ...,
    visibility = ["//visibility:private"],
)
What you need is the glob function.
To use it in your example above, you would do something like this:
cc_import(
    name = "my_test_lib",
    static_library = "lib/my_test_lib/test.lib",
    hdrs = glob(["lib/my_test_lib/include/*.h"]),
    visibility = ["//visibility:public"],
)
which would find all files ending with .h under lib/my_test_lib/include and put them in the hdrs attribute of your cc_import.
There's more information about glob in the Bazel documentation: https://docs.bazel.build/versions/master/be/functions.html#glob
Note: Always use forward slashes on all platforms in Bazel BUILD files (even on Windows).
Multiple glob patterns
It's sometimes useful to put more than one pattern in the glob, for example:
cc_import(
    ...
    hdrs = glob([
        "lib/my_test_lib/include/*.h",
        "lib/my_test_lib/include/*.hpp",
        "lib/my_test_lib/public/*.h",
    ]),
    ...
)
Combining a glob with a hard coded list of files
Another useful thing is combining globs with hard-coded paths. You might want a few specific files plus everything in a directory. You can use the + operator to concatenate the hard-coded list of paths with the glob results, like this:
cc_import(
    ...
    hdrs = [
        "lib/my_test_lib/some_header.h",
    ] + glob([
        "lib/my_test_lib/include/*.h",
    ]),
    ...
)
Globbing a directory hierarchy (beware of massive inclusions)
The glob function also supports traversing directories and their subdirectories when finding files, using the ** glob pattern. For example, to grab all .h files anywhere under the my_test_lib directory, use this glob:
cc_import(
    ...
    hdrs = glob([
        "lib/my_test_lib/**/*.h",
    ]),
    ...
)
Beware: this will include all files below the specified directory, as expected. That can get out of hand, since it's no longer explicit which files are included, so it might be better to stay away from **.
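Bazel evaluates glob against the package's source tree, but the difference between * and ** follows ordinary glob semantics, which can be previewed with Python's pathlib (an analogy, not Bazel itself; the tree below is hypothetical):

```python
import pathlib
import tempfile

# Build a small stand-in tree: include/a.h, include/sub/b.h, include/sub/c.txt
root = pathlib.Path(tempfile.mkdtemp())
(root / "include" / "sub").mkdir(parents=True)
(root / "include" / "a.h").touch()
(root / "include" / "sub" / "b.h").touch()
(root / "include" / "sub" / "c.txt").touch()

# "*" stays in one directory level; "**" recurses into subdirectories.
shallow = sorted(p.name for p in root.glob("include/*.h"))
deep = sorted(p.name for p in root.glob("include/**/*.h"))

print(shallow)  # ['a.h']
print(deep)     # ['a.h', 'b.h']
```

Note how c.txt is never matched, and b.h only appears once the pattern recurses.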

How to integrate pretty-printing as part of build in bazel

Right now, I have a really dumb pretty-print script which does a little git-fu to find files to format (unconditionally) and then runs those through clang-format -i. This approach has several shortcomings:
There are certain files which are enormous and take forever to pretty print.
The pretty printing is always done, regardless of whether the underlying file actually changed.
In the past, I was able to do things with CMake that had several nice properties which I would like to reproduce in bazel:
Only ever build code after it has gone through linting / pretty printing / etc.
Only lint / pretty print / etc. stuff that has changed
Pretty print stuff whether or not it is under version control
In CMake-land, I used this strategy, inspired by SCons proxy-target trickery:
Introduce a dummy target (e.g. source -> source.formatted). The action associated with this target does two things: a) run clang-format -i source, b) output/touch a file called source.formatted (this guarantees that, on reasonable file systems, if source.formatted is newer than source, then source doesn't need to be reformatted)
Add a dummy target (target_name.aggregated_formatted) which aggregates all the .formatted files corresponding to a particular library / executable target's sources
Make library / executable targets depend on target_name.aggregated_formatted as a pre-build step
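The stamp-file trick in step 1 relies only on modification times. A minimal Python sketch of that staleness test (illustrative only; the file names are hypothetical):

```python
import os
import pathlib
import tempfile

def needs_format(source, stamp):
    """True if the stamp file is missing or older than the source."""
    source, stamp = pathlib.Path(source), pathlib.Path(stamp)
    return (not stamp.exists()) or stamp.stat().st_mtime < source.stat().st_mtime

d = pathlib.Path(tempfile.mkdtemp())
src = d / "main.cpp"              # hypothetical source file
stamp = d / "main.cpp.formatted"  # the dummy "source.formatted" target

src.write_text("int main(){}")
print(needs_format(src, stamp))   # True: no stamp yet, must format

stamp.touch()                     # pretend clang-format -i ran, then touch the stamp
print(needs_format(src, stamp))   # False: stamp is at least as new as the source

# Simulate a later edit by forcing the source mtime past the stamp's.
t = stamp.stat().st_mtime
os.utime(src, (t + 10, t + 10))
print(needs_format(src, stamp))   # True again: source is newer than the stamp
```

This is exactly the comparison make (or CMake's dependency scanner) performs between a target and its prerequisites.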
Any help would be greatly appreciated.
@abergmeier is right. Let's take it one step further by implementing the macro and its components.
We'll use the C++ stage 1 tutorial in bazelbuild/examples.
Let's first mess up hello-world.cc:
#include <ctime>
#include <string>
#include <iostream>
std::string get_greet(const std::string& who) {
return "Hello " + who;
}
void print_localtime() {
std::time_t result =
std::time(nullptr);
std::cout << std::asctime(std::localtime(&result));
}
int main(int argc, char** argv) {
std::string who = "world";
if (argc > 1) {who = argv[1];}
std::cout << get_greet(who) << std::endl;
print_localtime();
return 0;
}
This is the BUILD file:
cc_binary(
    name = "hello-world",
    srcs = ["hello-world.cc"],
)
Since cc_binary doesn't know anything about clang-format or linting in general, let's create a macro called clang_formatted_cc_binary and replace cc_binary with it. The BUILD file now looks like this:
load(":clang_format.bzl", "clang_formatted_cc_binary")

clang_formatted_cc_binary(
    name = "hello-world",
    srcs = ["hello-world.cc"],
)
Next, create a file called clang_format.bzl with a macro named clang_formatted_cc_binary that's just a wrapper around native.cc_binary:
# In clang_format.bzl
def clang_formatted_cc_binary(**kwargs):
    native.cc_binary(**kwargs)
At this point, you can build the cc_binary target, but it's not running clang-format yet. We'll need to add an intermediary rule to do that in clang_formatted_cc_binary which we'll call clang_format_srcs:
def clang_formatted_cc_binary(name, srcs, **kwargs):
    # Using a filegroup for code cleanliness
    native.filegroup(
        name = name + "_unformatted_srcs",
        srcs = srcs,
    )
    clang_format_srcs(
        name = name + "_formatted_srcs",
        srcs = [name + "_unformatted_srcs"],
    )
    native.cc_binary(
        name = name,
        srcs = [name + "_formatted_srcs"],
        **kwargs
    )
Note that we have replaced the native.cc_binary's sources with the formatted files, but kept the name to allow for in-place replacements of cc_binary -> clang_formatted_cc_binary in BUILD files.
Finally, we'll write the implementation of the clang_format_srcs rule, in the same clang_format.bzl file:
def _clang_format_srcs_impl(ctx):
    formatted_files = []
    for unformatted_file in ctx.files.srcs:
        formatted_file = ctx.actions.declare_file("formatted_" + unformatted_file.basename)
        formatted_files += [formatted_file]
        ctx.actions.run_shell(
            inputs = [unformatted_file],
            outputs = [formatted_file],
            progress_message = "Running clang-format on %s" % unformatted_file.short_path,
            command = "clang-format %s > %s" % (unformatted_file.path, formatted_file.path),
        )
    return struct(files = depset(formatted_files))

clang_format_srcs = rule(
    attrs = {
        "srcs": attr.label_list(allow_files = True),
    },
    implementation = _clang_format_srcs_impl,
)
This rule goes through every file in the target's srcs attribute, declaring a "dummy" output file with the formatted_ prefix, and running clang-format on the unformatted file to produce the dummy output.
Now if you run bazel build :hello-world, Bazel will run the actions in clang_format_srcs before running the cc_binary compilation actions on the formatted files. We can prove this by running bazel build with the --subcommands flag:
$ bazel build //main:hello-world --subcommands
..
SUBCOMMAND: # //main:hello-world_formatted_srcs [action 'Running clang-format on main/hello-world.cc']
..
SUBCOMMAND: # //main:hello-world [action 'Compiling main/formatted_hello-world.cc']
..
SUBCOMMAND: # //main:hello-world [action 'Linking main/hello-world']
..
Looking at the contents of formatted_hello-world.cc, it looks like clang-format did its job:
#include <ctime>
#include <string>
#include <iostream>
std::string get_greet(const std::string& who) { return "Hello " + who; }
void print_localtime() {
  std::time_t result = std::time(nullptr);
  std::cout << std::asctime(std::localtime(&result));
}
int main(int argc, char** argv) {
  std::string who = "world";
  if (argc > 1) {
    who = argv[1];
  }
  std::cout << get_greet(who) << std::endl;
  print_localtime();
  return 0;
}
If all you want are the formatted sources without compiling them, you can build the target with the _formatted_srcs suffix from clang_format_srcs directly:
$ bazel build //main:hello-world_formatted_srcs
INFO: Analysed target //main:hello-world_formatted_srcs (0 packages loaded).
INFO: Found 1 target...
Target //main:hello-world_formatted_srcs up-to-date:
bazel-bin/main/formatted_hello-world.cc
INFO: Elapsed time: 0.247s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
You might be able to use aspects for that. I'm not certain; a Bazel dev will probably point it out if it is indeed possible.
If you are familiar with Rules, Actions and the like, the quick and dirty way (similar to the CMake hackery) is to write a Macro. For cc_library, for example, you would do:
def clean_cc_library(name, srcs, **kwargs):
    lint_sources(
        name = "%s_linted" % name,
        srcs = srcs,
    )
    pretty_print_sources(
        name = "%s_pretty" % name,
        srcs = ["%s_linted" % name],
    )
    return native.cc_library(
        name = name,
        srcs = ["%s_pretty" % name],
        **kwargs
    )
Then you of course need to replace every cc_library with clean_cc_library. lint_sources and pretty_print_sources are rules that you have to implement yourself; they need to produce the list of cleaned-up files.
@abergmeier mentions maybe being able to use Aspects. You can, and I've made a prototype of a general linting system that leverages Aspects so that BUILD files do not need to be modified to use macros like clang_formatted_cc_library in place of the core rules.
The basic idea is to have a bazel build step that is a pure function f(linter, sources) -> linted_sources_diff and a subsequent bazel run step that takes those diffs and applies them back to your source code to fix lint errors.
The prototype implementation is available at https://github.com/thundergolfer/bazel-linting-system.
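The "pure function from sources to a diff" idea can be sketched with Python's difflib. This is only an illustration of the shape of that function, with a toy whitespace-stripping formatter standing in for a real linter:

```python
import difflib

def format_diff(path, original, formatter):
    """Pure function: source text in, unified diff of the formatter's changes out."""
    formatted = formatter(original)
    return "".join(difflib.unified_diff(
        original.splitlines(keepends=True),
        formatted.splitlines(keepends=True),
        fromfile=path,
        tofile=path + " (formatted)",
    ))

# Stand-in "formatter": strip trailing whitespace. A real setup would
# shell out to clang-format or a linter instead.
def strip_trailing(text):
    return "\n".join(line.rstrip() for line in text.splitlines()) + "\n"

diff = format_diff("main.cc", "int main() {   \n  return 0;\n}\n", strip_trailing)
print(diff, end="")
```

Because the function depends only on its inputs, a build system can cache it, and a separate "apply" step can patch the diffs back into the working tree.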

Bazel & automatically generated cpp / hpp files

I am starting to use Bazel as my C++ project build system.
However I am stuck with the following problem:
I am in a scenario where I automatically generate file.hpp and file.cpp (literate programming).
To reproduce my problem one can simply use this minimal generator:
-- file.sh --
#!/bin/sh
echo "int foo();" >> file.hpp
echo "#include \"myLib/file.hpp\"\n\nint foo() { return 2017; }" >> file.cpp
My project repo is: (WORKSPACE is an empty file)
├── myLib
│   ├── BUILD
│   └── file.sh
└── WORKSPACE
The BUILD file is
genrule(
    name = "tangle_file",
    srcs = ["file.sh"],
    outs = ["file.cpp", "file.hpp"],
    cmd = "./$(location file.sh); cp file.cpp $(@D); cp file.hpp $(@D);",
)
cc_library(
    name = "file",
    srcs = ["file.cpp"],
    hdrs = ["file.hpp"],
    # deps = [":tangle_file"],
    visibility = ["//bin:__pkg__"],
)
I have two problems:
Question (A), dealing with the genrule() part:
The fact that I must use
cmd = "./$(location file.sh); cp file.cpp $(@D); cp file.hpp $(@D);"
is quite mysterious.
My first attempt was:
cmd = "./$(location file.sh)"
However in that case I get the following error:
declared output 'myLib/file.cpp' was not created by genrule. This is probably because the genrule actually didn't create this output, or because the output was a directory and the genrule was run remotely (note that only the contents of declared file outputs are copied from genrules run remotely)
Question (B), dealing with the cc_library() part
I do not know how to make Bazel aware that the :file target depends on the :tangle_file target.
If I uncomment:
deps = [":tangle_file"],
I get the following error:
in deps attribute of cc_library rule //myLib:file: genrule rule '//myLib:tangle_file' is misplaced here (expected cc_inc_library, cc_library, objc_library, experimental_objc_library or cc_proto_library).
Question (A)
The error that you are seeing is because the genrule cmd is not run inside its output directory. If you hardcoded bazel-out/local-fastbuild/genfiles/myLib/file.cpp instead of file.cpp in your file.sh script, it would work. However, the recommended approach is for your script to take its output directory as an argument.
For example,
genrule(
    name = "tangle_file",
    srcs = ["file.sh"],
    outs = ["file.cpp", "file.hpp"],
    cmd = "./$(location file.sh) $(@D)",
)
and
#!/bin/sh
echo "int foo();" >> $1/file.hpp
echo "#include \"myLib/file.hpp\"\n\nint foo() { return 2017; }" >> $1/file.cpp
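The pattern is easy to check outside Bazel: the generator writes only under the directory it is given, the way the genrule passes $(@D). A standalone shell sketch (file names as in the question, everything else hypothetical):

```shell
#!/bin/sh
# Stand-in for file.sh: write the generated pair into the directory
# passed as $1, exactly as the genrule passes $(@D).
gen() {
    outdir="$1"
    mkdir -p "$outdir"
    printf 'int foo();\n' > "$outdir/file.hpp"
    printf '#include "myLib/file.hpp"\n\nint foo() { return 2017; }\n' > "$outdir/file.cpp"
}

tmp=$(mktemp -d)
gen "$tmp"
ls "$tmp"    # file.cpp  file.hpp
```

Using printf (and > rather than >>) sidesteps echo's non-portable \n handling and makes the script idempotent if it is ever re-run in the same directory.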
Question (B)
The fact that you have
srcs = ["file.cpp"],
hdrs = ["file.hpp"],
in your cc_library is what tells Bazel that it depends on the genrule, since the genrule creates those files. If you want to make it more explicit, you could use the label syntax, which does the same thing:
srcs = ["//myLib:file.cpp"],
hdrs = ["//myLib:file.hpp"],

How to create a qmake function that creates a custom make target?

I know that we have QMAKE_EXTRA_TARGETS to create new makefile targets, that is used as follows (as seen in http://blog.qt.io/blog/2008/04/16/the-power-of-qmake/):
conv.target=convert
conv.input=file.in
conv.output=file.out
conv.commands=convert.sh file.in file.out
QMAKE_EXTRA_TARGETS+=conv
In my case, convert.sh is used for multiple files and targets. I would like to create a method with arguments (target_name, input_file, output_file), that creates the task for me, so that I don't have to repeat the above lines.
The documentation on qmake is quite lacking (or I haven't found the correct source), but to my understanding there are two types of functions in qmake, replace and test (http://doc.qt.io/qt-5/qmake-language.html#replace-functions), and we can create custom ones using defineReplace and defineTest.
I have tried:
defineTest(createConvertTask) {
    custom.target = $$1
    custom.input = $$2
    custom.output = $$3
    custom.commands = convert.sh $$2 > $$3
    QMAKE_EXTRA_TARGETS += custom
}
but that doesn't really work, as after calling createConvertTask multiple times, QMAKE_EXTRA_TARGETS will just contain multiple copies of the string custom.
However, this
defineTest(createConvertTask) {
    $$1.target = $$1
    $$1.input = $$2
    $$1.output = $$3
    $$1.commands = convert.sh $$2 > $$3
    QMAKE_EXTRA_TARGETS += $$1
}
fails with error example.pro:2: error: Left hand side of assignment must expand to exactly one word.
Any ideas on how to approach this?
Option 1: custom compiler
Use a custom compiler like this:
convert.input = LIST_OF_IN_FILES # note: no $$
convert.output = $${SOME_DIR}/${QMAKE_FILE_BASE}.ext
convert.commands = convert.sh ${QMAKE_FILE_IN} > $${SOME_DIR}/${QMAKE_FILE_BASE}.ext
convert.CONFIG += no_link target_predeps
QMAKE_EXTRA_COMPILERS += convert
The variable ${QMAKE_FILE_IN} contains the current input file; ${QMAKE_FILE_BASE} is the same but without the extension. Here, the output filenames are generated from the input files. The CONFIG options tell qmake not to add the output files to the objects list and to add them as prerequisites of the main target. Additionally, a make target compiler_convert_make_all will be generated.
Just add files:
LIST_OF_IN_FILES += file1 file2
and make
make compiler_convert_make_all
This option also adds all output files to the clean target (they will be deleted on make clean).
Option 2: use eval() and export()
To use a variable as a left-hand expression you can use the eval() function, which 'evaluates the contents of the string using qmake syntax rules'.
eval($${1}.target = $$1)
Since this is done inside a function you need to export() all variables to the global scope.
eval(export($${1}.target))
Afterwards add target and export QMAKE_EXTRA_TARGETS as well:
QMAKE_EXTRA_TARGETS += $${1}
export(QMAKE_EXTRA_TARGETS)
Putting it together in a replace function, whose return value is added to the dependencies of a custom convert target:
convert.target = convert

defineReplace(createConvertTask) {
    eval($${2}_custom.target = $$2)
    eval($${2}_custom.depends = $$1)
    eval($${2}_custom.commands = convert.sh $$1 > $$2)
    eval(export($${2}_custom.target))
    eval(export($${2}_custom.depends))
    eval(export($${2}_custom.commands))
    QMAKE_EXTRA_TARGETS += $${2}_custom
    export(QMAKE_EXTRA_TARGETS)
    return($${2}_custom)
}
convert.depends += $$createConvertTask(in_file_1, out_file_1)
convert.depends += $$createConvertTask(in_file_2, out_file_2)
QMAKE_EXTRA_TARGETS += convert
This results in the following rules in the generated Makefile:
out_file_1: in_file_1
	convert.sh in_file_1 > out_file_1

out_file_2: in_file_2
	convert.sh in_file_2 > out_file_2

convert: out_file_1 out_file_2
This approach is more flexible and can be extended to support a variable target parameter (here the constant convert).

Python scons building

I am currently studying an open source fluid simulation library called Palabos. I have some problems building my own program that links against the library.
The library is built from source using scons.
The directory of the project is :
[fred#suck palabos-v1.1r0]$ls
codeblocks/ examples/ jlabos/ pythonic/ SConstruct utility/
COPYING externalLibraries/ lib/ scons/ src/
I will refer to this as the project root directory!
The project's official building documentation says:
The library Palabos makes use of an on-demand compilation process. The
code is compiled the first time it is used by an end-user application,
and then automatically re-used in future, until a new compilation is
needed due to a modification of the code or compilation options.
In the examples directory, there are some example code directories, such as :
[fred#suck palabos-v1.1r0]$ls examples/showCases/rectangularChannel3d/*
examples/showCases/rectangularChannel3d/Makefile
examples/showCases/rectangularChannel3d/rectangularChannel3D.cpp
The Makefile of the example is:
[fred#suck rectangularChannel3d]$cat Makefile
##########################################################################
## Makefile for the Palabos example program rectangularChannel3D.
##
## The present Makefile is a pure configuration file, in which
## you can select compilation options. Compilation dependencies
## are managed automatically through the Python library SConstruct.
##
## If you don't have Python, or if compilation doesn't work for other
## reasons, consult the Palabos user's guide for instructions on manual
## compilation.
##########################################################################
# USE: multiple arguments are separated by spaces.
# For example: projectFiles = file1.cpp file2.cpp
# optimFlags = -O -finline-functions
# Leading directory of the Palabos source code
palabosRoot = ../../..
# Name of source files in current directory to compile and link with Palabos
projectFiles = rectangularChannel3D.cpp
# Set optimization flags on/off
optimize = true
# Set debug mode and debug flags on/off
debug = false
# Set profiling flags on/off
profile = false
# Set MPI-parallel mode on/off (parallelism in cluster-like environment)
MPIparallel = true
# Set SMP-parallel mode on/off (shared-memory parallelism)
SMPparallel = false
# Decide whether to include calls to the POSIX API. On non-POSIX systems,
# including Windows, this flag must be false, unless a POSIX environment is
# emulated (such as with Cygwin).
usePOSIX = true
# Path to external libraries (other than Palabos)
libraryPaths =
# Path to include directories (other than Palabos)
includePaths =
# Dynamic and static libraries (other than Palabos)
libraries =
# Compiler to use without MPI parallelism
serialCXX = g++
# Compiler to use with MPI parallelism
parallelCXX = mpicxx
# General compiler flags (e.g. -Wall to turn on all warnings on g++)
compileFlags = -Wall -Wnon-virtual-dtor
# General linker flags (don't put library includes into this flag)
linkFlags =
# Compiler flags to use when optimization mode is on
optimFlags = -O3
# Compiler flags to use when debug mode is on
debugFlags = -g
# Compiler flags to use when profile mode is on
profileFlags = -pg
##########################################################################
# All code below this line is just about forwarding the options
# to SConstruct. It is recommended not to modify anything there.
##########################################################################
SCons = $(palabosRoot)/scons/scons.py -j 2 -f $(palabosRoot)/SConstruct
SConsArgs = palabosRoot=$(palabosRoot) \
projectFiles="$(projectFiles)" \
optimize=$(optimize) \
debug=$(debug) \
profile=$(profile) \
MPIparallel=$(MPIparallel) \
SMPparallel=$(SMPparallel) \
usePOSIX=$(usePOSIX) \
serialCXX=$(serialCXX) \
parallelCXX=$(parallelCXX) \
compileFlags="$(compileFlags)" \
linkFlags="$(linkFlags)" \
optimFlags="$(optimFlags)" \
debugFlags="$(debugFlags)" \
profileFlags="$(profileFlags)" \
libraryPaths="$(libraryPaths)" \
includePaths="$(includePaths)" \
libraries="$(libraries)"
compile:
python $(SCons) $(SConsArgs)
clean:
python $(SCons) -c $(SConsArgs)
/bin/rm -vf `find $(palabosRoot) -name '*~'`
I know this Makefile will call scons, and the SConstruct file is in the project root directory as I have shown.
The SConstruct file is:
[fred#suck palabos-v1.1r0]$cat SConstruct
###########################################################
# Configuration file for the compilation of Palabos code,
# using the SConstruct library.
# IT IS NOT RECOMMENDED TO MODIFY THIS FILE.
# Compilation should be personalized by adjusting the
# Makefile in the directory of the main source files.
# See Palabos examples for sample Makefiles.
###########################################################
import os
import sys
import glob
argdict = dict(ARGLIST)
# Read input parameters
palabosRoot = argdict['palabosRoot']
projectFiles = Split(argdict['projectFiles'])
optimize = argdict['optimize'].lower() == 'true'
debug = argdict['debug'].lower() == 'true'
profile = argdict['profile'].lower() == 'true'
MPIparallel = argdict['MPIparallel'].lower() == 'true'
SMPparallel = argdict['SMPparallel'].lower() == 'true'
usePOSIX = argdict['usePOSIX'].lower() == 'true'
serialCXX = argdict['serialCXX']
parallelCXX = argdict['parallelCXX']
compileFlags = Split(argdict['compileFlags'])
linkFlags = Split(argdict['linkFlags'])
optimFlags = Split(argdict['optimFlags'])
debugFlags = Split(argdict['debugFlags'])
profileFlags = Split(argdict['profileFlags'])
libraryPaths = Split(argdict['libraryPaths'])
includePaths = Split(argdict['includePaths'])
libraries = Split(argdict['libraries'])
# Read the optional input parameters
try:
dynamicLibrary = argdict['dynamicLibrary'].lower() == 'true'
except:
dynamicLibrary = False
try:
srcPaths = Split(argdict['srcPaths'])
except:
srcPaths = []
flags = compileFlags
allPaths = [palabosRoot+'/src'] + [palabosRoot+'/externalLibraries'] + includePaths
if optimize:
flags.append(optimFlags)
if debug:
flags.append(debugFlags)
flags.append('-DPLB_DEBUG')
if profile:
flags.append(profileFlags)
linkFlags.append(profileFlags)
if MPIparallel:
compiler = parallelCXX
flags.append('-DPLB_MPI_PARALLEL')
else:
compiler = serialCXX
if SMPparallel:
flags.append('-DPLB_SMP_PARALLEL')
if usePOSIX:
flags.append('-DPLB_USE_POSIX')
env = Environment ( ENV = os.environ,
CXX = compiler,
CXXFLAGS = flags,
LINKFLAGS = linkFlags,
CPPPATH = allPaths
)
if dynamicLibrary:
LibraryGen = env.SharedLibrary
else:
LibraryGen = env.Library
sourceFiles = []
for srcDir in glob.glob(palabosRoot+'/src/*'):
sourceFiles.extend(glob.glob(srcDir+'/*.cpp'))
for srcDir in srcPaths:
sourceFiles.extend(glob.glob(srcDir+'/*.cpp'))
sourceFiles.extend(glob.glob(palabosRoot+'/externalLibraries/tinyxml/*.cpp'));
if MPIparallel:
palabos_library = LibraryGen( target = palabosRoot+'/lib/plb_mpi',
source = sourceFiles )
else:
palabos_library = LibraryGen( target = palabosRoot+'/lib/plb',
source = sourceFiles )
local_objects = env.Object(source = projectFiles)
all_objects = local_objects + palabos_library
env.Program(all_objects, LIBS=libraries, LIBPATH=libraryPaths)
My problem is: when I change the source file rectangularChannel3D.cpp in the example directory and run make, the Palabos library should not be rebuilt, since I didn't change any of the library's own source files (in the src directory of the project root). But the library file libplb.a is rebuilt anyway! Why?
I agree with Brady. Try contacting the project you're trying to build.
Alternatively, if you get no help from them and/or really want to fix this yourself, SCons has a flag --debug=explain which will tell you why any object is being built/rebuilt.
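One likely explanation for this kind of rebuild is that SCons decides staleness from a signature covering both file contents and the command line that builds them; if any flag differs between runs, every object produced by that command is considered out of date. A toy illustration of content-plus-command signatures (the idea only, not SCons internals; the flags are hypothetical):

```python
import hashlib

def action_signature(command, *source_contents):
    """Hash the build command line together with the sources it consumes."""
    h = hashlib.md5()
    h.update(command.encode())
    for content in source_contents:
        h.update(content)
    return h.hexdigest()

src = b"int foo() { return 1; }\n"

# Same source, two different flag sets -> two different signatures,
# so a signature-based tool treats the object as out of date.
sig_release = action_signature("g++ -O3 -c foo.cpp", src)
sig_debug = action_signature("g++ -g -c foo.cpp", src)
print(sig_release == sig_debug)  # False
```

So if the example Makefile passes even slightly different flags to SConstruct than the previous invocation did, the whole library can be rebuilt; --debug=explain will confirm whether that is the case.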