How do I set scons system include path - c++

Using scons I can easily set my include paths:
env.Append( CPPPATH=['foo'] )
This passes the flag
-Ifoo
to gcc
However I'm trying to compile with a lot of warnings enabled.
In particular with
env.Append( CPPFLAGS=['-Werror', '-Wall', '-Wextra'] )
which dies horribly on certain Boost includes. I can fix this by adding the Boost headers to the system include path rather than the normal include path, since gcc treats system includes differently (warnings in them are suppressed).
So what I need to get passed to gcc instead of -Ifoo is
-isystem foo
I guess I could do this with the CPPFLAGS variable, but was wondering if there was a better solution built into scons.

There is no built-in way to pass -isystem include paths in SCons, mainly because it is very compiler/platform specific.
Putting it in the CXXFLAGS will work, but note that this will hide the headers from SCons' dependency scanner, which only looks at CPPPATH.
This is probably OK if you don't expect those headers to ever change, but could cause weird issues if you use the build results cache and/or implicit dependency cache.

If you do
print env.Dump()
you'll see _CPPINCFLAGS, and you'll see that variable used in CCCOM (or _CCCOMCOM). _CPPINCFLAGS typically looks like this:
'$( ${_concat(INCPREFIX, CPPPATH, INCSUFFIX, __env__, RDirs, TARGET, SOURCE)} $)'
From this you can probably see how you could add an "isystem" set of includes as well, like _CPPSYSTEMINCFLAGS or some such. Just define your own prefix, path var name (e.g. CPPSYSTEMPATH) and suffix and use the above idiom to concatenate the prefix. Then just append your _CPPSYSTEMINCFLAGS to CCCOM or _CCCOMCOM and off you go.
Of course this is system-specific but you can conditionally include your new variable in the compiler command line as and when you want.
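For instance, here is a minimal sketch of that approach (SYSTEMINCPREFIX, SYSTEMINCSUFFIX, CPPSYSTEMPATH and _CPPSYSTEMINCFLAGS are names invented for this example, not built-in SCons variables, and -isystem is assumed to be a GCC-style flag):
env = Environment()
env['SYSTEMINCPREFIX'] = '-isystem '
env['SYSTEMINCSUFFIX'] = ''
env['_CPPSYSTEMINCFLAGS'] = \
    '$( ${_concat(SYSTEMINCPREFIX, CPPSYSTEMPATH, SYSTEMINCSUFFIX, __env__, RDirs, TARGET, SOURCE)} $)'
env['CPPSYSTEMPATH'] = ['/path/to/boost']
# Splice the new flags into the C/C++ command lines next to $_CPPINCFLAGS
env.Append(_CCCOMCOM=' $_CPPSYSTEMINCFLAGS')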

According to the SCons release notes, "-isystem" is supported since version 2.3.4 for the environment's CCFLAGS.
So, you can, for example, do the following:
env.AppendUnique(CCFLAGS=('-isystem', '/your/path/to/boost'))
Still, you need to be sure that your compiler supports that option.

Expanding on the idea proposed by @LangerJan and @BenG... Here's a full cross-platform example (replace env['IS_WINDOWS'] with your own Windows platform check):
from SCons.Util import is_List

def enable_extlib_headers(env, include_paths):
    """Enables C++ builders with current 'env' to include external headers
    specified in the include_paths (list or string value).

    Special treatment to avoid scanning these for changes and/or warnings.
    This speeds up the C++-related build configuration.
    """
    if not is_List(include_paths):
        include_paths = [include_paths]

    include_options = []
    if env['IS_WINDOWS']:
        # Simply go around SCons scanners and add compiler options directly
        include_options = ['-I' + p for p in include_paths]
    else:
        # Tag these includes as system, to avoid scanning them for dependencies,
        # and make the compiler ignore any warnings they produce
        for p in include_paths:
            include_options.append('-isystem')
            include_options.append(p)

    env.Append(CXXFLAGS=include_options)
Now, when configuring the use of external libraries, instead of
env.AppendUnique(CPPPATH=include_paths)
call
enable_extlib_headers(env, include_paths)
In my case this reduced the pruned dependency tree (as produced with --tree=prune) by 1000x on Linux and 3000x on Windows! It sped up the no-action build time (i.e. all targets up to date) by 5-7x.
The pruned dependency tree before this change had 4 million includes from Boost. That's insane.


Handling Meson build options with multiple buildtypes

Having read the Meson site pages (which are generally high quality), I'm still unsure about the intended best practice to handle different options for different buildtypes.
So to specify a debug build:
meson [srcdir] --buildtype=debug
Or to specify a release build:
meson [srcdir] --buildtype=release
However, if I want to add b_sanitize=address (or other arbitrary complex set of arguments) only for debug builds and b_ndebug=true only for release builds, I would do:
meson [srcdir] --buildtype=debug -Db_sanitize=address ...
meson [srcdir] --buildtype=release -Db_ndebug=true ...
However, it's more of a pain to add a bunch of custom arguments on the command line, and to me it seems neater to put that in the meson.build file.
So I know I can set some built-in options thusly:
project('myproject', ['cpp'],
        default_options : ['cpp_std=c++14',
                           'b_ndebug=true'])
But they are unconditionally set.
So a condition would look something like this:
if get_option('buildtype').startswith('release')
  add_project_arguments('-DNDEBUG', language : ['cpp'])
endif
That is one way to do it; however, it would seem the b_ndebug=true way would be preferred over add_project_arguments('-DNDEBUG'), because it is portable.
How would the portable-style build options be conditionally set within the Meson script?
Additionally, b_sanitize=address is set without any test whether the compiler supports it. I would prefer for it to check first if it is supported (because the library might be missing, for example):
if meson.get_compiler('cpp').has_link_argument('-fsanitize=address')
  add_project_arguments('-fsanitize=address', language : ['cpp'])
  add_project_link_arguments('-fsanitize=address', language : ['cpp'])
endif
Is it possible to have the built-in portable-style build options (such as b_sanitize) have a check if they are supported?
I'm still unsure about the intended best practice to handle different options for different buildtypes
The intended best practice is to use meson configure to set/change the "buildtype" options as you need it. You don't have to do it "all at once and forever". But, of course, you can still have several distinct build trees (say, "debug" and "release") to speed up the process.
How would the portable-style build options be conditionally set within the Meson script?
Talking of b_ndebug, you can use the special value ['b_ndebug=if-release'], which does exactly what you want (see the sketch below). Also, you should take into account that several GNU-style command-line arguments in meson are always portable, due to internal compiler-specific substitutions. If I remember correctly, these include -D, -I, -L and -l.
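For example, a minimal default_options sketch using that special value (project name is hypothetical):
project('myproject', ['cpp'],
        default_options : ['cpp_std=c++14',
                           'b_ndebug=if-release'])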
However, in general, changing "buildtype" options inside a script (except default_options, which are meant to be overwritten by meson setup/configure) is discouraged, and meson intentionally lacks a set_option() function.
Is it possible to have the built-in portable-style build options (such as b_sanitize) have a check if they are supported?
AFAIK, no, except for the has_argument() check you've used above. However, if some build option, like b_sanitize, is not supported by the underlying compiler, it will be automatically set to void, so using it shouldn't break anything.

Use autotools installation prefix

I am writing a C++ program using gtkmm as the window library and autotools as my build system. In my Makefile.am, I install the icon as follows:
icondir = $(datadir)/icons/hicolor/scalable/apps
icon_DATA = $(top_srcdir)/appname.svg
EDIT: changed from prefix to datadir
This results in appname.svg being copied to $(datadir)/icons/hicolor/scalable/apps when the program is installed. In my C++ code, I would like to access the icon at runtime for a window decoration:
string iconPath = DATADIR + "/icons/hicolor/scalable/apps/appname.svg";
// do stuff with the icon
I am unsure how to go about obtaining DATADIR for this purpose. I could use relative paths, but then moving the binary would break the icon, which seems like a hack. I figure there should be a special way to handle icons separate from general data, since people can install 3rd-party icon packs. So, I have two questions:
What is the standard way of installing and using icons with autotools/C++/gtkmm?
Edit: gtkmm has an IconTheme class that is the standard way to use icons in gtkmm. It appears that I call add_resource_path() (for which I still need the installation prefix), and then I can use the library to obtain the icon by name.
What is the general method with autotools/C++ to access the autotools installation prefix?
To convey data determined by configure to your source files, the primary methods available are to write them in a header that your sources #include or to define them as macros on the compiler command line. These are handled most conveniently via the AC_DEFINE Autoconf macro. Under some circumstances, you might also consider converting source files to templates for configure to process, but except inasmuch as Autoconf itself uses an internal version of that technique to build config.h (when that is requested), I wouldn't normally recommend it.
HOWEVER, the installation prefix and other installation directories are special cases. They are not finally set until you actually run make. Even if you set them via configure's command-line options, you can still override that by specifying different values on the make command line. Thus, it is not safe to rely on AC_DEFINE for this particular purpose, and in fact doing so may not work at all (it will not work for prefix itself).
Instead, you should specify the appropriate macro definition in a command-line option that is evaluated at make time. You can do this for all targets being built by setting the AM_CPPFLAGS variable in your Makefile.am files, as demonstrated in another answer. That particular example sets the specified symbol to be a macro that expands to a C string literal containing the prefix. Alternatively, you could consider defining the whole icon directory as a symbol. If you need it only for one target out of several then you might prefer setting the appropriate onetarget_CPPFLAGS variable.
As an aside, do note that $(prefix)/icons/hicolor/scalable/apps is a nonstandard choice for the installation directory for your icon. That will typically resolve to something like /usr/local/icons/hicolor/scalable/apps. The conventional choice would be $(datadir)/icons/hicolor/scalable/apps, which will resolve to something like /usr/local/share/icons/hicolor/scalable/apps.
In your Makefile.am, use the following
AM_CPPFLAGS = -DPREFIX='"$(prefix)"'
See Defining Directories in autoconf's manual.
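On the C++ side, a hypothetical use of that macro could look like this (the subpath assumes the conventional $(datadir) layout, i.e. $(prefix)/share):
#include <string>

// PREFIX is supplied on the compiler command line via AM_CPPFLAGS above
const std::string iconPath =
    std::string(PREFIX) + "/share/icons/hicolor/scalable/apps/appname.svg";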

How to determine which compiler was requested

My project uses SCons to manage the build process. I want to support multiple compilers, so I decided to use AddOption so the user can specify which compiler to use on the command line (with the default being whatever their current compiler is).
AddOption('--compiler',
          dest='compiler',
          type='string',
          action='store',
          default=DefaultEnvironment()['CXX'],
          help='Name of the compiler to use.')
I want to be able to have built-in compiler settings for various compilers (including things such as maximum warning levels for that particular compiler). This is what my first attempt at a solution currently looks like:
if is_compiler('g++'):
    from build_scripts.gcc.std import cxx_std
    from build_scripts.gcc.warnings import warnings, warnings_debug, warnings_optimized
    from build_scripts.gcc.optimizations import optimizations, preprocessor_optimizations, linker_optimizations
elif is_compiler('clang++'):
    from build_scripts.clang.std import cxx_std
    from build_scripts.clang.warnings import warnings, warnings_debug, warnings_optimized
    from build_scripts.clang.optimizations import optimizations, preprocessor_optimizations, linker_optimizations
However, I'm not sure what to make the is_compiler() function look like. My first thought was to directly compare the compiler name (such as 'clang++') against what the user passes in. However, this immediately failed when I tried to use scons --compiler=~/data/llvm-3.1-obj/Release+Asserts/bin/clang++.
So I thought I'd get a little smarter and use this function
cxx = GetOption('compiler')

def is_compiler(compiler):
    return cxx[-len(compiler):] == compiler
This only looks at the end of the compiler string, so that it ignores directories. Unfortunately, 'clang++' ends in 'g++', so my compiler was seen to be g++ instead of clang++.
My next thought was to do a backward search and look for the first occurrence of a path separator ('\' or '/'), but then I realized that this won't work for people who have multiple compiler versions. Someone compiling with 'g++-4.7' will not register as being g++.
So, is there some simple way to determine which compiler was requested?
Currently, only g++ and clang++ are supported (and only their most recently released versions) due to their c++11 support, so a solution that only works for those two would be good enough for now. However, my ultimate goal is to support at least g++, clang++, icc, and msvc++ (once they support the required c++11 features), so more general solutions are preferred.
The compiler is just one part of the build process; you also need a linker and possibly other additional programs. In SCons this bundle is called a Tool. The list of tools supported out of the box is in the man page; search for the statement: SCons supports the following tool specifications out of the box: ...
A Tool sets the necessary SCons environment variables; this is documented here.
SCons automatically detects compilers installed in the OS and has a priority order for choosing among them; of course, autodetection only works properly if the PATH variable includes the necessary dirs. For example, if you have both msvc and mingw on Windows, SCons chooses the msvc tool. To force the use of a tool, use Tool('name')(env). For example:
env = Environment()
Tool('mingw')(env)
Now env is forced to use mingw.
So, clang is one of the tools currently not supported out of the box by SCons. You need to implement a tool for it, or set environment variables such as CC and CXX, which SCons uses to generate the build commands.
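For example, a minimal sketch of the variable-based route (assuming clang and clang++ are on your PATH):
env = Environment(CC='clang', CXX='clang++')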
You could simply use the Python os.path.basename() or os.path.split() functions, as specified here.
You could do what people suggested in the comments and split this question into two different issues, but I think it is a good idea to be able to specify the path along with the compiler, since you could have two versions of g++ installed, and if the user only specifies g++, they may not get the expected version.
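For instance, a minimal sketch combining both suggestions (compiler_name is a hypothetical helper, not part of SCons):
import os

cxx = GetOption('compiler')

def compiler_name(path):
    # Strip directories first, so '~/bin/clang++' and 'g++-4.7' both work,
    # then match known compiler names against the start of the basename
    base = os.path.basename(path)
    for name in ('clang++', 'icc', 'g++'):
        if base.startswith(name):
            return name
    return base

is_clang = compiler_name(cxx) == 'clang++'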
There seems to be some confusion about what question is asked here.
From what I can see, this asks how to determine which compiler was chosen by default, so I'll answer that one.
From what I found out, the official way to check the compiler is to look at the construction variable TOOLS, which contains a list of all tools / programs that SCons decided / was told to use in the given construction environment.
env = Environment()
is_gcc = 'g++' in env['TOOLS']
is_clang = 'clangxx' in env['TOOLS']
TOOLS lists only the currently used tools even if SCons can find more of them.
E.g. if you have both GCC and Clang installed and SCons is able to find both, default TOOLS will still contain only GCC.
You can find the full list of predefined tools here.

Best practice for dependencies on #defines?

Is there a best practice for supporting dependencies on C/C++ preprocessor flags like -DCOMPILE_WITHOUT_FOO? Here's my problem:
> setenv COMPILE_WITHOUT_FOO
> make
  <make reads the environment, sets -DCOMPILE_WITHOUT_FOO>
  <compiles nothing, since no source file has changed>
What I would like to do is have all files that rely on #ifdef statements get recompiled:
> setenv COMPILE_WITHOUT_FOO
> make
g++ FileWithIfdefFoo.cpp
What I do not want is to have to recompile everything if the value of COMPILE_WITHOUT_FOO has not changed.
I have a primitive Python script working (see below) that basically writes a header file FooDefines.h and then diffs it to see if anything is different. If it is, it replaces FooDefines.h and then the conventional source file dependency takes over. The define is not passed on the command line with -D. The disadvantage is that I now have to include FooDefines.h in any source file that uses the #ifdef, and also I have a new, dynamically generated header file for every #ifdef. If there's a tool to do this, or a way to avoid using the preprocessor, I'm all ears.
import os, sys

def makeDefineFile(filename, text):
    tmpDefineFile = "/tmp/%s%s" % (os.getenv("USER"), filename)  # Use os.tempnam?
    existingDefineFile = filename

    output = open(tmpDefineFile, 'w')
    output.write(text)
    output.close()

    status = os.system("diff -q %s %s" % (tmpDefineFile, existingDefineFile))

    def checkStatus(status):
        failed = False
        if os.WIFEXITED(status):
            # Check return code
            returnCode = os.WEXITSTATUS(status)
            failed = returnCode != 0
        else:
            # Caught a signal, coredump, etc.
            failed = True
        return failed, status

    # If we failed for any reason (file didn't exist, different, etc.)
    if checkStatus(status)[0]:
        # Copy our tmp into the new file
        status = os.system("cp %s %s" % (tmpDefineFile, existingDefineFile))
        failed, status = checkStatus(status)
        print failed, status
        if failed:
            print "ERROR: Could not update define in makeDefine.py"
            sys.exit(status)
This is certainly not the nicest approach, but it would work:
find . \( -name '*.cpp' -o -name '*.h' \) -exec grep -l COMPILE_WITHOUT_FOO {} \; | xargs touch
That will look through your source code for the macro COMPILE_WITHOUT_FOO, and "touch" each file, which will update the timestamp. Then when you run make, those files will recompile.
If you have ack installed, you can simplify this command:
ack -l --cpp COMPILE_WITHOUT_FOO | xargs touch
I don't believe that it is possible to determine automagically. Preprocessor directives don't get compiled into anything. Generally speaking, I expect to do a full recompile if I depend on a define. DEBUG being a familiar example.
I don't think there is a right way to do it. If you can't do it the right way, then the dumbest way possible is probably your best option: a text search for COMPILE_WITHOUT_FOO, creating dependencies that way. I would classify this as a shenanigan, and if you are writing shared code I would recommend seeking pretty significant buy-in from your coworkers.
CMake has some facilities that can make this easier. You would create a custom target to do this. You may trade problems here, though: maintaining a list of files that depend on your symbol. Your text search could regenerate that file when it changes. I've used similar techniques to check whether I needed to rebuild static data repositories based on wget timestamps.
Cheetah is another tool which may be useful.
If it were me, I think I'd do full rebuilds.
Your problem seems tailor-made for autoconf and autoheader, writing the values of the variables into a config.h file. If that's not possible, consider reading the "-D" directives from a file and writing the flags into that file.
Under all circumstances, you have to avoid builds that depend only on environment variables; you have no way of telling when the environment changed. There is a definitive need to store the variables in a file. The cleanest way would be autoconf, autoheader, and one source tree with multiple build trees; the second-cleanest, re-running configure for each switch of compile context; and the third-cleanest, a file containing all mutable compiler switches, on which all objects dependent on these switches depend themselves.
When you choose to implement the third way, remember not to update this file unnecessarily, e.g. by constructing it in a temporary location and copying it conditionally on diff; then make rules will be capable of conditionally rebuilding your files depending on the flags.
One way to do this is to store each #define's previous value in a file, and use conditionals in your makefile to force update that file whenever the current value doesn't match the previous. Any files which depend on that macro would include the file as a dependency.
Here is an example. It will update file.o if either file.c changed or the variable COMPILE_WITHOUT_FOO is different from last time. It uses $(shell ...) to compare the current value with the value stored in the file envvars/COMPILE_WITHOUT_FOO. If they are different, it creates a rule for that file which depends on force, which is always updated.
file.o: file.c envvars/COMPILE_WITHOUT_FOO
	gcc -DCOMPILE_WITHOUT_FOO=$(COMPILE_WITHOUT_FOO) $< -o $@

ifneq ($(strip $(shell cat envvars/COMPILE_WITHOUT_FOO 2> /dev/null)),$(strip $(COMPILE_WITHOUT_FOO)))
force: ;
envvars/COMPILE_WITHOUT_FOO: force
	echo "$(COMPILE_WITHOUT_FOO)" > envvars/COMPILE_WITHOUT_FOO
endif
If you want to support having macros undefined, you will need to use the ifdef or ifndef conditionals, and have some indication in the file that the value was undefined the last time it was run.
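A rough sketch of that variant (the '<undefined>' sentinel is an invented convention, not a make feature):
ifdef COMPILE_WITHOUT_FOO
CURRENT_FOO := $(COMPILE_WITHOUT_FOO)
else
CURRENT_FOO := <undefined>
endif

ifneq ($(strip $(shell cat envvars/COMPILE_WITHOUT_FOO 2> /dev/null)),$(strip $(CURRENT_FOO)))
force: ;
envvars/COMPILE_WITHOUT_FOO: force
	echo "$(CURRENT_FOO)" > envvars/COMPILE_WITHOUT_FOO
endif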
Jay pointed out that "make triggers on date time stamps on files".
Theoretically, you could have your main makefile, call it m1, include variables from a second makefile called m2. m2 would contain a list of all the preprocessor flags.
You could have a make rule for your program depend on m2 being up-to-date.
The rule for making m2 would import all the relevant environment variables (and thus the preprocessor flags).
The trick is that the rule for making m2 would detect whether anything differs from the previous version. If so, it would set a variable that forces a "make all" and/or "make clean" for the main target; otherwise, it would just update the timestamp on m2 and not trigger a full remake.
Finally, the rule for the normal target (make all) would source the preprocessor directives from m2 and apply them as required.
This sounds easy/possible in theory, but in practice GNU Make makes this kind of thing much harder to get working. I'm sure it can be done, though.
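A rough sketch of the m1/m2 arrangement (all file and variable names here are hypothetical):
# m1, the main makefile
include m2            # m2 defines PP_FLAGS, e.g. PP_FLAGS := -DCOMPILE_WITHOUT_FOO

OBJS := main.o FileWithIfdefFoo.o

all: $(OBJS)

$(OBJS): m2           # every object depends on the flags makefile

%.o: %.cpp
	g++ $(PP_FLAGS) -c $< -o $@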
make triggers on the date/time stamps of files: a target is rebuilt when any file it depends on is newer than it. You'll have to put your definition for each option in a separate .h file and ensure those dependencies are represented in the makefile. Then, if you change an option, the files dependent on it are recompiled automatically.
If the dependency tracking takes into account include files that include other files, you won't have to change the structure of the source. You could include a "BuildSettings.h" file that includes all the individual settings files.
The only tough problem would be if you made it smart enough to parse the include guards. I've seen problems with compilation because of include file name collisions and order of include directory searches.
Now that you mention it I should check and see if my IDE is smart enough to automatically create those dependencies for me. Sounds like an excellent thing to add to an IDE.

Does "make" know how to search sub-dirs for include files?

This is a question for experienced C/C++ developers.
I have zero knowledge of compiling C programs with "make", and need to modify an existing application, ie. change its "config" and "makefile" files.
The .h files that the application needs are not located in a single-level directory, but rather, they are spread in multiple sub-directories.
In order for cc to find all the required include files, can I just add a single "-I" switch to point cc to the top-level directory and expect it to search all sub-dirs recursively, or must I add several "-I" switches to list all the sub-dirs explicitly, e.g. -I/usr/src/myapp/includes/1 -I/usr/src/myapp/includes/2, etc.?
Thank you.
This question appears to be about the C compiler driver, rather than make. Assuming you are using GCC, then you need to list each directory you want searched:
gcc -I/foo -I/foo/bar myprog.c
This is actually a compiler switch, unrelated to make itself.
The compiler will search for include files in the built-in system dirs, and then in the paths you provide with the -I switch. However, no automatic sub-directory traversal is performed.
For example, if you have
#include "my/path/to/file.h"
and you give -I a/directory as a parameter, the compiler will look for a/directory/my/path/to/file.h.
If the makefiles are written in the usual way, the line that invokes the compiler will use a couple of variables that allow you to customize the details, e.g. not
gcc (...)
but
$(CC) $(CFLAGS) (...)
and if this is the case, and you're lucky, you don't even need to edit any of the makefiles; instead you can invoke make like this
make CFLAGS='-I /absolute-path/to/wherever'
to incorporate your special options into the compiler invocation.
Also check whether the Makefiles aren't generated by something else (usually, a script in the top directory called
configure
which will have options of its own to control what goes into them).
Everyone answered your question correctly, but here is something to consider when you get to set up your own source tree: a leaf node should only look in two places for headers, its own directory or up the tree. Once people start reaching across to peers and down the tree, the build system gets gnarly; what also happens is that folks start using private interfaces when they should be using public interfaces.