Automake AM_LDADD workaround - C++

I want to set the same LDADD attribute (a unit test library) on a large number of targets (unit test C++ files). I first thought that automake might have an AM_LDADD variable to add that library to all the targets within the file, but that is not the case.
On a mailing list I found a short discussion asking for it to be added:
http://gnu-automake.7480.n7.nabble.com/AM-LIBS-AM-LDADD-td3698.html
My question is: how do you deal with this? Is there any way to avoid manually adding the LDADD attribute to each target?
So far my Makefile.am looks like:
test1_SOURCES = ...
test1_LDADD = -llibrary
...
...
test20_SOURCES = ...
test20_LDADD = -llibrary

The equivalent of an AM_LDADD variable is simply LDADD, e.g.:
LDADD = -llibrary
test1_SOURCES = ...
...
test20_SOURCES = ...
If you need to override LDADD for a particular program prog, then prog_LDADD will always take precedence.
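For example, a minimal sketch of the override (program and library names are placeholders):
LDADD = -llibrary
check_PROGRAMS = test1 prog
test1_SOURCES = test1.cc
prog_SOURCES = prog.cc
prog_LDADD = -llibrary -lextra   # prog_LDADD replaces the global LDADD for prog only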
I always assumed that since there is no standard LDADD environment variable passed to configure - as you can see with configure --help - there is no real reason for an AM_LDADD. This kind of makes sense: the configure script, and any of its options, e.g., --with-foo=<path>, should (ideally) work out the library dependencies.
On the other hand, passing CFLAGS via configure might still call for an AM_CFLAGS that combines CFLAGS with other compiler flags determined by the configure script, or even a foo_CFLAGS override, since configure must be informed of your custom CFLAGS.
Also, I don't know if the test<n> programs only take a single C++ source file, but if so, you can simplify the Makefile.am with:
LDADD = -llibrary
check_PROGRAMS = test1 test2 ... test20
AM_DEFAULT_SOURCE_EXT = .cc # or .cpp
as described here.
In regards to your comment, you can use a convenience library for that purpose - which is particularly useful for common code used by test programs:
noinst_LIBRARIES = libfoo.a # or noinst_LTLIBRARIES = libfoo.la
libfoo_a_SOURCES = MyClass.hh MyClass.cc # or libfoo_la_SOURCES
LDADD = ./libfoo.a -llibrary # or libfoo.la if using libtool.
... etc ...

It's a bad idea to modify LDADD in your Makefile.am, even if it seems convenient. It will make your build system very fragile.
In particular, if the user attempts to override LDADD from the make command line, then your definition of LDADD in Makefile.am will disappear. It's not unreasonable to expect that a user might override LDADD, so you should definitely protect yourself against this situation.
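For example, a perfectly plausible user invocation like the following (the library name is only illustrative) would silently replace every LDADD definition in your Makefile.am for that build:
make LDADD="-lefence"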
Your original definitions of test1_LDADD, ..., test20_LDADD are much more robust and, as far as I understand the automake manual, the recommended use.
See the remarks here for more info:
https://www.gnu.org/software/automake/manual/html_node/User-Variables.html
https://www.gnu.org/software/automake/manual/html_node/Flag-Variables-Ordering.html

Related

How to specify gcc flags (CXXFLAGS) particularly for a specific module?

I am building a new NS3 module. In my code I use some new features of C++11 (c++0x), so I want to add the gcc flag "-std=c++0x" (a CXXFLAGS entry) to the waf configuration system.
I tried this: CXXFLAGS="-std=c++0x" waf configure, and then built it. However, it turns out that some of the existing modules such as ipv4-address are not compatible with C++11. Thus, I want to specify this flag only for my new module so that the other modules won't be compiled with C++11.
I tried to add this to the wscript in my new module:
def configure(conf):
    conf.env.append_value('CXXFLAGS', '-std=c++0x')
It fails in the same way as the first attempt.
How can I do that?
Although @drahnr's answer is correct for vanilla waf, it won't work with NS-3's build system, which is apparently what the OP wants. To add CXXFLAGS to an NS-3 program, you can add them to the build object instead of doing it in the configuration stage.
For example:
def build(bld):
    obj = bld.create_ns3_program('my_app', ['core', 'other-dependencies'])
    obj.source = 'MyApplication.cpp'
    obj.cxxflags = ['-std=c++11']
According to the waf book 1.7.8, sections 10.1.1 and 10.1.2:
bld.shlib(source='main.c',
          target='myshlib',
          cflags=['-O2', '-Wall'],
          cxxflags=['-O3', '-std=c++0x'],
          use='myobjects')

bld.objects(source='ip4.c',
            cflags=['-O2', '-Wall'],
            cxxflags=['-std=somethingelse'],
            target='myobjects')
Note #1 - this code is composed of the two examples provided in the waf book and is not tested at all.
Note #2 - you may need to make waf aware that 'myobjects' is generated, or it may not be used to build 'myshlib', since waf indexes all files before building.

How to set a define through "./configure" with Autoconf

I have one project that can generate two different applications based on one define.
libfoo_la_CXXFLAGS = -DMYDEFINE
I have to modify the Makefile.am to set this define, so it is not automatic.
Can I set this define somehow through the configure command?
Is there any other way to set one define using autotools?
You have to edit the file configure.ac, and before AC_OUTPUT (which is the last thing in the file) add a call to AC_DEFINE.
In a simple case like yours, the following should be enough:
AC_DEFINE(MYDEFINE)
If you want to set a value, you use:
AC_DEFINE(MYDEFINE, 123)
The latter will add -DMYDEFINE=123 to the flags (DEFS = in the Makefile), and #define MYDEFINE 123 in the generated autoconf header if you use one.
I recommend you read the documentation from the beginning, and work through their examples and tutorials. Also check other projects' configure files to see how they use different features.
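If the goal is to flip the define from the ./configure command line specifically, a common pattern (my illustration, not spelled out in this answer; the option name --enable-foo is hypothetical) is to guard AC_DEFINE with AC_ARG_ENABLE in configure.ac:
AC_ARG_ENABLE([foo],
  [AS_HELP_STRING([--enable-foo], [build the foo variant of the application])])
AS_IF([test "x$enable_foo" = "xyes"],
  [AC_DEFINE([MYDEFINE], [1], [Build the foo variant])])
Running ./configure --enable-foo then defines MYDEFINE, while a plain ./configure leaves it out.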
Edit: If you want to pass flags on the command line to the make command, then you do something like this:
libfoo_la_CXXFLAGS = $(MYFLAGS)
Then you call make like this:
$ make MYFLAGS="-DMYDEFINE"
If you don't set MYFLAGS on the command line, it will be undefined and empty in the makefile.
You can also set target-specific CPPFLAGS in Makefile.am, in which case the source files will be recompiled, once for each set of flags:
lib_LTLIBRARIES = libfoo.la libbar.la
libfoo_la_SOURCES = foo.c
libfoo_la_CPPFLAGS = -DFOO
libbar_la_SOURCES = foo.c
libbar_la_CPPFLAGS = -DBAR
These days autoheader demands
AC_DEFINE([MYDEFINE], [1], [Description here])

Best practice for dependencies on #defines?

Is there a best practice for supporting dependencies on C/C++ preprocessor flags like -DCOMPILE_WITHOUT_FOO? Here's my problem:
> setenv COMPILE_WITHOUT_FOO
> make
<Make system reads environment, sets -DCOMPILE_WITHOUT_FOO>
<Compiles nothing, since no source file has changed>
What I would like to do is have all files that rely on #ifdef statements get recompiled:
> setenv COMPILE_WITHOUT_FOO
> make
g++ FileWithIfdefFoo.cpp
What I do not want is to have to recompile everything if the value of COMPILE_WITHOUT_FOO has not changed.
I have a primitive Python script working (see below) that basically writes a header file FooDefines.h and then diffs it to see if anything is different. If it is, it replaces FooDefines.h and then the conventional source file dependency takes over. The define is not passed on the command line with -D. The disadvantage is that I now have to include FooDefines.h in any source file that uses the #ifdef, and also I have a new, dynamically generated header file for every #ifdef. If there's a tool to do this, or a way to avoid using the preprocessor, I'm all ears.
import os, sys

def makeDefineFile(filename, text):
    tmpDefineFile = "/tmp/%s%s" % (os.getenv("USER"), filename)  # Use os.tempnam?
    existingDefineFile = filename

    output = open(tmpDefineFile, 'w')
    output.write(text)
    output.close()

    status = os.system("diff -q %s %s" % (tmpDefineFile, existingDefineFile))

    def checkStatus(status):
        failed = False
        if os.WIFEXITED(status):
            # Check return code
            returnCode = os.WEXITSTATUS(status)
            failed = returnCode != 0
        else:
            # Caught a signal, coredump, etc.
            failed = True
        return failed, status

    # If we failed for any reason (file didn't exist, different, etc.)
    if checkStatus(status)[0]:
        # Copy our tmp into the new file
        status = os.system("cp %s %s" % (tmpDefineFile, existingDefineFile))
        failed, status = checkStatus(status)
        print failed, status
        if failed:
            print "ERROR: Could not update define in makeDefine.py"
            sys.exit(status)
This is certainly not the nicest approach, but it would work:
find . \( -name '*.cpp' -o -name '*.h' \) -exec grep -l COMPILE_WITHOUT_FOO {} \; | xargs touch
That will look through your source code for the macro COMPILE_WITHOUT_FOO, and "touch" each file, which will update the timestamp. Then when you run make, those files will recompile.
If you have ack installed, you can simplify this command:
ack -l --cpp COMPILE_WITHOUT_FOO | xargs touch
I don't believe that it is possible to determine this automagically. Preprocessor directives don't get compiled into anything. Generally speaking, I expect to do a full recompile if I depend on a define - DEBUG being a familiar example.
I don't think there is a right way to do it. If you can't do it the right way, then the dumbest way possible is probably your best option: a text search for COMPILE_WITHOUT_FOO, creating dependencies that way. I would classify this as a shenanigan, and if you are writing shared code I would recommend seeking pretty significant buy-in from your coworkers.
CMake has some facilities that can make this easier. You would create a custom target to do this. You may trade one problem for another here: maintaining a list of files that depend on your symbol. Your text search could generate that list whenever it changed. I've used similar techniques to check whether I needed to rebuild static data repositories based on wget timestamps.
Cheetah is another tool which may be useful.
If it were me, I think I'd do full rebuilds.
Your problem seems tailor-made for autoconf and autoheader, writing the values of the variables into a config.h file. If that's not possible, consider reading the "-D" directives from a file and writing the flags into that file.
Under all circumstances, you have to avoid builds that depend only on environment variables: you have no way of telling when the environment changed. There is a definite need to store the variables in a file. The cleanest way would be autoconf, autoheader, and a source tree with multiple build trees; the second-cleanest, re-configure-ing for each switch of compile context; and the third-cleanest, a file containing all mutable compiler switches on which all objects dependent on these switches depend themselves.
When you choose to implement the third way, remember not to update this file unnecessarily, e.g. by constructing it in a temporary location and copying it conditionally on diff; make rules will then be capable of conditionally rebuilding your files depending on flags.
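A minimal shell sketch of that construct-and-copy-on-diff idea (file names are illustrative):
# rebuild the switch file in a temporary location...
echo "CPPFLAGS = $CPPFLAGS" > flags.tmp
# ...and only replace the real file when the content changed,
# so its timestamp moves only on a real switch change
cmp -s flags.tmp flags.mk || cp flags.tmp flags.mk
rm -f flags.tmp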
One way to do this is to store each #define's previous value in a file, and use conditionals in your makefile to force update that file whenever the current value doesn't match the previous. Any files which depend on that macro would include the file as a dependency.
Here is an example. It will update file.o if either file.c has changed or the variable COMPILE_WITHOUT_FOO differs from last time. It uses $(shell ) to compare the current value with the value stored in the file envvars/COMPILE_WITHOUT_FOO. If they differ, it creates a rule for that file which depends on force, which is always remade.
file.o: file.c envvars/COMPILE_WITHOUT_FOO
	gcc -DCOMPILE_WITHOUT_FOO=$(COMPILE_WITHOUT_FOO) $< -o $@

ifneq ($(strip $(shell cat envvars/COMPILE_WITHOUT_FOO 2> /dev/null)), $(strip $(COMPILE_WITHOUT_FOO)))
force: ;
envvars/COMPILE_WITHOUT_FOO: force
	echo "$(COMPILE_WITHOUT_FOO)" > envvars/COMPILE_WITHOUT_FOO
endif
If you want to support having macros undefined, you will need to use the ifdef or ifndef conditionals, and have some indication in the file that the value was undefined the last time it was run.
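Building on the example above, one hedged sketch of the undefined case (the __undefined__ marker is an arbitrary choice of mine):
# map "not set" to a marker so it can be compared like any other value
ifdef COMPILE_WITHOUT_FOO
current_foo := $(COMPILE_WITHOUT_FOO)
else
current_foo := __undefined__
endif

ifneq ($(strip $(shell cat envvars/COMPILE_WITHOUT_FOO 2> /dev/null)), $(strip $(current_foo)))
force: ;
envvars/COMPILE_WITHOUT_FOO: force
	echo "$(current_foo)" > envvars/COMPILE_WITHOUT_FOO
endif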
Jay pointed out that "make triggers on date time stamps on files".
Theoretically, you could have your main makefile, call it m1, include variables from a second makefile called m2. m2 would contain a list of all the preprocessor flags.
You could have a make rule for your program depend on m2 being up to date.
The rule for making m2 would be to import all the environment variables (and thus the #define directives).
The trick would be that the rule for making m2 detects whether there was a diff from the previous version. If so, it would set a variable forcing a "make all" and/or "make clean" for the main target; otherwise, it would just update the timestamp on m2 and not trigger a full remake.
Finally, the rule for the normal target (make all) would source the preprocessor directives from m2 and apply them as required.
This sounds easy/possible in theory, but in practice GNU Make makes this type of thing much harder to get working. I'm sure it can be done, though.
make triggers on date time stamps on files. A dependent file being newer than what depends on it triggers it to recompile. You'll have to put your definition for each option in a separate .h file and ensure that those dependencies are represented in the makefile. Then if you change an option the files dependent on it would be recompiled automatically.
If it takes into account include files that include files you won't have to change the structure of the source. You could include a "BuildSettings.h" file that included all the individual settings files.
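A minimal sketch of that aggregate header (the file names are illustrative, not from the answer):
/* BuildSettings.h - sources include only this file, so a change to any
 * individual settings header still triggers their recompilation. */
#ifndef BUILDSETTINGS_H
#define BUILDSETTINGS_H
#include "FooDefines.h"   /* e.g. defines (or not) COMPILE_WITHOUT_FOO */
#include "BarDefines.h"   /* further option macros, one file per option */
#endif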
The only tough problem would be if you made it smart enough to parse the include guards. I've seen problems with compilation because of include file name collisions and order of include directory searches.
Now that you mention it I should check and see if my IDE is smart enough to automatically create those dependencies for me. Sounds like an excellent thing to add to an IDE.

Help with rake dependency mapping

I'm writing a Rakefile for a C++ project. I want it to identify #includes automatically, forcing the rebuilding of object files that depend on changed source files. I have a working solution, but I think it can be better. I'm looking for:
Suggestions for improving my function
Libraries, gems, or tools that do the work for me
Links to cool C++ Rakefiles that I should check out that do similar things
Here's what I have so far. It's a function that returns the list of dependencies given a source file. I feed in the source file for a given object file, and I want a list of files that will force me to rebuild my object file.
def find_deps( file )
  deps = Array.new
  # Find all include statements
  cmd = "grep -r -h -E \"#include\" #{file}"
  includes = `#{cmd}`
  includes.each_line do |line|  # each_line: String#each only exists in Ruby 1.8
    dep = line[ /\.\/(\w+\/)*\w+\.(cpp|h|hpp)/ ]
    unless dep.nil?
      deps << dep           # Add the dependency to the list
      deps += find_deps( dep )
    end
  end
  return deps
end
I should note that all of my includes look like this right now:
#include "./Path/From/Top/Level/To/My/File.h" // For top-level files like main.cpp
#include "../../../Path/From/Top/To/My/File.h" // Otherwise
Note that I'm using double quotes for includes within my project and angle brackets for external library includes. I'm open to suggestions on alternative ways to do my include pathing that make my life easier.
Use the gcc command to generate a Make dependency list instead, and parse that:
g++ -M -MM -MF - inputfile.cpp
See man gcc or info gcc for details.
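For instance, a hedged Ruby sketch of how this could replace the hand-rolled scan (the helper name compiler_deps is my invention):
def compiler_deps( source )
  # "g++ -MM" prints a make rule like "foo.o: foo.cpp a.h b.h \" and
  # omits system headers; flatten it into a plain list of files.
  rule = `g++ -MM #{source}`
  rule.gsub( /\\\n/, ' ' ).sub( /^.*?:\s*/, '' ).split
end
Each returned path can then be declared as a prerequisite of the corresponding object file in the Rakefile.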
I'm sure there are different schools of thought with respect to what to put in #include directives. I advise against putting the whole path in your #includes. Instead, set up the proper include paths in your compile command (with -I). This makes it easier to relocate files in the future and more readable (in my opinion). It may sound minor, but the ability to reorganize as a project evolves is definitely valuable.
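For instance (paths illustrative), compiling with the project root on the include path:
g++ -I/path/to/project/root -c src/main.cpp
# every source file can then use the same project-rooted form:
#   #include "Path/To/My/File.h"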
Using the preprocessor (see @greyfade's answer) to generate the dependency list has the advantage that it will expand the header paths for you based on your include dirs.
Update: see also the Importing Dependencies section of the Rakefile doc for a library that reads the makefile dependency format.

How do I set the SCons system include path

Using scons I can easily set my include paths:
env.Append( CPPPATH=['foo'] )
This passes the flag
-Ifoo
to gcc
However I'm trying to compile with a lot of warnings enabled.
In particular with
env.Append( CPPFLAGS=['-Werror', '-Wall', '-Wextra'] )
which dies horribly on certain boost includes ... I can fix this by adding the boost includes to the system include path rather than the normal include path, as gcc treats system includes differently.
So what I need to get passed to gcc instead of -Ifoo is
-isystem foo
I guess I could do this with the CPPFLAGS variable, but was wondering if there was a better solution built into scons.
There is no built-in way to pass -isystem include paths in SCons, mainly because it is very compiler/platform specific.
Putting it in the CXXFLAGS will work, but note that this will hide the headers from SCons' dependency scanner, which only looks at CPPPATH.
This is probably OK if you don't expect those headers to ever change, but could cause weird issues if you use the build results cache and/or implicit dependency cache.
If you do
print env.Dump()
you'll see _CPPINCFLAGS, and you'll see that variable used in CCCOM (or _CCCOMCOM). _CPPINCFLAGS typically looks like this:
'$( ${_concat(INCPREFIX, CPPPATH, INCSUFFIX, __env__, RDirs, TARGET, SOURCE)} $)'
From this you can probably see how you could add an "isystem" set of includes as well, like _CPPSYSTEMINCFLAGS or some such. Just define your own prefix, path var name (e.g. CPPSYSTEMPATH) and suffix and use the above idiom to concatenate the prefix. Then just append your _CPPSYSTEMINCFLAGS to CCCOM or _CCCOMCOM and off you go.
Of course this is system-specific but you can conditionally include your new variable in the compiler command line as and when you want.
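A hedged sketch of that idea (the names SYSINCPREFIX, SYSINCSUFFIX, CPPSYSTEMPATH and _CPPSYSTEMINCFLAGS are invented here, mirroring the built-in _CPPINCFLAGS idiom):
# mirror INCPREFIX/CPPPATH/INCSUFFIX with a system-include variant
env['SYSINCPREFIX'] = '-isystem '
env['SYSINCSUFFIX'] = ''
env['_CPPSYSTEMINCFLAGS'] = \
    '$( ${_concat(SYSINCPREFIX, CPPSYSTEMPATH, SYSINCSUFFIX, __env__, RDirs, TARGET, SOURCE)} $)'
env['CPPSYSTEMPATH'] = ['/your/path/to/boost']

# splice the new flags into the C/C++ command lines
env.Append(_CCCOMCOM=' $_CPPSYSTEMINCFLAGS')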
According to the SCons release notes, "-isystem" is supported since version 2.3.4 for the environment's CCFLAGS.
So, you can, for example, do the following:
env.AppendUnique(CCFLAGS=('-isystem', '/your/path/to/boost'))
Still, you need to be sure that your compiler supports that option.
Expanding on the idea proposed by @LangerJan and @BenG... Here's a full cross-platform example (replace env['IS_WINDOWS'] with your own Windows platform check):
from SCons.Util import is_List

def enable_extlib_headers(env, include_paths):
    """Enables C++ builders with current 'env' to include external headers
    specified in the include_paths (list or string value).

    Special treatment to avoid scanning these for changes and/or warnings.
    This speeds up the C++-related build configuration.
    """
    if not is_List(include_paths):
        include_paths = [include_paths]

    include_options = []
    if env['IS_WINDOWS']:
        # Simply go around SCons scanners and add compiler options directly
        include_options = ['-I' + p for p in include_paths]
    else:
        # Tag these includes as system, to avoid scanning them for dependencies,
        # and make the compiler ignore any warnings they produce
        for p in include_paths:
            include_options.append('-isystem')
            include_options.append(p)

    env.Append(CXXFLAGS=include_options)
Now, when configuring the use of external libraries, instead of
env.AppendUnique(CPPPATH=include_paths)
call
enable_extlib_headers(env, include_paths)
In my case this reduced the pruned dependency tree (as produced with --tree=prune) by 1000x on Linux and 3000x on Windows! It sped up the no-action build time (i.e. with all targets up to date) by 5-7x.
The pruned dependency tree before this change had 4 million includes from Boost. That's insane.