Can I have one makefile to build a hierarchical project?

I have several hundred files in a non-flat directory structure. My Makefile lists each source file, which, given the size of the project and the number of developers on it, creates annoyances when someone forgets to add a new file or remove an old one. I'd like to generalize my Makefile so that make can simply build all .cpp and .h files without me having to specify all the filenames, given some generic rules for different types of files.
My question: given a large number of files in a directory with lots of subfolders, how do I tell make to build them all without having to specify each and every subfolder as part of the path? And how do I make it so that I can do this with only one Makefile in the root directory?
EDIT: this almost answers my question, but it requires that you specify all filenames :\

I'm sure a pure-gmake solution is possible, but using an external command to modify the makefile, or generate an external one (which you include in your makefile) is probably much simpler.
Something along the lines of:
all: myprog

find_sources:
	zsh -c 'for x in **/*.cpp; echo "myprog: $${x/.cpp/.o}" >> deps.mk'

-include deps.mk
and run
make find_sources && make
note: the $$ escaping is needed so that make passes a literal $ through to the shell, and -include keeps the first run from failing before deps.mk exists. The zsh loop can also be replaced with bash + find.
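For example, a rough bash + find equivalent of that rule (a sketch only, reusing the same myprog and deps.mk names as above; the recipe line must start with a tab):

find_sources:
	find . -name '*.cpp' | sed -e 's|^|myprog: |' -e 's|\.cpp$$|.o|' > deps.mk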

One way that is platform independent (I mean independent of whether the shell is on Windows or Linux) is this:
DIRS = relative/path1 \
       relative/path2
dd = absolute/path/to/subdirectories

all:
	@$(foreach dir, $(DIRS), $(MAKE) -C $(dd)$(dir) build -f ../../Makefile ;)

build:
	... build here
Note that the spaces and the semicolon are important here. It is also important to specify absolute paths, and to specify the path to the appropriate Makefile at the end (in this case I am using only one Makefile, in the grandparent folder).
But there is a better approach, too, which involves .PHONY targets; it shows progress and errors better, and it stops the build if one folder has a problem instead of proceeding to the other targets:
.PHONY: subdirs $(DIRS)

subdirs: $(DIRS)

$(DIRS):
	$(MAKE) -C $@ build -f ../../Makefile

all: prepare subdirs
...

build:
	... build here
Again, I am using only one Makefile here that is supposed to be applicable to all sub-projects. For each sub-project in a grandchild folder, the "build" target is run using the one Makefile in the root.

I would start by using a combination of the wildcard function:
http://www.gnu.org/software/make/manual/make.html#Wildcard-Function
VPATH/vpath
http://www.gnu.org/software/make/manual/make.html#Selective-Search
and the file functions
http://www.gnu.org/software/make/manual/make.html#File-Name-Functions
For exclusion (i.e. backups, as Jonathan Leffler mentioned), keep backups in a separate folder that is not in the vpath, and use good implicit rules.
You will still need to define which folders to descend into, but not each file in them.
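For instance, a minimal sketch combining those features, assuming the sources live in hypothetical src and src/util folders:

SRC_DIRS := src src/util
SRCS     := $(foreach d,$(SRC_DIRS),$(wildcard $(d)/*.cpp))
OBJS     := $(SRCS:.cpp=.o)
vpath %.cpp $(SRC_DIRS)

myprog: $(OBJS)
	$(CXX) $(LDFLAGS) -o $@ $^

%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $< -o $@

You still list the folders in SRC_DIRS, but never the individual files.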

I'm of two minds on this one. On one hand, if your Make system compiles and links everything it finds, you'll find out in a hurry if someone has left conflicting junk in the source directories. On the other hand, non-conflicting junk will proliferate and you'll have no easy way of distinguishing it from the live code...
I think it depends on a lot of things specific to your shop, such as your source control system and whether you plan to ever have another project with an overlapping code base. That said, if you really want to compile every source file below a given directory and then link them all, I'd suggest simple recursion: to make objects, compile all source files here, add the resultant objects (with full paths) to a list in the top source directory, and recurse into all directories here. To link, use the list.
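A rough sketch of that recursion, assuming every subdirectory is reached with this same Makefile; TOPDIR and objlist.txt are names invented here for illustration (the top-level invocation should delete objlist.txt first, and the link step then reads it):

TOPDIR  ?= $(CURDIR)
SUBDIRS := $(patsubst %/,%,$(wildcard */))
OBJS    := $(patsubst %.cpp,%.o,$(wildcard *.cpp))

.PHONY: objects $(SUBDIRS)
objects: $(OBJS) $(SUBDIRS)
	for o in $(OBJS); do echo $(CURDIR)/$$o >> $(TOPDIR)/objlist.txt; done

$(SUBDIRS):
	$(MAKE) -C $@ objects TOPDIR=$(TOPDIR)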


How do I define a dependency graph with unknown intermediate node names?

I'm using a tool chain where I do not know the names of all of the intermediate files.
E.g. I know that I start out with a foo.s, and go through several steps to get a foo.XXXX.sym and a foo.XXXX.hex, buried way down deep. And then running other tools on foo.XXXX.hex and foo.XXXX.sym, I eventually end up with something like final.results.
But, the trouble is that I don't know what the XXXX is. It is derived from some other parameters, but may be significantly transformed away from them.
Now, after running the tool/steps that generate foo.XXXX.{sym,hex}, I now typically scan the overall result directory looking for foo.*.{sym,hex}. I.e. I have code that can recognize the intermediate outputs, I just don't know exactly what the names will be.
I typically use make or scons - actually, I prefer scons, but my team highly prefers make. I'm open to other build tools.
What I want to do is be able to say (1) "make final.results", or "scons final.results", (2) and have it scan over the partial tree; (3) figure out that, while it does not know the full path, it definitely knows that it has to run the first step, (4) after that first step, look for and find the foo.XXX.* files; (5) and plug those into the dependency tree.
I.e. I want to finish building the dependency tree after the build has already started.
A friend got frustrated enough with scons' limitations in this area that he wrote his own build tool. Unfortunately it is proprietary.
I guess that I can create a first build graph, say in make with many .PHONY targets, and then after I get through the first step, generate a new makefile with the new names, and have the first make invoke the newly generated second makefile. Seems clumsy. Is there any more elegant way?
GNU make has an "auto-re-exec" feature that you might be able to make use of. See How Makefiles Are Remade:
After make finishes reading all the makefiles (both the ones found automatically and/or on the command line, as well as all included makefiles), it will try to rebuild all its makefiles (using the rules it knows about). If any of those makefiles are automatically rebuilt, then make will re-exec itself so it can re-read the newest versions of the makefiles/included files, and start over (including re-trying to build all the makefiles).
It seems to me that you should be able to do something with this. You can write an "-include foo.sym.mk" in your main makefile, for example, and then have a rule that builds "foo.sym.mk" by invoking the tool on foo.s, then running your "recognize the next step" code to generate a "foo.sym.mk" file which defines a rule for the intermediate output that got created. Something like this (due to the lack of specificity in your question I can't give true examples, you understand):
SRCS = foo.s bar.s baz.s

-include $(patsubst %.s,%.sym.mk,$(SRCS))

%.sym.mk: %.s
	<compile> '$<'
	<recognize output and generate makefile> > '$@'
Now when make runs it will see that foo.sym.mk is out of date (if it is) using normal algorithms and it will rebuild foo.sym.mk, which as a "side effect" causes the foo.s file to be compiled.
And of course, the "foo.sym.mk" file can include ANOTHER file, which can recognize the next step, if necessary.
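To make that concrete, a generated foo.sym.mk might contain something along these lines (the 1a2b infix is purely hypothetical, standing in for whatever XXXX turned out to be):

foo.1a2b.sym foo.1a2b.hex: foo.s
final.results: foo.1a2b.sym foo.1a2b.hex
-include foo.next-step.mk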
I'm not saying this will be trivial but it seems do-able based on your description.
Make constructs the graph before running any rule, so there won't be a perfect answer. Here are some reasonably clean solutions.
1) use PHONY intermediates and wildcards in the commands. (You can't use Make wildcards because make expands them before running rules.)
final.results: middle
	# build $@ using $(shell ls foo.*.sym) and $(shell ls foo.*.hex)

.PHONY: middle
middle: foo.s
	# build foo.XXXX.sym and foo.XXXX.hex from $<
2) Use recursive Make (which is not as bad as people say, and sometimes very useful.)
SYM = $(wildcard foo.*.sym)
HEX = $(wildcard foo.*.hex)

# Note that this is the one you should "make".
# I've put it first so it'll be the default.
.PHONY: first-step
first-step: foo.s
	# build foo.XXXX.sym and foo.XXXX.hex from $<
	@$(MAKE) -s final.results

final.results:
	# build $@ using $(SYM) and $(HEX)
3) Similar to 2, but have a rule for the makefile which will cause Make to run a second time.
SYM = $(wildcard foo.*.sym)
HEX = $(wildcard foo.*.hex)

final.results:
	# build $@ using $(SYM) and $(HEX)

Makefile: foo.s
	# build foo.XXXX.sym and foo.XXXX.hex from $<
	@touch $@

Setting and using path to data directory with GNU AutoTools

I am trying to use GNU Autotools for my C++ project. I have written configure.ac, Makefile.am, etc. I have some files that are used by the program during execution, e.g. template files, an XML schema, etc., so I install/copy these files alongside the executable, for which I use something like:
abcdir = $(bindir)/../data/abc/
abc_DATA = ../data/knowledge/abc.cc
Now it copies the file correctly, and my program's installation structure looks something like this:
<installation_dir>/bin/<executableFile>
<installation_dir>/data/abc/abc.cc
Now the problem is that the source code actually uses these files (abc.cc etc.), and to open them I need the path where they reside. One solution is to define (using AC_DEFINE) some variable, e.g. _ABC_PATH_, that points to the installation path, but how to do that exactly? Or is there a better way? For example, in the source code I do something like:
...
ifstream input(<path-to-abc-folder> + "abc.cc"); // how to find <path-to-abc-folder>?
..
The AC_DEFINE solution is fine in principle, but requires shell-like variable expansion to take place. That is, _ABC_PATH_ would expand to the literal string "${bindir}/../data/abc/", not to the actual path.
One way is to define the path via a -D flag, which is expanded by make:
myprogram_CPPFLAGS += -D_ABC_PATH='"${abcdir}"'
which works fine in principle, but you have to include config.status in the dependencies of myprogram.
If you have a number of such substitution variables, you should roll out a paths.h file that is generated by automake with a rule like:
paths.h: $(srcdir)/paths.h.in config.status
	sed -e 's:#ABC_PATH#:${abcdir}:' $< > $@
As a side-note, you do know about ${prefix} and ${datarootdir} and friends, don't you? If not, better read them up; ${bindir}/.. is not necessarily equal to ${prefix} if the user did set ${exec_prefix}.
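For what it's worth, a sketch of the same thing expressed with ${pkgdatadir} in Makefile.am (the file names are illustrative):

abcdir = $(pkgdatadir)/abc
abc_DATA = ../data/knowledge/abc.cc
myprogram_CPPFLAGS = -D_ABC_PATH='"$(abcdir)"'

This way the data path follows ${prefix} and ${datarootdir} wherever the package is installed, instead of being computed relative to ${bindir}.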

Best practice for dependencies on #defines?

Is there a best practice for supporting dependencies on C/C++ preprocessor flags like -DCOMPILE_WITHOUT_FOO? Here's my problem:
> setenv COMPILE_WITHOUT_FOO
> make
<Make system reads environment, sets -DCOMPILE_WITHOUT_FOO>
<Compiles nothing, since no source file has changed>
What I would like to do is have all files that rely on #ifdef statements get recompiled:
> setenv COMPILE_WITHOUT_FOO
> make
g++ FileWithIfdefFoo.cpp
What I do not want is to have to recompile everything if the value of COMPILE_WITHOUT_FOO has not changed.
I have a primitive Python script working (see below) that basically writes a header file FooDefines.h and then diffs it to see if anything is different. If it is, it replaces FooDefines.h and then the conventional source file dependency takes over. The define is not passed on the command line with -D. The disadvantage is that I now have to include FooDefines.h in any source file that uses the #ifdef, and also I have a new, dynamically generated header file for every #ifdef. If there's a tool to do this, or a way to avoid using the preprocessor, I'm all ears.
import os, sys

def makeDefineFile(filename, text):
    tmpDefineFile = "/tmp/%s%s"%(os.getenv("USER"),filename) #Use os.tempnam?
    existingDefineFile = filename

    output = open(tmpDefineFile,'w')
    output.write(text)
    output.close()

    status = os.system("diff -q %s %s"%(tmpDefineFile, existingDefineFile))

    def checkStatus(status):
        failed = False
        if os.WIFEXITED(status):
            #Check return code
            returnCode = os.WEXITSTATUS(status)
            failed = returnCode != 0
        else:
            #Caught a signal, coredump, etc.
            failed = True
        return failed,status

    #If we failed for any reason (file didn't exist, different, etc.)
    if checkStatus(status)[0]:
        #Copy our tmp into the new file
        status = os.system("cp %s %s"%(tmpDefineFile, existingDefineFile))
        failed,status = checkStatus(status)
        print failed, status
        if failed:
            print "ERROR: Could not update define in makeDefine.py"
            sys.exit(status)
This is certainly not the nicest approach, but it would work:
find . \( -name '*.cpp' -o -name '*.h' \) -exec grep -l COMPILE_WITHOUT_FOO {} \; | xargs touch
That will look through your source code for the macro COMPILE_WITHOUT_FOO, and "touch" each file, which will update the timestamp. Then when you run make, those files will recompile.
If you have ack installed, you can simplify this command:
ack -l --cpp COMPILE_WITHOUT_FOO | xargs touch
I don't believe that it is possible to determine this automagically. Preprocessor directives don't get compiled into anything. Generally speaking, I expect to do a full recompile if I depend on a define, DEBUG being a familiar example.
I don't think there is a right way to do it. If you can't do it the right way, then the dumbest way possible is probably your best option: a text search for COMPILE_WITHOUT_FOO, creating dependencies that way. I would classify this as a shenanigan, and if you are writing shared code I would recommend seeking pretty significant buy-in from your coworkers.
CMake has some facilities that can make this easier. You would create a custom target to do this. You may trade problems here though, maintaining a list of files that depend on your symbol. Your text search could generate that file if it changed though. I've used similar techniques checking whether I needed to rebuild static data repositories based on wget timestamps.
Cheetah is another tool which may be useful.
If it were me, I think I'd do full rebuilds.
Your problem seems tailor-made for autoconf and autoheader, writing the values of the variables into a config.h file. If that's not possible, consider reading the -D directives from a file and writing the flags into that file.
Under all circumstances, you have to avoid builds that depend only on environment variables: you have no way of telling when the environment changed. There is a definite need to store the variables in a file. The cleanest way is autoconf, autoheader, and a source tree with multiple build trees; the second-cleanest is re-configure-ing for each switch of compile context; and the third-cleanest is a file containing all mutable compiler switches, on which all objects depending on those switches depend themselves.
When you choose to implement the third way, remember not to update this file unnecessarily, e.g. by constructing it in a temporary location and copying it conditionally on diff; then make rules will be capable of conditionally rebuilding your files depending on flags.
One way to do this is to store each #define's previous value in a file, and use conditionals in your makefile to force update that file whenever the current value doesn't match the previous. Any files which depend on that macro would include the file as a dependency.
Here is an example. It will update file.o if either file.c changed or the variable COMPILE_WITHOUT_FOO is different from last time. It uses $(shell ) to compare the current value with the value stored in the file envvars/COMPILE_WITHOUT_FOO. If they are different, then it creates a command for that file which depends on force, which is always updated.
file.o: file.c envvars/COMPILE_WITHOUT_FOO
	gcc -DCOMPILE_WITHOUT_FOO=$(COMPILE_WITHOUT_FOO) $< -o $@

ifneq ($(strip $(shell cat envvars/COMPILE_WITHOUT_FOO 2> /dev/null)),$(strip $(COMPILE_WITHOUT_FOO)))
force: ;
envvars/COMPILE_WITHOUT_FOO: force
	echo "$(COMPILE_WITHOUT_FOO)" > envvars/COMPILE_WITHOUT_FOO
endif
If you want to support having macros undefined, you will need to use the ifdef or ifndef conditionals, and have some indication in the file that the value was undefined the last time it was run.
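A hedged sketch of that, extending the example above; the UNDEFINED sentinel is just an invented convention:

ifdef COMPILE_WITHOUT_FOO
CURRENT := $(COMPILE_WITHOUT_FOO)
else
CURRENT := UNDEFINED
endif

ifneq ($(strip $(shell cat envvars/COMPILE_WITHOUT_FOO 2> /dev/null)),$(strip $(CURRENT)))
force: ;
envvars/COMPILE_WITHOUT_FOO: force
	echo "$(CURRENT)" > envvars/COMPILE_WITHOUT_FOO
endif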
Jay pointed out that "make triggers on date time stamps on files".
Theoretically, you could have your main makefile, call it m1, include variables from a second makefile called m2. m2 would contain a list of all the preprocessor flags.
You could have a make rule for your program depend on m2 being up to date.
The rule for making m2 would be to import all the environment variables (and thus the #define directives).
The trick would be that the rule for making m2 detects whether there is a diff from the previous version. If so, it would set a variable that forces a "make all" and/or "make clean" for the main target; otherwise, it would just update the timestamp on m2 and not trigger a full remake.
Finally, the rule for the normal target (make all) would source the preprocessor flags from m2 and apply them as required.
This sounds easy/possible in theory, but in practice GNU Make makes this type of thing much harder to get working. I'm sure it can be done, though.
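A rough sketch of that m1/m2 idea in GNU make; flags.mk, flags.tmp and EXTRA_DEFINES are names invented for illustration:

# m1, the main makefile
EXTRA_DEFINES := $(if $(COMPILE_WITHOUT_FOO),-DCOMPILE_WITHOUT_FOO)

-include flags.mk  # this is m2: it records the flags used last time

flags.mk: FORCE
	@echo 'CPPFLAGS += $(EXTRA_DEFINES)' > flags.tmp
	@cmp -s flags.tmp flags.mk || cp flags.tmp flags.mk
	@rm -f flags.tmp
FORCE:

# Objects that care about the flags depend on flags.mk:
FileWithIfdefFoo.o: FileWithIfdefFoo.cpp flags.mk
	g++ $(CPPFLAGS) -c $< -o $@

Because flags.mk's timestamp only changes when the flags actually change, only the dependent objects rebuild, and make's makefile-remaking pass picks up the new flags automatically.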
make triggers on date-time stamps on files. A prerequisite being newer than the file that depends on it triggers a recompile. You'll have to put your definition for each option in a separate .h file and ensure that those dependencies are represented in the makefile. Then if you change an option, the files dependent on it will be recompiled automatically.
If it takes into account include files that include files you won't have to change the structure of the source. You could include a "BuildSettings.h" file that included all the individual settings files.
The only tough problem would be if you made it smart enough to parse the include guards. I've seen problems with compilation because of include file name collisions and order of include directory searches.
Now that you mention it I should check and see if my IDE is smart enough to automatically create those dependencies for me. Sounds like an excellent thing to add to an IDE.

Keeping all libraries in the Arduino sketch directory

I know that you are supposed to place any external libraries under the "libraries" folder of the Arduino install directory, but I have a project that uses several libraries I created for the project, mainly to keep all that code self-contained and out of the main pde file. However, I have tried placing the libraries in the same directory as the main PDE file so that I can more easily keep everything synced up in Subversion (I work on this on multiple computers), and I don't want to have to keep going back and syncing up the libraries separately. Also, just for the sake of being able to easily zip up the sketch folder and know that it contains everything it needs.
I've tried adding the header files to the sketch as a new tab, but that doesn't seem to work at all... I don't even care if they show up in the Arduino IDE.
I've also tried adding the libraries to the sketch directory in subdirectories (what I would greatly prefer) and then linking to them as:
#include "mylib/mylib.h"
and
#include <mylib/mylib.h>
But both of these result in file not found errors.
Is this possible? And, if so, how do I include them in the main file for building? Preferably in their own subdirectories.
I had the same issue. I solved it for Arduino IDE > 1.8; this seems to be a peculiarity of newer IDEs, according to the reference (see the link at the bottom).
You have to add a "src" subdirectory before creating a library folder. So essentially your project should look like this:
/SketchDir (with *.ino file)
/SketchDir/src
/SketchDir/src/yourLib (with .h and .cpp file)
and finally in your sketch you reference:
#include "src/yourLib/yourLib.h"
Otherwise (in my case, if the "src" folder is missing) I get an error message saying that it cannot find the yourLib.cpp file.
Note: I am using a Windows system, in case that makes a difference, and actually VS Code as a wrapper for the Arduino IDE. But both IDEs compile it with this structure.
References:
https://forum.arduino.cc/index.php?topic=445230.0
For the sketches I have, the "*.h" and "*.cpp" library files actually reside in the same folder as the sketch, and I include them like "someheader.h". I also noticed that if I go into the Sketch menu and Add File..., the file doesn't appear until I close and reopen the sketch.
I agree with you; this is an intolerable way to develop software: it requires every file that you need to be in the same directory as the main program!
To get around this, I use make to put together a single .h file from my .h and .cpp sources - you can see this used in this Makefile:
PREPROCESS=gcc -E -C -x c -iquote ./src
# -E : Stop after preprocessing.
# -C : Don't discard comments.
# -x c : Treat the file as C code.
# -iquote ./src : Use ./src for the non-system include path.

TARGETS=sketches/morse/morse.h

all: $(TARGETS)

clean:
	rm $(TARGETS)

%.h: %.h.in
	$(PREPROCESS) $< -o $@
Arduino is very picky about file extensions: if you put a .cpp or .cc file in its directory it automatically uses it in the build, and you can't include anything that's not a .cpp, .cc or .h, so this is about the only way to do it.
I use a similar trick also to put together JavaScript files here.
This requires that you run make after editing your files, but since I'm using an external editor (Emacs) anyway, this is zero hassle for me.
Unfortunately the Arduino IDE is awful and shows no signs of improving. There is no real build system so it only lets you build programs that reside in a single directory.
The only real solution is to write a makefile, then you can use a real IDE. I'm hopeful that one day someone will write an Arduino plugin for QtCreator.
Here's an example makefile:
http://volker.top.geek.nz/arduino/Makefile-Arduino-v1.8
I just had this same problem (I also like to keep the code self-contained), so I'll just jot down some notes; say I have a MyPdeSketch.pde using MyLibClass.cpp; then I have it organized like this
/path/to/skdir/MyPdeSketch/MyPdeSketch.pde
/path/to/skdir/MyPdeSketch/MyLibClass/MyLibClass.cpp
/path/to/skdir/MyPdeSketch/MyLibClass/MyLibClass.h
(In principle, /path/to/skdir/ here is equivalent to ~/sketchbook/)
What worked for me is something like:
mkdir /path/to/arduino-0022/libraries/MyLibClass
ln -s /path/to/skdir/MyPdeSketch/MyLibClass/MyLibClass.* /path/to/arduino-0022/libraries/MyLibClass/
After restarting the IDE, MyLibClass should show up under Sketch/Import Library.
Note that the only way I can see so far for a library class file to refer to other library files is to include them relatively (from 'current location'), assuming they are all in the same main arduino-0022/libraries folder (possibly related Stack Overflow question: Is it possible to include a library from another library using the Arduino IDE?).
Otherwise, it should also be possible to symlink the MyLibClass directory directly into arduino-0022/libraries (instead of manually making a directory, and then symlinking the files). For the same reason, symlinking to the alternate location ~/sketchbook/libraries could also be problematic.
Finally, a possibly better organization could be:
/path/to/skdir/MyLibClass/MyLibClass.cpp
/path/to/skdir/MyLibClass/MyLibClass.h
/path/to/skdir/MyLibClass/MyPdeSketch/MyPdeSketch.pde
... which, after symlinking to libraries, would force MyPdeSketch to show under the examples for the MyLibClass library in Arduino IDE (however, it may not be applicable if you want to self-contain multiple class folders under a single directory).
EDIT: or just use a Makefile, which would work directly with avr-gcc, bypassing the Arduino IDE (in which case the sketchbook file organization can be somewhat loosened).
I think I know what you need exactly.
You have a project folder, say MYPROJ_FOLDER, and you want it to include a libraries folder that contains more child folders for your custom libraries.
You need to do the following:
1- create folders as follows:
-MyProjFolder
-MyProjFolder/MyProjFolder
and then create a file with the folder's name and the .ino extension:
-MyProjFolder/MyProjFolder/MyProjFolder.ino
2- create libraries folder:
-MyProjFolder/libraries <<<<< the name is not optional; it must be called exactly that.
3- then create your own libraries
-MyProjFolder/libraries/lib1
-MyProjFolder/libraries/lib1/lib1.cpp
-MyProjFolder/libraries/lib1/examples <<<< this is a folder
-MyProjFolder/libraries/lib1/examples/example1
Repeat step 3 as many times as you want.
Also check http://arduino.cc/en/Guide/Libraries
I did it a little differently. Here is my setup.
Visually this is the directory layout
~/Arduino/Testy_app/ <- sketch dir
/Testy_app.ino <- has a #include "foo.h"
/foo <- a git repo
/foo/foo.h
/foo/foo.cpp
Here is how I build:
~/Arduino/Testy_App/$ arduino-cli compile --library "/home/davis/Arduino/Testy_app/foo/" --fqbn arduino:samd:mkrwan1310 Testy_app
If you wish to be more elaborate and specify libs and src dirs, this also works
~/Arduino/Testy_app/ <- sketch dir
/Testy_app.ino <- has a #include "foo.h"
/lib <- a git repo
/lib/foo/src/foo.h
/lib/foo/src/foo.cpp
and the build method is:
~/Arduino/Testy_App/$ arduino-cli compile --library "/home/davis/Arduino/Testy_app/lib/foo/src" --fqbn arduino:samd:mkrwan1310 Testy_app
One more bit of tweaking is needed if the library sources need to include headers from the sketch directory. If you need to do that, this is the workaround:
~/Arduino/Testy_app/ <- sketch dir
/Testy_app.ino <- has a #include "foo.h"
/inc/Testy_app.h
/foo <- a git repo
/foo/foo.h
/foo/foo.cpp <- has a #include "Testy_app.h"
Then do the compile like this
~/Arduino/Testy_App/$ arduino-cli compile \
--library "/home/davis/Arduino/Testy_app/inc" \
--library "/home/davis/Arduino/Testy_app/foo/src" \
--fqbn arduino:samd:mkrwan1310 Testy_app
What has worked for me is to create a dir, for example "src", under the sketch dir, and under that a dir for each personal library.
Example:
I have a project called ObstacleRobot; under that, a folder for my sketch named obstaclerobot (automatically created by the IDE); and in there my sketch obstacleRobot.ino.
Up to now we have:
/ObstacleRobot
/obstaclerobot
obstacleRobot.ino
Then I wanted to include a personal library that was entirely specific to this project, so it made no sense to put it in the IDE libraries. In fact I want to do this for each part of the robot, but I'm still working on it.
What in the end worked for me was:
/ObstacleRobot
/obstaclerobot
obstacleRobot.ino
/src
/Sonar
Sonar.h
Sonar.cpp
Then what you have to do in the main sketch is to write the include as follows:
#include "src/Sonar/Sonar.h"
And that's all.
Following the lines of Hefny, make your project an example for your library.
For example (Unix env), let's say the libraries are in ~/arduino/libraries.
You create your project ~/arduino/libraries/MyProject, and your libraries go there (for example ~/arduino/libraries/MyProject/module1.h, ~/arduino/libraries/MyProject/module1.cpp, ~/arduino/libraries/MyProject/module2.h, ~/arduino/libraries/MyProject/module2.cpp).
Now:
mkdir -p ~/arduino/libraries/MyProject/examples/myproj
edit ~/arduino/libraries/MyProject/examples/myproj/myproj.ino
(note that this is not examples/myproj.ino but examples/myproj/myproj.ino)
Restart the IDE, and you should find your project in the menu File/Examples/MyProject.
Also note that you do the include with angle brackets, e.g. #include <module1.h>.
Why don't we just write a script with a single copy command, copying our libs from wherever our library is located into the Arduino IDE library folder?
This way we keep the file structure we want and use the IDE library requirements without fuss.
Something like this works for me:
cp -r mylibs/* ~/Documents/programs/arduino-1.5.8/libraries/.
Note that the paths are relative to my own file structure.
Hope this helps someone. That includes my future self, who I bet will be reading this in the near future... as usual!
J

Help with rake dependency mapping

I'm writing a Rakefile for a C++ project. I want it to identify #includes automatically, forcing the rebuilding of object files that depend on changed source files. I have a working solution, but I think it can be better. I'm looking for suggestions for:
Suggestions for improving my function
Libraries, gems, or tools that do the work for me
Links to cool C++ Rakefiles that I should check out that do similar things
Here's what I have so far. It's a function that returns the list of dependencies given a source file. I feed in the source file for a given object file, and I want a list of files that will force me to rebuild my object file.
def find_deps( file )
  deps = Array.new
  # Find all include statements
  cmd = "grep -r -h -E \"#include\" #{file}"
  includes = `#{cmd}`
  includes.each_line do |line|
    dep = line[ /\.\/(\w+\/)*\w+\.(cpp|h|hpp)/ ]
    unless dep.nil?
      deps << dep # Add the dependency to the list
      deps += find_deps( dep )
    end
  end
  return deps
end
I should note that all of my includes look like this right now:
#include "./Path/From/Top/Level/To/My/File.h" // For top-level files like main.cpp
#include "../../../Path/From/Top/To/My/File.h" // Otherwise
Note that I'm using double quotes for includes within my project and angle brackets for external library includes. I'm open to suggestions on alternative ways to do my include pathing that make my life easier.
Use the gcc command to generate a Make dependency list instead, and parse that:
g++ -M -MM -MF - inputfile.cpp
See man gcc or info gcc for details.
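For example (illustrative paths), -MM alone prints a ready-made dependency rule on stdout, which is easy to capture from a Rakefile with backticks:

g++ -MM -I. src/main.cpp
# prints something like:
#   main.o: src/main.cpp src/Path/From/Top/To/My/File.h ...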
I'm sure there are different schools of thought with respect to what to put in #include directives. I advise against putting the whole path in your #includes. Instead, set up the proper include paths in your compile command (with -I). This makes it easier to relocate files in the future and more readable (in my opinion). It may sound minor, but the ability to reorganize as a project evolves is definitely valuable.
Using the preprocessor (see @greyfade's answer) to generate the dependency list has the advantage that it will expand the header paths for you based on your include dirs.
Update: see also the Importing Dependencies section of the Rakefile doc for a library that reads the makefile dependency format.