Import scons variables from file

Does scons support including variables from an external file?
In short, I have a bunch of settings, variables, etc., that I want to make available to a number of Makefiles in a large project of mine. It's easy enough for the root-level Makefile to simply source/include the file.
However, some sub-projects use scons, and the only documentation I found on the topic notes that a top-level SConscript needs to create and pass down the variables to sub-projects.
My goal is to have a simple file full of variables (mainly paths to compilers) and tell scons to just import the variable key/value pairs. The one SO post I've found on this topic notes that the file must be Python code rather than a Makefile, so I'd potentially need to write a script to convert the Makefile to Python code.

If you check the scons manpage ( http://scons.org/doc/production/HTML/scons-man.html search for "Variables(" ),
you'll see that Variables() can take a file, but that file must be Python, so on its own it likely won't solve your problem. It is possible to have that file be a format which is both valid Python and valid make, though that is highly dependent on the contents.
Failing that, it should be fairly simple to parse a simple makefile, assuming its contents are something like:
xyz = some values
abc := some other values
If it contains
xyz = some values
abc := $(xyz) and more
it will get far more complicated, since you would also have to expand make's $(xyz) references yourself.
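For the simple case, a rough sketch in Python (untested; the file name build_settings.mk and the variable names are illustrative) that an SConstruct could call:
import re

def parse_make_vars(path):
    # Collect simple "name = value" and "name := value" assignments
    make_vars = {}
    with open(path) as f:
        for line in f:
            m = re.match(r'\s*([A-Za-z_]\w*)\s*:?=\s*(.*)', line)
            if m:
                make_vars[m.group(1)] = m.group(2).strip()
    return make_vars

# In an SConstruct, something like:
#   settings = parse_make_vars('build_settings.mk')
#   env = Environment(CXX=settings.get('CXX', 'g++'))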

Related

How can I make a C++ program read predefined files after installation on Linux

My project folder has the following structure:
Project/
  src/
    Main.cpp
    MyReader.cpp
  headers/
    MyReader.h
  DataFiles/
    File.dat
    File1.dat
My class Object.cpp has a couple of methods which read from File.dat and File1.dat and parse the information into Map objects. My problem is that I am using Autotools (with which I'm a complete newbie) to generate the configure and installer files, and I don't know how to make the DataFiles contents accessible to the program after installation. The program doesn't work properly because the code fails when trying to read those files through relative paths. Locally, the program runs perfectly when executed from the terminal with make && ./program.
How can I solve this issue? Thanks in advance for your help!
A platform-independent way to do this with Autotools is to use the $(datadir) variable to locate the system data directory and work relative to that.
So in your Makefile.am file you can create a name like this:
myprog_infodir = $(datadir)/myprog
# Set a macro for your code to use
myprog_CXXFLAGS = -DDATA_LOCATION=\"$(datadir)/myprog\"
# This will install it from the development directories
myprog_info_DATA = $(top_srcdir)/DataFiles/File.dat $(top_srcdir)/DataFiles/File1.dat
# make sure it gets in the installation package
EXTRA_DIST = $(top_srcdir)/DataFiles/File.dat $(top_srcdir)/DataFiles/File1.dat
Then in your program you should be able to refer to the data like this:
std::ifstream ifs(DATA_LOCATION "/File.dat");
Disclaimer: Untested code
I figured out one method and will give my example here:
In my Makefile.am
AM_CPPFLAGS = -D MATRIXDIR="\"$(pkgdatadir)/matrix\""
nobase_dist_pkgdata_DATA = matrix/AAcode.txt \
matrix/BLOSUM50 matrix/BLOSUM70.50 matrix/BLOSUM100 matrix/BLOSUM50.50 \
matrix/BLOSUM75 matrix/BLOSUM100.50 matrix/BLOSUM55 matrix/BLOSUM75.50 \
... more not shown
I put quite a number of data files in the matrix directory; only a few are shown here. In my source file, I simply use the macro MATRIXDIR:
scorematrix.cpp:string MatrixScoreMethod::default_path=MATRIXDIR;
This seems to work well for me. You can use other versions of the automake data variables, such as dist_data_DATA instead of the pkgdata form, but using pkgdata is a good idea so that your data does not get mixed in with other packages'. The nobase_ prefix tells automake not to strip the matrix directory during install. The escaped double quotes seem to be needed so that the macro expands to a C string literal and you don't get compiler errors.

Setting and using path to data directory with GNU AutoTools

I am trying to use GNU Autotools for my C++ project. I have written configure.ac, Makefile.am, etc. I have some files that are used by the program during execution, e.g. template files, an XML schema, etc., so I install/copy these files alongside the executable, for which I use something like:
abcdir = $(bindir)/../data/abc/
abc_DATA = ../data/knowledge/abc.cc
Now it copies the file correctly, and my program's installation structure looks something like the following:
<installation_dir>/bin/<executableFile>
<installation_dir>/data/abc/abc.cc
Now the problem is that in the source code I actually use these files (abc.cc etc.), and to open them I need the path where they reside. One solution is to define (using AC_DEFINE) some variable, e.g. _ABC_PATH_, that points to the installation path, but how exactly do I do that? Or is there a better way? For example, in the source code I do something like:
...
ifstream input(<path-to-abc-folder> + "abc.cc"); // how to find <path-to-abc-folder>?
..
The AC_DEFINE solution is fine in principle, but it requires shell-like variable expansion to take place. That is, _ABC_PATH_ would be defined as the literal, unexpanded string "${bindir}/../data/abc", not as the final path.
One way is to define the path via a -D flag, which is expanded by make:
myprogram_CPPFLAGS += -D_ABC_PATH='\"${abcdir}\"'
which works fine in principle, but you then have to include config.status in the dependencies of myprogram.
If you have a number of such substitution variables, you should roll out a paths.h file that is generated at build time by a rule like:
paths.h : $(srcdir)/paths.h.in config.status
	sed -e 's:#ABC_PATH#:${abcdir}:' $< > $@
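For reference, the matching paths.h.in would just carry the placeholder token that the sed rule replaces (a sketch; the macro name is an assumption):
/* paths.h.in: #ABC_PATH# is replaced with the value of ${abcdir} */
#define ABC_PATH "#ABC_PATH#"
Code can then treat ABC_PATH as an ordinary string literal, e.g. std::ifstream input(ABC_PATH "/abc.cc");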
As a side note, you do know about ${prefix} and ${datarootdir} and friends, don't you? If not, better read up on them; ${bindir}/.. is not necessarily equal to ${prefix} if the user set ${exec_prefix}.

Best practice for dependencies on #defines?

Is there a best practice for supporting dependencies on C/C++ preprocessor flags like -DCOMPILE_WITHOUT_FOO? Here's my problem:
> setenv COMPILE_WITHOUT_FOO
> make
<Make system reads environment, sets -DCOMPILE_WITHOUT_FOO>
<Compiles nothing, since no source file has changed>
What I would like to do is have all files that rely on #ifdef statements get recompiled:
> setenv COMPILE_WITHOUT_FOO
> make
g++ FileWithIfdefFoo.cpp
What I do not want is to have to recompile everything if the value of COMPILE_WITHOUT_FOO has not changed.
I have a primitive Python script working (see below) that basically writes a header file FooDefines.h and then diffs it against the existing one to see if anything is different. If it is, it replaces FooDefines.h, and then the conventional source-file dependency takes over. The define is not passed on the command line with -D. The disadvantage is that I now have to include FooDefines.h in any source file that uses the #ifdef, and I also get a new, dynamically generated header file for every #ifdef. If there's a tool to do this, or a way to avoid using the preprocessor, I'm all ears.
import os, sys

def makeDefineFile(filename, text):
    tmpDefineFile = "/tmp/%s%s" % (os.getenv("USER"), filename)  # Use os.tempnam?
    existingDefineFile = filename

    # Write the candidate contents to a temporary file
    output = open(tmpDefineFile, 'w')
    output.write(text)
    output.close()

    # diff exits non-zero if the files differ or the target doesn't exist
    status = os.system("diff -q %s %s" % (tmpDefineFile, existingDefineFile))

    def checkStatus(status):
        failed = False
        if os.WIFEXITED(status):
            # Check return code
            returnCode = os.WEXITSTATUS(status)
            failed = returnCode != 0
        else:
            # Caught a signal, coredump, etc.
            failed = True
        return failed, status

    # If we failed for any reason (file didn't exist, different, etc.)
    if checkStatus(status)[0]:
        # Copy our tmp into the new file
        status = os.system("cp %s %s" % (tmpDefineFile, existingDefineFile))
        failed, status = checkStatus(status)
        print failed, status
        if failed:
            print "ERROR: Could not update define in makeDefine.py"
            sys.exit(status)
This is certainly not the nicest approach, but it would work:
find . \( -name '*.cpp' -o -name '*.h' \) -exec grep -l COMPILE_WITHOUT_FOO {} \; | xargs touch
That will look through your source code for the macro COMPILE_WITHOUT_FOO, and "touch" each file, which will update the timestamp. Then when you run make, those files will recompile.
If you have ack installed, you can simplify this command:
ack -l --cpp COMPILE_WITHOUT_FOO | xargs touch
I don't believe that it is possible to determine this automagically. Preprocessor directives don't get compiled into anything. Generally speaking, I expect to do a full recompile if I depend on a define, DEBUG being a familiar example.
I don't think there is a right way to do it. If you can't do it the right way, then the dumbest way possible is probably your best option: a text search for COMPILE_WITHOUT_FOO, creating the dependencies that way. I would classify this as a shenanigan, and if you are writing shared code I would recommend seeking pretty significant buy-in from your coworkers.
CMake has some facilities that can make this easier. You would create a custom target to do this. You may trade problems here though, maintaining a list of files that depend on your symbol. Your text search could generate that file if it changed though. I've used similar techniques checking whether I needed to rebuild static data repositories based on wget timestamps.
Cheetah is another tool which may be useful.
If it were me, I think I'd do full rebuilds.
Your problem seems tailor-made for autoconf and autoheader, writing the values of the variables into a config.h file. If that's not possible, consider reading the -D directives from a file and writing the flags into that file.
Under all circumstances, you have to avoid builds that depend only on environment variables: you have no way of telling when the environment has changed, so the variables definitely need to be stored in a file. The cleanest way is autoconf, autoheader, and one source tree with multiple build trees; the second-cleanest is re-running configure for each switch of compile context; and the third-cleanest is a file containing all mutable compiler switches, on which every object that depends on those switches itself depends.
If you choose to implement the third way, remember not to update this file unnecessarily, e.g. by constructing it in a temporary location and copying it over conditionally on diff; make rules will then rebuild your files only when the flags actually change.
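A minimal shell sketch of that conditional update (the file name flags.h is illustrative):
# build the candidate header in a temporary location
echo "#define COMPILE_WITHOUT_FOO ${COMPILE_WITHOUT_FOO}" > flags.h.tmp
# overwrite the real header only if the contents differ,
# so its timestamp moves only on a real change
cmp -s flags.h.tmp flags.h || cp flags.h.tmp flags.h
rm -f flags.h.tmp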
One way to do this is to store each #define's previous value in a file, and use conditionals in your makefile to force an update of that file whenever the current value doesn't match the previous one. Any file which depends on that macro would list the value file as a dependency.
Here is an example. It will update file.o if either file.c has changed or the variable COMPILE_WITHOUT_FOO differs from last time. It uses $(shell ...) to compare the current value with the value stored in the file envvars/COMPILE_WITHOUT_FOO. If they differ, it creates a rule for that file which depends on force, which is always updated.
file.o: file.c envvars/COMPILE_WITHOUT_FOO
	gcc -c -DCOMPILE_WITHOUT_FOO=$(COMPILE_WITHOUT_FOO) $< -o $@

ifneq ($(strip $(shell cat envvars/COMPILE_WITHOUT_FOO 2> /dev/null)),$(strip $(COMPILE_WITHOUT_FOO)))
force: ;
envvars/COMPILE_WITHOUT_FOO: force
	echo "$(COMPILE_WITHOUT_FOO)" > envvars/COMPILE_WITHOUT_FOO
endif
If you want to support having macros undefined, you will need to use the ifdef or ifndef conditionals, and have some indication in the file that the value was undefined the last time it was run.
Jay pointed out that "make triggers on date time stamps on files".
Theoretically, you could have your main makefile, call it m1, include variables from a second makefile called m2. m2 would contain a list of all the preprocessor flags.
You could have a make rule for your program depend on m2 being up-to-date.
The rule for making m2 would be to import all the environment variables (and thus the #define directives).
The trick would be that the rule for making m2 detects whether anything differs from the previous version. If so, it would set a variable that forces a "make all" and/or "make clean" for the main target; otherwise, it would just update the timestamp on m2 and not trigger a full remake.
Finally, the rule for the normal target (make all) would source the preprocessor directives from m2 and apply them as required.
This sounds easy/possible in theory, but in practice GNU make is much harder to bend to this type of thing. I'm sure it can be done, though; a sketch follows.
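A minimal sketch of the idea (untested; m2.mk and CPPFLAGS_DEFS are placeholder names, and recipe lines must be indented with a tab):
# -include tolerates m2.mk being absent on the very first run
-include m2.mk

m2.mk: FORCE
	@echo 'CPPFLAGS_DEFS := $(if $(findstring environment,$(origin COMPILE_WITHOUT_FOO)),-DCOMPILE_WITHOUT_FOO)' > m2.tmp
	@cmp -s m2.tmp m2.mk || cp m2.tmp m2.mk
	@rm -f m2.tmp

FORCE:
GNU make re-executes itself when an included makefile comes out of its rule with a newer timestamp, so targets that list m2.mk as a prerequisite rebuild only when the flags actually changed.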
make triggers on the date-time stamps of files: a dependency being newer than the file that depends on it triggers a recompile. You'll have to put your definition for each option in a separate .h file and ensure that those dependencies are represented in the makefile. Then if you change an option, the files dependent on it will be recompiled automatically.
If your dependency tracking takes into account include files that include other include files, you won't have to change the structure of the source: you could include a "BuildSettings.h" file that includes all the individual settings files, as sketched below.
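A sketch of that umbrella header (FooDefines.h is from the question above; BarDefines.h is a hypothetical second settings file):
/* BuildSettings.h -- single include point for all the settings headers */
#include "FooDefines.h"
#include "BarDefines.h"
Sources then include only BuildSettings.h, and ordinary header dependency tracking does the rest.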
The only tough problem would be if you made it smart enough to parse the include guards. I've seen problems with compilation because of include file name collisions and order of include directory searches.
Now that you mention it I should check and see if my IDE is smart enough to automatically create those dependencies for me. Sounds like an excellent thing to add to an IDE.

How to create a makefile without having to include every single source file in the project?

I'm taking some C++ classes and have to send the teacher 9 exercises. Each exercise is a simple directory with the name 'ex$', where $ is the number, and each directory has a single source file named 'ex$.cpp'. I want to create a makefile that will allow me to type:
make ex$
And it will build an executable that corresponds to the compiled source file inside the 'ex$' directory. The catch is that I want to do that without creating a target for each exercise (some kind of 'generic target'). I also need an 'all' target that will go into each directory starting with 'ex' and build the executable there. How can I do that?
If all your C++ targets can be built with essentially the same command, you can do this fairly easily. Read this. Look for $@, in particular. Since this is part of an education, I'll leave the rest vague.
Can I also suggest looking at CMake, which will make better makefiles for you to use, IMO. Initial high learning curve for major long-term gain. :)

How to bundle C/C++ code with C-shell-script?

I have a C shell script that calls two C programs, one after the other, with some file handling before, in between, and afterwards.
Now, as such I have three different files - one C shell script and 2 .c files.
I need to give this script to other users. The problem is that I have to distribute three files, which the users must keep in the same folder in order to execute the script.
Is there some better way to do this?
[I know I can make one C code file out of those two... but I will still be left with a shell script and a C file. Actually, the two C programs do entirely different things... so I want them to be separate.]
Sounds like you're worried that your users aren't savvy enough to figure out how to resolve issues like "command not found" errors and the like. If you absolutely MUST hide the "complexity" of a collection of files, you could have your script create the other files. In most other circumstances I would suggest that this approach is only going to increase your support workload, since semi-experienced users are less likely to know how to troubleshoot the process.
If you choose to rely on the presence of a compiler on the system you are running on, you can store the C code inside the script as a series of cat here-document commands that write out your two C files, which you then compile and use.
If you would rather use pre-compiled programs instead, the same basic process can be used, except use xxd both to generate the hex strings embedded in your script and to reverse the conversion, giving you working binaries. Note: remember to chmod the binary so that it is executable. A sketch:
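(file names are placeholders)
# at packaging time: turn the binary into hex text for embedding
xxd -p myprog > myprog.hex
# in the distributed script: turn the hex text back into a binary
xxd -r -p myprog.hex > myprog
chmod +x myprog
./myprog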
Use the shar command to create a self-extracting archive.
Or, better yet, use unzipsfx with the AUTORUN option.
This provides users with ONE file and only ONE command to execute (as opposed to one for untarring and one for execution).
NOTE: The unzip command it runs should use the "-n" option; that way only the first run extracts the files, and subsequent runs skip the extraction.
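For the shar route, a minimal sketch (file names are placeholders; shar is from GNU sharutils):
# bundle the script and sources into one self-extracting shell archive
shar runme.csh prog1.c prog2.c > bundle.shar
# the recipient runs it with:
sh bundle.shar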
Use a zip or tar file? And you do realize that .c files aren't executable; you need to compile and link them first?
You can include the C code inside the shell script as a here document:
#!/bin/bash
# write the embedded source out; quoting 'EOF' keeps $ intact in the C code
cat > code.c << 'EOF'
/* your C source goes here */
EOF
# compile
cc -o code code.c
# execute
./code
If you want to get fancy, you can test for the existence of the executable and skip compiling if it already exists, along these lines:
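# a sketch: rebuild only when the binary is missing
if [ ! -x ./code ]; then
    cc -o code code.c
fi
./code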
If you are doing much shell programming, the rest of the Advanced Bash-Scripting Guide is worth looking at as well.