Dynamic linking: change of the linking path - C++

Normally, the path of a library that has to be linked dynamically is either defined in LD_LIBRARY_PATH or passed with the -L flag while creating the binary.
Now suppose the binary has been built and deployed at the client's site, and then the path of one of the dynamically linked libraries changes.
In that case we would need to supply a new makefile to all the clients where the binary was deployed.
Is there any other method, so that we need not tell all the clients to change their makefiles, and the change can be handled in the code itself?
If yes, could anybody please suggest how?
This was, ironically, an interview question that was asked to me and I did not have the answer for it.
EDIT: I was specifically asked what can be done in the code without touching the makefile.

Usually you should only need to change LD_LIBRARY_PATH, unless this is related to a compilation with a hard-coded search path: rpath.

Maybe the interviewers wanted to know about dlopen and dlsym?
http://linux.die.net/man/3/dlsym
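For example, here is a minimal sketch of loading a library from a path chosen at run time with dlopen and resolving a symbol with dlsym, so nothing in the makefile has to change when the library moves. The MYLIB_PATH variable, the default path, and the do_work function are made up for illustration; link with -ldl on glibc-based systems.
#include <dlfcn.h>
#include <cstdio>
#include <cstdlib>

int main() {
    // The path can come from a config file or an environment variable,
    // so no makefile change is needed when the library moves.
    const char *libPath = std::getenv("MYLIB_PATH");           // hypothetical variable
    if (!libPath) libPath = "/opt/myapp/lib/libmylib.so";      // hypothetical default

    void *handle = dlopen(libPath, RTLD_NOW);
    if (!handle) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    // Look up a function exported by the library (signature assumed here).
    typedef int (*do_work_fn)(int);
    do_work_fn do_work = reinterpret_cast<do_work_fn>(dlsym(handle, "do_work"));
    if (!do_work) {
        std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    std::printf("do_work returned %d\n", do_work(42));
    dlclose(handle);
    return 0;
}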

Use an environment variable, like MYLIBPATH, and use that variable in your makefile rather than a hard-coded value.
Then every client can have their own directory structure, and as long as they correctly specify MYLIBPATH, your program builds fine.
Alternatively, you can look for the library in your makefile, like this:
LIBPATH = $(shell find / -name libmylib.a -exec dirname {} ";" -quit)
myprog: myprog.c
	$(CC) myprog.c -lmylib -L$(LIBPATH)
EDIT: replaced locate with find, which returns just the first match thanks to the -quit option.

Related

How to find "my" lib directory?

I'm developing a C++ program under Linux. I want to put some stuff (to be specific, LLVM bitcode files, but that's not important) in libraries, so I want the following directory structure:
/somewhere/bin/myBin
/somewhere/lib/myLib.bc
How do I find the lib directory? I tried to compute a relative path from argv[0], but if /somewhere is in my PATH, argv[0] will just contain myBin. Is there some way to get this path? Or do I have to set it at compile time?
How do GNU autotools deal with this? What happens exactly if I supply the --prefix option to ./configure?
Edit: The word library is a bit misleading in my case. My library consists of LLVM bitcode, so it's not an actual (shared) object file, just a file I want to open from my program. You can think of it as an image or text file.
Maybe what you want is:
/usr/lib
Unix directory reference: http://www.comptechdoc.org/os/linux/usersguide/linux_ugfilestruct.html
Assume your lib directory is "../lib" relative to the executable.
First you need to identify where myBin is located; you can get that by reading /proc/self/exe.
Then concatenating your binary's directory with "../lib" gives you the lib directory.
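A minimal sketch of that approach (error handling kept short; the bin/../lib layout and the libDirectory name are just the assumption from above):
#include <unistd.h>   // readlink
#include <limits.h>   // PATH_MAX
#include <string>
#include <iostream>

// Returns the lib directory, assuming the /somewhere/bin/myBin and
// /somewhere/lib layout described in the question.
std::string libDirectory() {
    char buf[PATH_MAX];
    ssize_t len = readlink("/proc/self/exe", buf, sizeof(buf) - 1);
    if (len < 0)
        return "";                        // could not resolve the executable path
    buf[len] = '\0';                      // readlink does not null-terminate

    std::string exePath(buf);             // e.g. /somewhere/bin/myBin
    std::string binDir = exePath.substr(0, exePath.find_last_of('/'));
    return binDir + "/../lib";            // e.g. /somewhere/bin/../lib
}

int main() {
    std::cout << "lib dir: " << libDirectory() << '\n';
}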
You will have to use a compiler flag to tell the program where to look. For example, if you have a plugin dir:
# Makefile.am
AM_CPPFLAGS = -DPLUGIN_DIR=\"${pkglibdir}\"
bin_PROGRAMS = awesome_prog
pkglib_LTLIBRARIES = someplugin.la
The list of directories to be searched is stored in the file /etc/ld.so.conf.
In Linux, the environment variable LD_LIBRARY_PATH is a colon-separated set of directories where libraries should be searched for first, before the standard set of directories; this is useful when debugging a new library or using a nonstandard library for special purposes.
LD_LIBRARY_PATH is handy for development and testing:
$ export LD_LIBRARY_PATH=/path/to/dir/containing/mylib
$ ./myprogram
Addressing only the portion of the question "How do GNU autotools deal with this?"...
When you assign a --prefix to configure, basically two things happen: 1) it instructs the build system that everything is to be installed in ${prefix}, and 2) it looks in ${prefix}/share/config.site for any additional information about how the system is set up (it is common for that file not to exist). It does absolutely nothing to help find libraries, but depends on the user having set up the toolchain properly. If you want to use a library in /foo/lib, you must have your toolchain set up to look there (e.g. by putting /foo/lib in /etc/ld.so.conf, or by putting -L/foo/lib in LDFLAGS and /foo/lib in LD_LIBRARY_PATH).
The configure script relies on you to have the environment set up. It does not help you set up that environment, but does help by alerting you that you have not done so.
You could use the readlink system call on /proc/self/exe to get the path of your executable. You might then use realpath etc.

CXXSources-- what are they?

I'm new to compiling C/C++ with the aid of make. I downloaded an open source project and noticed that the makefile has CXXSources and CXXObjects in it. I think I understand roughly what the makefile is doing with them but...
I don't have any of the source files listed under CXXSources. Are these dependencies I'm supposed to know how to find? Is there any convention as to what CXXSources is versus just Sources?
Added link to project: http://www.fim.uni-passau.de/en/fim/faculty/chairs/theoretische-informatik/projects.html
More specifically, the GML parser, eg. http://www.fim.uni-passau.de/fileadmin/files/lehrstuhl/brandenburg/projekte/gml/gml-parser.tar.gz
It seems to be getting stuck on the line:
gml_to_graph : $(CXXOBJECTS) gml_scanner.o gml_parser.o
	$(CXX) -o gml_to_graph_demo $(CXXOBJECTS) gml_parser.o gml_scanner.o -L$(LEDADIR)/lib -lG -lL -lm
CXXOBJECTS is defined by:
CXXSOURCES = gml_to_graph.cc gml_to_graph_demo.cc
CXXOBJECTS = $(CXXSOURCES:.cc=.o)
So I need gml_to_graph.cc, it seems. Or maybe I'm wrong?
Usually, the variables are set before the point where you see them. This could be
(a) via the environment
(b) before including the quoted makefile
(c) in the quoted makefile, but preceding the location quoted
To see (verbosely) what GNU make takes into account, do:
make -Bn
(it will show everything that would get executed)
Even more verbose:
make -p all
It will show you all the internal variable expansions.
If you post a link or more information, we will be able to come up with less generic (and hence possibly less confusing) answers.

Best practice for dependencies on #defines?

Is there a best practice for supporting dependencies on C/C++ preprocessor flags like -DCOMPILE_WITHOUT_FOO? Here's my problem:
> setenv COMPILE_WITHOUT_FOO
> make
<Make system reads environment, sets -DCOMPILE_WITHOUT_FOO>
<Compiles nothing, since no source file has changed>
What I would like to do is have all files that rely on #ifdef statements get recompiled:
> setenv COMPILE_WITHOUT_FOO
> make
g++ FileWithIfdefFoo.cpp
What I do not want is to have to recompile everything if the value of COMPILE_WITHOUT_FOO has not changed.
I have a primitive Python script working (see below) that basically writes a header file FooDefines.h and then diffs it to see if anything is different. If it is, it replaces FooDefines.h and then the conventional source file dependency takes over. The define is not passed on the command line with -D. The disadvantage is that I now have to include FooDefines.h in any source file that uses the #ifdef, and also I have a new, dynamically generated header file for every #ifdef. If there's a tool to do this, or a way to avoid using the preprocessor, I'm all ears.
import os, sys

def makeDefineFile(filename, text):
    tmpDefineFile = "/tmp/%s%s" % (os.getenv("USER"), filename)  # Use os.tempnam?
    existingDefineFile = filename

    output = open(tmpDefineFile, 'w')
    output.write(text)
    output.close()

    status = os.system("diff -q %s %s" % (tmpDefineFile, existingDefineFile))

    def checkStatus(status):
        failed = False
        if os.WIFEXITED(status):
            # Check return code
            returnCode = os.WEXITSTATUS(status)
            failed = returnCode != 0
        else:
            # Caught a signal, coredump, etc.
            failed = True
        return failed, status

    # If we failed for any reason (file didn't exist, different, etc.)
    if checkStatus(status)[0]:
        # Copy our tmp into the new file
        status = os.system("cp %s %s" % (tmpDefineFile, existingDefineFile))
        failed, status = checkStatus(status)
        print failed, status
        if failed:
            print "ERROR: Could not update define in makeDefine.py"
            sys.exit(status)
This is certainly not the nicest approach, but it would work:
find . -name '*cpp' -o -name '*h' -exec grep -l COMPILE_WITHOUT_FOO {} \; | xargs touch
That will look through your source code for the macro COMPILE_WITHOUT_FOO, and "touch" each file, which will update the timestamp. Then when you run make, those files will recompile.
If you have ack installed, you can simplify this command:
ack -l --cpp COMPILE_WITHOUT_FOO | xargs touch
I don't believe that it is possible to determine automagically. Preprocessor directives don't get compiled into anything. Generally speaking, I expect to do a full recompile if I depend on a define. DEBUG being a familiar example.
I don't think there is a right way to do it. If you can't do it the right way, then the dumbest way possible is probably your best option: a text search for COMPILE_WITHOUT_FOO, and create dependencies that way. I would classify this as a shenanigan, and if you are writing shared code I would recommend seeking pretty significant buy-in from your coworkers.
CMake has some facilities that can make this easier. You would create a custom target to do this. You may trade problems here though, maintaining a list of files that depend on your symbol. Your text search could generate that file if it changed though. I've used similar techniques checking whether I needed to rebuild static data repositories based on wget timestamps.
Cheetah is another tool which may be useful.
If it were me, I think I'd do full rebuilds.
Your problem seems tailor-made to treat it with autoconf and autoheader, writing the values of the variables into a config.h file. If that's not possible, consider reading the "-D" directives from a file and writing the flags into that file.
Under all circumstances, you have to avoid builds that depend on environment variables only; you have no way of telling when the environment changed. There is a definite need to store the variables in a file. The cleanest way would be autoconf, autoheader, and one source tree with multiple build trees; the second-cleanest way, re-configure-ing for each switch of compile context; and the third-cleanest way, a file containing all mutable compiler switches, on which all objects dependent on these switches depend.
When you choose to implement the third way, remember not to update this file unnecessarily, e.g. by constructing it in a temporary location and copying it conditionally on diff; then make rules will be able to conditionally rebuild your files depending on the flags.
One way to do this is to store each #define's previous value in a file, and use conditionals in your makefile to force update that file whenever the current value doesn't match the previous. Any files which depend on that macro would include the file as a dependency.
Here is an example. It will update file.o if either file.c changed or the variable COMPILE_WITHOUT_FOO is different from last time. It uses $(shell ) to compare the current value with the value stored in the file envvars/COMPILE_WITHOUT_FOO. If they are different, then it creates a command for that file which depends on force, which is always updated.
file.o: file.c envvars/COMPILE_WITHOUT_FOO
	gcc -c -DCOMPILE_WITHOUT_FOO=$(COMPILE_WITHOUT_FOO) $< -o $@

ifneq ($(strip $(shell cat envvars/COMPILE_WITHOUT_FOO 2> /dev/null)), $(strip $(COMPILE_WITHOUT_FOO)))
force: ;
envvars/COMPILE_WITHOUT_FOO: force
	echo "$(COMPILE_WITHOUT_FOO)" > envvars/COMPILE_WITHOUT_FOO
endif
If you want to support having macros undefined, you will need to use the ifdef or ifndef conditionals, and have some indication in the file that the value was undefined the last time it was run.
Jay pointed out that "make triggers on date time stamps on files".
Theoretically, you could have your main makefile, call it m1, include variables from a second makefile called m2. m2 would contain a list of all the preprocessor flags.
You could have a make rule for your program depend on m2 being up-to-date.
The rule for making m2 would be to import all the environment variables (and thus the #include directives).
The trick would be that the rule for making m2 would detect whether there was a diff from the previous version. If so, it would set a variable that forces a "make all" and/or "make clean" for the main target; otherwise, it would just update the timestamp on m2 and not trigger a full remake.
Finally, the rule for the normal target (make all) would source the preprocessor directives from m2 and apply them as required.
This sounds easy/possible in theory, but in practice it is much harder to get this type of thing working in GNU Make. I'm sure it can be done, though.
Make triggers on date/time stamps on files: a dependent file being newer than what depends on it triggers a recompile. You'll have to put your definition for each option in a separate .h file and ensure that those dependencies are represented in the makefile. Then if you change an option, the files dependent on it are recompiled automatically.
If it takes into account include files that include other include files, you won't have to change the structure of the source. You could include a "BuildSettings.h" file that includes all the individual settings files.
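A sketch of what that aggregation header might look like (the individual option headers named here are made up for illustration):
// BuildSettings.h - the one header the sources include; it only pulls in
// the per-option headers, so the makefile's normal header dependencies
// take care of rebuilding whatever uses a changed option.
#ifndef BUILD_SETTINGS_H
#define BUILD_SETTINGS_H

#include "FooDefines.h"   // e.g. defines COMPILE_WITHOUT_FOO (hypothetical)
#include "BarDefines.h"   // hypothetical second option file

#endif // BUILD_SETTINGS_H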
The only tough problem would be if you made it smart enough to parse the include guards. I've seen problems with compilation because of include file name collisions and order of include directory searches.
Now that you mention it I should check and see if my IDE is smart enough to automatically create those dependencies for me. Sounds like an excellent thing to add to an IDE.

Does "make" know how to search sub-dirs for include files?

This is a question for experienced C/C++ developers.
I have zero knowledge of compiling C programs with "make", and need to modify an existing application, ie. change its "config" and "makefile" files.
The .h files that the application needs are not located in a single-level directory, but rather, they are spread in multiple sub-directories.
In order for cc to find all the required include files, can I just add a single "-I" switch to point cc to the top-level directory and expect it to search all sub-dirs recursively, or must I add several "-I" switches to list all the sub-dirs explicitly, e.g. -I/usr/src/myapp/includes/1 -I/usr/src/myapp/includes/2, etc.?
Thank you.
This question appears to be about the C compiler driver, rather than make. Assuming you are using GCC, then you need to list each directory you want searched:
gcc -I/foo -I/foo/bar myprog.c
This is actually a compiler switch, unrelated to make itself.
The compiler will search for include files in the built-in system dirs, and then in the paths you provide with the -I switch. However, no automatic sub-directory traversal is made.
For example, if you have
#include "my/path/to/file.h"
and you give -I a/directory as a parameter, the compiler will look for a/directory/my/path/to/file.h.
If the makefiles are written in the usual way, the line that invokes the compiler will use a couple of variables that allow you to customize the details, e.g. not
gcc (...)
but
$(CC) $(CFLAGS) (...)
and if this is the case, and you're lucky, you don't even need to edit any of the makefiles; instead you can invoke make like this
make CFLAGS='-I /absolute-path/to/wherever'
to incorporate your special options into the compiler invocation.
Also check whether the Makefiles aren't generated by something else (usually a script in the top directory called configure, which will have options of its own to control what goes into them).
Everyone answered your question correctly, but something to consider when you get to set up your own source tree: a leaf node should only look in two places for headers, its own directory or up the tree. Once people start going across to peers and down the tree, the build system will get gnarly, but what also happens is that folks will start using private interfaces when they should be using public interfaces.

Can I have one makefile to build a hierarchical project?

I have several hundred files in a non-flat directory structure. My Makefile lists each sourcefile, which, given the size of the project and the fact that there are multiple developers on the project, can create annoyances when we forget to put a new one in or take out the old ones. I'd like to generalize my Makefile so that make can simply build all .cpp and .h files without me having to specify all the filenames, given some generic rules for different types of files.
My question: given a large number of files in a directory with lots of subfolders, how do I tell make to build them all without having to specify each and every subfolder as part of the path? And how do I make it so that I can do this with only one Makefile in the root directory?
EDIT: this almost answers my question, but it requires that you specify all filenames :\
I'm sure a pure-gmake solution is possible, but using an external command to modify the makefile, or generate an external one (which you include in your makefile) is probably much simpler.
Something along the lines of:
all: myprog

find_sources:
	zsh -c 'for x in **/*.cpp; echo "myprog: ${x/.cpp/.o}" >> deps.mk'

include deps.mk
and run
make find_sources && make
note: the exact zsh line probably needs some escaping to work in a make file, e.g. $$ instead of $. It can also be replaced with bash + find.
One way that would be platform-independent (I mean independent of whether the shell is on Windows or Linux) is this:
DIRS = relative/path1 \
       relative/path2
dd = absolute/path/to/subdirectories

all:
	@$(foreach dir, $(DIRS), $(MAKE) -C $(dd)$(dir) build -f ../../Makefile ;)

build:
	... build here
Note that the spaces and also the semicolon are important here. It is also important to specify the absolute paths, and to specify the path to the appropriate Makefile at the end (in this case I am using only one Makefile, in the grandparent folder).
But there is a better approach too, which involves .PHONY targets; it better shows the progress and errors, and it stops the build if one folder has a problem instead of proceeding to the other targets:
.PHONY: subdirs $(DIRS)

subdirs: $(DIRS)

$(DIRS):
	$(MAKE) -C $@ build -f ../../Makefile

all : prepare subdirs
...

build :
	... build here
Again I am using only one Makefile here that is supposed to be applicable to all sub-projects. For each sub-project in the grandchild folder, the "build" target is invoked using the one Makefile in the root.
I would start by using a combination of the wildcard function:
http://www.gnu.org/software/make/manual/make.html#Wildcard-Function
VPATH/vpath
http://www.gnu.org/software/make/manual/make.html#Selective-Search
and the file functions
http://www.gnu.org/software/make/manual/make.html#File-Name-Functions
For exclusion (i.e. backups, as Jonathan Leffler mentioned), use a separate folder not in the vpath for backups, and use good implicit rules.
You will still need to define which folders to go to, but not each file in them.
I'm of two minds on this one. On one hand, if your Make system compiles and links everything it finds, you'll find out in a hurry if someone has left conflicting junk in the source directories. On the other hand, non-conflicting junk will proliferate and you'll have no easy way of distinguishing it from the live code...
I think it depends on a lot of things specific to your shop, such as your source control system and whether you plan to ever have another project with an overlapping code base. That said, if you really want to compile every source file below a given directory and then link them all, I'd suggest simple recursion: to make objects, compile all source files here, add the resultant objects (with full paths) to a list in the top source directory, and recurse into all directories here. To link, use the list.