C++ linking to libraries with makefile (newbie) - c++

I'm trying to understand how to use non-standard libraries in my C++ projects, and I have a few questions.
Let's say I want to use the POCO library. I downloaded it and built it using make (a static build). Now I have a bunch of .o files and .h files. For example, there is a Path.h file and a Path.o file in different directories.
Now I want to use this module in my code, so I include the header with #include "Poco/Path.h". Do I also have to modify my makefile and add Path.o to my target?
What happens when I use the standard library? Is it available only as header files? I know that template code cannot be precompiled, but what about the rest?

Besides the .h and .o files, you will probably also find one or more libXXX.a and/or libXXX.so files. These are the actual library files that your application should link against.
To use the library, you include the relevant headers in your source file, and you change your makefile to tell the linker that it should also link your application against the XXX library.
The typical linker option for that is -lXXX; the linker will look for both libXXX.a and libXXX.so and use whichever seems most appropriate.
The standard library is not really different from external libraries, except that you don't have to specify it explicitly to the linker.
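For example, if the POCO build produced a static library, the link step might look like this (the library name and install path below are illustrative; check what your build actually generated):
g++ main.o -o myapp -L/path/to/poco/lib -lPocoFoundation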

Your question seems to imply that you already have a makefile for your own code. If that's the case, then yes, you should modify the rule for your executable in that makefile. As Bart van Ingen Schenau points out, the POCO makefile probably assembled the object files into libraries such as Poco/Libraries/libPoco.a, so you should use those instead of trying to pick out the individual object files you need. For instance, if right now your rule reads:
foo: foo.o bar.o
    g++ $^ -o $@ -lSomeLibrary
you should change it to
foo: foo.o bar.o
    g++ $^ -o $@ -lSomeLibrary -LPoco/Libraries -lPoco
(The second part of your question, "What happens... What about the rest?" is unclear to me.)
Note: It's a bad idea to #include "Poco/Path.h". This makes your code dependent on a directory structure, something it should not care about. It is much better to #include "Path.h" and tell the compiler where to find it: g++ -c -IPoco ....
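Putting both pieces together, a minimal makefile fragment might look like this (the paths follow the layout assumed above; adjust them to your actual POCO build):
foo.o: foo.cpp
    g++ -c -IPoco foo.cpp -o foo.o
foo: foo.o bar.o
    g++ $^ -o $@ -LPoco/Libraries -lPoco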

Related

Alternative to "--whole-archive" in Bazel

I want to link an external static lib in one of my Bazel-based C++ projects. I need the "whole-archive" option when linking the library, as in this gcc/g++ build:
g++ main.cc -Wl,--whole-archive -lhttp -Wl,--no-whole-archive
Can anybody suggest an alternative to "--whole-archive" in Bazel?
Sadly, alwayslink doesn't work with precompiled libraries, only with a cc_library compiled and linked by Bazel. There is one undocumented hack (I guess I'm just documenting it by mentioning it here): rename the .a file to a .lo file. Then Bazel will link it as a whole archive.
Beware that this is a hack and may stop working without warning. We have plans for some variation of a cc_import rule exactly for this use case, to import a precompiled binary into the workspace with the ability to set whole-archiveness on it. It's just not there yet.
https://bazel.build/versions/master/docs/be/c-cpp.html#cc_library.alwayslink
alwayslink
Boolean; optional; nonconfigurable; default is 0
If 1, any binary that depends (directly or indirectly) on this C++ library will link in all the object files for the files listed in srcs, even if some contain no symbols referenced by the binary. This is useful if your code isn't explicitly called by code in the binary, e.g., if your code registers to receive some callback provided by some service.
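To make the rename hack concrete, a BUILD file using it might look like this (target and file names here are hypothetical, and remember this can break without warning):
# libhttp.a renamed to libhttp.lo so Bazel links it as a whole archive
cc_library(
    name = "http_whole",
    srcs = ["libhttp.lo"],
    linkstatic = 1,
)
cc_binary(
    name = "main",
    srcs = ["main.cc"],
    deps = [":http_whole"],
)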

Undefined reference when trying to use external library

I am trying to incorporate a C library into some Rcpp code.
I can use the C library in a C++ program easily: I 'make' the C library, which creates the .a and .dll files in the /lib folder. I can then use it by including the header in the program and running something like this from the command line:
cc myfile.cpp -o myfile -Ipath.to.header path.to.lib.a -lz
This essentially tells the compiler to take the .cpp program, include headers from -I, and link against the two libraries.
Getting this to work with Rcpp shouldn't be overly difficult if I understand Makevars correctly (which I unfortunately don't seem to).
I add the library to a folder in my package, and in src I add a Makevars and a Makevars.win that look like this:
PKG_CFLAGS=
# specify header location
PKG_CPPFLAGS=-Ipath.to.lib/include
# specify libs to link to
PKG_LIBS=path.to.lib/lib/file.a -lz

# make library
path.to.lib/lib/file.a:
    cd path.to.lib; $(MAKE)
This correctly 'makes' the .a and .dll files for the library; however, none of the Rcpp magic runs (i.e. during the build I never see the g++ system calls that compile the files in src), so "no DLL was created".
I am fairly certain the problem is in my Makevars target that makes the library. When I remove that portion from the Makevars and 'make' the library from the command line myself before building the package, I get the proper g++ calls with my -I and -l statements, but I get errors about undefined references.
I notice that the -l statements are only included in the final g++ call, where the final .dll is made, but not in the earlier g++ calls, where the files that include the library headers are compiled.
So I have two problems:
How do I fix my Makevars so that it 'makes' the library but doesn't stop Rcpp from compiling the files in src?
How do I deal with the undefined references? The library is clearly not header-only, so I am guessing the earlier g++ calls need the -l statement as well, but that may not even be possible.
The best approach is to avoid a complicated src/Makevars file altogether.
One easy-ish way around this: use configure to build your static library, and then, once you actually build the package, just refer to the result in src/Makevars.
I use that scheme in Rblpapi (where we copy in an externally supplied library) and in nloptr, where we download the nlopt sources and build them 'when needed' (i.e. when no libnlopt is on the system).
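A sketch of that layout (file names and paths here are hypothetical):
#!/bin/sh
# configure, at the package root: build the bundled static library
# before R ever compiles the files in src/
(cd src/mylib && ${MAKE-make})

# src/Makevars: only refer to the already-built archive
PKG_CPPFLAGS = -Imylib/include
PKG_LIBS = mylib/lib/file.a -lz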

Make G++ use my lib automatically

I have an already-built library consisting of these files:
A bunch of headers.
A .so file (libmylib.so).
I want to compile a c++ program (sample.cpp), where I included the headers, and where I need to use the library. This is what I've done, and it's working:
Put the headers in /usr/local/include.
Put the .so file in /usr/local/lib.
Compile the program in this way: g++ sample.cpp -lmylib.
My question is: why is it not working if I omit -lmylib from the last line?
Is there a way to install the library such that I don't need to put it every time in the g++ command?
Thank you.
Which libs are used by default depends on some settings in the compiler/linker, but it's not "every lib in /usr/local/lib" or in any other directory, just some specific names (or even just a single one). Call g++ -v or g++ -dumpspecs to list them (and more).
So, either rebuild your compiler with your own lib list, or specify the library manually every time.
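If you are curious which libraries those are, the built-in spec named "lib" lists them; for example:
g++ -dumpspecs | grep -A1 '^\*lib:'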

Generalizing include statements in c++ files when building with make

Hello. (I am using Windows, the MinGW g++ compiler, and mingw32-make.)
To generalize my question, I would like to learn how to write a C++ source file as follows:
Assume that foo.cpp depends on foo.h, where foo.cpp is in src\ and foo.h is in include\.
// foo.cpp
#include "foo.h"
Normally I would just write it like this:
//foo.cpp
#include "..\include\foo.h"
but I have found that as my project grows and I begin to need more organization, this method isn't dynamic enough, because I have to change every include in every file if I want to move foo.h to a new directory (say include\bar\foo.h). Is there a way for make to achieve this? If so, can it be done for header-file dependencies as well?
As a side note, I am new to makefiles. I am not even sure that make knows these includes are there, since they are within the code (in fact, from what I understand, it doesn't). That leads me to an unfortunate secondary question: can make see these includes? If not, is it possible to change things so that it can? Feel free to answer how you would approach this problem, because I have a feeling I am going about it the wrong way by putting the includes in the file rather than handling them in the makefile.
The compiler always looks in some default paths for .h files, and you can add your own path.
For example, gcc takes multiple -I arguments, each naming a directory. In your foo.cpp you write:
#include "foo.h"
and when compiling you say:
g++ -I../include foo.cpp -c [other options]
Regarding the second part of your question: the makefile and the call to make do not normally know anything about the files to be compiled or about your project. However, there are several default variables and rules in make that can create that impression: it could be that in your environment you only need to add the -I argument to the CFLAGS or CPPFLAGS variable and it will work.
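For example, run from the project root, this compiles src/foo.cpp using only make's built-in pattern rules (assuming mingw32-make follows GNU make semantics here):
mingw32-make CPPFLAGS=-Iinclude src/foo.o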
Patrick B has answered very well how to make the compiler know where to include from, but not the following bit:
As a side note, I am new to makefiles. I am not even sure that make knows these includes are there, since they are within the code (in fact, from what I understand, it doesn't). That leads me to an unfortunate secondary question: can make see these includes? If not, is it possible to change things so that it can?
No, make doesn't understand what your source files contain or how they depend on other files. (Make also doesn't really care whether you are programming in C, C++, Fortran, Pascal, Ada, Lisp, Cobol or Haskell: as long as there is a "if you have a file like this and want a file like that, do this" relationship between files, make will sort it out for you.)
There are several ways to do this. You can manually add the dependency:
foo.o: foo.h
Or you can use a dependency file for your include files and let make build it automatically, for example by adding this:
SOURCES = foo.cpp # Add any further source files here.
INCLUDES = -I../include # Add other include directories if needed.
CFLAGS += ${INCLUDES}
TARGET = foo.exe # in Windows. Just foo in Linux/MacOS.

all: ${TARGET} deps.mk

${TARGET}: ${SOURCES}
    g++ -O -o $@ $^

deps.mk: ${SOURCES}
    g++ -MM ${INCLUDES} $^ > $@

include deps.mk
Note that makefiles RELY on the indentation being tabs. This post uses spaces, so you will need to "tabify" the recipes. Also note that in a "proper" makefile, you'd build foo.o from foo.cpp, etc., and link all the different .o files together; that way, the compile is a fair bit quicker for large projects. I've simplified it here for readability.
Maybe I should expand a little bit:
gcc -MM gives a list (on standard output) of the files that are being "compiled" and all of their dependencies. It doesn't actually compile the code, and as long as the code is at least SOMEWHAT close to being compilable, it will happily process your files.
For more details on gcc -MM and related, have a look at the GCC invocation documentation.
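For example, with the src/include layout from the question, a -MM run prints a ready-made make rule (the exact paths depend on your tree):
g++ -MM -I../include foo.cpp
foo.o: foo.cpp ../include/foo.h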
The $@ and $^ are what make calls "automatic variables": they expand to the "target" (easy to remember, as it looks sort of like a target for shooting arrows at) and "all dependencies" (no visual clue here, I'm afraid, and every now and again I have to remind myself), respectively. Check out here for more details.

How do I get make to figure out the correct dependencies to link in the correct downstream object files?

I'm going to use a small example for reference. Consider a project with:
inner_definitions.o : inner_definitions.cpp inner_definitions.h
    gcc -c $< -o $@
inner_class_1.o : inner_class_1.cpp inner_class_1.h inner_definitions.h
    gcc -c $< -o $@
inner_class_2.o : inner_class_2.cpp inner_class_2.h inner_definitions.h
    gcc -c $< -o $@
outer_class.o : outer_class.cpp outer_class.h inner_class_1.h inner_class_2.h
    gcc -c $< -o $@
executable.o : executable.cpp executable.h outer_class.h
    gcc -c $< -o $@
executable : __?1__
__?2__
But filling in the blanks __?1__ for the linker dependencies and __?2__ for the linker command isn't easy.
In this small example, one could argue that it's easy to see that __?1__ = inner_definitions.o inner_class_1.o inner_class_2.o outer_class.o executable.o. However, this is clearly not a scalable solution, as it forces each developer to understand all the dependencies of the code they are working with and to figure those dependencies out by hand rather than by using the make utility.
Another solution would be to have a different variable for each object file listing all its downstream dependencies, i.e. __?1__ = executable.o $(executable_dependencies). This is not a desirable solution because it forces the makefile to be written in a specific order so that the variables are only used once they are fully defined. Also, for really large applications these variables might exceed the maximum variable length.
Yet another solution is to use archive .a files for linking. In this case, we could construct an inner_class_1.a that includes both inner_definitions.o and inner_class_1.o, so it could be linked with any object file that needed inner_class_1.o without forcing the developer to reconstruct the dependencies. This approach seems promising, but it involves having many duplicate files. Also, it doesn't appear that the gcc linker can handle nested archive files.
Is there another approach? What is the best approach? Can the gcc linker handle nested archive files?
The job you're trying to automate (picking the right object files to satisfy all references) is usually left to the linker, using static libraries (".a" files) to group the candidate object files, just as you suggest.
An important detail you may be missing: if you pass the linker an archive, it will only link in those files from the archive that are actually needed. So you can create archives at a fairly coarse level of granularity without necessarily bloating all your executables (the linker will pick just what it needs), although you can easily end up with needlessly slow builds if you take this approach too far.
The GNU linker will not pull objects out of nested libraries. If you want to make one big library by merging many small ones, you can do that with the "addlib" command in an ar script. That will give you a single archive containing all of the object files without any nested library structure.
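For example, an MRI script that flattens two small archives into one (the archive names here are hypothetical):
ar -M <<EOF
CREATE libmerged.a
ADDLIB libinner.a
ADDLIB libouter.a
SAVE
END
EOF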
If the duplication of having .o files and .a files containing the same object code lying around bothers you, the fine manual describes a way to have make update the archives "directly".
Your makefile must have a list of objects to link together, like so:
OBJ_FILES = inner_definitions.o inner_class_1.o inner_class_2.o \
    outer_class.o executable.o

executable : $(OBJ_FILES)
    g++ $^ -o $@
Someone must write this list; gcc can't do it for you, and make can't do it for you. Not every developer on the project needs to know how to construct it, only the one who writes that part of the makefile. All the others will use that makefile, and a developer who adds a new dependency (e.g. inner_class_3) can add it to the list.
And if your makefile is lost in a fire and the only developer who knows all the dependencies is hit by a bus, it really isn't hard to reconstruct the list: when you try to make executable, the linker complains that foo::bar() is undefined, you grep around and discover that foo::bar() is defined in inner_class_2.cpp, and you add inner_class_2.o to the list. Repeat until the linker stops complaining.
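With GNU binutils you can do that grep directly on the object files; for example, to find which object defines the missing symbol:
nm -C --defined-only *.o | grep 'foo::bar'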
P.S. Once that's in order, you can simplify the rest of the makefile quite a lot:
%.o: %.cpp %.h
    gcc -c $< -o $@
inner_class_1.o inner_class_2.o : inner_definitions.h
outer_class.o : inner_class_1.h inner_class_2.h
executable.o : outer_class.h
EDIT:
The method I suggested does not require listing every object file that can be made, just the ones that are actually needed to build `executable`; I inferred the list from your question.
Passing extra object files to the linker makes no difference to the final executable, but it does lead to unnecessary rebuilding. For example, suppose you add `alien.o` to `OBJ_FILES`. Then if you modify `alien.cpp` and run `make executable`, make will rebuild `alien.o` and `executable` even though there's no real need to do so. Correction (thanks to slowdog): unnecessary object files do go into the final executable as dead code, but I'm still right about the unnecessary rebuilding.
Organizing object files into archives and shared libraries is often convenient, but doesn't really change anything here.
I know of no robust way to automatically construct the object list -- that is, a way that could deal with problem cases such as when the same function is defined in two different source files. This could become a real problem if unit tests are involved. But you could do it within your makefile if you follow a simple naming convention.
The trick for doing it within your makefile is a pretty advanced one. I honestly think you'd be better off doing this the simple way until you're more comfortable with the tools.
EDIT:
All right, here's an outline of the advanced technique.
First, consider all of the #included header files. It would be nice to have make handle those dependencies instead of putting them in by hand, as in the makefile above. And this is a straightforward task: if X.cpp #includes Y.h (either directly or through some chain of #included header files), then X.o will depend on Y.h. This has already been worked out as "Advanced Auto-Dependency Generation". But if you follow a strict naming convention, you can take it a step further: if everything declared but not defined in X.h is defined in X.cpp, then by following the same tree of #include statements we should be able to construct a list of the needed object files, which will then be the dependencies of executable.
This really is a lot to absorb at once, so I won't try to work through an example. I suggest you look over the document and see how it can generate the Y.h dependencies, then try applying it to the example makefile, then think about what the "step further" should do.
Later you can apply it to the test harness, where the object files are, e.g., outer_class.o, stub_inner_class_1.o, stub_inner_class_2.o and test_outer_class.o.
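As a starting point, the per-file dependency generation from that document looks roughly like this (a sketch only, not the full scheme; recipe lines must be tab-indented):
SRCS := $(wildcard *.cpp)
DEPS := $(SRCS:.cpp=.d)

%.d: %.cpp
    g++ -MM -MF $@ $<

-include $(DEPS)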