Lib Files and Defines - C++

I'm using a couple of external libraries, and I'd rather not have to include all their source and header files in my main source directory or in my project file. One option would be to compile the libraries as lib files and link them that way. However, I'm not sure whether the defines get evaluated before or after the lib file gets created (which is it?). If it's before, then obviously I can't just pack the libraries up, because they might not work properly on different compilers or systems.
So if I can't pack the libraries as lib files, is there any way for me to link in the C or C++ source files? Probably not, since they would have to be compiled first, but maybe I'm wrong.
Edit: Here's a follow-up question, based on the answers. Do you think it would be too much of a hassle to have a makefile that creates the lib files? I'd still rather not add the sources to my project or to my source directory.

A library is a binary file, so all the defines have obviously already been evaluated by the time it exists.
To spell out the order: defines are evaluated in the first stage of the compilation process, called preprocessing. At this stage, one translation unit is created for each .cpp file, containing everything it #includes (recursively), and all macros are expanded.
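You can watch this stage in isolation by asking the compiler to stop after preprocessing. A minimal sketch with g++ (the file name is hypothetical):

    # Stop after preprocessing: #includes are pulled in and all macros
    # (including -D defines) are expanded, producing plain C++.
    g++ -E -DMY_FLAG=1 example.cpp -o example.ii

    # example.ii is what the compiler proper sees; by the time an object
    # file or library exists, no #define is left to evaluate.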
In any case, a third-party library should not depend on your compilation flags, with one exception: debug vs. release builds. Only in that case do you need two versions of the third-party lib.
As to whether you should compile third-party libs once or rebuild them every time you compile your own code: it depends. If you are working just for yourself, do whatever looks easiest to you; but if we're talking about a development team and a project that has to be maintained for a long time, there is more to consider.
So suppose we're talking about a solid solution for a team, and the project will be compiled many times.
In that case I personally strive to compile a third-party library once and use it many times. This reduces compilation time for every build, for every developer, which means faster development.
Nice, but where do you keep these libs? I like physical separation: third-party libraries and my code do not live in the same tree. This avoids some unintentional errors. A good build system should also be reproducible (most of the time this is mandatory): if you check out your code a year later, you can compile it and get exactly the same binaries.
At one point I used an external, read-only tree on my machine, managed only by me. To keep my sources reproducible, each new version of a third-party library went into a directory named after its version, and my source tree was updated to point at it. If you build on several machines, the read-only tree has to be visible on all of them.
An additional solution is to check whether your SCM tool (I assume you use one) lets you combine several sub-trees from the repository into one checkout, with one sub-tree per third-party library. That way the third-party libraries are available on every machine you build on. I currently use this method with Subversion, where it's called svn:externals. In CVS, AFAIK, the equivalent is CVS modules. An additional advantage is that the libraries are managed by the source control system, so you can track all changes made to them.
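For example, with Subversion an externals definition can be attached to a directory of your tree. A sketch, with a hypothetical repository URL and a pinned revision for reproducibility:

    # Populate third_party/boost-1.42 from a separate repository location,
    # pinned to a fixed revision so old checkouts rebuild identically.
    svn propset svn:externals \
        '-r1234 https://svn.example.com/vendor/boost/1.42 boost-1.42' third_party
    svn commit -m "Pin boost 1.42 as an external" third_party
    svn update    # every fresh checkout now pulls in the external sub-tree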

Defines get evaluated even before compiling: they are handled by the preprocessor, which prepares the code for the compiler to consume. So yes, they are evaluated before the libraries are created.
You can't link against source code. You can only link with object files, static libraries, or dynamic libraries (shared object files/DLLs).
Using dynamic linking can be a good option, especially if the external libraries are large and/or used by many executables.
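As a sketch of the three linkable forms with g++ (file names are hypothetical):

    g++ -c util.cpp -o util.o                   # object file
    ar rcs libutil.a util.o                     # static library (archive)
    g++ -shared -fPIC util.cpp -o libutil.so    # dynamic/shared library

    # Any of these can appear on the link line; a raw .cpp cannot:
    g++ main.cpp util.o -o app1
    g++ main.cpp -L. -lutil -o app2    # prefers libutil.so over libutil.a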

Related

How do you package all link dependencies into a single Linux static library?

I'm publishing a multi-platform library and it's working everywhere except Linux. There's probably a way to do what I need, but I'm not seeing it, and I'm hoping someone here can help.
My library consists of two projects, 'subLibrary.lib' and 'library.lib' on Windows (for example). I build 'subLibrary' then link it into 'library' for distribution. Consumers only have to link with 'library' to get everything, including what's in 'subLibrary'.
Every other platform works the same -- I publish one giant 'library.a' that contains everything and consumers link with it, not having to know about any of the lower-level dependencies.
When building a standard executable on Linux, my link command must specify not only 'library.a' but every one of its dependencies as well. This is because static libraries are not linked against one another; intermediate libraries leave symbols unresolved until the final link.
What I want to do is make it the same as the other platforms so the consuming executable only has to link with 'library.a' and that library contains everything it needs.
I know this will make the library larger, but it's the only way to guarantee that dependencies resolve, and that the build stays simple, for everyone.
On Unix, a static library is simply an archive of object files. You can add more object files to a library with ar, but you do need to be careful about file-name conflicts (members with the same name replace each other).
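One way to produce the single 'library.a' the question asks for is GNU ar's MRI script mode, which can splice whole archives together. A sketch using the question's archive names:

    # 'addlib' pulls every member of the named archive into the one
    # being created, flattening the dependency into the result.
    ar -M <<'EOF'
    create combined.a
    addlib library.a
    addlib sublibrary.a
    save
    end
    EOF
    ranlib combined.a    # regenerate the symbol index of the merged archive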

C++ import library by source

I'm new to C++ and wonder whether it's good practice to include a library as source code. If it is, what's the best way to do it? Just copying it into a subfolder and #include-ing it?
In my particular case, I have written a small library and I'm going to use it on two different microprocessors. Compiling the library separately, copying all the headers, and using that "package" seems like overkill to me.
Compiling the library separately is what should be done.
It's not that much overkill, either: you just compile the .o files for your library, wrap them in an archive, and hand that archive around.
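For the two-microprocessor case, that just means running the compile-and-archive step once per toolchain. A sketch with hypothetical cross-compiler prefixes and file names:

    mkdir -p build/arm build/avr

    # Build the same library source once per target, into separate trees.
    arm-none-eabi-g++ -c mylib.cpp -o build/arm/mylib.o
    arm-none-eabi-ar rcs build/arm/libmylib.a build/arm/mylib.o

    avr-g++ -c mylib.cpp -o build/avr/mylib.o
    avr-ar rcs build/avr/libmylib.a build/avr/mylib.o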
Normally libraries are used as libraries because it's much easier and more comfortable that way. If you use dynamic libraries (.dll or .so), things get even better, because you can replace libraries on the fly and everything should continue to work smoothly.
You have decided to use source trees instead of libraries, which probably means more work for you. If you are happy that way, that's OK, but make sure you don't break any licenses; some LGPL packages (like Qt) clearly require their libraries to be linked dynamically.
The best way to do this is hard to say, but in your place I would probably use git and include the libraries as submodules.
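A sketch of the submodule approach (the repository URL and path are hypothetical):

    # Record the library as a submodule pinned to a specific commit.
    git submodule add https://github.com/example/mylib.git extern/mylib
    git commit -m "Add mylib as a submodule"

    # Anyone cloning your project then fetches that pinned version with:
    git submodule update --init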
Just #include-ing source code is a bad idea, since it means copying that code into your own translation unit, and things can go wrong that way. For example, if there is a file-scope static variable somewhere in the library code and a static variable with the same name in your code, you will get a conflict.
Instead you should probably compile the library separately and link it, quite possibly the same way you would anyway (i.e. you build the library and then you link against it). The lightweight alternative is to just compile the additional C++ files and link the resulting object files into the executable. The details of how to do that are compiler-specific.
There are valid reasons for including the library source this way. For example, if your project needs to modify the library during development, it's easier when rebuilding the library is part of the project's own build process. With a well-designed build process, the library shouldn't be rebuilt unless it has actually changed.
The value of a library is in part that you link it more often than you compile it, leading to a net saving.
If you control all the source, then whatever build process works best for you is fine.
I agree with πάντα ῥεῖ, but I'll also add the reason it's bad practice: a compiled library can be stored in a common location on your computer and used by many different programs, reducing the amount of data your machine has to store, both on disk and in RAM (when more than one running program uses the same library). An example is OpenGL, a library that many games use and that is probably already somewhere on your system. On Windows, software installers link these libraries to their programs and add them if you don't have them; on Linux, you are notified if libraries are missing and prompted to install them. All that aside, you can technically use uncompiled libraries, but that introduces a number of potential licensing problems, as well as additional problems with THEIR dependencies.
Copying source code into other projects and "mixing" it with other source code stops this library from being a library. Later on you will be tempted to make a small change in one copy (say, for one CPU), or fix a bug, and forget to do the same in the other copy.
There may be additional considerations, but you should try to keep the code in one place. Don't Repeat Yourself (DRY) is a strong, fundamental principle of software engineering, with many benefits.

Compile and link many source files from different folders with g++ and make

What is the proper way to compile and link many .cpp files from different folders into one executable using a makefile?
For example I have following file structure:
./Foo/Wow/Bar/example1.cpp
./Foo/Bar/example2.cpp
./Foo/example3.cpp
./main.cpp
Now I want to compile and link all of these files into one executable. What is the proper way to do this with a makefile?
Thanks,
S.
There is no one correct way to do this.
One possibility would be to separate the code into libraries, one per subdirectory. Each of those libraries would have its own makefile.
Then the project root would have a makefile that invokes make recursively to ensure those libraries are all up to date, and then uses the libraries to build the main executable(s).
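A sketch of that recursive layout, matching the directory structure in the question (the per-directory makefiles and the lib.a names are assumptions; recipe lines must start with a tab):

    SUBDIRS := Foo/Wow/Bar Foo/Bar Foo
    LIBS    := $(addsuffix /lib.a,$(SUBDIRS))

    all: main

    # Each subdirectory's own makefile is responsible for its lib.a.
    .PHONY: all $(SUBDIRS)
    $(SUBDIRS):
    	$(MAKE) -C $@

    main: main.cpp $(SUBDIRS)
    	g++ main.cpp $(LIBS) -o $@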
Others object (sometimes vociferously) to that whole notion. It might be all right to use existing libraries (and their existing makefiles), but they object to the basic idea of turning code in the subdirectories into libraries just for the sake of having a single result file to link into the final executable. (On the other hand, few seem to have a solid explanation of why you'd put code into a subdirectory in the first place if it didn't embody some logical concept that would probably qualify it as a meaningful library.)

Organizing library dependencies

I've noticed that there seem to be two approaches to linking: a flat way and a hierarchical way.
Let me illustrate with two examples:
A Visual Studio solution can contain multiple projects, one of them being the main project. My usual approach is to add all required libs to the main project and none to the other projects. I like this because that way I don't get "multiple definition" linker errors. I don't know if this approach has an official name, so I'll call it the flat way.
On the job, our team works on a Linux-based traffic generation application. The build system uses Automake. When looking through the makefiles, I noticed that each library specifies the libraries it requires (in the noinst_LIBRARIES variable). Each of those libraries can in turn specify its own dependencies. This leads to a tree-like structure, so I call it the "hierarchical approach".
What are the best practices here? This doesn't seem to be discussed often.
It usually comes down to static vs shared libraries.
If you use static libraries, then the main program must usually (always?) be linked against every library that it depends on, either directly or indirectly, hence a flat linking structure.
For shared libraries (or DLLs in Windows), each library bundles up its own dependencies, so the main program only needs to be linked against the libraries it depends on directly, leading to a graph structure.
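A sketch of the difference with g++ (library and file names are hypothetical):

    # Hierarchical (shared): each library records its own dependencies.
    g++ -shared -fPIC b.cpp -o libb.so
    g++ -shared -fPIC a.cpp -L. -lb -o liba.so      # liba.so lists libb.so as NEEDED
    g++ main.cpp -L. -la -Wl,-rpath-link=. -o app   # link only the direct dependency

    # Flat (static): the final link names every library, direct or
    # indirect, each listed before the libraries it depends on.
    g++ main.cpp liba.a libb.a -o app_static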

How to include only the Boost smart pointer code in a project?

What are the best practices for including only the Boost smart pointer library, without adding all the Boost libraries to the project?
I only want the Boost smart pointer library in my project, and I don't want to check in/commit 200 MB of source code (boost 1.42.0) into my project repository just for that. What's more, my Windows Mobile project itself doesn't even reach 10% of that size!
For just the smart pointer library, you have two options.
Copy the headers you include in your source files (shared_ptr.hpp, etc.), then copy over additional files until the project builds (making sure to maintain the directory structure).
Use the Boost bcp utility (see the sketch below). For larger subsets, this tool saves a ton of time.
The former ensures that the fewest possible files get added to your project. The latter is much faster for any substantial subset of Boost, but it will likely include many files you don't need (compatibility headers for platforms your program doesn't support).
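A sketch of the bcp route (bcp ships in the Boost source tree under tools/bcp; the destination path is hypothetical):

    # Copy the smart_ptr module plus everything it depends on into a
    # small subset directory, preserving the boost/ header layout:
    bcp smart_ptr /path/to/your/repo/third_party/boost

Only that subset then needs to be committed; it's a small fraction of the full 200 MB distribution.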
Just check in the folder containing the code you want? Try deleting/moving/renaming "everything else" and see what external dependencies the smart pointer library has; probably not many. I'm almost positive it doesn't require any built code (i.e. libraries), so just checking in all of the headers that get included seems like the way to go.