CMake find_package not handling multi-configurations - c++

We're using Jenkins 2.60.2 and CMake 3.9.1 to automate our build system. This all works well across multiple versions of build tools, architectures and debug/release targets, but only if ALL configurations have been built and installed (so both Debug AND Release).
The problem is that a Debug-only configuration using find_package() typically ignores CMAKE_BUILD_TYPE during discovery. Internally, the find scripts search for files and libraries and store their locations in variables. At the end of the script, those variables are scanned for _NOTFOUND strings, which result from a file or library not being found in any of the reference paths/hints. So essentially find_package() will fail if the Release lib cannot be found and mark the whole package as not installed properly, even though the build is only interested in the Debug target.
Typically the XXXConfig.cmake files use a call to find_package_handle_standard_args(.. PATH_TO_LIB) that scans for _NOTFOUND strings in the path variables for the libraries. These variables typically get set to _NOTFOUND by earlier calls to find_library(PATH_TO_LIB libname ..). For more information I refer to the CMake docs.
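For reference, a classic find module along these lines (package, library and variable names are hypothetical) shows the mechanism described above:

# FindFoo.cmake - minimal sketch of the classic single-configuration pattern
# (package name, library name and hint variable are hypothetical)
find_path(FOO_INCLUDE_DIR foo/foo.h HINTS "${FOO_ROOT}/include")
find_library(FOO_LIBRARY foo HINTS "${FOO_ROOT}/lib")

include(FindPackageHandleStandardArgs)
# If find_library only ever looks for the Release name, FOO_LIBRARY ends up as
# FOO_LIBRARY-NOTFOUND on a Debug-only install and the whole package is rejected.
find_package_handle_standard_args(Foo DEFAULT_MSG FOO_LIBRARY FOO_INCLUDE_DIR)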
The user can indeed tag debug libraries with 'debug' and release libraries with 'optimized', but this does not seem to help during library discovery and is only used during linking.
Does anyone know how to handle this properly?
Kind regards

This is one of the unfortunate shortcomings of the classic use of find_package.
Note that find_package also allows a different mode of operation, based on config file packages, which is well-suited to address this particular problem, but will require some changes to your build system. You will need config scripts for all your libraries (CMake can generate them for you if the libraries are themselves also built by CMake; if not, this can be a bit of a hassle), and depending targets will refer to those libraries via imported targets instead of variables (which usually makes things way easier for those depending targets). I would strongly recommend you adopt this as the long-term solution.
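As a rough sketch of the config-file approach (target, package and path names are hypothetical), the library exports itself and the consumer links against an imported target; CMake writes one extra targets file per installed configuration, so a Debug-only install no longer breaks discovery:

# In the library's CMakeLists.txt (names are hypothetical):
install(TARGETS foo EXPORT FooTargets
        ARCHIVE DESTINATION lib
        LIBRARY DESTINATION lib
        RUNTIME DESTINATION bin
        INCLUDES DESTINATION include)
install(EXPORT FooTargets
        FILE FooConfig.cmake     # sufficient if the package needs no further logic
        NAMESPACE Foo::
        DESTINATION lib/cmake/Foo)

# In the depending project:
find_package(Foo REQUIRED)
target_link_libraries(my_app PRIVATE Foo::foo)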
If for some reason you cannot do this, you will have to modify your find scripts. A common technique is to search for the debug and release binaries separately, then combine the results of those calls into a single variable (together with the debug and optimized specifiers) and pass that variable to find_package_handle_standard_args. That way, as long as one of the two is found, your find script will be happy, although you might not be able to build all possible configurations in the end. Alternatively, you can skip the call to find_package_handle_standard_args altogether and implement your own logic for deciding whether the library was found. As you can see from the manpage for that function, it does mostly boilerplate stuff and can easily be replaced by a more flexible, handwritten implementation if necessary.
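A sketch of that technique, again with hypothetical names, using the stock SelectLibraryConfigurations helper module to do the combining:

# Hypothetical FindFoo.cmake: search for both variants separately
find_library(FOO_LIBRARY_RELEASE foo   HINTS "${FOO_ROOT}/lib")
find_library(FOO_LIBRARY_DEBUG   foo_d HINTS "${FOO_ROOT}/lib")

# Combine whatever was found into FOO_LIBRARY/FOO_LIBRARIES, including the
# debug/optimized keywords understood by target_link_libraries.
include(SelectLibraryConfigurations)
select_library_configurations(FOO)

include(FindPackageHandleStandardArgs)
# Succeeds as long as at least one of the two variants was found.
find_package_handle_standard_args(Foo DEFAULT_MSG FOO_LIBRARY)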

Related

Multiple Project Configurations C++

I use Visual Studio 2017 (but this applies to any version from 2010 onward) and I've been trying to come up with a way to organize my Debug/Release libraries so as to avoid the linking errors we get when mixing different versions of the runtime libraries. My goal seems simple, conceptually, but I have not yet figured out a way to achieve everything I want.
Here's what I have, and what I'd like to do:
Common Libraries:
ComLib1
ComLib2
...
Exe1:
ComLib1
ComLib2
...
Exe1Lib1
Exe1Lib2
...
Exe1
Exe2:
ComLib1
ComLib2
...
Exe2Lib1
Exe2Lib2
...
Exe2
So: 2 different executables, each using a set of common libraries plus some Exe-specific libraries.
I want to create 4 different build configurations.
Cfg1:
This would contain debugging info/non-optimized code for all libraries, including the Common Libraries.
Cfg2:
This would contain debugging info/non-optimized code for all Exe-specific libraries, but NOT for the Common Libraries.
Cfg3:
This would contain a combination of debugging info/non-optimized code libraries for some libraries, and non-debugging info/optimized libraries for the remaining ones.
Cfg4:
You guessed it. This would contain non-debugging info and optimized code for all.
My first attempt was basically to create 2 sets of binaries for each library: one compiled in Debug mode (with /MTd /Od) and another compiled in Release mode (with /MT /O2), then pick one or the other version in my various configurations. This was fine for Cfg1 & Cfg4 (since all runtime libraries are consistent throughout), but ran into those linking errors for Cfg2 & Cfg3.
I understand why I get these errors. I'm just not sure how one goes about resolving these things in what I would think is a common scenario. Maybe Cfg3 is uncommon, but I would think Cfg1, 2 & 4 are.
Thanks for your inputs.
EDIT
I didn't really think I needed to add this information because I wanted to keep my question short(er). But if it can help clarify my goal, I'll add this up.
This is for a realtime simulator. I just can't run every single library in a typical Debug configuration, as I would not be able to maintain realtime. I seldom need to debug the Common Libraries because they're mostly related to server/IO tasks. The Exe libs mostly contain math/thermodynamics and are where I spend most of my time. However, one Exe lib contains reactor neutronics, which involves heavy calculations. We typically treat that one as a black box (cryptic vendor-provided code) and I almost always want to run it using optimized code (typical Release settings).
You cannot use different runtime libraries in the same process without either link errors or the risk of runtime issues if CRT objects are passed between modules, unless you take special precautions (e.g. using a DLL with no CRT objects in its interface to keep the two sides entirely separate).
You can mix most of the general optimisation options within a module, with the notable exception of link-time code generation, which must be the same for all objects. The release runtime libraries are also generally usable for debugging, as long as your own code is not optimised.
To switch easily you will want a solution configuration for each case (so 4). You can have one project configuration used by multiple solution configurations if you want to avoid duplicates, but it must respect the previously mentioned limitations, and this can confuse things like the output directory. You can also use property sheets to share settings between multiple projects and configurations.
I've done similar using predefined macros for either the output directory path or the target filename.
For example, I use $(Platform)_$(Configuration) which expands to Win32_Debug or Win32_Release.
You can use environment variables as well. I haven't tried using preprocessor macros yet.
Search the internet for "MSDN Visual Studio predefined macros $(Platform)".
So this is how I ended up getting what I wanted.
Assuming I'm using the static Runtime libraries, I think I'll keep the typical Debug/Release (/MTd and /MT, respectively) libraries for my Common Libraries and create 3 sets of libraries for my Exe's:
Exe1Lib1Release: Typical Release Configuration
Exe1Lib1Debug: Typical Debug Configuration
Exe1Lib1DebugMT: Non-optimized code with debugging info, but using the MT Runtime libraries
Cfg1:
Will use the typical Debug libraries all around
Cfg2 & Cfg3:
Will use the typical Release libraries for the Common Libraries, and the Exe1Lib1DebugMT for the Exe's libraries
Cfg4:
Will use the typical Release libraries all around.
EDIT
Actually, Cfg2 & Cfg3 settings are more accurately represented by:
Cfg2:
Will use the typical Release libraries for the Common Libraries, and the Exe1Lib1DebugMT for the Exe's libraries
Cfg3:
Will use the typical Release libraries for the Common Libraries, and a combination of Release and Exe1Lib1DebugMT for the Exe's libraries

How to use separate CMake targets for host application and any of the used libraries?

When I'm using CMake with a library that also uses CMake, I add the library's directory to my CMake project so that the library is built alongside my project. For example:
# add SFML library dependencies
add_subdirectory("third_party/lib/SFML")
include_directories("third_party/lib/SFML/include")
target_link_libraries(${CMAKE_PROJECT_NAME} sfml-system sfml-window sfml-graphics)
Then CMake automatically matches project Debug builds with the library's Debug build, and project Release builds with the library's Release build. In some cases it is useful for the build targets to be controlled separately for the main project and each of the libraries it uses. For example, if I'm not interested in debugging inside the library code, I will want to build only my own code in Debug mode and link it against a Release version of the library, because I don't want to sacrifice additional performance. In other cases I may want to debug inside only one of the used libraries, if I suspect a bug in it, but again for performance reasons I want to link Release versions of all the other libraries. Is this possible, and what is the best way to achieve this behavior?
With both imported targets and dependent targets from the same build tree, you will always get the behavior you described, that each configuration uses its own matching build of the library. Messing with this means fiddling with CMake's internals, so I'd advise against it.
If you want to link against a specific version of the library, the most robust way is to use find_library. Note that this will only work if the library dependency is already available in its binary form at configure time. That is, you can no longer build the dependency as part of the dependent project.
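A sketch of that approach, reusing the SFML example from the question (the path below is a placeholder for wherever the pre-built Release binaries live):

# Pick up a specific pre-built variant of the dependency instead of adding
# its source tree (the path is hypothetical).
find_library(SFML_SYSTEM_LIBRARY
             NAMES sfml-system
             PATHS "${CMAKE_SOURCE_DIR}/third_party/prebuilt/SFML-release/lib"
             NO_DEFAULT_PATH)
target_link_libraries(${CMAKE_PROJECT_NAME} ${SFML_SYSTEM_LIBRARY})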
If that is not an option, consider using ExternalProject_Add to build the dependency and specify the location of the dependency binary manually.
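For example (repository URL, tag and install prefix are placeholders), ExternalProject_Add can pin the dependency to a Release build regardless of how the dependent project itself is configured:

include(ExternalProject)
# Build the dependency in Release no matter which configuration the main
# project uses; URL, tag and prefix are placeholders.
ExternalProject_Add(sfml_external
    GIT_REPOSITORY https://github.com/SFML/SFML.git
    GIT_TAG        2.4.2
    CMAKE_ARGS     -DCMAKE_BUILD_TYPE=Release
                   -DCMAKE_INSTALL_PREFIX=${CMAKE_BINARY_DIR}/sfml-install
)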
All in all, your current approach is the most convenient one, so only change this if performance of the dependency's debug build is a real problem. Also note that while mixing debug and release builds is mostly fine for C libraries, it can easily break for C++ libraries, especially if you have standard library types on the interfaces.

exporting cmake build options to external project

I have got a C++ library A. A can be installed in a multitude of ways depending on which external dependencies are used. This also changes depending on whether the library is built in debug or release mode. This means that some features might not be available, or some types/defines need to be changed, in order to link against the library.
Now I want to link A into a local project B. I have set up a ProjectConfig.cmake file for A, located at /path/lib/CMake/A/AConfig.cmake, which is found and works fine in a minimal build. However, as soon as I add definitions to the compilation or include some packages, this information is not automatically exported. This makes linking against A hard, as I need to know, for example, that OpenMP was used in order to get a coherent build.
Is there a way to export this information the same way the ProjectConfig.cmake does it?
Generate the ProjectConfig.cmake file to contain what you need it to contain.
http://www.cmake.org/cmake/help/v3.0/manual/cmake-packages.7.html
Note that if you set the usage requirements of the targets, you have less need to generate the file.
http://www.cmake.org/cmake/help/v3.0/manual/cmake-buildsystem.7.html
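As a sketch of that (target, source and definition names are hypothetical, and it assumes a CMake version new enough to provide the OpenMP::OpenMP_CXX imported target, i.e. 3.9+), attach the definitions and dependencies to the target as usage requirements and export the target; consumers of A then inherit them automatically:

# In A's CMakeLists.txt:
find_package(OpenMP REQUIRED)
add_library(A a.cpp)
target_compile_definitions(A PUBLIC A_WITH_OPENMP)   # propagated to consumers
target_link_libraries(A PUBLIC OpenMP::OpenMP_CXX)   # propagated to consumers

install(TARGETS A EXPORT ATargets DESTINATION lib)
install(EXPORT ATargets NAMESPACE A:: DESTINATION lib/cmake/A)

# AConfig.cmake then only needs to pull in the dependency and the targets file:
#   include(CMakeFindDependencyMacro)
#   find_dependency(OpenMP)
#   include("${CMAKE_CURRENT_LIST_DIR}/ATargets.cmake")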

Using -rpath and $ORIGIN with libtool-based projects?

I am trying to incorporate a libtool-based package into a project of my own, perhaps in a non-standard way. Here is my goal:
Build external project:
./configure --prefix=$HOME/blah --etcetera && make && make install
Build my own project which depends upon the external project's shared libraries and executables at runtime:
gcc -I$HOME/blah/include -L$HOME/blah/lib -o $HOME/blah/bin/program
Package everything into a single "localized" tarball... that is, while I have everything in $HOME/blah on the build host I want the ability to extract the tarball to any arbitrary directory (on some other host) without having to futz with my environment. The intent is to allow multiple versions of my project to coexist side-by-side without any nasty "cross-pollination".
I know that I can use -rpath '$ORIGIN/../lib' for my project to ensure that the right shared libraries always get loaded at runtime. However, it seems that libtool insists on assigning its own -rpath setting based on the exact path of $HOME/blah/lib, which breaks if I happen to untar everything into a different directory (say, for example, $HOME/blah.2011-06-02).
Is there a way around this limitation? I see a rather lengthy rpath discussion between Debian and libtool folks on the topic, but it's somewhat old and inconclusive beyond "we disagree".
Among the options presented on the RpathIssue page of the Debian wiki, using chrpath in your 'install' step or in a post-processing script sounds like a viable option. (It's available on a bunch of distros via your favorite package manager.)
It doesn't require patching libtool which is a plus IMO.
Note that it has a limitation: it can only write a new rpath that is shorter than, or the same length as, the original one.
The other (pragmatic) option is to remove the rpath (chrpath can do that), and just have a wrapper script that sets LD_LIBRARY_PATH to whatever is necessary for your app. That has a chance of being slightly more portable too (if you handle the other shared library path environment vars some OSes have).

Compile the Python interpreter statically?

I'm building a special-purpose embedded Python interpreter and want to avoid having dependencies on dynamic libraries so I want to compile the interpreter with static libraries instead (e.g. libc.a not libc.so).
I would also like to statically link all dynamic libraries that are part of the Python standard library. I know this can be done using Freeze.py, but is there an alternative so that it can be done in one step?
I found this (mainly concerning static compilation of Python modules):
http://bytes.com/groups/python/23235-build-static-python-executable-linux
Which describes a file used for configuration located here:
<Python_Source>/Modules/Setup
If this file isn't present, it can be created by copying:
<Python_Source>/Modules/Setup.dist
The Setup file has tons of documentation in it and the README included with the source offers lots of good compilation information as well.
I haven't tried compiling yet, but I think with these resources, I should be successful when I try. I will post my results as a comment here.
Update
To get a pure-static python executable, you must also configure as follows:
./configure LDFLAGS="-static -static-libgcc" CPPFLAGS="-static"
Once you build with these flags enabled, you will likely get lots of warnings about "renaming because library isn't present". This means that you have not configured Modules/Setup correctly and need to:
a) add a single line (near the top) like this:
*static*
(that's an asterisk, the word "static", then another asterisk, with no spaces)
b) uncomment all modules that you want to be available statically (such as math, array, etc...)
You may also need to add specific linker flags (as mentioned in the link I posted above). My experience so far has been that the libraries are working without modification.
It may also be helpful to run make as follows:
make 2>&1 | grep 'renaming'
This will show all modules that are failing to compile due to being statically linked.
CPython CMake Buildsystem offers an alternative way to build Python, using CMake.
It can build the Python library statically, and include in that library all the modules you want. Just set the CMake options
BUILD_SHARED OFF
BUILD_STATIC ON
and set the BUILTIN_<extension> options you want to ON.
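For example, these could go into an initial-cache script passed to cmake with -C (the exact BUILTIN_* option names are assumptions here and should be checked against that project's documentation):

# static-python.cmake - initial cache for the CPython CMake buildsystem
set(BUILD_SHARED  OFF CACHE BOOL "do not build libpython as a shared library")
set(BUILD_STATIC  ON  CACHE BOOL "build libpython as a static library")
set(BUILTIN_MATH  ON  CACHE BOOL "build the math module into libpython")   # assumed option name
set(BUILTIN_ARRAY ON  CACHE BOOL "build the array module into libpython")  # assumed option name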
Using freeze doesn't prevent doing it all in one run (no matter what approach you use, you will need multiple build steps - e.g. many compiler invocations). First, you edit Modules/Setup to include all extension modules that you want. Next, you build Python, getting libpythonxy.a. Then, you run freeze, getting a number of C files and a config.c. You compile these as well, and integrate them into libpythonxy.a (or create a separate library).
You do all this once, for each architecture and Python version you want to integrate. When building your application, you only link with libpythonxy.a, and the library that freeze has produced.
You can try ELF STATIFIER. I've used it before and it works fairly well. I just had problems with it in a couple of cases, and then I had to use another similar program called Ermine. Unfortunately that one is a commercial program.