Exporting CMake build options to an external project - C++

I have a C++ library A. A can be installed in a multitude of ways depending on which external dependencies are used. This also changes depending on whether the library is built in debug or release mode. This means that some features might not be available, or some types/defines need to be changed, in order to link to the library.
Now I want to link A to a local project B. I have set up a ProjectConfig.cmake file for A, located at /path/lib/CMake/A/AConfig.cmake, which is found and works fine in a minimal build. However, as soon as I add compile definitions or include some packages, this information is not automatically exported. This makes linking to A hard because, for example, I need to know that OpenMP was used in order to get a coherent build.
Is there a way to export this information the same way the ProjectConfig.cmake does it?

Generate the ProjectConfig.cmake file to contain what you need it to contain.
http://www.cmake.org/cmake/help/v3.0/manual/cmake-packages.7.html
Note that if you set the usage requirements of the targets, you have less need to generate the file.
http://www.cmake.org/cmake/help/v3.0/manual/cmake-buildsystem.7.html
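As a minimal sketch of the target-based approach (the target, define and file names here are illustrative, and the OpenMP imported target needs a reasonably recent CMake): attach the definitions and dependencies to the target as PUBLIC usage requirements and export the target, so that consumers of AConfig.cmake pick them up automatically.

# In A's CMakeLists.txt
find_package(OpenMP)

add_library(A ${A_SOURCES})
target_include_directories(A PUBLIC
    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
    $<INSTALL_INTERFACE:include>)
if(OpenMP_CXX_FOUND)
    # PUBLIC properties travel with the exported target
    target_link_libraries(A PUBLIC OpenMP::OpenMP_CXX)
    target_compile_definitions(A PUBLIC A_WITH_OPENMP)
endif()

install(TARGETS A EXPORT ATargets DESTINATION lib)
install(EXPORT ATargets NAMESPACE A:: DESTINATION lib/CMake/A)

# AConfig.cmake then only needs to pull in the dependency and the export set:
include(CMakeFindDependencyMacro)
find_dependency(OpenMP)
include("${CMAKE_CURRENT_LIST_DIR}/ATargets.cmake")

In project B, target_link_libraries(B PRIVATE A::A) then brings in the include directories, the A_WITH_OPENMP define and the OpenMP flags along with the library itself.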

Related

Multiple Project Configurations C++

I use Visual Studio 2017 (but this applies to any version from 2010 onward) and I've been trying to come up with a way to organize my Debug/Release libraries so as to avoid all the linker errors we get when mixing different versions of the runtime libraries. My goal seems simple, conceptually, but I have not yet figured out a way to achieve everything I want.
Here's what I have, and what I'd like to do:
Common Libraries:
  ComLib1
  ComLib2
  ...
Exe1:
  ComLib1
  ComLib2
  ...
  Exe1Lib1
  Exe1Lib2
  ...
  Exe1
Exe2:
  ComLib1
  ComLib2
  ...
  Exe2Lib1
  Exe2Lib2
  ...
  Exe2
So: two different executables, each using a set of common libraries plus a set of Exe-specific libraries.
I want to create 4 different build configurations.
Cfg1:
This would contain debugging info/non-optimized code for all libraries, including the Common Libraries.
Cfg2:
This would contain debugging info/non-optimized code for all Exe-specific libraries, but NOT for the Common Libraries.
Cfg3:
This would contain a combination of debugging info/non-optimized code libraries for some libraries, and non-debugging info/optimized libraries for the remaining ones.
Cfg4:
You guessed it. This would contain non-debugging info and optimized code for all.
My first attempt was basically to create 2 sets of binaries for each library: one compiled in Debug mode (with /MTd /Od) and another compiled in Release mode (with /MT /O2), then pick one or the other version in my various configurations. This was fine for Cfg1 & Cfg4 (since all runtime libraries are consistent throughout), but I ran into those linking errors for Cfg2 & Cfg3.
I understand why I get these errors. I'm just not sure how one goes about resolving them in what I would think is a common scenario. Maybe Cfg3 is uncommon, but I would think Cfg1, 2 & 4 are.
Thanks for your inputs.
EDIT
I didn't really think I needed to add this information because I wanted to keep my question short(er). But if it helps clarify my goal, here it is.
This is for a realtime simulator. I just can't run every single library in a typical Debug configuration, as I would not be able to maintain realtime. I seldom need to debug the Common Libraries because they're mostly related to server/IO tasks. The Exe libs mostly contain math/thermodynamics code and are where I spend most of my time. However, one Exe lib contains reactor neutronics, which involves heavy calculations. We typically treat that one as a black box (cryptic vendor-provided code) and I almost always want to run it using optimized code (typical Release settings).
You cannot use different runtime libraries in the same process without special considerations (e.g. using a DLL or similar with no CRT objects in its interface, to keep them entirely separate); otherwise you get either link errors or the risk of runtime issues if CRT objects are passed between them.
You can mix most of the general optimisation options within a module, with the notable exception of link-time code generation, which must be the same for all objects. The release runtime libraries are also generally usable for debugging, as long as your own code is not optimised.
To switch easily you will want a solution configuration for each case (so 4). You can have one project configuration used by multiple solution configurations if you do not want duplicates, but it must follow the previously mentioned limitations, and it can confuse things like the output directory. You can also use property sheets to share settings between multiple projects and configurations.
I've done similar using predefined macros for either the output directory path or the target filename.
For example, I use $(Platform)_$(Configuration) which expands to Win32_Debug or Win32_Release.
You can use environment variables as well. I haven't tried using preprocessor macros yet.
Search the internet for "MSDN Visual Studio predefined macros $(Platform)".
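For instance, setting a project's Output Directory property to something like
$(SolutionDir)bin\$(Platform)_$(Configuration)\
keeps the binaries of each solution configuration apart (the bin\ layout is just an example; the $(SolutionDir), $(Platform) and $(Configuration) macros are the standard ones).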
So this is how I ended up getting what I wanted.
Assuming I'm using the static Runtime libraries, I think I'll keep the typical Debug/Release (/MTd and /MT, respectively) libraries for my Common Libraries and create 3 sets of libraries for my Exe's:
Exe1Lib1Release: Typical Release Configuration
Exe1Lib1Debug: Typical Debug Configuration
Exe1Lib1DebugMT: Non-optimized code with debugging info, but using the MT Runtime libraries
Cfg1:
Will use the typical Debug libraries all around
Cfg2 & Cfg3:
Will use the typical Release libraries for the Common Libraries, and the Exe1Lib1DebugMT for the Exe's libraries
Cfg4:
Will use the typical Release libraries all around.
EDIT
Actually, Cfg2 & Cfg3 settings are more accurately represented by:
Cfg2:
Will use the typical Release libraries for the Common Libraries, and the Exe1Lib1DebugMT for the Exe's libraries
Cfg3:
Will use the typical Release libraries for the Common Libraries, and a combination of Release and Exe1Lib1DebugMT for the Exe's libraries

How do you package all link dependencies into a single Linux static library?

I'm publishing a multi-platform library and it's working everywhere except Linux. There's probably a way to do what I need, but I'm not seeing it and hoping someone here can help.
My library consists of two projects, 'subLibrary.lib' and 'library.lib' on Windows (for example). I build 'subLibrary' then link it into 'library' for distribution. Consumers only have to link with 'library' to get everything, including what's in 'subLibrary'.
Every other platform works the same -- I publish one giant 'library.a' that contains everything and consumers link with it, not having to know about any of the lower-level dependencies.
When building a standard executable on Linux, my link command must specify not only 'library.a' but also every dependency of it. This is because intermediate libraries leave symbols unresolved until later.
What I want to do is make it the same as the other platforms so the consuming executable only has to link with 'library.a' and that library contains everything it needs.
I know this will make the library larger, but it's the only way to ensure dependency resolution and build time for everyone.
On Unix a static library is simply an archive of all the object files. You can add more object files to the library with ar, but you do need to be careful about file name conflicts.
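If you are using GNU binutils, ar's MRI script mode can also fold one archive into another in a single step; a sketch using the library names from the question (the combined output name is illustrative):

ar -M <<'EOF'
CREATE libcombined.a
ADDLIB library.a
ADDLIB subLibrary.a
SAVE
END
EOF
ranlib libcombined.a

Consumers then only need to link against libcombined.a.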

CMake find_package not handling multi-configurations

We're using Jenkins 2.60.2 and CMake 3.9.1 to automate our build system. This all works well for multiple versions of build tools, architectures and debug/release targets, as long as ALL configurations have been built and installed (so both Debug AND Release).
A Debug-only configuration that uses find_package() typically ignores CMAKE_BUILD_TYPE during discovery. Internally the scripts search for files and libraries and store the locations in variables. At the end of the script, the variables are scanned for _NOTFOUND strings, which result from a file or library not being found in any of the reference paths/hints. So essentially find_package() will fail if the Release lib cannot be found and mark the whole package as not installed properly, even though the build is only interested in the Debug target.
Typically the XXXConfig.cmake files use a call to find_package_handle_standard_args(.. PATH_TO_LIB) that scans for _NOTFOUND strings in the path variables to the libraries. These variables typically get set to _NOTFOUND by earlier calls to find_library(PATH_TO_LIB libname ..). For more information I refer to the CMake docs.
The user can indeed tag debug libraries with 'debug' and release libs with 'optimized', but this does not seem to help during the lib discovery and is only used during linking.
Anyone knows how to handle this properly?
Kind regards
This is one of the unfortunate shortcomings of the classic use of find_package.
Note that find_package also allows a different mode of operation, based on config file packages, which is well-suited to address this particular problem, but will require some changes to your build system. You will need config scripts for all your libraries (CMake can generate them for you if the libraries are themselves also built by CMake; if not, this can be a bit of a hassle), and depending targets will refer to those libraries via imported targets instead of variables (which usually makes things way easier for those depending targets). I would strongly recommend you adopt this as the long-term solution.
If for some reason you cannot do this, you will have to modify your find scripts. A common technique is to search for the debug and release binaries separately, then combine the results of those calls into a single variable (together with the debug and optimized specifiers) and pass that variable to find_package_handle_standard_args. That way, as long as one of the two is found, your find script will be happy, although you might not be able to build every possible configuration in the end. Alternatively, you can skip the call to find_package_handle_standard_args altogether and implement your own logic for detecting whether the library was found. As you can see from the manpage for that function, it mostly does boilerplate work and can easily be replaced by a more flexible, handwritten implementation if necessary.
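A rough sketch of that first technique in a hand-written FindFoo.cmake (the Foo names, the _d suffix and FOO_ROOT are made up for illustration):

find_library(FOO_LIBRARY_RELEASE NAMES foo PATHS ${FOO_ROOT}/lib)
find_library(FOO_LIBRARY_DEBUG NAMES foo_d PATHS ${FOO_ROOT}/lib)

if(FOO_LIBRARY_RELEASE AND FOO_LIBRARY_DEBUG)
    set(FOO_LIBRARY optimized ${FOO_LIBRARY_RELEASE} debug ${FOO_LIBRARY_DEBUG})
elseif(FOO_LIBRARY_RELEASE)
    set(FOO_LIBRARY ${FOO_LIBRARY_RELEASE})
elseif(FOO_LIBRARY_DEBUG)
    set(FOO_LIBRARY ${FOO_LIBRARY_DEBUG})
endif()

include(FindPackageHandleStandardArgs)
# Succeeds as long as at least one of the two flavours was found.
find_package_handle_standard_args(Foo DEFAULT_MSG FOO_LIBRARY)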

Including additional libraries in OpenCV after compilation

When manually compiling the OpenCV library, one has to select what to include by specifying all of it in CMake. If, for instance, I later want to add an additional library (for example CUDA support), can I compile just that part separately, or do I have to recompile the entire library? If the former, how do I do it?
Let's take CUDA as an example. Some of the library's DLL and lib files will have dependencies on CUDA and some won't. When you use CMake to configure and generate the make files, it creates them with the configuration you supplied, CUDA on or off. Later you may want to change this configuration and recompile. This is what make is for: changing something within the library without compiling it all from the beginning.
So you should run CMake again to generate new make files with your new configuration, and use the same folders as the first compilation to reduce the required compile time. When you change the configuration and generate new make files, the rebuild will probably take less time than compiling the whole library, because not every part of it depends on the new configuration.
But there is an important issue here: CUDA is a pervasive dependency. When I checked the source code there are defines which indicate whether CUDA is on or off, so a change in the CUDA configuration affects a great deal of the code. If you ask me, not just for CUDA but for any other configuration change, use a fresh folder for the new configuration and compilation. Then, when you encounter a problem, you can at least be sure you have no stale compilation problems.
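As a concrete sketch of the reuse-the-same-build-folder approach (WITH_CUDA is OpenCV's usual CMake switch for this; the paths are illustrative):

cd opencv/build            # the build folder from the first compilation
cmake -DWITH_CUDA=ON ..    # regenerate the make files with CUDA enabled
make -j4                   # rebuilds only what the changed configuration touches

Using a brand-new build folder instead, as suggested above, simply means pointing cmake at an empty directory and paying the full compile time once.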

Compile the Python interpreter statically?

I'm building a special-purpose embedded Python interpreter and want to avoid having dependencies on dynamic libraries so I want to compile the interpreter with static libraries instead (e.g. libc.a not libc.so).
I would also like to statically link all dynamic libraries that are part of the Python standard library. I know this can be done using Freeze.py, but is there an alternative so that it can be done in one step?
I found this (mainly concerning static compilation of Python modules):
http://bytes.com/groups/python/23235-build-static-python-executable-linux
Which describes a file used for configuration located here:
<Python_Source>/Modules/Setup
If this file isn't present, it can be created by copying:
<Python_Source>/Modules/Setup.dist
The Setup file has tons of documentation in it and the README included with the source offers lots of good compilation information as well.
I haven't tried compiling yet, but I think with these resources, I should be successful when I try. I will post my results as a comment here.
Update
To get a pure-static python executable, you must also configure as follows:
./configure LDFLAGS="-static -static-libgcc" CPPFLAGS="-static"
Once you build with these flags enabled, you will likely get lots of warnings about "renaming because library isn't present". This means that you have not configured Modules/Setup correctly and need to:
a) add a single line (near the top) like this:
*static*
(that's an asterisk/star, the word "static", and another asterisk, with no spaces)
b) uncomment all modules that you want to be available statically (such as math, array, etc...)
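An illustrative excerpt of Modules/Setup after those edits (the exact module lines differ between Python versions, so copy them from your own Setup.dist rather than from here):

*static*

# module lines below were uncommented from their Setup.dist versions
math mathmodule.c _math.c
array arraymodule.c
time timemodule.c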
You may also need to add specific linker flags (as mentioned in the link I posted above). My experience so far has been that the libraries are working without modification.
It may also be helpful to run make as follows:
make 2>&1 | grep 'renaming'
This will show all modules that are failing to compile due to being statically linked.
CPython CMake Buildsystem offers an alternative way to build Python, using CMake.
It can build the Python library statically, and include in that library all the modules you want. Just set the CMake options
BUILD_SHARED OFF
BUILD_STATIC ON
and set the BUILTIN_<extension> options you want to ON.
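For example, on the command line (BUILTIN_MATH stands in for whichever BUILTIN_<extension> options you need, and the source path is illustrative):

cmake -DBUILD_SHARED=OFF -DBUILD_STATIC=ON -DBUILTIN_MATH=ON ../python-cmake-buildsystem
make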
Using freeze doesn't prevent doing it all in one run (no matter what approach you use, you will need multiple build steps - e.g. many compiler invocations). First, you edit Modules/Setup to include all extension modules that you want. Next, you build Python, getting libpythonxy.a. Then, you run freeze, getting a number of C files and a config.c. You compile these as well, and integrate them into libpythonxy.a (or create a separate library).
You do all this once, for each architecture and Python version you want to integrate. When building your application, you only link with libpythonxy.a, and the library that freeze has produced.
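A rough sketch of that sequence (myapp.py, the frozen/ output directory and the xy version suffix are placeholders):

# 1. enable the extension modules you need in Modules/Setup, then build Python
./configure
make                                      # produces libpythonxy.a
# 2. freeze your application into C sources plus a config.c
python Tools/freeze/freeze.py -o frozen myapp.py
make -C frozen                            # compiles the generated C files
# 3. link your application against libpythonxy.a and the objects built under frozen/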
You can try ELF STATIFIER. I've used it before and it works fairly well. I just had problems with it in a couple of cases and then I had to use another similar program called Ermine. Unfortunately, that one is a commercial program.