I am successfully using Boost Python to build a series of Python libraries. These libraries are built conditionally, depending on the settings the user specifies at build time (via CMake).
Now I would like to merge them all into a single library that contains a series of modules (one per old library), each present only if it was enabled at build time.
So for example, if before I had:
A.so # Always built
B.so # Compiled if B was set
C.so # Compiled if C was set
Now I'd like to have:
MyLib.so # Always built
---
import MyLib
MyLib.A # always works
MyLib.B # works only if MyLib was compiled with B set
MyLib.C # works only if MyLib was compiled with C set
I already know how to create namespaces with Boost Python (via class_), but I'm not sure how I could set up the project so that this final result is possible.
With CMake I can conditionally add files to compile, but I don't know how to define the MyLib module in C++ so that I can add parts to it in separate files.
For now I've added some #ifdefs inside the exporting functions which limit the exports based on defines created in CMake.
It's not bad, although I would have preferred to keep the code free of this; for now it's my only solution.
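A minimal sketch of that approach, assuming hypothetical defines MYLIB_WITH_B and MYLIB_WITH_C added by CMake, and per-part export functions living in separate translation units:

// MyLib.cpp -- the single Python module; parts are registered conditionally.
#include <boost/python.hpp>

void exportA();      // exportA.cpp, always compiled
#ifdef MYLIB_WITH_B
void exportB();      // exportB.cpp, compiled only when B is enabled
#endif
#ifdef MYLIB_WITH_C
void exportC();      // exportC.cpp, compiled only when C is enabled
#endif

BOOST_PYTHON_MODULE(MyLib)
{
    exportA();
#ifdef MYLIB_WITH_B
    exportB();
#endif
#ifdef MYLIB_WITH_C
    exportC();
#endif
}

// exportB.cpp -- registers everything under a MyLib.B submodule.
#include <boost/python.hpp>

void exportB()
{
    namespace py = boost::python;
    // Create the nested module object and make it the current scope,
    // so subsequent class_/def calls land in MyLib.B.
    py::object b(py::handle<>(py::borrowed(PyImport_AddModule("MyLib.B"))));
    py::scope().attr("B") = b;   // makes MyLib.B reachable as an attribute
    py::scope inner(b);
    // py::class_<...>("...");   // B's exports go here
}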
Related
What would be the best way to replace global variables with target properties for controlling things like building tests in included libraries from the main application's CMakeLists.txt file?
Background
We have a C++ application, A, that uses, amongst others, the in-house libraries lib B and lib C, which are included as git submodules in A's repository. All repositories (A, B and C) share a common internal structure, and all of them use GTest as the unit test framework. This leads to every project having a "unit_tests" target as well as a "gtest" target.
As a consequence, when including lib B and lib C in A's CMakeLists.txt using add_subdirectory(), there is a conflict: CMake requires target names to be unique, and there is a total of three "unit_tests" targets in these three repos. There are other test targets as well, but those are so far unique. Renaming the test targets to b_unit_tests and a_unit_tests would cure this, but it doesn't feel right, and we would also need to rename the gtest targets to a_gtest, b_gtest...
Currently we have solved it by having global CMake variables B_BUILD_TESTS and C_BUILD_TESTS that are set to false in A's CMakeLists.txt and control the inclusion of the tests in B's and C's CMakeLists.txt files via add_subdirectory(unit_tests). This has not been a real problem, as we don't want to build and run the unit tests for libs B and C when building app A; doing so would be mostly a waste of time.
Having watched Daniel Pfeifer's C++Now talk and read other blog posts on using CMake in a declarative way, I've started rewriting our CMake system with modern CMake practices in mind. The use of a global flag for every included library now feels like an anti-pattern I would like to avoid. Ideally, I feel that tests would be best controlled by setting a property on the included targets, something like below.
add_subdirectory(B)
set_target_properties(B PROPERTIES BUILD_TESTS false)
target_link_libraries(A PRIVATE B)
Ideally this would still include B's test targets, only make them not depend on B's main target. But this seems very difficult to do without renaming all of B's targets with a b_ prefix. Technically b_unit_tests is still a different target than B, though from the context of A you would think of it as being part of target B.
Unfortunately I can't get the above to work, as it seems that CMake already has a set of pre-defined properties for targets and adding a new one doesn't seem fully supported. There are define_property and set_property, but from what I can see you can't use them to define properties on targets.
Suggestions on how to rewrite the CMakeLists.txt in a more modern way, with clear separation and configuration and without resorting to global variables, are greatly appreciated.
It seems that for modern CMake, libraries should export their paths and sources, and in the export process you can add a namespace. Though I don't really get whether this should also apply to libraries that are included as submodules and built together with the application? Is this the route I should go with libs B and C?
Why not just rename the test targets so that they don't conflict with each other? You can also control them all with a single BUILD_TESTS variable and set it to ON or OFF by default depending on how the library is built: as a standalone project, or as part of another one.
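A minimal sketch of that pattern, assuming B's tests live in a tests/ subdirectory and that the test target is renamed to b_unit_tests (both assumptions, not taken from the question):

# Near the top of B/CMakeLists.txt
if(CMAKE_SOURCE_DIR STREQUAL CMAKE_CURRENT_SOURCE_DIR)
    set(_b_standalone ON)     # B is being configured on its own
else()
    set(_b_standalone OFF)    # B was pulled in via add_subdirectory()
endif()
option(B_BUILD_TESTS "Build B's unit tests" ${_b_standalone})

if(B_BUILD_TESTS)
    add_subdirectory(tests)   # defines the b_unit_tests target
endif()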
When defining a target in meson, you can declare dependencies on external projects using the following syntax:
zdep = dependency('zlib', version : '>=1.2.8')
exe = executable('zlibprog', 'prog.c', dependencies : zdep)
This checks the standard include locations, which works well on Linux, but not so well on other platforms.
Is there a way to add additional include and library paths for meson to check when declaring dependencies?
As the documentation says: dependency() "Finds an external dependency [...] with pkg-config if possible and with library-specific fallback detection logic otherwise."
So, if you mean to set PKG_CONFIG_PATH, you can do that as usual:
$ export PKG_CONFIG_PATH=/wherever/your/installed/dir/is/
$ meson ....
Or you can use dependency-specific variables, e.g. BOOST_ROOT. Check the docs for more info.
If you meant finding other libraries that don't use pkg-config, you can use the compiler object's find_library() with a dirs keyword argument pointing to the directories your libraries are in.
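For example (a sketch; the library name and search path are placeholders):

cc = meson.get_compiler('c')
# Search a non-standard location in addition to the default ones.
foo_dep = cc.find_library('foo', dirs : ['/opt/foo/lib'])
exe = executable('fooprog', 'prog.c', dependencies : foo_dep)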
My program uses two external modules, A and B.
Module B bundles module A inside it: A's headers, library, and data (model).
But their versions differ: the copy of A inside B is 3.6, while the latest release of A is 3.8.
My program includes both modules; my Makefile looks like the fragment below, but I get compile errors, or my program segfaults at runtime.
g++ -I$(A_PATH)/include -I$(B_PATH)/include \
-L$(A_PATH)/lib -L$(B_PATH)/lib \
-Wl,-rpath,$(A_PATH)/lib -Wl,-rpath,$(B_PATH)/lib \
…
I'd like my program to use the latest module A. What is the best way to use these modules in my Makefile?
Use different directories: one for module A v3.8 that your code will use, and another for module B, with a sub-directory under it for module A v3.6. You can have a Makefile in each of these directories, as well as one for your source code (in a separate directory).
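A minimal sketch of that layout, assuming hypothetical directory names A-3.8/, B/ (keeping its bundled A 3.6 under B/) and src/ for your own code, each with its own Makefile:

# Top-level Makefile: build each part in its own directory.
SUBDIRS := A-3.8 B src

.PHONY: all clean $(SUBDIRS)

all: $(SUBDIRS)

$(SUBDIRS):
	$(MAKE) -C $@

# src is compiled and linked only against A-3.8 and B's public library,
# while B's own Makefile keeps using its private A 3.6 internally.
src: A-3.8 B

clean:
	for d in $(SUBDIRS); do $(MAKE) -C $$d clean; done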
In our workflow, we can have a module A that is composed of several header files; module A does not produce any binary (side note: it will obviously be used by other modules, which include some of module A's headers to produce binaries).
A good example would be a header-only library, for which CMake 3 introduces a good support thanks to the notion of INTERFACE library (see this SO answer, and CMake's documentation of the feature).
We can make an interface library target out of module A:
add_library(module_A INTERFACE)
That gives us all the nice features of CMake's targets (it is possible to use it as another target's dependency, to export it, to transitively forward requirements, etc.).
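For instance (a sketch with a hypothetical include directory and consumer target):

add_library(module_A INTERFACE)
target_include_directories(module_A INTERFACE ${CMAKE_CURRENT_SOURCE_DIR}/include)
# Any consumer picks up module_A's usage requirements transitively:
add_executable(consumer main.cpp)
target_link_libraries(consumer PRIVATE module_A)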
But in this case, the headers in module A do not show up in our IDE (Xcode, but we expect it to be the same with most/every other IDE).
This proves to be a major drawback in the workflow, since we need the files composing module A to be shown in the IDE for editing. Is it possible to achieve that?
Several months down the line, I have not found a way to directly list the header files of an INTERFACE library.
Since the question still gets some views, here is what I ended up doing (i.e. what appears to be the least hacky approach currently available).
Imagine module A is a header-only library. In the CMakeLists.txt declaring its target:
# Define 'modA_headers' variable to list all the header files
set(modA_headers
utility.h
moreUtilities.h
...)
add_library(moduleA INTERFACE) # 'moduleA' is an INTERFACE pseudo target
#
# From here, the target 'moduleA' can be customised
#
target_include_directories(moduleA ...) # Transitively forwarded
install(TARGETS moduleA ...)
#
# HACK: have the files showing in the IDE, under the name 'moduleA_ide'
#
add_custom_target(moduleA_ide SOURCES ${modA_headers})
I do not accept this answer, since I expect further releases of CMake to offer a more semantically correct approach, which will then be accepted : )
You can use the new target_sources command in CMake 3.1.
add_library(moduleA INTERFACE)
target_include_directories(moduleA INTERFACE ...)
target_sources(moduleA INTERFACE
${CMAKE_CURRENT_SOURCE_DIR}/utility.h
${CMAKE_CURRENT_SOURCE_DIR}/moreUtilities.h
)
It is also transitive.
http://www.cmake.org/cmake/help/v3.1/command/target_sources.html#command:target_sources
The limitation of not being able to export targets which have INTERFACE_SOURCES has been lifted for CMake 3.3.
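If the target is later exported with install(EXPORT), a common companion pattern (a sketch, not part of the original answer) is to wrap the paths in the BUILD_INTERFACE generator expression so absolute build-tree paths do not end up in the installed export set:

target_sources(moduleA INTERFACE
    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/utility.h>
    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/moreUtilities.h>
)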
I'm currently working to upgrade a set of C++ binaries, each of which uses its own set of Makefiles, to something more modern based on Autotools. However, I can't figure out how to include a third-party library (e.g. the Oracle Instant Client) in the build/packaging process.
Is this something really simple that I've missed?
Edit to add more detail
My current build environment looks like the following:
/src
/lib
/libfoo
... source and header files
Makefile
/oci #Oracle Instant Client
... header and shared libraries
Makefile
/bin
/bar
... source and header files
Makefile
Makefile
/build
/bin
/lib
build.sh
Today the top level build.sh does the following steps:
Runs each lib's Makefile and copies the output to /build/lib
Runs each binary's Makefile and copies the output to /build/bin
Each Makefile has a set of hardcoded paths to the various sibling directories. Needless to say, this has become a nightmare to maintain. I have started testing out Autotools, but where I am stuck is figuring out the equivalent of copying /src/lib/oci/*.so to /build/lib for compile-time linking and for bundling into a distribution.
I figured out how to make this happen.
First I switched to a non-recursive make.
Next I made the following changes to configure.ac, as per this page: http://www.openismus.com/documents/linux/using_libraries/using_libraries
AC_ARG_WITH([oci-include-path],
[AS_HELP_STRING([--with-oci-include-path],
[location of the oci headers, defaults to lib/oci])],
[OCI_CFLAGS="-I$withval"],
[OCI_CFLAGS="-Ilib/oci"])
AC_SUBST([OCI_CFLAGS])
AC_ARG_WITH([oci-lib-path],
[AS_HELP_STRING([--with-oci-lib-path],
[location of the oci libraries, defaults to lib/oci])],
[OCI_LIBS="-L$withval -lclntsh -lnnz11"],
[OCI_LIBS='-L./lib/oci -lclntsh -lnnz11'])
AC_SUBST([OCI_LIBS])
In the Makefile.am you then use the following lines (assuming a binary named foo):
foo_CPPFLAGS = $(OCI_CFLAGS)
foo_LDADD = libnavycommon.la $(OCI_LIBS)
ocidir = $(libdir)
oci_DATA = lib/oci/libclntsh.so.11.1 \
lib/oci/libnnz11.so \
lib/oci/libocci.so.11.1 \
lib/oci/libociicus.so \
lib/oci/libocijdbc11.so
The autotools are not a package management system, and attempting to put that type of functionality in is a bad idea. Rather than incorporating the third party library into your distribution, you should simply have the configure script check for its existence and abort if the required library is not available. The onus is on the user to satisfy the dependency. You can then release a binary package that will allow the user to use the package management system to simplify dependency resolution.
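For instance, a check along these lines in configure.ac (a sketch; the probe symbol and error message are assumptions) would abort configuration when the Oracle client library is missing:

AC_CHECK_LIB([clntsh], [OCIEnvCreate],
    [OCI_LIBS="-lclntsh"],
    [AC_MSG_ERROR([Oracle Instant Client library libclntsh not found; install it or adjust LDFLAGS])])
AC_SUBST([OCI_LIBS])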