I want to provide my C++ project with a Python interface. Technically, I have decided to use Cython for wrapping the C++ code. Over time, the entire project is meant to become a Python extension module, but at first, this is highly experimental. Gradually, C++ classes need to be exposed to Python.
My question is how to best organize files and build configurations so that Cython-generated and human-written C++ code do not get mixed, and the Python extension module is built cleanly, separate from the other targets.
I imagine a directory structure like this for the source files, and some build directory for Cython.
Project/
    src/
        *.h
        *.cpp
    cython/
        Project.pyx
        setup.py
Basically, I have 3 folders:
CPROJECT, the C++ library: produces the libcproject.so shared object
CYPROJECT, the Cythonized Python extension: produces cyproject.so using Cython
DEPENDENCIES, the dependencies: where I copy external requirements for both projects
In 1. I build the C++ library (compiled with gcc using the -shared and -fPIC options) that will be exposed to Python and that CYPROJECT relies on to expose features to Python. As a post-processing step, the resulting .so is copied into DEPENDENCIES/libcproject/ (along with the include files). This way the library is, of course, also usable independently in a pure C++ project.
In 2. I use 3 sub-folders:
adapters: which mainly contains additional C++ classes (often classes derived from the ones provided by libcproject.so). These are usually classes enhanced with functionality specific to Cython's requirements, such as storing the PyObject * of the corresponding Python-side object (inherited from object) and managing its reference count via Py_XINCREF and Py_DECREF (a minimal sketch of such an adapter follows right after this list).
pyext: where all the hand-written Cython .pyx files are stored.
setup: containing the setup.sh script (which sets up the dependency paths and calls python setup.py build_ext --inplace to generate the final cyproject.so, to be added to the PYTHONPATH) and cyproject.pyx.
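To make the role of these adapters concrete, here is a minimal sketch of the kind of class that lives in adapters/ (the base class and header names are assumptions; only the PyObject * bookkeeping via Py_XINCREF/Py_XDECREF is the point):
// ALabSimulatorBase.h - illustrative adapter sketch; LabSimulatorBase.h and the base class are assumed names
#include <Python.h>
#include "LabSimulatorBase.h"   // header installed in DEPENDENCIES/libcproject (assumed)

class ALabSimulatorBase : public LabSimulatorBase
{
public:
    explicit ALabSimulatorBase(PyObject *self) : m_self(self)
    {
        Py_XINCREF(m_self);    // keep the Python-side wrapper alive while the C++ object exists
    }

    virtual ~ALabSimulatorBase()
    {
        Py_XDECREF(m_self);    // release it when the adapter is destroyed
    }

private:
    PyObject *m_self;          // the PyObject * counterpart of this instance on the Python side
};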
So what's in the setup sub-folder?
Here is sample code for setup.sh:
export PYTHONPATH=$PYTHONPATH:../../../DEPENDENCIES/Cython-0.18
export PATH=$PATH:../../../DEPENDENCIES/libcproject:../../../DEPENDENCIES/Cython-0.18/bin
# Note the `../../../DEPENDENCIES/libcproject`...
CC="gcc" \
CXX="g++" \
python setup.py build_ext --inplace
And here is an example of setup.py (mainly to demonstrate how the additional adapters are compiled):
import sys
import os
import shutil
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
# Cleaning
for root, dirs, files in os.walk(".", topdown=False):
    for name in files:
        if name.startswith("cyproject") and not name.endswith(".pyx"):
            os.remove(os.path.join(root, name))
    for name in dirs:
        if name == "build":
            shutil.rmtree(os.path.join(root, name))
# Building
setup(
    cmdclass = {'build_ext': build_ext},
    ext_modules = [
        Extension("cyproject",
                  sources=["cyproject.pyx",
                           "adapter/ALabSimulatorBase.cpp",
                           "adapter/ALabSimulatorTime.cpp",
                           "adapter/ALabNetBinding.cpp",
                           "adapter/AValueArg.cpp",
                           "adapter/ALabSiteSetsManager.cpp",
                           "adapter/ALabSite.cpp",
                           ],
                  libraries=["cproject"],
                  language="c++",
                  extra_compile_args=["-I../inc",
                                      "-I../../../DEPENDENCIES/python2.7/inc",
                                      "-I../../../DEPENDENCIES/gsl-1.8/include",
                                      "-fopenmp", "-O3"],
                  extra_link_args=["-L../lib"],
                  )
    ]
)
And finally, the main .pyx that ties all the hand-written .pyx files of the Cython part together [cyproject.pyx]:
include "pyext/Utils.pyx"
include "pyext/TCLAP.pyx"
include "pyext/LabSimulatorBase.pyx"
include "pyext/LabBinding.pyx"
include "pyext/LabSimulatorTime.pyx"
...
Note: all the files generated by Cython remain in this setup folder, well separated from the hand-written parts (adapters and pyext), as expected.
In 3. Using a separate DEPENDENCIES folder keeps things well decoupled (in case I move CYPROJECT - and its dependencies - to some other environment).
All of this is to give you an overview (a pertinent one, I hope) of how one can organize that sort of project.
Related
I have been given a static library libExample.a together with a bunch of C++ headers, which I need to use in an iOS app. The binary is fat, containing objects for iphoneos-arm64/e and iphonesimulator-x64.
I have done some research on the subject and came to the conclusion that using an XCFramework would be the best thing to do. Still, I feel completely out of my depth, since this is my first time trying anything of the sort.
What I have done so far
1. Creating the XCFramework from library files
lipo -extract architectures from fat binary
xcodebuild -create-xcframework -library LIB-arm64.a -headers HEADERS -library ...
Importing the XCFramework into my Swift project didn't yield any usable modules. Also, the folder was missing a lot of the files I've seen in examples. It seemed like the wrong way to go about it, so I tried...
2. Creating a Framework and then a XCFramework from it
Files
Create new Objective-C Framework project ExampleFramework
Pull all my headers and the fat binary into the project
Add all my headers to the ExampleFramework.h umbrella header (a sketch of what that looks like follows after the module map below)
Create the following ExampleFramework.modulemap:
framework module ExampleFramework {
umbrella header "ExampleFramework.h"
link "Example"
export *
module * { export * }
}
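For illustration, the umbrella header from the steps above might look like this (the wrapped header names are made up); since the wrapped headers are C++, anything importing this umbrella only compiles when treated as Objective-C++:
// ExampleFramework.h - umbrella header (illustrative sketch; libExample header names are assumptions)
#ifndef EXAMPLE_FRAMEWORK_H
#define EXAMPLE_FRAMEWORK_H

// Public C++ headers shipped alongside libExample.a.
// They only parse when the importing file is compiled as Objective-C++.
#include "ExampleCore.h"
#include "ExampleSimulation.h"

#endif /* EXAMPLE_FRAMEWORK_H */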
Settings
General:
1.1. Add libExample.a to Frameworks & Libraries
Build Phases:
2.1. Make all headers public
2.2. Add libExample.a to Link Binary with Libraries
2.3. Add libExample.a to Copy Bundle Resources
Build Settings:
3.1. Skip Install : No
3.2. Build Libraries for distribution : Yes
3.3. Module map file: ExampleFramework/ExampleFramework.modulemap
3.4. Defines modules: Yes
3.5. Compile Sources As: Objective-C++
I then archived the framework for iphoneos and iphonesimulator respectively:
xcodebuild archive \
-scheme "ExampleFramework" ONLY_ACTIVE_ARCH=NO \
-archivePath "path/to/ExampleFramework_${SDK}.xcarchive" \
-sdk ${SDK} \
SKIP_INSTALL=NO \
BUILD_LIBRARIES_FOR_DISTRIBUTION=YES
...and generated an XCFramework from the outputs:
xcodebuild -create-xcframework \
-framework "path/to/ExampleFramework_iphoneos.xcarchive/Products/Library/Frameworks/ExampleFramework.framework"
-framework ...
-output "path/to/ExampleFramework.xcframework"
Build Errors
The folder structure I got from this looked promising, so I tried adding the XCFramework to my iOS project. The module was now being found, but Xcode stopped compiling at the first #include, saying it wasn't able to find stdexcept, from which I concluded it was missing the C++ standard library headers.
I tried setting Header Search Paths in Build Settings. Looking for the right path, I found a couple that looked relevant to me:
/Applications/Xcode_13.2.1.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1
/Applications/Xcode_13.2.1.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator.sdk/usr/include
/Applications/Xcode_13.2.1.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS.sdk/usr/include
Depending on which one I choose and whether I clean before building, I'm getting a bunch of different errors (it seems random). Mainly:
Unknown type name 'namespace'
and lots and lots of others that lead me to believe Xcode doesn't recognize the headers as C++.
Or:
Cyclic dependency on module 'Darwin' : Darwin -> std -> Darwin
I really have no idea at this point, so any solutions or suggestions pointing me in the right direction would be much appreciated. I'd also be open to a completely different approach; I just want to use the library somehow.
Thanks!
For reference, from GoogleAppMeasurement's module.modulemap:
framework module GoogleAppMeasurement {
umbrella header "GoogleAppMeasurement-umbrella.h"
export *
module * { export * }
link framework "Security"
link framework "SystemConfiguration"
link "c++"
link "sqlite3"
link "z"
}
I cannot make local include paths work in the Meson build system.
This C++ inclusion works correctly:
#include </cygdrive/c/Users/user/project/Third-Party/eigen/Eigen/Dense>
This one does not:
#include "Third-Party/eigen/Eigen/Dense"
fatal error: Eigen/Dense: No such file or directory
In the Meson build file, I tried to add Eigen's path, without success:
# '.' will refer to current build directory
include_dirs = include_directories('include', '.', '../project/Third-Party/eigen')
This is the project tree structure:
project
    meson.build
    src
        meson.build
        example.h
        example.cpp
    Third-Party
        eigen (headers only lib)
            Eigen
Note: with CMake I do not have this issue.
For dependency management, meson allows you to manually declare include_directories() in your build files. However, there is another way to handle dependencies: the dependency() command.
dependency() is a much better way to handle dependencies, because meson will build the dependency if necessary (when it is a shared or static library) and lets you use its includes safely. That means you don't have to know where the dependency's includes are physically located, or care about their paths ever after. The only downside is that this kind of dependency needs its own meson.build file.
Using the dependency() command:
To actually use it, you have to write a wrap file for the dependency. Or, if you are lucky, there is already a wrap file for you in the Wrap DB -- a community-driven database of meson wrap files. A wrap file is a small config that declares where you can get a dependency and in what form; wrap files can wrap zip archives and git repositories.
For your given dependency, there is a wrap file in the Wrap DB: eigen. All you have to do is download it and place it in the subprojects directory next to your meson.build. For example:
$ cd project
$ mkdir subprojects
$ wget "https://wrapdb.mesonbuild.com/v1/projects/eigen/3.3.4/1/get_wrap" \
-O subprojects/eigen.wrap
Now, not every project builds with meson. For the ones that don't, the wrap file also specifies a patch. The patch simply copies an appropriate meson.build file into the dependency directory (as well as any other files needed to build that particular dependency with meson). The eigen wrap file contains such a patch.
To find out how any particular dependency declares itself (using the declare_dependency() command), you need to look at the meson.build file in the dependency's source directory (although the name is often just the dependency name plus _dep, e.g. "eigen_dep"). For me, the eigen directory was subprojects/eigen-eigen-5a0156e40feb. So, you search for the declare_dependency() command:
$ grep declare_dependency subprojects/eigen-eigen-5a0156e40feb/meson.build
eigen_dep = declare_dependency(
As you can see, eigen declares its dependency as eigen_dep. If you want to know exactly what is declared, just scroll through the dependency's meson.build file.
Now, to use that eigen_dep in your project, create a dependency object with the dependency() command. Here is a sample project that I used to compile "A simple first program" from Eigen's Getting Started guide:
project('example', 'cpp')
eigen_dependency = dependency('eigen', fallback: ['eigen', 'eigen_dep'])
executable('example', 'example.cpp', dependencies: eigen_dependency)
Notice the arguments to the dependency() command. The first one is the system-wide dependency that meson searches for. If there is no eigen development package installed on your system, then meson uses the fallback: the first item in fallback is the basename of the wrap file, the second item is the name of the declared dependency.
Then use the eigen_dependency variable in whatever you build, passing it to the dependencies argument.
Using the include_directories() command:
If you just want to include some files from an external directory (such as your "Third-Party" directory) using the include_directories() command, the path has to be relative to the meson.build file in which you use it.
To use manually declared includes, call include_directories() to get an include_directories object, then pass that object to the include_directories argument of whatever you build.
Given your example, I assume the root meson.build file is the project build file. In that root meson.build you can write, for example:
# File: project/meson.build
project('example', 'cpp')
eigen_includes = include_directories('Third-Party/eigen')
executable('example', 'example.cpp', include_directories: eigen_includes)
But if you want to get eigen includes from src/meson.build, then you need to change include_directories to:
# File: project/src/meson.build
eigen_includes = include_directories('../Third-Party/eigen')
...
I am successfully using Boost Python to build a series of Python libraries. These libraries are built conditionally, depending on the settings the user specifies at build time (via CMake).
Now what I would like to do is merge them all into a single library, containing a series of modules (one per old library), each present only if it was enabled.
So for example, if before I had:
A.so # Always built
B.so # Compiled if B was set
C.so # Compiled if C was set
Now I'd like to have:
MyLib.so # Always built
---
import MyLib
MyLib.A # always works
MyLib.B # works only if MyLib was compiled with B set
MyLib.C # works only if MyLib was compiled with C set
I already know how to create namespaces with Boost Python (via class_), but I'm not sure how I could set up the project so that this final result is possible.
With CMake I can conditionally add files to compile, but I don't know how to define the MyLib module in C++ so that I can add parts to it in separate files.
For now I've added some #ifdefs inside the exporting functions, which limit the exports based on defines created by CMake.
It's not bad, although I'd have preferred to keep the code free of this, but for now it's my only solution.
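For what it's worth, here is a minimal sketch of this approach (the export_* helpers, the MyLib module name and the MYLIB_WITH_B / MYLIB_WITH_C defines stand in for whatever the real CMake options define):
// mylib_module.cpp - illustrative only; helper names and CMake defines are assumptions
#include <boost/python.hpp>
#include <string>

namespace bp = boost::python;

void export_A();          // always compiled
#ifdef MYLIB_WITH_B
void export_B();          // compiled only when the B option was enabled in CMake
#endif
#ifdef MYLIB_WITH_C
void export_C();
#endif

// Registers everything exported by func() under MyLib.<name> instead of directly in MyLib.
static void export_submodule(const char *name, void (*func)())
{
    std::string full = std::string("MyLib.") + name;
    bp::object submodule(bp::handle<>(bp::borrowed(PyImport_AddModule(full.c_str()))));
    bp::scope().attr(name) = submodule;   // makes it reachable as MyLib.<name>
    bp::scope inner(submodule);           // switch the current scope to the submodule
    func();                               // class_/def calls inside func() now land in MyLib.<name>
}

BOOST_PYTHON_MODULE(MyLib)
{
    export_submodule("A", export_A);
#ifdef MYLIB_WITH_B
    export_submodule("B", export_B);
#endif
#ifdef MYLIB_WITH_C
    export_submodule("C", export_C);
#endif
}
With this, import MyLib always works, while MyLib.B and MyLib.C only exist when the corresponding options were enabled at build time.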
For example, my program uses two external modules, A and B, arranged as follows.
Module B uses and includes module A inside it: A's headers, library and data (= model).
But their versions differ; the version of A inside B is 3.6, while the latest version of A is 3.8.
My program includes both modules, with a Makefile like the one below.
However, I get compile errors, or my program segfaults at runtime.
g++ -I$(A_PATH)/include -I$(B_PATH)/include \
-L$(A_PATH)/lib -L$(B_PATH)/lib \
-Wl,-rpath,$(A_PATH)/lib -Wl,-rpath,$(B_PATH)/lib \
…
I'd like to use the latest module A in my program, so
what is the best way to use these modules in my Makefile?
Use different directories: one for module A v3.8, which your code will use, and another for module B, with a sub-directory under it for module A v3.6. You can have Makefiles in each of these directories, as well as one for your source code (in a separate directory).
I've gone through the example scons builds here and have found them wanting; none provides a solution that fits my project.
The structure is as follows:
root/
    Module_A/
        include/
            foo.h
            bar.h
        src/
            foo.cpp
            bar.cpp
    Module_.../
Every module follows the same structure: an include folder for all the .h files and a src folder for the .cpp files. Each module builds into a shared object. There is no executable.
Modules have cross dependencies. For instance, Module_A is the logging mechanism and is used in modules B, C, D, etc. Likewise, Module_B is the configuration loader, which is used in several other modules. Module_C would be the IPC module, used in almost every module listed. Lastly, Module_D is the command center and links against EVERY other module (literally).
I am interested in replacing our current setup, which uses recursive make to build the project. I am trying to write the SConstruct and SConscripts necessary to do so, but I am very new to even make, let alone scons.
I want to turn each module's .cpp and .h files into a .so and have its dependencies resolved automagically, as is done with make now.
In the SConscript, I currently use glob to get the *.cpps and then include the module's './include' in the CPPPATH.
I have used
env.SharedLibrary(CPPPATH='./include', source= (list of the cpps))
But since each module depends on other modules, it does not build, complaining that the other modules' functions being used are "not declared".
How do I go about getting this kind of complex structure to build using a hierarchical scons setup?
This should be quite easy to do with SCons. You'll probably want a SConscript script at the root of every module. These will all be invoked by a SConstruct script located at the root of the entire project.
If I understand the question correctly, the problem of the dependencies between the modules can be solved by correctly specifying the include paths of all of the modules. This can be done once in an Environment created in the SConstruct, which should then be passed to the module SConscript scripts.
Here's a brief example:
SConstruct
env = Environment()
# Notice that the '#' in paths makes the path relative to the root SConstruct
includePaths = [
'#/Module_A/include',
'#/Module_B/include',
'#/Module_N/include',
]
env.Append(CPPPATH=includePaths)
SConscript('Module_A/SConscript', exports='env', duplicate=0)
SConscript('Module_B/SConscript', exports='env', duplicate=0)
SConscript('Module_N/SConscript', exports='env', duplicate=0)
Module_A/SConscript
Import('env')
# Notice the CPPPATH's have already been set on the env created in the SConstruct
env.SharedLibrary(target = 'moduleA', source = Glob('src/*.cpp'))
Module_B/SConscript
Import('env')
# Notice the CPPPATH's have already been set on the env created in the SConstruct
env.SharedLibrary(target = 'moduleB', source = Glob('src/*.cpp'))
Module_N/SConscript
Import('env')
# Notice the CPPPATH's have already been set on the env created in the SConstruct
env.SharedLibrary(target = 'moduleN', source = Glob('src/*.cpp'))