I followed the instructions of this tutorial:
https://www.tensorflow.org/extend/adding_an_op#implement_the_gradient_in_python.
The tutorial provides this command: g++ -std=c++11 -shared zero_out.cc -o zero_out.so -fPIC -I$TF_INC -I$TF_INC/external/nsync/public -L$TF_LIB -ltensorflow_framework -O2
But the linker cannot find -ltensorflow_framework (there should be a libtensorflow_framework.so file!?)
After some research, I found the following links:
https://github.com/tensorflow/tensorflow/issues/1569
https://github.com/eaplatanios/tensorflow_scala/issues/26 --> I downloaded the .jar and linked it via -l/pathto/tensorflow_framework.so, but I still get the fatal error: tensorflow/core/framework/op_kernel.h: No such file or directory.
https://github.com/tensorflow/tensorflow/issues/1270 --> the last comment does not work, so it does not help me.
I tried searching recursively with sudo find /usr/. -name "tensorflow_framework.so" but could not find anything. TensorFlow is definitely installed via Anaconda, and I also cloned and compiled the repository from source.
How can I get -ltensorflow_framework to be found and linked?
One answer I have found:
I installed my Python via Anaconda2, and I had always tried to determine TF_INC and TF_LIB with an environment activated (source activate <env>). In that case I could not find any *.so files under ~/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow.
This time I left every Python environment with the shell command source deactivate and typed the following command:
python -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())'
Now I got a different path: ~/anaconda2/lib/python2.7/site-packages/tensorflow, which is where libtensorflow_framework.so is located.
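With that path in hand, the tutorial's compile command can be pointed at the right directories. A minimal sketch (assuming the same zero_out.cc example op from the tutorial):

# set the include and library paths that TensorFlow itself reports
TF_INC=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')
TF_LIB=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())')
# compile and link the custom op against libtensorflow_framework
g++ -std=c++11 -shared zero_out.cc -o zero_out.so -fPIC \
    -I$TF_INC -I$TF_INC/external/nsync/public \
    -L$TF_LIB -ltensorflow_framework -O2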
In my case, the file libtensorflow_framework.so.1 existed inside my TF_LIB directory instead of libtensorflow_framework.so. In order to solve this issue, I had to create a symbolic link as follows:
ln -s libtensorflow_framework.so.1 libtensorflow_framework.so
Source: Tensorflow NotFoundError: libtensorflow_framework.so: cannot open shared file or directory
libtensorflow_framework is not used before TensorFlow 1.4.1.
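For example, you can check which version you have installed with:

python -c 'import tensorflow as tf; print(tf.__version__)'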
When you call python from the shell, make sure you are calling the right one:
TF_LIB = $(shell python -c 'import tensorflow; print(tensorflow.sysconfig.get_lib())')
or
TF_LIB = $(shell python3 -c 'import tensorflow; print(tensorflow.sysconfig.get_lib())')
To be more clear:
Get the path from python -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())', and there is a libtensorflow_framework.so.1 inside the directory. Say /home/.../lib/python3.7/site-packages/tensorflow_core/libtensorflow_framework.so.1
Run ln -s /home/.../lib/python3.7/site-packages/tensorflow_core/libtensorflow_framework.so.1 /home/.../lib/python3.7/site-packages/tensorflow_core/libtensorflow_framework.so
I would like to compile and then use the scientific project BASS, which is distributed as C++ source code on GitHub. I've set up a conda environment bass to hold everything related to BASS, and I'd like to compile BASS into this environment (so that if I delete the environment, everything is cleanly removed).
I don't know if I should be using conda-build or make to do this. There is a Makefile distributed with the project, but I think it might have an error.
My latest attempt is the following (the source files are in code/ and GSL seems to be a dependency):
conda create -n bass
conda activate bass
conda install make
conda install gsl
cd code/
make
I get the following:
gcc -c main.cpp -I../Libs/GSL/include/ -I./
main.cpp:19:10: fatal error: 'gsl/gsl_statistics.h' file not found
#include <gsl/gsl_statistics.h>
^~~~~~~~~~~~~~~~~~~~~~
1 error generated.
make: *** [Makefile:11: main.o] Error 1
My questions:
should I be doing this with make?
do I need gcc/g++/cxx-compiler installed in my conda env?
is the Makefile correct where it says "-I../Libs/GSL/include/"?
Thank you!
Here's a very basic Conda recipe for the package. Make a folder (say recipe) and put the following two files in it:
meta.yaml
package:
  name: bass
  version: 0.1  # this is totally arbitrary (there are no versions)

source:
  git_url: https://github.com/judyboon/BASS.git

requirements:
  build:
    - {{ compiler('cxx') }}
  host:
    - gsl
  run:
    - gsl

about:
  home: https://github.com/judyboon/BASS
  license: GPL-3.0-only
  license_file: COPYING
  license_family: GPL
build.sh
#!/bin/bash
## change to source dir
cd ${SRC_DIR}/code
## compile
${CXX} -c main.cpp *.cpp -I./ ${CXXFLAGS}
## link
${CXX} *.o -o BASS -lgsl -lgslcblas -lm ${LDFLAGS}
## install
mkdir -p $PREFIX/bin
cp BASS ${PREFIX}/bin/BASS
You can then build this with conda build ., run from within that directory.
Installing
After a successful build, you can create an environment with this package installed using:
conda create -n foo --use-local bass
Since I'm on an Intel architecture, and this tool uses CBLAS, I would further specify to use MKL:
conda create -n foo --use-local bass blas=*=*mkl
Now if you activate the environment foo (that's an arbitrary name), the software will be on PATH, under the binary BASS.
Additional Notes
I verified this works on osx-64 platform with the conda-forge channel prioritized.
The Makefile accompanying the code isn't very generic. Here we just have it compile all .cpp files and subsequently link all .o files.
There aren't any releases on the repository, so the versioning in the meta.yaml is arbitrary.
The build.sh defines the final output binary as BASS - that could be changed, and differs from the original Makefile which outputs the binary as main.
Based on what you write, the correct way is to build a conda recipe. By writing a recipe, conda knows what to delete when you delete the environment.
The recipe is a text file (the meta.yaml file) where you write some metadata describing the package, define dependencies, and write a short build script that runs the makefile and installs the binaries to the correct location. Defining the compilation environment may not be trivial, and you should follow the documentation for the package you are trying to install. If there are problems with the package code (and that includes the makefile), that is something conda can't help you with.
See the documentation on how to write the recipe:
https://docs.conda.io/projects/conda-build/en/latest/concepts/recipe.html
When the recipe is complete, you run conda build and then conda install -c [path to the build] [the package name]. (I'm writing this because it took me ages to realize how to correctly install a local package.)
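As a minimal sketch (assuming the recipe files above live in a directory called recipe, and using the --use-local shortcut from the earlier answer instead of an explicit channel path):

# build the package from the recipe directory
conda build ./recipe
# install it from the local build cache into a fresh environment
conda create -n bass-env --use-local bass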
I am having some issues. I have read the following:
hello world python extension in c++ using boost?
I have tried installing Boost on my desktop and followed the posts' suggestions for linking. I have the following code:
#include <boost/python.hpp>
#include <Python.h>
using namespace boost::python;
Now I have tried linking with the following:
g++ testing.cpp -I /usr/include/python2.7/pyconfig.h -L /usr/include/python2.7/Python.h
-lpython2.7
And I have tried the following as well:
g++ testing.cpp -I /home/username/python/include/ -L /usr/include/python2.7/Python.h -lpython2.7
I keep getting the following error:
/usr/include/boost/python/detail/wrap_python.hpp:50:23: fatal error: pyconfig.h: No such
file or directory
# include <pyconfig.h>
I don't know where I am going wrong. I do have Boost.Python installed; is there just a problem with linking?
I just had the same error; the problem is that g++ can't find pyconfig.h (shocking, I know). For me this file is located at /usr/include/python2.7/pyconfig.h, so appending -I /usr/include/python2.7/ should fix it. Alternatively, you can add the directory to your include path with:
export CPLUS_INCLUDE_PATH="$CPLUS_INCLUDE_PATH:/usr/include/python2.7/"
You can also add this to your .bashrc and it will be set whenever you start your shell (you will have to reopen your terminal for the change to take effect).
You can find your own Python include path by using find /usr/include -name pyconfig.h; in my case this returns:
/usr/include/python2.7/pyconfig.h
/usr/include/i386-linux-gnu/python2.7/pyconfig.h
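Putting that together, here is a minimal sketch of a corrected compile line for the question's testing.cpp (assuming the goal is a shared extension module and that the system Boost.Python package is installed):

g++ testing.cpp -shared -fPIC -I /usr/include/python2.7/ -lboost_python -lpython2.7 -o testing.so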
There are two possible causes for this symptom: 1. you don't have python-dev installed; 2. you have python-dev installed but your include path is incorrectly configured, for which the answer above provides a solution. In my case, I was installing Boost, and it was looking for the pyconfig.h header file that was missing on my Ubuntu system.
The solution is
apt-get install python-dev
On other Linux flavors, you have to figure out how to install the Python headers.
If you have a .c file (hello.c) and you want to build a libhello.so library, try:
find /usr/include -name pyconfig.h
[out]:
/usr/include/python2.7/pyconfig.h
/usr/include/x86_64-linux-gnu/python2.7/pyconfig.h
then use the output and do:
gcc -shared -o libhello.so -fPIC hello.c -I /usr/include/python2.7/
If you're converting from Cython's .pyx to .so, try this Python function; it will automatically build the .so file given the .pyx file:
def pythonizing_cython(pyxfile):
    import os
    # Creates the setup_pyx.py file.
    setup_py = "\n".join(["from distutils.core import setup",
                          "from Cython.Build import cythonize",
                          "setup(ext_modules = cythonize('" + \
                          pyxfile + ".pyx'))"])
    with open('setup_pyx.py', 'w') as fout:
        fout.write(setup_py)
    # Compiles the .c file from the .pyx file.
    os.system('python setup_pyx.py build_ext --inplace')
    # Finds the directory containing pyconfig.h.
    pyconfig = os.popen('find /usr/include -name pyconfig.h'
                        ).readline().rpartition('/')[0]
    # Builds the .so file.
    cmd = " ".join(["gcc -shared -o", pyxfile + ".so",
                    "-fPIC", pyxfile + ".c",
                    "-I", pyconfig])
    os.system(cmd)
    # Removes the temporary .c and setup_pyx.py files.
    os.remove('setup_pyx.py')
    os.remove(pyxfile + '.c')
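For example, if you save this function in a file called builder.py (a hypothetical name) next to a hello.pyx, you can drive it from the shell:

# hypothetical module and .pyx names; adjust to your own files
python -c "from builder import pythonizing_cython; pythonizing_cython('hello')"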
I had a similar experience when building Boost for CentOS 7. I was not able to find pyconfig.h on my system, only pyconfig-64.h.
After searching around, I found that you need to install python-devel to get pyconfig.h.
For CentOS do this: yum install python-devel. Then try again.
In my case, I had to create a soft link in the directory /usr/include/:
ln -s python3.5m python3.5
The problem was that I was using Python 3.5, but only the python3.5m directory existed, so the compiler wasn't able to find the pyconfig.h file.
In case you have multiple Python installations, the sysconfig module can report the location of pyconfig.h for a given installation.
$ /path/to/python3 -c 'import sysconfig; print(sysconfig.get_config_h_filename())'
/path/to/pyconfig.h
Sorry for such a general title, but I'm not quite sure exactly what I'm missing or what I'm doing wrong. My aim is to build a Python extension using Boost.Python under Cygwin while avoiding the Boost.Build tools, that is, using make instead of bjam. The latter way was working quite well for me, but now I want to do it this way. I solved many issues by googling and looking for similar topics, and that helped me figure out some tricks and move forward. Yet at the last step there seems to be some problem. I'll try to describe all my steps in some detail, in the hope that this post may be useful to others in the future and also to better describe the setup.
Because I was not quite sure about the original installations of both Python and Boost (from various Cygwin repositories), I decided to install them from scratch in my home directory, so here is what I do:
First install Python. I'll skip the details for this; it is more or less straightforward. What matters for the later description is just the path:
/home/Alexey_2/Soft/python2.6 - this is PYTHONPATH, also included in PATH
Working on Boost:
a) unpack boost source into
/home/Alexey_2/Soft/boost_1_50_0 - this is BOOST_ROOT
b) Making bjam. First go into the directory:
/home/Alexey_2/Soft/boost_1_50_0/tools/build/v2
Next, invoke bootstrap.sh; this eventually creates the b2 and bjam executables in this directory. In .bash_profile, add this directory to PATH so we can call bjam. Here and after each future edit of .bash_profile I restart Cygwin so the changes take effect.
c) Still in the
/home/Alexey_2/Soft/boost_1_50_0/tools/build/v2
directory, edit user-config.jam to let bjam know which Python to use. In my case I only add one line:
using python : 2.6 : /home/Alexey_2/Soft/python2.6/bin/python2.6 : /home/Alexey_2/Soft/python2.6/include/python2.6 : /home/Alexey_2/Soft/python2.6/bin ;
In the lib path (the last entry) I put /home/Alexey_2/Soft/python2.6/bin because it contains libpython2.6.dll.
d) OK, now we can make the Boost.Python libraries. Go to the BOOST_ROOT directory and execute the command:
bjam --with-python toolset=gcc link=shared
This creates the necessary libraries (cygboost_python.dll and libboost_python.dll.a) and places them in
/home/Alexey_2/Soft/boost_1_50_0/stage/lib
Building the Python extension.
Here is my simple test program (actually part of the example code):
// file xyz.cpp
#include <boost/python.hpp>
#include <iostream>
#include <iomanip>
using namespace std;
using namespace boost::python;
class Hello
{
    std::string _msg;
public:
    Hello(std::string msg){ _msg = msg; }
    void set(std::string msg) { this->_msg = msg; }
    std::string greet() { return _msg; }
};

BOOST_PYTHON_MODULE(xyz)
{
    class_<Hello>("Hello", init<std::string>())
        .def("greet", &Hello::greet)
        .def("set", &Hello::set)
    ;
}
And here is the Makefile:
FLAGS= -fno-for-scope -O2 -fPIC
CPP=c++
# BOOST v 1.50.0
p1=/home/Alexey_2/Soft/boost_1_50_0
pl1=/home/Alexey_2/Soft/boost_1_50_0/stage/lib
# PYTHON v 2.6
p2=/home/Alexey_2/Soft/python2.6/include/python2.6
pl2=/home/Alexey_2/Soft/python2.6/bin
I=-I${p1} -I${p2}
L=-L${pl1} -lboost_python -L${pl2} -lpython2.6
all: xyz.so

xyz.o: xyz.cpp
	${CPP} ${FLAGS} ${I} -c xyz.cpp

xyz.so: xyz.o
	${CPP} ${FLAGS} -shared -o xyz.so xyz.o ${L}

clean:
	rm *.o
	rm xyz.so
Some comments:
Library paths are set and I compile against the proper libraries (see more: compile some code with boost.python by mingw in win7-64bit).
The link above explains why it is important to configure user-config.jam - I did it in step 1c.
To avoid possible problems with statically linked Boost.Python libraries (as mentioned in the above link and also in Cannot link boost.python with mingw, although that is for MinGW), I use
link=shared
as the argument to bjam (see 1d)
As explained in MinGW + Boost: undefined reference to `WSAStartup@8', the libraries one wants to link against should be listed after the object files; that is why we have:
${CPP} ${FLAGS} -shared -o xyz.so xyz.o ${L}
and not
${CPP} ${FLAGS} -shared ${L} -o xyz.so xyz.o
And here is a piece of my .bash_profile (eventually), where I define the environment variables:
# Python
export PATH=/home/Alexey_2/Soft/python2.6/bin:$PATH
export PYTHONPATH=/home/Alexey_2/Soft/python2.6
export LD_LIBRARY_PATH=/home/Alexey_2/Soft/python2.6/lib:/home/Alexey_2/Soft/python2.6/bin:$LD_LIBRARY_PATH
# Boost
export BOOST_ROOT=/home/Alexey_2/Soft/boost_1_50_0
export LD_LIBRARY_PATH=/home/Alexey_2/Soft/boost_1_50_0/stage/lib:$LD_LIBRARY_PATH
export PATH=/home/Alexey_2/Soft/boost_1_50_0/stage/lib:$PATH
# bjam
export PATH=/home/Alexey_2/Soft/boost_1_50_0/tools/build/v2:$PATH
Finally, to the problem. With the above setup I was able to successfully build the Python extension shared object file:
xyz.so
However, when i test it by simple script:
# this is a test.py script
import xyz
the ImportError comes:
$ python test.py
Traceback (most recent call last):
File "test.py", line 1, in <module>
import xyz
ImportError: No module named xyz
It has been noted that the reason for such a problem may be the wrong python executable being used, but this is not the case:
$ which python
/home/Alexey_2/Soft/python2.6/bin/python
which is as expected (note that python is a symbolic link to python2.6 in that directory).
Here is one more useful piece of information I have:
$ ldd xyz.so
ntdll.dll => /cygdrive/c/Windows/SysWOW64/ntdll.dll (0x76fa0000)
kernel32.dll => /cygdrive/c/Windows/syswow64/kernel32.dll (0x76430000)
KERNELBASE.dll => /cygdrive/c/Windows/syswow64/KERNELBASE.dll (0x748e0000)
cygboost_python.dll => /home/Alexey_2/Soft/boost_1_50_0/stage/lib/cygboost_python.dll (0x70cc0000)
cygwin1.dll => /usr/bin/cygwin1.dll (0x61000000)
cyggcc_s-1.dll => /usr/bin/cyggcc_s-1.dll (0x6ff90000)
cygstdc++-6.dll => /usr/bin/cygstdc++-6.dll (0x6fa90000)
libpython2.6.dll => /home/Alexey_2/Soft/python2.6/bin/libpython2.6.dll (0x67ec0000)
??? => ??? (0x410000)
I'm wondering what
??? => ??? (0x410000)
could possibly mean. Maybe this is what I'm missing, but what is it?
Any comments and suggestions (not only about the last question) are very much appreciated.
EDIT:
Following the suggestion of twsansbury, I examined the Python module search path with the -vv option:
python -vv test.py
gives
# trying /home/Alexey_2/Programming/test/xyz.dll
# trying /home/Alexey_2/Programming/test/xyzmodule.dll
# trying /home/Alexey_2/Programming/test/xyz.py
# trying /home/Alexey_2/Programming/test/xyz.pyc
...
# trying /home/Alexey_2/Soft/python2.6/lib/python2.6/site-packages/xyz.dll
# trying /home/Alexey_2/Soft/python2.6/lib/python2.6/site-packages/xyzmodule.dll
# trying /home/Alexey_2/Soft/python2.6/lib/python2.6/site-packages/xyz.py
# trying /home/Alexey_2/Soft/python2.6/lib/python2.6/site-packages/xyz.pyc
Traceback (most recent call last):
File "test.py", line 1, in <module>
import xyz
ImportError: No module named xyz
The first directory is the one from which I call python to run the script. The main conclusion is that Cygwin Python looks for modules (libraries) with the standard Windows extension .dll (among three other types), not .so as I originally expected from the Linux-emulation style of Cygwin. So I changed the following lines in the previous Makefile to:
all: xyz.dll

xyz.o: xyz.cpp
	${CPP} ${FLAGS} ${I} -c xyz.cpp

xyz.dll: xyz.o
	${CPP} ${FLAGS} -shared -o xyz.dll xyz.o ${L}

clean:
	rm *.o
	rm xyz.dll
This produces xyz.dll, which can successfully be loaded:
python -vv test.py
now gives:
Python 2.6.8 (unknown, Mar 21 2013, 17:13:04)
[GCC 4.5.3] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
# trying /home/Alexey_2/Programming/test/xyz.dll
dlopen("/home/Alexey_2/Programming/test/xyz.dll", 2);
import xyz # dynamically loaded from /home/Alexey_2/Programming/test/xyz.dll
That ImportError is generally not Boost.Python related. Rather, it normally indicates that xyz is not on the Python module search path.
To debug this, consider running python with the -vv argument. This will cause Python to print a message for each file that is checked when trying to import xyz. Regardless, the build process looks correct, so the problem is likely the result of either the file extension or the module not being on the search path.
I am unsure as to how Cygwin will interact with Python's run-time loading behavior. However:
On Windows, python extensions have a .pyd extension.
On Linux, python extensions have a .so extension.
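One way to check which extension suffixes a given interpreter will actually try to load is to ask it directly (imp.get_suffixes() on Python 2, importlib.machinery.EXTENSION_SUFFIXES on Python 3):

# Python 2 (the interpreter used in the question)
python -c 'import imp; print(imp.get_suffixes())'
# Python 3
python3 -c 'import importlib.machinery; print(importlib.machinery.EXTENSION_SUFFIXES)'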
Additionally, verify that the xyz library is located in one of the following:
The directory containing the test.py script (or the current directory).
A directory listed in the PYTHONPATH environment variable.
The installation-dependent default directory.
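The combined search path can be printed directly as a quick check:

python -c 'import sys; print(sys.path)'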
If the unresolved library shown in ldd causes errors, it will generally manifest as an ImportError with a message indicating undefined references.
I would like to call C++ functions in python which return uBLAS vector/matrices.
There is a package to do this called PyUblas,
but am having trouble getting this to work in Ubuntu.
Can anyone walk me through the steps to get this sample to work?
Also, I am somewhat confused with the installation instructions. I did not follow the instructions to install boost and numpy since I have already installed them from the Ubuntu repositories.
I guess that wasn't so hard. Here's what I did to run the small sample on the website and in test/sample.py.
After downloading and unpacking PyUblas, and having the necessary libraries installed, cd into PyUblas-VERSION:
./configure.py --help
./configure.py --some-options
sudo python setup.py install
cd test/
g++ -I/usr/include/python2.7 -fPIC -g -fpic -shared sample_ext.cpp -lboost_python -lpython2.7 -o sample_ext.so
python sample.py
It's going to be difficult to help without knowing what exactly your issues are. My first guess if the tests aren't working would be that your config file is not pointing to the right directories/files.
How does boost.python deal with Python 3? Is it Python 2 only?
Newer versions of Boost should work fine with Python 3.x. This support was added quite some time ago, I believe after a successful Google Summer of Code project back in 2009.
The way to use Python V3 with Boost is to properly configure the build system by adding for instance:
using python : 3.1 : /your_python31_root ;
to your user-config.jam file.
libboost_python needs to be built against Python 3 in order to do this. This doesn't work with Boost 1.58 (which comes with Ubuntu 16.04), so make sure you download the latest Boost distribution. I just did this with boost_1_64_0.
As mentioned above, find the file "user-config.jam" in your Boost source distribution and copy it to $HOME.
cp /path/to/boost_1_64_0/tools/build/example/user-config.jam $HOME
Then edit the python line (the last line) so that it says:
using python : 3.5 : /usr/bin/python3 : /usr/include/python3.5m : /usr/lib ;
This is correct for Ubuntu 16.04. You can use pkg-config to find the correct include directory.
user#computer > pkg-config --cflags python3
-I/usr/include/python3.5m -I/usr/include/x86_64-linux-gnu/python3.5m
And you only need the first include directory.
Then build Boost from scratch. (Sorry.) I install it to /usr/local:
cd /path/to/boost_1_64_0
./bootstrap.sh --prefix=/usr/local
./b2
sudo ./b2 install
Now jump into the python example directory, and build the tutorial
cd /path/to/boost_1_64_0/libs/python/example/tutorial
bjam
This will not build correctly if you have a system install of Boost, because under the hood bjam links libboost_python using the g++ parameter "-lboost_python". But on Ubuntu 16.04 this will just go and find "/usr/lib/x86_64-linux-gnu/libboost_python-py27.so.1.58.0", and then the Python bindings will fail to load. In fact, you'll get this error:
ImportError: /usr/lib/x86_64-linux-gnu/libboost_python-py27.so.1.58.0: undefined symbol: PyClass_Type
If you want to see the g++ commands that bjam is using, do this:
user#computer > bjam -d2 -a | grep g++
g++ -ftemplate-depth-128 -O0 -fno-inline -Wall -g -fPIC -I/usr/include/python3.5m -c -o "hello.o" "hello.cpp"
g++ -o hello_ext.so -Wl,-h -Wl,hello_ext.so -shared -Wl,--start-group hello.o -Wl,-Bstatic -Wl,-Bdynamic -lboost_python -ldl -lpthread -lutil -Wl,--end-group
Here we see the problem: you need "-L/usr/local/lib" just before "-lboost_python". So execute this to link the shared library correctly:
g++ -o hello_ext.so -Wl,-h -Wl,hello_ext.so -shared -Wl,--start-group hello.o -Wl,-Bstatic -Wl,-Bdynamic -L/usr/local/lib -lboost_python -ldl -lpthread -lutil -Wl,--end-group
You may need to rerun ldconfig (or reboot):
sudo ldconfig
And you are finally ready to go:
user#computer > python3
Python 3.5.2 (default, Nov 17 2016, 17:05:23)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import hello_ext
>>> hello_ext.greet()
'hello, world'
>>> exit()
Yes, this question is super old, but I had to do something that wasn't specified in any of the answers here (though it was built from some of the suggestions), so I'll quickly jot down my entire process:
Download boost_X_Y_Z.tar.bz2 (I used boost 1.68.0)
tar --bzip2 -xf boost_1_68_0.tar.bz2 (wherever you want the folder to be temporarily)
cd boost_1_68_0
./bootstrap.sh --with-python-version=3.6 --prefix=/usr/local
./b2
sudo ./bjam install
cp tools/build/example/user-config.jam $HOME, then modify the contents of this file to say using python : 3.6 : /usr/bin/python3 : /usr/include/python3.6m : /usr/lib ; (or whatever folders are appropriate for your environment)
Given this C++ source file BoostPythonHelloWorld.cpp:
#include <boost/python.hpp>
char const* say_hi()
{
    return "Hi!";
}

BOOST_PYTHON_MODULE(BoostPythonHelloWorld)
{
    boost::python::def("say_hi", say_hi);
}
And this Python script BoostPythonHelloWorld.py:
import BoostPythonHelloWorld
print(BoostPythonHelloWorld.say_hi())
It can be compiled and run as follows:
gcc -c -fPIC -I/path/to/boost_1_68_0 -I/usr/include/python3.6 /other_path/to/BoostPythonHelloWorld.cpp
gcc -shared -Wall -Werror -Wl,--export-dynamic BoostPythonHelloWorld.o -L/path/to/boost_1_68_0/stage/lib -lboost_python36 -o BoostPythonHelloWorld.so
python3 BoostPythonHelloWorld.py
The part that was different for me was -Wl,--export-dynamic before BoostPythonHelloWorld.o; I had not seen that anywhere else, and I was getting a Python error about an undefined symbol until I added it.
Hope this helps someone down the line!
If you get "error: No best alternative for /python_for_extension" be sure to have
using python : 3.4 : C:\\Python34 : C:\\Python34\\include : C:\\Python34\\libs ;
only in user-config.jam in your home path and nowhere else.
Use double backslashes when compiling under windows with mingw (toolset=gcc) or MSVC (toolset=msvc).
Compile with cmd, not msys, and if you also have python 2.7 installed remove that from PATH in that shell.
First do
bootstrap.bat gcc/msvc
assuming you have the gcc/msvc tools available via PATH (the / separates the alternatives; use only one, or leave it out)
Afterward you can also do
bootstrap.sh --with-bjam=b2
in msys to generate a project-config.jam, but you need to edit it to remove the "using python" and "/usr" entries.
Then the following
b2 variant=debug/shared link=static/shared toolset=gcc/msvc > b2.log
With static the python quickstart examples did not work for me, although I would prefer to do without the boost_python dll.
I did not try on linux, but it should be more straightforward there.
You can even specify the python distribution via
./bootstrap.sh --with-python=<path to your python binary>
e.g.
./bootstrap.sh --with-python=python3
for your system's python3 or
./bootstrap.sh --with-python=$VIRTUAL_ENV/bin/python
for the python version of your currently active virtual env python.
Refer to this to learn how to build Boost with Python. It shows how to build with Python 2 and Visual Studio 10.0 (2010), but I went through the same procedure for a project I am currently working on, and it works fine with Python 3.5 and Visual Studio 14.1 (2017).
If you get this error when building your Boost.Python project, just add BOOST_ALL_NO_LIB to the Preprocessor Definitions (under C/C++ > Preprocessor) in your project properties.
And also, do not forget to add the location of the Boost .dll files to your system PATH.
When the path to Python contains spaces, you will be in for quite a ride. After a whole lot of trial and error, I finally managed to get something that works. Behold my user-config.jam (which has to be in my home directory for bjam to find it):
import toolset : using ;
using python : 3.6
: \"C:\\Program\ Files\ (x86)\\Microsoft\ Visual\ Studio\\Shared\\Python36_64\\python.exe\"
: C:\\Program\ Files\ (x86)\\Microsoft\ Visual\ Studio\\Shared\\Python36_64\\include
: C:\\Program\ Files\ (x86)\\Microsoft\ Visual\ Studio\\Shared\\Python36_64\\libs
;
The inconsistent quoting is intended and seems to be required. With this I can build Boost.Python and use it as Boost::python36 in my CMakeLists.txt. Still, one issue remains: I have to link to Python manually, viz.:
target_link_libraries(MyTarget
Boost::boost Boost::python36
"C:/Program Files (x86)/Microsoft Visual Studio/Shared/Python36_64/libs/python36.lib")
target_include_directories(MyTarget PRIVATE
"C:/Program Files (x86)/Microsoft Visual Studio/Shared/Python36_64/include")
In my case adding "Using Python : 3 etc." into user-config.jam in my home directory didn't work. I had to add the line into project-config.jam instead, which resides in the root directory of unpacked boost.
Specifically the line was:
using python : 3.9 : /usr/bin/python3 : /usr/include/python3.9 : /usr/lib ;
and the version of Boost was 1_78_0.
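After editing project-config.jam you have to rebuild the Boost.Python libraries so the new Python configuration is picked up; a minimal sketch from the Boost source root (the same b2 steps as in the earlier answers, install prefix assumed to be /usr/local):

# rebuild only the Python libraries and reinstall them
./b2 --with-python
sudo ./b2 install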