I am trying to compile a project using boost.python as documented on this page. My platform is MacOS X, i386 architecture. I am using the current version of boost, v1.55. The example provided in ${BOOST}/libs/python/example/tutorial/ compiles and works properly. However, when setting up my own project in a different directory outside of the boost root directory, I run into the following problem: when I type ${BOOST}/bjam toolset=darwin architecture=x86 address-model=32 I get the following error message:
Unable to load Boost.Build: could not find "boost-build.jam"
---------------------------------------------------------------
Attempted search from ${CURRENT_PATH} up to the root at ${SOME_OTHER_PATH} and in these directories from BOOST_BUILD_PATH and BOOST_ROOT: /usr/share/boost-build.
Please consult the documentation at 'http://www.boost.org'.
make: *** [all] Error 1
The bjam tool's documentation is not referenced anywhere, bjam --help only returns an error message, and Googling only finds this page (which doesn't address the problem at all) and this page (which seems to be outdated, as indicated by the link at the top of the page).
Question: How do I specify the path of the boost-build.jam file? Or, alternatively, is there any other way to use boost.python with standard tools?
Update 3: The option -d4 makes bjam print verbose debugging output. If the name of the compiler is known, the output can be grepped for the actual compiler invocations, which can then be used to construct a "regular" Makefile. See e.g. this post for an example of how to do so (although it assumes that the compiler and linker commands are already known).
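For illustration, a rough sketch of that approach (assuming the underlying compiler is g++, and using bjam's -n flag to print commands without actually running them):

${BOOST}/bjam toolset=darwin architecture=x86 address-model=32 -d4 -n > bjam.log
grep 'g++' bjam.log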
Related
I have recently installed CMake in order to write code that uses Libbitcoin in C++, but I am having a hard time. I was trying to build the example code on GitHub here, and it has been going terribly. I can't manage to link the library correctly in CMake; here is my code. I read that people were saying I should try Autoconf instead, but I have no idea how to even start with that, as I know nothing about Autoconf. I have CMake 3.16 and installed Libbitcoin with brew, but aliases were made in /usr/local/include for the library. I am on Mac OS X 10.15. The CMake step runs fine, but when running "make", it responds with:
Scanning dependencies of target CreateAddr
main.cxx:1:10: fatal error: bitcoin/bitcoin.hpp: No such file or directory
1 | #include <bitcoin/bitcoin.hpp>
| ^~~~~~~~~~~~~~~~~~~~~
Here is my CMake text:
Please, any help is appreciated; I am beyond lost.
It is hard to be sure without knowing the specifics of your installation, but it appears that your include_directories() path overlaps with the relative path specified for the header in main.cxx. The include_directories() call tells the compiler to look for headers in this directory:
/usr/local/include/bitcoin
Then, in main.cxx, you include the file as bitcoin/bitcoin.hpp. Combining the two means the compiler looks for the header here:
/usr/local/include/bitcoin/bitcoin/bitcoin.hpp
The error states the header could not be found, so the file is most likely actually located here:
/usr/local/include/bitcoin/bitcoin.hpp
In that case, just remove the relative directory path from the main.cxx file, like this:
#include <bitcoin.hpp>
Also, you'll want to link to your libbitcoin library correctly. Using link_directories() is not recommended. Instead, you can specify the full path to your libbitcoin library directly in the call to target_link_libraries(); note that the library itself is probably not located in /usr/local/include/bitcoin. With these changes, the last few lines of your CMake file would look something like this:
include_directories(/usr/local/include/bitcoin)
add_executable(CreateAddr main.cxx)
target_link_libraries(CreateAddr PUBLIC /your/path/to/libs/libbitcoin.so)
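Putting it all together, here is a minimal CMakeLists.txt sketch; the find_library() call and the /usr/local/lib search path are assumptions about where brew put the library, so adjust them to your setup:

cmake_minimum_required(VERSION 3.16)
project(CreateAddr CXX)

include_directories(/usr/local/include/bitcoin)

# Resolve libbitcoin to a full path instead of using link_directories().
find_library(BITCOIN_LIB bitcoin PATHS /usr/local/lib)

add_executable(CreateAddr main.cxx)
target_link_libraries(CreateAddr PUBLIC ${BITCOIN_LIB})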
As part of a research project I'm trying to use clang 6.0.1 with Xcode 9.4.1. I've built and installed clang in a custom location (/opt/llvm-6_0_1/clang). I wrote a simple xcplugin compiler specification to integrate my clang version with Xcode.
Now I can open projects in Xcode, select my proxy compiler and use it to build instead of Apple's default clang.
There were some minor additions that I had to make to the xcplugin's xcspec file to get this to work that probably won't be interesting to most people, so I won't provide the details here unless asked.
This all works with most of the projects I've played with, but I'm running into an odd problem where an implicitly linked static library cannot be found by my copy of clang. Specifically I get this error:
ld: file not found: /opt/llvm-6_0_1/clang/Toolchains/LLVM6.0.1.xctoolchain/usr/lib/arc/libarclite_macosx.a
clang-6.0: error: linker command failed with exit code 1 (use -v to see invocation)
Note that the libarclite_macosx.a file is not explicitly included by the Xcode project. I figured it must be implicitly included, perhaps because this project enables ARC?
After poring over the Xcode-generated link command line (it's complex), I decided to look at the MyProject__dependency_info.dat file, which is passed in via the -dependency_info option. Apparently this data file (its path is defined by the env var LD_DEPENDENCY_INFO_FILE) is created during the linking process rather than being an input to the linker. Perhaps it exists because of the symlink hack I used to get a link to work (described at the end).
In any case the format appears to be binary, but I was able to see a text reference to libarclite_macosx.a in the file:
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/arc/libarclite_macosx.a
After enabling the -Xlinker -v option I could see that my built clang was not searching the default toolchain lib or arc paths so I added them:
-L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib
-L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/arc
Now I can see the search paths in the verbose output, but clang still cannot find the library:
Library search paths:
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/arc
I've tried adding the paths to the frameworks search paths. I also tried defining the various link path env vars. Nothing has worked.
To try to get a sense of what clang is actually doing, I used fs_usage while getting the link error:
sudo fs_usage -e -w -f filesys | grep "lib/arc"
14:11:00.461965 stat64 [ 2] /opt/llvm-6_0_1/clang/Toolchains/LLVM6.0.1.xctoolchain/usr/lib/arc>>>>>>>>>>>>>>>>>>>>>> 0.000006 ld.1421614
14:11:00.461968 stat64 [ 2] /opt/llvm-6_0_1/clang/Toolchains/LLVM6.0.1.xctoolchain/usr/lib/arc>>>>>>>>>>>>>>>>>>>>>> 0.000002 ld.1421614
Clearly clang really wants to look for this file in the installed location, not the location indicated in the -dependency_info, nor in the search paths that I'm providing.
At this stage the only way I can get a build to work is to add a symlink to Xcode's "arc" directory to my installed clang lib directory. That "works", but is fragile and nasty.
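For reference, the symlink workaround is essentially this (a sketch only; the toolchain paths are the ones from the error above and will differ on other machines):

# Point the custom toolchain's missing arc directory at Xcode's bundled copy.
sudo ln -s /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/arc /opt/llvm-6_0_1/clang/Toolchains/LLVM6.0.1.xctoolchain/usr/lib/arc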
Any thoughts as to how I can get clang to find the static library where it actually lives?
I've been trying to install the nana library for c++. I've used these guides:
https://github.com/qPCR4vir/nana-docs/wiki/Installation
https://github.com/qPCR4vir/nana-docs/wiki/Install-and-use-nana-with-mingw---step-by-step
I got stuck on the part that says "Create a static linkage library solution within a IDE/build system you use, and add all the files which are placed in NanaPath/source and in all its sub directories to the project. Then compile the solution and you will get a static linkage file NanaStatic in a path similar to NanaPath/build/bin/IDEName."
I downloaded MinGW, git, and cmake like it said. I opened up the bat file, ran the "git clone" with the link, ran
cmake -G "MinGW Makefiles"
It did its thing and finished successfully. Then I tried running "make" and it got to 6% when this showed up:
In file included from C:/Users/.../nana/verbose_prepocessor.hpp:99:0,
from C:\Users\...\nana\source\deploy.cpp:242:
C:/Users/.../nana/include/filesystem/filesystem.hpp:71:39: fatal error: experimental/filesystem: No such file or directory
# include<experimental/filesystem>
^
compilation terminated.
make[2]: *** [CMakeFiles\nana.dir\build.make:163: CMakeFiles/nana.dir/source/deploy.cpp.obj] Error 1
make[1]: *** [CMakeFiles\Makefile2:67: CMakeFiles/nana.dir/all] Error 2
make: *** [Makefile:129: all] Error 2
I tried using a different source of the code (git and sourceforge) and that didn't make a difference. I tried using the GUI cmake, but I had other errors with that not recognizing MinGW. I looked around for answers online, but they mostly led back to the guides I was using. I checked my GCC and G++ version with gcc/g++ --version, and they're both 6.3.0.
I'll take any suggestions/advice, thanks!
I have not used Eclipse, so I can't help with that. But I will try to help with nana:
Originally there was no std::filesystem, and nana offered its own implementation, invented by JinHao. With the appearance of the std::experimental::filesystem candidate (an experimental filesystem in the std:: C++ library of some versions of some compilers), we adapted the nana filesystem to be a partial implementation of that. nana then tries to configure itself to use the provided std:: (or Boost) implementation, or, if neither is available, nana::filesystem. It seems like MinGW has problems with filesystem; I'm not sure about that, but you can read about it here: https://github.com/Alexpux/MINGW-packages/issues/2292
Please try to understand what is going on in your case and let us know. We will then try to fix the configuration of nana to work even in that situation.
You can always simply choose to (force) use the nana implementation. Just please compile both the nana library and your project with all the same options, including which filesystem you use. For example, add -DNANA_CMAKE_NANA_FILESYSTEM_FORCE=True to your cmake invocation, or define NANA_FILESYSTEM_FORCE in your build system (or IDE).
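A minimal sketch of the cmake route (assuming you run it from the nana source directory with the MinGW generator, as in the question):

cmake -G "MinGW Makefiles" -DNANA_CMAKE_NANA_FILESYSTEM_FORCE=True .
make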
I am trying to compile tensorflow with a custom clang/llvm toolchain and using clang's native libc++ (instead of borrowing Gcc's stdlibc++).
It looks like bazel plain assumes that every clang will use Gcc's libraries because I get these errors:
$ bazel build --cxxopt=-std=c++11 --cxxopt=-stdlib=libc++ tensorflow:libtensorflow.so
INFO: Found 1 target...
INFO: From Compiling
external/protobuf/src/google/protobuf/compiler/js/embed.cc [for host]:
external/protobuf/src/google/protobuf/compiler/js/embed.cc:37:12:
warning: unused variable 'output_file' [-Wunused-const-variable]
const char output_file[] = "well_known_types_embed.cc";
^
1 warning generated.
ERROR: /home/hbucher/.cache/bazel/_bazel_hbucher/ad427c7fddd5b68de5e1cfaa7cd8c8cc/external/com_googlesource_code_re2/BUILD:11:1: undeclared inclusion(s) in rule '#com_googlesource_code_re2//:re2':
this rule is missing dependency declarations for the following files included by 'external/com_googlesource_code_re2/re2/bitstate.cc':
'/home/hbucher/install/include/c++/v1/stddef.h'
'/home/hbucher/install/include/c++/v1/__config'
I tried to hack into tools/cpp/CROSSTOOL inside bazel as some posts suggested to add the line
cxx_builtin_include_directory: "/home/hbucher/install/include/c++/v1"
but to no avail, it does not seem to make any difference.
Then I tried to follow a bazel tutorial to create a custom toolchain. The text does not help much, because it actually describes writing a crosstool, while what I am trying to do is tweak the existing host rules; somehow bazel seems to undo every attempt I make to tweak its parameters.
I have got to the point that is currently in my github repository https://github.com/HFTrader/BazelCustomToolchain
However it does not compile and I cannot even figure out how to start debugging this message.
$ bazel build --crosstool_top=#hbclang//:toolchain tensorflow:libtensorflow.so
.....................
ERROR: The crosstool_top you specified was resolved to
'#hbclang//:toolchain', which does not contain a CROSSTOOL file. You can
use a crosstool from the depot by specifying its label.
INFO: Elapsed time: 2.216s
I have appended these lines to my tensorflow/WORKSPACE
new_local_repository(
    name = "hbclang",
    path = "/home/hbucher/BazelCustomToolchain",
    build_file = "/home/hbucher/BazelCustomToolchain/BUILD",
)
I have asked this question on bazel's google groups but they redirected me to stackoverflow. At this point I am about to give up.
Has anyone attempted to do this, or am I breaking new ground here?
Thank you.
Solved. Not in the intended way but it works for me.
export INSTALL_DIR="$HOME/install"
export CC=$INSTALL_DIR/bin/clang
export CXX=$INSTALL_DIR/bin/clang++
export CXXFLAGS="-stdlib=libc++ -L$INSTALL_DIR/lib"
export LDFLAGS="-L$INSTALL_DIR/lib -lm -lrt"
export LD_LIBRARY_PATH="/usr/lib:/lib/x86_64-linux-gnu/:$INSTALL_DIR/lib"
git clone https://github.com/tensorflow/tensorflow.git tensorflow-github
cd tensorflow-github
mkdir build-tmp && cd build-tmp
cmake ../tensorflow/contrib/cmake/
make -j4
Easy as 1-2-3 with cmake
[2020-05-24: Edit to make the answer up to date.]
TLDR: To build a project with Bazel with a specific Clang binary, and with libc++, this works for me (where INSTALL_DIR is where I've installed llvm):
CC="$INSTALL_DIR/bin/clang" \
BAZEL_CXXOPTS="-stdlib=libc++:-isystem$INSTALL_DIR/include" \
BAZEL_LINKOPTS="-stdlib=libc++" \
BAZEL_LINKLIBS="-L$INSTALL_DIR/lib:-Wl,-rpath,$INSTALL_DIR/lib:-lc++:-lm" \
bazel test //...
Background:
You can use the --repo_env option, e.g. --repo_env=CC=clang, to put these defaults into your project- or system-wide .bazelrc.
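A hypothetical sketch of such a .bazelrc, carrying the same defaults as above via --repo_env (the /path/to/llvm-install prefix is a placeholder for wherever you installed llvm):

build --repo_env=CC=/path/to/llvm-install/bin/clang
build --repo_env=BAZEL_CXXOPTS=-stdlib=libc++:-isystem/path/to/llvm-install/include
build --repo_env=BAZEL_LINKOPTS=-stdlib=libc++
build --repo_env=BAZEL_LINKLIBS=-L/path/to/llvm-install/lib:-Wl,-rpath,/path/to/llvm-install/lib:-lc++:-lm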
This approach uses Bazel's C++ toolchain autoconfiguration which doesn't attempt to declare all the toolchain inputs in BUILD files. This is to simplify the configuration for the user. Therefore whenever you modify the C++ toolchain in a way that Bazel cannot know about (rebuild llvm etc.), you have to run bazel clean --expunge to flush the cache and rerun the autoconfiguration the next time.
The robust solution to specifying C++ toolchain in Bazel is to use the CcToolchainConfigInfo. See the documentation at https://docs.bazel.build/versions/master/tutorial/cc-toolchain-config.html and https://docs.bazel.build/versions/master/cc-toolchain-config-reference.html.
I've been trying for a few days now to build a DLL from C++ code with boost/python, to be used by Python. I am a student from Germany and have mostly worked with Java until now (I wrote some basic OpenGL and gimp filter stuff in C++ before). So pardon me in advance for bad English or C++ beginner mistakes. I mean, programming with Java really is a lot more comfortable in comparison to C++. But enough of the skirmish.
The error:
LINK : fatal error LNK1104: File "boost_python-vc110-mt-gd-1_53.lib" could not be opened
My presets:
-using MS Visual Studio 2012 (11.0)
-using boost_1_53_0
-using python2.7 (I heard 3.3 may cause some problems)
What I did:
Installed Python and added it to PATH. Then I created a new empty project in VS and a class file "Test.cpp" with the following content, as described on the boost tutorial page:
char const* greet()
{
return "hello world";
}
#include <boost/python.hpp>
BOOST_PYTHON_MODULE(Test)
{
using namespace boost::python;
def("greet", greet);
}
Then came the new part for me, in VS Project Properties:
Configuration Properties > General > Configuration Type > Dynamic Library (.dll)
C/C++ > General > Additional Include Directories > C:[..]\boost_1_53_0
Linker > General > Additional Library Directories > C:[..]\boost_1_53_0\stage\lib
From the error I am assuming I did something wrong with the linker or include settings. I also changed Linker > General > Additional Library Directories to boost_1_53_0\libs because I wasn't sure, but the same error occurred. And yes, I correctly included Python. I am also not sure if I have to put something else besides Python into Linker > Input for boost.
Then I built boost with bjam with no options except msvc-11.0, to be sure to have everything I need (though I read that boost/python doesn't need an extra build), and still got the same error. Can someone help me? I would love a step-by-step description of what to do. I am really despairing of this.
Btw.: a few days earlier I had the same error as this guy: Linker error LNK1104 with 'libboost_filesystem-vc100-mt-s-1_49.lib'. I then stopped working on it, and when I started again I got my brand new error (I can't tell you how that happened).
Since it is looking for a static library, add the BOOST_PYTHON_STATIC_LIB flag: go to the VS project properties -> C/C++ -> Preprocessor -> Preprocessor Definitions and add BOOST_PYTHON_STATIC_LIB.
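As a sketch, you can also define it in the source itself, before including any Boost.Python header, so that auto-linking requests the static library name:

// Must come before boost/python.hpp so Boost's auto-link picks the static lib.
#define BOOST_PYTHON_STATIC_LIB
#include <boost/python.hpp>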
You need to create a "user-config.jam" file that indicates where the python headers and libs can be found by Boost.Build. You can create it in your boost_1_53_0/ directory with the following contents:
# Configure specific Python version.
using python : 2.7
: C:/Python27/python.exe
: C:/Python27/include #directory that contains pyconfig.h
: C:/Python27/libs #directory that contains python27.lib
: <toolset>msvc ;
Then from that boost_1_53_0/ directory you need to invoke b2 like this in order to build the missing library:
b2 toolset=msvc-11.0 --with-python variant=debug runtime-debugging=on link=shared --user-config=user-config.jam stage
(although I would recommend b2 toolset=msvc-11.0 --with-python --user-config=user-config.jam --build-type=complete stage so you can get in one step all the configurations that you might need in the future)
Once you have the libraries, you need to add the directories to Visual Studio (both for boost and for python).
Once you have successfully built the module, you need to rename it to Test.pyd (the exact name you used in BOOST_PYTHON_MODULE). If you have the python and Boost.Python libraries in your PATH or in your current directory, you will be able to use the script in the tutorial:
import Test
print Test.greet()
and get the familiar "hello world".
Note that I'm very thankful for your attempts, but none of your answers helped. A fellow student then gave me the hint that led to the right answer; some steps are really easy, others I don't understand, but it works now.
The first problem was: the new boost 1.53.0 does not work with Python 2.7 or older. I then linked it against Python 3.3 and the build error went away.
But of course the built version didn't work without an error. When I tried to run my helloboost.py, which imports from the .pyd built by Visual Studio and invokes the greet method, the following error occurred:
ImportError: DLL load failed: The specified module could not be found.
When I checked the hello_ext.pyd with Dependency Walker and wildly copy-pasted around, I found out it needs the boost_python-vc110-mt-gd-1_53.dll (probably depending on what you need and built with bjam before) in the same folder. It worked then. Maybe someone can explain why it is explained nowhere that I need this dll in the same folder as the pyd (or did I miss something? Is it just because I made a mistake before?)
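In other words, copying the DLL next to the .pyd, something like this (Windows command line; the paths are placeholders for your own boost stage directory and project folder):

copy C:\path\to\boost_1_53_0\stage\lib\boost_python-vc110-mt-gd-1_53.dll C:\path\to\your\project\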
Anyhow, I'm very glad it works now and hope it helps other people.
You probably will have worked this out by now - however:
When a .exe looks for a .dll to load, no path is specified, so the .dll must be somewhere in the search path.
Also: I was trying to build 1.49 libs for Visual Studio 2013 and kept getting the LNK error from my project. I don't know who suggested it on stackoverflow, but someone/something gave me the idea to copy the build system from a more recent boost, which knows how to make .libs for more recent environments. (thank you)
I had to copy the boost build system from 1.58: after running bootstrap in 1.58, copy b2, bjam and boost-build.jam into the earlier boost folder root to replace the same-named files there. You will also need to copy the later tools\build folder to support the build system.
Noting this here in the hope it might help someone else in a similar situation to the one I found myself in.
See: Search Path Used by Windows to Locate a DLL