I ran the following command:
nvcc -arch=sm_70 foo.cu -o predatorPrey -I $BOOST_ROOT -L $BOOST_LIBRARY_PATH -lboost_timer
And got the following compilation error:
boost/include/boost/core/noncopyable.hpp(42): error: defaulted default constructor cannot be constexpr because the corresponding implicitly declared default constructor would not be constexpr
A Google search led me here.
All hope seemed lost until this guy posted a workaround.
Though, as a junior programmer, I don't understand what he means by:
Building boost from source with g++ 11 solved the problem
Does that mean rebuilding boost from scratch? How is that different from building boost the default way?
So what are the actual workarounds for using both boost and CUDA in the same project?
For host code usage:
The only general workaround with a high probability of success when building against a 3rd party library with the CUDA toolchain is to arrange your project so that the 3rd party code is included only from files ending in .cpp, which are processed by the host compiler (e.g. g++ on Linux, cl.exe on Windows).
Your CUDA code (kernels, etc.) will need to be in files with names ending in .cu (for default processing behavior).
If you need this 3rd party code/library functionality in functions that live in the .cu file(s), write wrapper functions in your .cpp files that expose the needed behavior as ordinary callable functions, then call those wrappers from your .cu file(s); a sketch of this pattern follows below.
Link all of this together at the project level.
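A minimal sketch of that pattern, using Boost.Timer from the question above; the file and function names are illustrative, not prescribed:

// boost_wrap.h -- plain declaration visible to both compilers
double elapsed_seconds();

// boost_wrap.cpp -- compiled by the host compiler only, never by nvcc
#include <boost/timer/timer.hpp>
#include "boost_wrap.h"
namespace { boost::timer::cpu_timer g_timer; } // starts timing at construction
double elapsed_seconds() {
    return g_timer.elapsed().wall / 1e9; // wall time is reported in nanoseconds
}

// foo.cu -- compiled by nvcc; it only sees the Boost-free header
#include <cstdio>
#include "boost_wrap.h"
__global__ void step() { /* kernel code */ }
int main() {
    step<<<1, 1>>>();
    cudaDeviceSynchronize();
    printf("elapsed: %f s\n", elapsed_seconds());
    return 0;
}

Compile each part with its own compiler and link at the end, for example g++ -c boost_wrap.cpp followed by nvcc -arch=sm_70 foo.cu boost_wrap.o -o predatorPrey -lboost_timer (plus the Boost include/library paths from the original command).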
Other approaches may be possible once the specific issue is analyzed. For example, updating to the latest version of the 3rd party library and/or the latest CUDA version sometimes resolves the issue.
For usage in device code:
There is no general compatibility approach. If you expect some behavior to be usable in device code, and you run into a compile error like this, you will need to address the issue specifically.
General suggestions may still apply, such as updating to the latest version of the 3rd party library you are using and/or the latest CUDA version.
Related
I want to use the HDF5 C++ bindings in a project built with CMake. So I do the usual:
find_package (HDF5 REQUIRED COMPONENTS CXX)
target_link_libraries(foo PUBLIC ${HDF5_LIBRARIES})
target_include_directories(foo PUBLIC ${HDF5_INCLUDE_DIRS})
This used to work till our cluster (HPC) was upgraded.
Now I get errors during linking:
function MPI::Win::Set_name(char const*): error: undefined reference to 'MPI_Win_set_name'
function MPI::Win::Set_attr(int, void const*): error: undefined reference to 'MPI_Win_set_attr'
Although the version of HDF5 did not change, the new build seems to require linking against MPI, which CMake neither tells me about nor does automatically.
Am I missing anything? Is the CMake FindHDF5 module flawed, or am I required to link against MPI manually when HDF5_IS_PARALLEL is set? How is it possible that I now need to link MY application against MPI?
Some checks I did:
ldd on both hdf5 libraries shows libmpi
there is no -lmpi on either system for my app
HDF5 1.10.1 is used on both, both built against OpenMPI 2.1.2 with GCC 6.4.0
mpicxx -show shows different output: the new one includes -lmpi_cxx, the old one does not.
h5c++ -show seems to be the same (some other paths of course)
TL;DR: When HDF5_IS_PARALLEL is true, one needs to link against MPI even when not using it directly.
The HDF5 compiler wrapper calls the MPI compiler wrapper, which would add that automatically, but the CMake module does not follow this path. Read on for how I found this, which might help with similar issues.
I found the solution by isolating the compiler commands invoked down to the bare minimum. Using grep I found references to MPI_* already in the object file compiled from the cpp source. That eliminated the linked libraries as a cause, so only differences in the includes were possible. I compared the new and old HDF5 include directories with diff -qr and found them to be the same.
Being sure it was the headers, I examined the preprocessed files (g++ -E). I compared the new one against the old with vimdiff after some extra steps (replacing header include paths that changed between the old and new systems, to keep the clutter to a minimum). Searching for mpi, I found the only difference to be the inclusion of mpi_cxx. This is done by mpi.h, which was also easily visible from the preprocessed output.
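In case it helps with a similar hunt, the comparison boiled down to something like this, with the include paths as placeholders for the two HDF5 installations and the source file name as a stand-in for your own:

g++ -E -I/path/to/old/hdf5/include foo.cpp > old.ii
g++ -E -I/path/to/new/hdf5/include foo.cpp > new.ii
vimdiff old.ii new.ii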
Checking the MPI installations on both systems confirmed that the new one was built with mpi_cxx support (the C++ bindings for MPI were added in MPI-2 and removed in MPI-3, but apparently remain optionally available) and the old one without.
As the C headers contain only declarations, no references end up in the object file, but the C++ bindings contain definitions. This caused the references to land in the object file and later fail to resolve during linking.
Searching around, all I found was "Parallel HDF5 IO requires MPI" but nothing related to CMake. https://www.hdfgroup.org/HDF5/release/cmakebuild.html is also pretty sparse about this and does not mention HDF5_IS_PARALLEL (they do mention that they don't provide that find module).
Given that, I ended up adding an interface target, setting the HDF5 includes and libraries on it, checking for HDF5_IS_PARALLEL, and adding the MPI includes and libraries to that target when it is set.
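A sketch of that workaround; the target name is made up, and the MPI variable names follow the classic FindMPI module:

find_package(HDF5 REQUIRED COMPONENTS CXX)

add_library(hdf5_interface INTERFACE)
target_include_directories(hdf5_interface INTERFACE ${HDF5_INCLUDE_DIRS})
target_link_libraries(hdf5_interface INTERFACE ${HDF5_LIBRARIES})

# parallel HDF5 pulls MPI into its headers, so consumers must link MPI too
if(HDF5_IS_PARALLEL)
  find_package(MPI REQUIRED)
  target_include_directories(hdf5_interface INTERFACE ${MPI_CXX_INCLUDE_PATH})
  target_link_libraries(hdf5_interface INTERFACE ${MPI_CXX_LIBRARIES})
endif()

target_link_libraries(foo PUBLIC hdf5_interface)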
Let's say I have created and compiled a simple program using MinGW-w64 (the g++ compiler). Running this program on my computer and looking in Process Explorer for which DLL files the program uses, I find (among many others):
libgcc_s_seh-1.dll
libstdc++6.dll
libwinpthread-1.dll
These are the only ones that reside under my MinGW installation folder. The rest of the DLL files used reside under C:\Windows.
Question 1:
Are the MinGW DLL files the MinGW C++ runtime libraries (so to speak)? Do they serve the same purpose as, for example, msvcrXXX.dll (XXX = version of the Microsoft runtime library)?
Question 2:
If I want to run the application on a different computer which does not have MinGW installed, is it sufficient to include those DLL files listed above (i.e. placing them in the same folder as my executable) to have it run on the other computer (assuming the other computer is also a 64-bit Windows machine)? If yes, does this mean we basically ship the MinGW C++ runtime with our executable? If no, why not?
libstdc++6.dll is the C++ standard library, like you said.
libwinpthread-1.dll is for C++11 threading support. MinGW-w64 has two possible thread variants: either use the native Windows functions like CreateThread, in which case C++11 facilities like std::thread won't be available, or include this library and use the C++11 classes (too).
Note that to switch the thread model, you'll need to reinstall MinGW. Just removing the DLL and not using the C++11 stuff won't work; the DLL will be required nonetheless with your current install.
libgcc_s_seh-1.dll is the shared GCC runtime support library; the seh in the name refers to the structured-exception-handling model this 64-bit build uses for C++ exception handling.
Yes, it should be sufficient to deliver the DLLs too
(or use static linking and deliver only your program file).
For complicated projects where you're not exactly sure which DLL files need to be included to distribute your application, I made a handy dandy Bash script (for MSYS2 shells) that can tell you exactly which DLL files you need to include. It relies on the Dependency Walker binary.
#!/usr/bin/sh
depends_bin="depends.exe"
target="./build/main.exe" # Or wherever your binary is
temp_file=$(mktemp)
output="dll_list.txt"

# Run Dependency Walker in console mode (/c) and write its CSV report (/oc:)
# to the temp file; MSYS2_ARG_CONV_EXCL="*" keeps MSYS2 from mangling the
# Windows-style flags.
MSYS2_ARG_CONV_EXCL="*" "$(cygpath -w "$depends_bin")" /c /oc:"$(cygpath -w "$temp_file")" "$(cygpath -w "$target")"

# The second CSV column holds each module's path; keep only the MinGW ones.
cut -d , -f 2 "$temp_file" | grep mingw32 > "$output"
rm "$temp_file"
Note that this script would need to be modified slightly for use in regular MSYS (the MSYS2_ARG_CONV_EXCL and cygpath parts in particular). It also assumes your MinGW DLL files are located in a path containing mingw32 (see the grep).
You could potentially even use this script to automatically copy the DLL files in question into your build directory as part of an automatic deploy system.
You may like to add the options -static-libgcc and -static-libstdc++ to link the C and C++ standard libraries statically and thus remove the need to carry around any separate copies of those.
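For example (program and file names illustrative):

g++ -static-libgcc -static-libstdc++ -o myprogram.exe main.cpp

Note that libwinpthread-1.dll may still be needed with this approach, since those two flags only cover libgcc and libstdc++; a fully static link (-static) would remove that dependency as well.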
I used ntldd to get a list of dependencies.
https://github.com/LRN/ntldd
I'm using MSYS2, so I just installed it with pacman. Use that and then copy all the needed dependencies.
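A possible invocation (the binary name is illustrative; ntldd's -R flag resolves dependencies recursively):

# list DLL dependencies recursively, keeping only those resolved from MinGW
ntldd -R myprogram.exe | grep -i mingw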
There are several major challenges to distributing compiled software:
Compiling the code for all target processors (remember, when it comes to compiled code, you need to produce separate downloads/distributions for each type of instruction set architecture).
Ensuring that the builds are reproducible, consistent, and can be easily correlated with a specific version of the code (and versions of the dependencies).
Ensuring that the build output is self-contained and includes all of its dependencies within it (so that it is not dependent on any other installations that happen to exist on just your system).
Making sure that your code is built and distributed regularly, with updates distributed automatically so that -- in the event of security issues -- you can push out new patched versions.
For convenience and to increase reach, it is nice for non-savvy users to have a prebuilt version that they can install. However, I would recommend sharing the source code as a first step.
Most of these requirements are fairly non-trivial to meet and often require automating not only the build process but also the instantiation/configuration of the VMs in which the build takes place. However, there are open source projects that can help... for example, check out Gitian.
In terms of bullet point #3, the key thing here is to use static linking... while this does make the binary you distribute much larger (because its dependencies are now baked into the output), it also makes your binary isolated from the version of the libraries on the system (avoiding "dependency hell").
Point #4 is very tricky, but thankfully there are also open source tools to help here, such as cloudup, which provides a way to add auto-updating capability to your application distribution.
I have a rather complex SCons script that compiles a big C++ project.
This gcc manual page says:
The compiler performs optimization based on the knowledge it has of the program. Compiling multiple files at once to a single output file mode allows the compiler to use information gained from all of the files when compiling each of them.
So it's better to give all my files to a single g++ invocation and let it drive the compilation however it pleases.
But SCons does not do this. It calls g++ separately for every single C++ file in the project and then links them using ld.
Is there a way to make SCons do this?
The main reason to have a build system with the ability to express dependencies is to support some kind of conditional/incremental build. Otherwise you might as well just use a script with the one command you need.
That being said, the result of having gcc/g++ optimize as the manual describes can be substantial, in particular if you have C++ templates you use often. Good for run-time performance, bad for recompile performance.
I suggest you try and make your own builder doing what you need. Here is another question with an inspirational answer: SCons custom builder - build with multiple files and output one file
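If you write your own builder, a minimal sketch could look like the following; the target name and flags are illustrative, and note that you give up incremental recompilation by design:

# SConstruct -- feed all sources to one g++ invocation so gcc can
# optimize across translation units
env = Environment(CXX='g++')
sources = Glob('src/*.cpp')
program = env.Command('app', sources, '$CXX -O2 $SOURCES -o $TARGET')
Default(program)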
Currently the answer is no.
Logic similar to this was developed for MSVC only.
You can see this in the man page (http://scons.org/doc/production/HTML/scons-man.html) as follows:
MSVC_BATCH When set to any true value, specifies that SCons should
batch compilation of object files when calling the Microsoft Visual
C/C++ compiler. All compilations of source files from the same source
directory that generate target files in a same output directory and
were configured in SCons using the same construction environment will
be built in a single call to the compiler. Only source files that have
changed since their object files were built will be passed to each
compiler invocation (via the $CHANGED_SOURCES construction variable).
Any compilations where the object (target) file base name (minus the
.obj) does not match the source file base name will be compiled
separately.
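In practice, enabling that MSVC-only behavior is a one-liner in the construction environment:

env = Environment(MSVC_BATCH=True)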
As always patches are welcome to add this in a more general fashion.
In general this should be left up to the program developer. Trying to compile everything together as an amalgamation may introduce unintended behaviour into the program, if it even compiles in the first place. Your best bet if you want this kind of optimisation without editing the source yourself is to use a compiler with interprocedural optimisation, like icc -ipo.
A simple case where an amalgamation of two .c files would not compile is when both define a static symbol with the same name but different functionality, as sketched below.
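For instance, with names invented purely for illustration:

/* a.c -- fine on its own */
static int counter = 0;
int next_id(void) { return ++counter; }

/* b.c -- also fine on its own */
static double counter = 1.0; /* same name, different meaning */
double next_scale(void) { return counter *= 2.0; }

Compiled separately, each counter is private to its translation unit; concatenated into one amalgamated file, the second counter becomes a conflicting redefinition and the build fails.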
I am trying to make my autotools project in C++ link against a library that originates as a C library (libsomelib.so) but also has bindings to C++ (libsomelib++.so). I am trying to use PKG_CHECK_MODULES to check whether this package is installed, and to use autotools to link against it. However, both libs come in one package (the C++ version requires a configure flag) and have only one .pc file, in which, independently of the configuration settings, there is only the line
Libs: -L${libdir} -lsomelib
without any mention of the ++ version. There is also no separate ++.pc file like I have noticed with other programs. Therefore automatic linking against the ++ version is impossible. I thought about manually adding -lsomelib++ to the linking flags, but that's rather ugly (and it will not work if somebody compiled the package without the --with-cxx flag). I could also test for its existence with AC_SEARCH_LIBS, but since it's a C++ library that's not so straightforward.
Is the missing ++.pc file a mistake by the package distributor, or is there some deeper idea behind it that I don't know how to use?
If somebody is really curious: the package in question is ossp-uuid.
Yes, the missing ++.pc usually hints at an omission on the part of the packager.
BTW: If simple (DCE) UUIDs are sufficient, you could consider e2fsprogs/util-linux's libuuid (in case you run this OS).
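If you still have to cope with installations built without --with-cxx, one option is a manual link check in C++ mode rather than AC_SEARCH_LIBS. A sketch for configure.ac, with the header and test body left as placeholders since this does not reference ossp-uuid's actual C++ API:

AC_LANG_PUSH([C++])
save_LIBS=$LIBS
LIBS="-lsomelib++ -lsomelib $LIBS"
AC_LINK_IFELSE(
  [AC_LANG_PROGRAM([[#include <somelib++.h>]],
                   [[/* reference a symbol from the ++ bindings here */]])],
  [have_cxx_bindings=yes],
  [have_cxx_bindings=no; LIBS=$save_LIBS])
AC_LANG_POP([C++])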
I made a program on Mac OS X using OpenGL and dynamically linking libpng. I'm now trying to port it to Windows. Whenever I try to compile and link my ported program in Borland, it gives me this error, plus about 10 more of the same form but with symbols other than '_png_create_read_struct':
Error: Unresolved external '_png_create_read_struct' reference from C:\PROGRAMMING\PNGTEST.OBJ
I assume it's because I have not properly set up libpng with Borland C++ 5.5.1 for Win32. I've put png.h and pngconf.h into the include folder C:\Borland\BCC55\Include, and I have put libpng12.dll.a, libpng13.a, libpng13.dll.a, libpng.a, libpng.dll.a, libpng12.def, libpng.def, libpng12.la, and libpng.la into C:\Borland\BCC55\Lib (there is probably no need for them all, but as a noob I have no idea which ones are needed and which are not).
Do I need to put a libpng.obj file in there too? And if so, how would I make/get one? I have tried using makefile.bc32 to build libpng, but that gives me a 'missing separator' error.
Here are my command-line options:
bcc32 -tW pngtest.cpp -lpng
I include png.h in my code. What am I doing wrong? Is there a better way to load images with alpha that doesn't need libpng, or even a better compiler to use on Windows?
You're probably better off with the MinGW compiler than Borland. Borland is not well supported any longer.
You could also download DevC++ and see if it has a libpng package in its addon mechanism.
DevC++ is an IDE that uses the MinGW C/C++ compiler.
That said, if you feel you must use BCC, you'll either have to
a) Build libpng with Borland. This is the best solution if you're going to use Borland.
b) Use, I think, Impdef to create an import library from libpng.dll. You'll find impdef.exe or imp(something).exe in the Borland bin directory.
Note that some libraries will not work with impdef, as there is static code linked into the DLL that causes it to fail without the proper runtime.
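If you go that route, the steps are roughly as follows (file names illustrative; implib, which builds the import library, lives in the same bin directory):

rem build a Borland import library straight from the DLL
implib libpng13.lib libpng13.dll
rem then name the .lib explicitly when compiling
bcc32 -tW pngtest.cpp libpng13.lib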
First of all, I would not have "polluted" the BC55 installation with third-party libraries; it makes moving the project to other build environments much more difficult. It would have been better to place them in a folder within your project.
Secondly, do you know that the import library you are attempting to link was built for BC55? The .a extension suggests a GNU library (Borland libraries conventionally use the .lib extension), in which case it will not link with BC55, which uses a different object file format. If this is the case you will need to rebuild the library, as you attempted to do, so I suggest that you should really be asking a question about the problem you had doing just that. I wonder whether the makefile is written for Borland make or GNU make, since they have differing syntax? (For what it's worth, "missing separator" is typically GNU make's complaint.)
The command-line option -lpng might be correct for GCC (where it links libpng.a), but it means something different to BCC: the -l option merely passes the text that follows to the linker. The Borland linker command line requires that the complete file name be passed, and if no extension is given, .lib is appended implicitly.
You should probably just use coff2omf to convert the import library. The import libraries that come with the DLL files are almost certainly in "Microsoft" COFF format.
See COFF2OMF.EXE, the Import Library Conversion Tool.
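Assuming the GNU import library really is an ordinary COFF import library, the conversion and use might look like this (names illustrative, untested):

rem convert the COFF import library to Borland's OMF format
coff2omf libpng.dll.a libpng.lib
bcc32 -tW pngtest.cpp libpng.lib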