I have a C++/CUDA project based on CMake. Currently I'm using CMake 3.11 and CUDA 9.0, and I'm reading that CUDA is now a first-class language in CMake, so I can simply add .cu files and it will automatically invoke the NVCC compiler to take care of them. Before this, we had to use find_package(CUDA) and so on, but that approach is now deprecated.
Now the question: how do we compile normal .cpp files that still make use of the CUDA host libraries (cudaMalloc, cudaFree, etc.)? My solution so far is set_source_files_properties(file.cpp PROPERTIES LANGUAGE CUDA), but I don't feel this is the right thing to do. Since this file doesn't contain device code, it should be compiled by GCC, with the CUDA headers on the include path and the CUDA libraries linked in.
Another issue with this approach is that it propagates very quickly to the rest of the project. Say a header file defines a struct containing a dim3 member; then every .cpp file that #includes this header will also need to be treated as a CUDA file.
how do we compile normal .cpp files that still make use of the CUDA host libraries?
With your regular C++ compiler. There is no need to treat them as LANGUAGE CUDA.
I recently made the switch to CMake's native CUDA support in this repository of mine. I now have:
add_executable(device_management
    EXCLUDE_FROM_ALL examples/by_runtime_api_module/device_management.cpp)
add_executable(execution_control
    EXCLUDE_FROM_ALL examples/by_runtime_api_module/execution_control.cu)
Both of these use the runtime API (actually, they use my C++'ified wrappers, which use the runtime API - but most of the wrapper code is header-only). The second one has a kernel in it, so I made it a .cu and it gets compiled as CUDA; the .cpp builds and runs fine with plain old GCC.
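To give such a .cpp target the CUDA include paths and runtime library, something along these lines should work. This is only a sketch: CUDA::cudart comes from the FindCUDAToolkit module introduced in CMake 3.17 (on an older CMake you would locate cudart with find_library instead), and the target name is made up.

find_package(CUDAToolkit REQUIRED)              # CMake >= 3.17

# Plain C++ file, built by the host compiler, that calls cudaMalloc/cudaFree:
add_executable(runtime_api_user host_code.cpp)
target_link_libraries(runtime_api_user PRIVATE CUDA::cudart)  # headers + runtime library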
I ran the following command:
nvcc -arch=sm_70 foo.cu -o predatorPrey -I $BOOST_ROOT -L $BOOST_LIBRARY_PATH -lboost_timer
And got the following compilation error:
boost/include/boost/core/noncopyable.hpp(42): error: defaulted default constructor cannot be constexpr because the corresponding implicitly declared default constructor would not be constexpr
A Google search led me here.
All hope seemed lost until this guy used a workaround.
Though, as a junior programmer, I don't understand what he means by
Built boost from source with g++11 open solved the problem
Does that mean rebuilding boost from scratch? How is it different from building boost by default?
So what actually are the workarounds for using both Boost and CUDA in the same project?
For host code usage:
The only general workaround with a high probability of success when building a 3rd party library with the CUDA toolchain is to arrange your project in such a way that the 3rd party code is in a file that ends in .cpp and is processed by the host compiler (e.g. g++ on linux, cl.exe on windows).
Your CUDA code (e.g. kernels, etc.) will need to be in files with filenames ending in .cu (for default processing behavior).
If you need to use this 3rd party code/library functionality in your functions that are in the .cu file(s), you will need to build wrapper functions in your .cpp files to provide the necessary behavior as callable functions, then call these wrapper functions as needed from your .cu file(s).
Link all this together at the project level (a CMake sketch of this layout follows).
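A minimal sketch of that arrangement in CMake; all file, target, and project names here are hypothetical:

project(mixed_project LANGUAGES CXX CUDA)

# 3rd party usage isolated in a .cpp, processed by the host compiler:
add_library(third_party_wrappers wrappers.cpp)

# Device code lives in a .cu and calls the wrapper functions:
add_executable(app kernels.cu)
target_link_libraries(app PRIVATE third_party_wrappers)  # linked at the project level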
It may be that other approaches can be taken if the specific issue is analyzed. For example, sometimes updating to the latest version of the 3rd party library and/or CUDA may resolve the issue.
For usage in device code:
There is no general compatibility approach. If you expect some behavior to be usable in device code, and you run into a compile error like this, you will need to address the issue specifically.
General suggestions may still apply, such as updating to the latest version of the 3rd party library you are using and/or the latest CUDA version.
I have a project involving both C++ and CUDA code, and specifically - binaries whose objects originate only from C++ code (compiled with a regular C++ compiler) and binaries whose objects originate only from CUDA code (that is, .cu files compiled with nvcc).
The thing is, the C++-originated targets still make some CUDA API calls, and thus depend on the CUDA libraries.
Now, for linking the CUDA-originated binaries, I don't need to mention the CUDA libraries; they link fine. But for the C++-originated binaries, I do need them.
How can I tell cmake to link_libraries only for my C++-originated targets?
Or - am I thinking about this problem the wrong way?
Note: I'm using CMake >= 3.8 with native CUDA support, so I don't use the cuda_-prefixed commands.
You should probably have distinct targets defined, created e.g. by add_library or add_executable, and then use target_link_libraries(target_name [PRIVATE|INTERFACE|PUBLIC] library).
As a general guideline, you shouldn't operate at the directory level (link_libraries, include_directories, etc.); this approach is advocated e.g. by Daniel Pfeifer in this awesome talk.
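For instance, a sketch with two targets (the target and file names are made up; on a CMake without the FindCUDAToolkit module, one way to locate cudart is via the implicit link directories that enabling the CUDA language provides):

add_executable(cpp_tool host_stuff.cpp)   # C++-originated; needs the CUDA libs spelled out
add_executable(cuda_tool kernels.cu)      # CUDA-originated; links cudart implicitly

find_library(CUDART_LIBRARY cudart
             HINTS ${CMAKE_CUDA_IMPLICIT_LINK_DIRECTORIES})
target_link_libraries(cpp_tool PRIVATE ${CUDART_LIBRARY})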
My single C++ file includes many header files from various libraries such as MKL and template libraries. Compiling this single file takes a long time every time due to the many include statements. I tried separating the include statements into a single file, making an object file out of it, and linking it during the compilation of the main file, but the compiler doesn't seem to recognize the definitions of the objects and functions that are declared in the header files.
So, how can I save the compilation time of these header files?
Precompiled headers:
What you are doing sounds like it would benefit heavily from precompiled headers (PCHs). The Intel compiler does support PCHs, as you can see here:
https://software.intel.com/en-us/node/522754
Installing tools without root permissions
You can still experiment with ccache to see if it will help your situation -- it can make a huge difference in cases where units do not need to be recompiled, which happens surprisingly often. The path to doing this is to install ccache locally. Generally this can be done by downloading the sources to a directory where you have write access (I keep an install folder in my home directory), and then following the build directions for that project. Once the executables are built, you'll have to add their directory to your PATH -- by doing
export PATH=$PATH:the-path-to-your-compiled-executables
in BASH. At that point ccache will be available to your user.
A better explanation is available here: https://unix.stackexchange.com/questions/42567/how-to-install-program-locally-without-sudo-privileges
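If the build is driven by CMake (see the next section), one simple way to route compilations through a locally installed ccache is the compiler-launcher variable, available since CMake 3.4. A sketch:

set(CMAKE_CXX_COMPILER_LAUNCHER ccache)   # prefix every C++ compile command with ccache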
CMake, cotire, etc.
One final point -- I haven't read the docs for the Intel compiler's PCH support, but PCH support generally involves some manual code manipulation to instruct the compiler which headers to precompile and so on. I recommend looking into managing your build with CMake, and using the cotire plugin to experiment with PCHs.
While I have gripes about CMake from a software engineering perspective, it does make managing your build, and especially experimenting with various ways to speed up your build, much easier. After experimenting with PCHs and ccache, you could also try out ninja instead of make and see if that gets you any improvement.
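As an aside, if a recent CMake (3.16 or newer) is an option, the built-in target_precompile_headers command covers much of what cotire automates. A minimal sketch; the target and header names are purely illustrative:

add_executable(main_app main.cpp)
target_precompile_headers(main_app PRIVATE
    <mkl.h>                  # heavyweight third-party header, compiled once
    "heavy_templates.hpp")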
Good luck.
I have searched Google but haven't found quite a direct answer to my queries.
I have been reading C++ Primer and I'm still quite new to the language, but despite how good the book is, it discusses the use of the standard library without really describing where it is or where it comes from (it hasn't yet, anyway). So, where is the standard library? Where are the header files that let me access it? When I downloaded CodeBlocks, did the STL come with it? Or does it automatically come with my OS?
Somewhat related, but what exactly is the MinGW that came with CodeBlocks? Here it says
MinGW is a C/C++ compiler suite which allows you to create Windows executables without dependency on such DLLs
So at the most basic level is it just a collection of "things" needed to let me make C++ programs?
Apologies for the quite basic question.
"When I downloaded CodeBlocks, did the STL come with it?"
Although it's properly called the C++ standard library rather than the STL, it comes with your C++ compiler implementation (and is optionally packaged with the CodeBlocks IDE).
You have to differentiate IDE and compiler toolchain. CodeBlocks (the Integrated Development Environment) can be configured to use a number of different compiler toolchains (e.g. Clang or MSVC).
"Or does it automatically come with my OS?"
No, usually not. Especially not on Windows.
"So, where is the standard library? Where are the header files that let me access it?"
They come with the compiler toolchain you're currently using for your CodeBlocks project.
Supposing this is the MinGW GCC toolchain and it's installed in the default directory, you'll find the libraries under a directory like (this is what I have)
C:\MinGW\lib\gcc\mingw32\4.8.1
and the header files at
C:\MinGW\lib\gcc\mingw32\4.8.1\include\c++
"So at the most basic level is it just a collection of "things" needed to let me make C++ programs?"
It's Minimalist GNU for Windows. It usually comes along with GCC (the GNU C/C++ compiler toolchain), plus the MSYS minimalist GNU tools environment (including GNU make, shell, etc.).
When you have installed a C++ implementation you'll have something which implements everything necessary to use C++ source files and turn them into something running. How that is done exactly depends on the specific C++ implementation. Most often, there is a compiler which processes individual source file and translates them into object files which are then combined by a linker to produce an actual running program. That is by no means required and, e.g., cling directly interprets C++ code without compiling.
All this is just to clarify that there is no one way how C++ is implemented although the majority of contemporary C++ implementations follow the approach of compiler/linker and provide libraries as a collection of files with declarations and library files providing implementations of these declarations.
Where the C++ standard library is located and where its declarations are to be found entirely depends on the C++ implementations. Oddly, all C++ implementations I have encountered so far except cling do use a compiler and all these compilers support a -E option (although it is spelled /E for MSVC++) which preprocesses a C++ file. The typically quite large output shows locations of included files pointing at the location of the declarations. That is, something like this executed on a command line yields a file with the information about the locations:
compiler -E input.cpp > input.ii
What the compiler is actually called depends entirely on the C++ implementation; it is something like g++, clang++, etc. The file input.cpp is supposed to contain a suitable include directive for one of the standard C++ library headers, e.g.
#include <iostream>
Searching in the output input.ii should reveal the location of this header. Of course, it is possible that the declarations are made available by the compiler without actually including a file but just making declarations visible. There used to be a compiler like this (TenDRA) but I'm not aware of any contemporary compiler doing this (with modules being considered for standardization these may come back in the future, though).
Where the actual library file with the objects implementing the various declarations is located is an entirely different question and locating these tends to be a bit more involved.
The C++ implementation is probably installed somehow when installing CodeBlocks. I think it is just one package. On systems with a package management system like dpkg on some Linuxes it would be quite reasonable to just have the IDE have a dependency on the compiler (e.g., gcc for CodeBlocks) and have the compiler have a dependency on the standard C++ library (libstdc++ for gcc) and have the package management system sort out how things are installed.
There are several implementations of the C++ standard library. Some of the more popular ones are libstdc++, which comes packaged with GCC; libc++, which can be used with Clang; and Visual Studio's implementation by Microsoft, which uses a licensed version of Dinkumware's implementation. MinGW contains a port of GCC. CodeBlocks, an IDE, allows you to choose a setup which comes packaged with a version of MinGW, or one without. Either way, you can still set up the IDE to use a different compiler if you choose. Part of the standard library implementation will also be header files, not just binaries, because a lot of it is template code (which can only be implemented in header files).
I recommend you read the documentation for the respective technologies because they contain a lot of information, more than a tutorial or book would:
libstdc++ faq
MinGW faq
MSDN
When manually compiling the OpenCV library, one has to select what to include by specifying in CMake all the things to build. If, for instance, I would like to add an additional component (for example CUDA support), can I just compile that separately, or do I have to recompile the entire library? If the former, how do I do this?
Let's go with CUDA as an example. Some of the library's DLL and lib files will have dependencies on CUDA and some won't. When you use CMake to configure and generate makefiles, it creates them with your supplied configuration: CUDA on or off. Say that later you want to change this configuration and recompile. This is exactly what make is for: changing something within the library without compiling it from the beginning.
So you should run CMake again to generate new makefiles with your new configuration, using the same build folder as the first compilation to reduce the required compile time. When you change the configuration and regenerate the makefiles, the rebuild will probably take less time than compiling the whole library, because not every part of the library depends on the new configuration.
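Concretely, from the existing build folder the sequence looks something like this (WITH_CUDA is OpenCV's CMake option for CUDA support; a makefile-based build directory is assumed):

cmake -D WITH_CUDA=ON ..   # regenerate the makefiles with the new option
make                       # rebuilds only what the change affects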
But there is an important caveat here: CUDA is a pervasive dependency. When I checked the source code, there are defines indicating whether CUDA is on or off, so a change in the CUDA configuration affects a great deal of code. If you ask me, not just for CUDA but for all configuration changes, use a fresh folder for the new configuration and compilation. That way, when you encounter a problem, you can at least be sure it isn't caused by a stale build.