Building CUDA object files using cmake - build

I have the following setup. I'm going to extend a framework written in C++ (using MPI and other stuff) with CUDA. The project uses CMake for building. I would like to avoid building a library for my extensions and instead build object files from my CUDA sources. Afterwards I would like to link these object files with some other files compiled with other compilers.
Does anyone have a clue on how to achieve that?
I had a look at http://code.google.com/p/cudpp/wiki/BuildingCUDPPwithCMake to get an overview of how to use CUDA with CMake, but this solution uses a library as well.

It is possible to compile object files with the CUDA support (the FindCUDA module) that comes with newer versions of CMake, using the cuda_compile command. See below.
# CMakeLists.txt for G4CU project
project(test-cuda-thrust-gdb)
# required cmake version
cmake_minimum_required(VERSION 2.8)
# packages
find_package(CUDA)
# nvcc flags
set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS};-gencode arch=compute_20,code=sm_20)
cuda_compile(HELPER_O helper.cu)
cuda_compile(DRIVER_O driver.cu OPTIONS -G)
cuda_add_executable(driver ${HELPER_O} ${DRIVER_O})
If you need more information have a look at the FindCUDA.cmake file.
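For reference, recent CMake versions also support CUDA as a first-class language, so the same object-file idea can be sketched without FindCUDA. This is only an illustration: main.cpp and the gpu_objects target name are placeholders, and the -gencode flags are simply copied from the snippet above.
cmake_minimum_required(VERSION 3.8)
project(test-cuda-thrust-gdb LANGUAGES CXX CUDA)
# mirror the -gencode flags from the FindCUDA example above
set(CMAKE_CUDA_FLAGS "${CMAKE_CUDA_FLAGS} -gencode arch=compute_20,code=sm_20")
# compile the .cu sources to object files only; no library archive is created
add_library(gpu_objects OBJECT helper.cu driver.cu)
# link the objects together with host-only sources (main.cpp is hypothetical)
add_executable(driver main.cpp $<TARGET_OBJECTS:gpu_objects>)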

Related

Setting Various compilers in CMake for creating a shared library

I am looking to set various compilers for different folders in my project, which should compile to a shared library.
The project structure is as follows -
/Cuda
    a.cu
    b.cu
    c.cu
    header.cuh
/SYCL
    a.cpp
    b.cpp
    c.cpp
    header.h
main.cpp
test.cpp
All the files under the Cuda folder must be compiled by nvcc, and the files under the SYCL folder by a specific compiler that is present at a path in the system. All the files outside these folders (namely main.cpp and test.cpp) are normal C++ code and use the headers present in these two folders and must be compiled with GCC.
How do I go about writing the CMake for such a project structure (which aims to be a shared library)?
Edit - The project needn't have only one dedicated CMake. My approach was as follows -
Each folder (Cuda and SYCL) can have its dedicated CMakeLists.txt, which would specify the compiler and the various flags to go with it.
A master CMakeLists.txt outside the folders can use the add_subdirectory command. This is where I get stuck: I am not sure what to do next, or how to link these two folders with the main and test files.
CMake allows one compiler per language, so simply writing this is enough:
cmake_minimum_required(VERSION 3.20)
project(example LANGUAGES CXX CUDA)
add_subdirectory(Cuda)
add_subdirectory(SYCL)
You can separately set the C++ and CUDA compilers by setting CMAKE_CXX_COMPILER and CMAKE_CUDA_COMPILER at the configure command line.
$ cmake -S . -B build -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_CXX_COMPILER=g++ -DCMAKE_CUDA_COMPILER=nvcc
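For a rough idea of how the rest could be wired up, here is a minimal sketch under the assumptions of the question (the target names cuda_part and sycl_part are invented, and the SYCL sources are treated as ordinary C++ compiled by whichever C++ compiler you select):
# Cuda/CMakeLists.txt
add_library(cuda_part OBJECT a.cu b.cu c.cu)
set_target_properties(cuda_part PROPERTIES POSITION_INDEPENDENT_CODE ON)
# SYCL/CMakeLists.txt
add_library(sycl_part OBJECT a.cpp b.cpp c.cpp)
set_target_properties(sycl_part PROPERTIES POSITION_INDEPENDENT_CODE ON)
# top-level CMakeLists.txt, after the two add_subdirectory() calls
add_library(example SHARED main.cpp test.cpp)
target_include_directories(example PRIVATE Cuda SYCL)
# linking the object libraries adds their object files to the shared library
target_link_libraries(example PRIVATE cuda_part sycl_part)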
Also, I want to clear up this misconception:
Each folder (Cuda and SYCL) can have its dedicated CMakeLists.txt, which would specify the compiler and the various flags to go with it.
The CMakeLists.txt file should not attempt to specify the compiler. It's tricky to do correctly, can't always be done (especially in the add_subdirectory case) and unnecessarily restricts your ability to switch out the compiler. Maybe you have both GCC 10 and 11 installed and want to compare the two.
Similarly, you should not specify flags in the CMakeLists.txt file that aren't absolutely required to build, and you should always check the CMake documentation to see if the flags you're interested in have been abstracted for you. For instance, CMake has special handling of the C++ language standard (via target_compile_features) and CUDA separable compilation (via the CUDA_SEPARABLE_COMPILATION target property).
The best solution, as I have detailed here, is to set optional flags via the *_FLAGS* variables in a preset or toolchain.
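For illustration, a toolchain file along those lines might look like this (the file name and flags are just examples, not a prescription):
# gcc-toolchain.cmake (example file name)
set(CMAKE_CXX_COMPILER g++)
set(CMAKE_CUDA_COMPILER nvcc)
# optional flags belong here (or in a preset), not in CMakeLists.txt
set(CMAKE_CXX_FLAGS_INIT "-Wall -Wextra")
set(CMAKE_CUDA_FLAGS_INIT "--expt-relaxed-constexpr")
It would then be passed at configure time, e.g. cmake -S . -B build -DCMAKE_TOOLCHAIN_FILE=gcc-toolchain.cmake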

Linking to Tensorflow as an external lib to C++ application

Are there any supported alternatives to the approach presented on the Tensorflow C++ guide that instead allows you to separately build Tensorflow and link your C++ application to it as an external library?
I've managed to compile libtensorflow_cc.so and link to it. But where can I get valid header files? Just grabbing the header files from source gives errors, and I have to manually adjust the paths, which I'd rather avoid.
I use the https://github.com/FloopCZ/tensorflow_cc repository and add find_package(TensorflowCC REQUIRED) in CMake; then there is no issue with the include files or with the library when linking via target_link_libraries(project_name TensorflowCC::Shared).
Alternatively, check that your include file paths correctly point to your TensorFlow installation.
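For illustration, a minimal CMakeLists.txt along those lines might look like this (project and target names are placeholders):
cmake_minimum_required(VERSION 3.10)
project(tf_example LANGUAGES CXX)
# provided by the tensorflow_cc installation
find_package(TensorflowCC REQUIRED)
add_executable(tf_example main.cpp)
# pulls in the include paths and libtensorflow_cc
target_link_libraries(tf_example TensorflowCC::Shared)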

Code parsing not working with CUDA, Clion and CMake

I have a project divided in modules, here is a dummy example:
root
    CMakeLists.txt
    modules
        utils
            CMakeLists.txt
            src
                util_file.cpp
        cuda
            CMakeLists.txt
            src
                cuda_file.cu
If I edit cuda_file.cu with CLion, all the symbols are unresolved by CLion (even the includes from the standard library). All the code completion/creation features are then of course gone (among other things). The problem seems to be that whenever you create a library or an executable with only CUDA files, CLion becomes stupid and doesn't parse or resolve anything anymore.
There are two workarounds I've found, but they are not friendly or "clean" to use:
add an empty .cpp file to the directory and add it to the add_library() CMake line.
switch to another library or executable target that has .cpp files (like utils in my dummy example). But then, when you want to compile or execute, you have to switch again to the cuda target (or some subtarget like test_cuda for unit tests) and then switch back again to continue coding or debugging, etc...
Here is the CMakeLists.txt from the cuda module with the workaround:
cmake_minimum_required(VERSION 3.5)
message(STATUS "Configuring module cuda")
# cuda_compile() comes from FindCUDA; find it here if not already done in the top-level CMakeLists
find_package(CUDA REQUIRED)
# Build module static library
FILE(GLOB CUDA_SRCS
${CMAKE_CURRENT_SOURCE_DIR}/src/*.cpp)
FILE(GLOB CUDA_CU_SRCS
${CMAKE_CURRENT_SOURCE_DIR}/src/*.cu)
FILE(GLOB CUDA_CU_HDRS
${CMAKE_CURRENT_SOURCE_DIR}/include/*.cuh)
cuda_compile(cuda_objs ${CUDA_CU_SRCS} ${CUDA_CU_HDRS})
add_library(cuda STATIC ${CUDA_SRCS} ${cuda_objs})
# because only .cu files, help cmake detect C++ language
set_target_properties(cuda PROPERTIES LINKER_LANGUAGE CXX)
Is there a way to avoid CLion derping when resolving links to other headers and libraries?
I've already added .cu and .cuh files as C/C++ code in CLion's options and tried using the JETBRAINS_IDE define option as explained in another similar post, but those two problems are not the same.
It seems like without the intervention of Jetbrains to add official CUDA support, the most I could get out of the combo CLion + CMake + CUDA was achieved by:
adding .cu and .cuh as C++ files in CLion. This allows CLion to recognize CUDA code as C++ code and color it correctly.
adding an empty dummy .cpp file to the cuda source directory if it is only filled by .cu files (one of my "dirty" hacks from my question). I could not find anything better. This allows CLion to not completely derp: a simple thing like recognizing cstdio doesn't work without this "hack", and CLion is basically an enhanced notepad.
using, when possible, CMake 3.8+, which officially supports CUDA as a language, and the new "CUDA aware" add_library() instead of the old cuda_add_library() macro (see the sketch after this list). This can avoid problems in the future in case of deprecation.
in the CMakeLists of the cuda module (or the main CMakeLists if there is only one), include the path to the CUDA include directory so that CLion can "see" the CUDA headers. CLion can then propose to include them, so that it correctly resolves CUDA API calls like cudaMalloc() or cudaFree(). This is only needed for CLion, as the CUDA compiler doesn't need these includes (cuda.h, cuda_runtime.h, ...) to compile properly.
use this answer to create a "clion helper" header file, so that it doesn't derp on symbols like __device__ or __global__.
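As a rough sketch of the CMake 3.8+ point above, the cuda module's CMakeLists.txt could look something like this (the dummy.cpp file and the include lines are the CLion-related parts; CMAKE_CUDA_TOOLKIT_INCLUDE_DIRECTORIES is assumed to be available in your CMake version, and exact paths may differ):
cmake_minimum_required(VERSION 3.8)
project(cuda_module LANGUAGES CXX CUDA)
# the empty dummy.cpp is only there to keep CLion's parser working
add_library(cuda STATIC src/cuda_file.cu src/dummy.cpp)
# mainly for CLion: lets it resolve cuda_runtime.h, cudaMalloc(), etc.
target_include_directories(cuda PUBLIC
    ${CMAKE_CURRENT_SOURCE_DIR}/include
    ${CMAKE_CUDA_TOOLKIT_INCLUDE_DIRECTORIES})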
I think that if JetBrains adds official support for CUDA, not only will it remove the need for this dummy file, but it will probably also resolve all the other things listed.
Here is the link to the nvidia blog with examples about the official cuda language support in CMake and new "cuda aware" add_library() : https://devblogs.nvidia.com/parallelforall/building-cuda-applications-cmake/
As of version 2020.1 of CLion, CUDA projects are now officially supported:
https://blog.jetbrains.com/clion/2020/04/clion-2020-1-cuda-clang-embedded/

How do I write system-independent code when there are paths involved?

Say I am creating a project that uses a certain library and I have to provide the path for that library while linking. In the command line or makefile I might have:
g++ ... -L/path/to/mylibrary
I'm also going to send this project to someone else who wants to use it. The path on their system might not necessarily be the same as mine. They could be using a different file path all together.
How do I make sure that the path to the library works for both my computer and the recipient of my project?
This is the role of a build system or build configuration tool. There are many of those around. The main one is probably CMake, as it has a very extensive feature set, is cross-platform, and is widely adopted. There are others, like Boost.Jam and autoconf.
The way these tools work is that they have automated scripts for looking into the file system and finding the headers or libraries that you need, i.e., the dependencies required to compile your code. They can also be used to do all sorts of other fancy things, like checking what features the OS supports and reconfiguring the build as a consequence. But the point is, you don't hard-code any file paths into the build configuration: everything is either relative to your source folder or found automatically by the build script.
Here is an example CMake file for a project that uses Boost:
cmake_minimum_required (VERSION 2.8)
project (ExampleWithBoost)
find_package(Boost 1.46 COMPONENTS thread program_options filesystem REQUIRED)
# Add the boost directory to the include paths:
include_directories(SYSTEM ${Boost_INCLUDE_DIR})
# Add the boost library directory to the link paths:
link_directories(${Boost_LIBRARY_DIRS})
# Add an executable target (for compilation):
add_executable(example_with_boost example_with_boost.cpp)
# Add boost libraries to the linking on the target:
target_link_libraries(example_with_boost ${Boost_LIBRARIES})
The find_package cmake function is simply a special script (specialized for Boost, and installed with CMake) that finds the latest version of boost (with some minimal version) installed on the system, and it does so based on the file-name patterns that the library uses. You can also write your own equivalents of find_package, or even your own package finders, using the functions that CMake provides for searching the file system for certain file-name patterns (e.g., regular expressions).
As you can see, the build configuration file above only refers directly to your source files, like "example_with_boost.cpp", and only relative to the source folder. If you do things right, the configuration scripts will work on virtually any system and any OS that CMake supports (and that the libraries you depend on support). This is how most major cross-platform projects work, and once you understand how to work with these systems, they are very powerful and very easy to use (in general, far easier and less troublesome than build configurations done by point-and-click in IDE menus, like in Visual Studio).
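For example, a very simplified custom finder for a hypothetical library called mylibrary (all names here are invented for illustration) could use CMake's search commands like this:
# FindMyLibrary.cmake -- put it in a directory listed in CMAKE_MODULE_PATH
find_path(MyLibrary_INCLUDE_DIR mylibrary/mylibrary.hpp)
find_library(MyLibrary_LIBRARY NAMES mylibrary)
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(MyLibrary DEFAULT_MSG
    MyLibrary_LIBRARY MyLibrary_INCLUDE_DIR)
A CMakeLists.txt would then call find_package(MyLibrary REQUIRED) and use ${MyLibrary_INCLUDE_DIR} and ${MyLibrary_LIBRARY} just like the Boost variables above.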
You can use Premake, which generates cross-platform project files and makefiles for Visual Studio, GCC and others:
http://industriousone.com/what-premake
CMake is another alternative
http://www.cmake.org/
I'm not sure if there's a single universal way of doing this, but people often provide different config files and let the main Makefile detect which one to include: linux.make, darwin.make, cygwin.make etc.
There are also various tools like CMake that let you automate this, but all in all it's just scripting that hides the system dependency from the developer.

How to add files to Eclipse CDT project with CMake?

I'm having problems getting the source and header files added to my Eclipse CDT project with CMake. In my test project (which generates and builds fine) I have the following CMakeLists.txt:
cmake_minimum_required(VERSION 2.6)
project(WINCA)
file(GLOB WINCA_SRC_BASE "${WINCA_SOURCE_DIR}/src/*.cpp")
file(GLOB WINCA_SRC_HPP_BASE "${WINCA_SOURCE_DIR}/inc/*.hpp")
add_library(WINCABase ${WINCA_SRC_BASE} ${WINCA_SRC_HPP_BASE})
This works fine, but the resulting Eclipse project files contain no links to the source or header files. Does anyone know why? Is there another CMake command I have to use to actually add the files to the project?
I realize it's been a while since you posted this, but FWIW, it works fine for me with CMake 2.6 or 2.7 (trunk) versions, generating for Eclipse/Ganymede. What I do is first run
cmake -G "Eclipse CDT4 - Unix Makefiles" /path/to/src
which generates the Eclipse project files as well as the makefiles, then "Import Project" in Eclipse.
Works beautifully...
sly
I use CMake 2.4, not 2.6, but in 2.4 they specifically warn against using GLOBs to find the files to build.
This is because CMake will not notice when files are added or deleted, so it will not be able to figure out the dependencies.
If you explicitly add the files to your CMakeLists.txt, then this file will be newer than the makefiles and the cache files, so CMake will know to regenerate them.
If the files are added with a glob, no file that CMake knows about changes when you add new files, so CMake doesn't know that it has to regenerate the makefiles etc. This is the same for regular makefiles and Visual Studio projects.
Unless the CMake 2.6 docs explicitly say it is OK to add files like this, I would avoid it. It is not that hard to manage the source files in CMake. How often do you add new files?
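For instance, listing the sources explicitly instead of globbing (reusing the target from the question; the file names here are made up) is just:
set(WINCA_SRC_BASE
    src/foo.cpp
    src/bar.cpp)
set(WINCA_SRC_HPP_BASE
    inc/foo.hpp
    inc/bar.hpp)
add_library(WINCABase ${WINCA_SRC_BASE} ${WINCA_SRC_HPP_BASE})
# adding a new source means adding one line here, which makes CMakeLists.txt
# newer than the generated files, so CMake regenerates them automatically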
The problem I had was that I made an "in-source" build instead of an "out-of-source" build. Now it works fine; there was actually a lot of info on this on the wiki, but somehow I misunderstood it.